repo_name: string (6-77 chars)
path: string (8-215 chars)
license: string (15 classes)
cells: list
types: list
quantumlib/ReCirq
docs/qaoa/example_problems.ipynb
apache-2.0
[ "Copyright 2020 Google", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "QAOA example problems\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/cirq/experiments/qaoa/example_problems\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/example_problems.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/example_problems.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/example_problems.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>\n\nThe shallowest-depth version of the Quantum Approximate Optimization Algorithm (QAOA) consists of the application of two unitary operators: the problem unitary and the driver unitary. The first of these depends on the parameter $\gamma$ and applies a phase to pairs of bits according to the problem-specific cost operator $C$:\n$$\n U_C\!
\left(\gamma \right) = e^{-i \gamma C } = \prod_{j < k} e^{-i \gamma w_{jk} Z_j Z_k}\n$$\nwhereas the driver unitary depends on the parameter $\beta$, is problem-independent, and serves to drive transitions between bitstrings within the superposition state:\n$$\n \newcommand{\gammavector}{\boldsymbol{\gamma}}\n \newcommand{\betavector}{\boldsymbol{\beta}}\n U_B\!\left(\beta \right) = e^{-i \beta B} = \prod_j e^{- i \beta X_j},\n \quad \qquad\n B = \sum_j X_j\n$$\nwhere $X_j$ is the Pauli $X$ operator on qubit $j$. These operators can be implemented by sequentially evolving under each term of the product; specifically, the problem unitary is applied with a sequence of two-body interactions, while the driver unitary is a single-qubit rotation on each qubit. For higher-depth versions of the algorithm the two unitaries are sequentially re-applied, each with its own $\beta$ or $\gamma$. The number of applications of the pair of unitaries is represented by the hyperparameter $p$ with parameters $\gammavector = (\gamma_1, \dots, \gamma_p)$ and $\betavector = (\beta_1, \dots, \beta_p)$. For $n$ qubits, we prepare the parameterized state\n$$\n \newcommand{\bra}[1]{\langle #1|}\n \newcommand{\ket}[1]{|#1\rangle}\n | \gammavector , \betavector \rangle = U_B(\beta_p) U_C(\gamma_p ) \cdots U_B(\beta_1) U_C(\gamma_1 ) \ket{+}^{\otimes n},\n$$ \nwhere $\ket{+}^{\otimes n}$ is the symmetric superposition of computational basis states.\n<img src=\"./images/qaoa_circuit.png\" alt=\"QAOA circuit\"/>\nThe optimization problems we study in this work are defined through a cost function with a corresponding quantum operator $C$ given by\n$$\n C = \sum_{j < k} w_{jk} Z_j Z_k\n$$\nwhere $Z_j$ denotes the Pauli $Z$ operator on qubit $j$, and the $w_{jk}$ correspond to scalar weights with values $\{0, \pm 1\}$. 
Because these clauses act on at most two qubits, we are able to associate a graph with a given problem instance with weighted edges given by the $w_{jk}$ adjacency matrix.\nSetup\nInstall the ReCirq package:", "try:\n import recirq\nexcept ImportError:\n !pip install git+https://github.com/quantumlib/ReCirq", "Now import Cirq, ReCirq and the module dependencies:", "import networkx as nx\nimport numpy as np\nimport scipy.optimize\nimport cirq\nimport recirq\n\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\n# theme colors\nQBLUE = '#1967d2'\nQRED = '#ea4335ff'\nQGOLD = '#fbbc05ff'", "Hardware grid\nFirst, we study problem graphs which match the connectivity of our hardware, which we term \"Hardware Grid problems\". Despite results showing that problems on such graphs are efficient to solve on average, we study these problems as they do not require routing. This family of problems is composed of random instances generated by sampling $w_{ij}$ to be $\\pm 1$ for edges in the device topology or a subgraph thereof.", "from recirq.qaoa.problems import get_all_hardware_grid_problems\nimport cirq.contrib.routing as ccr\n\nhg_problems = get_all_hardware_grid_problems(\n device_graph=ccr.gridqubits_to_graph_device(recirq.get_device_obj_by_name('Sycamore23').qubits),\n central_qubit=cirq.GridQubit(6,3),\n n_instances=10,\n rs=np.random.RandomState(5)\n) \n\ninstance_i = 0\nn_qubits = 23\nproblem = hg_problems[n_qubits, instance_i]\n\nfig, ax = plt.subplots(figsize=(6,5))\npos = {i: coord for i, coord in enumerate(problem.coordinates)}\nnx.draw_networkx(problem.graph, pos=pos, with_labels=False, node_color=QBLUE)\nif True: # toggle edge labels\n edge_labels = {(i1, i2): f\"{weight:+d}\"\n for i1, i2, weight in problem.graph.edges.data('weight')}\n nx.draw_networkx_edge_labels(problem.graph, pos=pos, edge_labels=edge_labels)\nax.axis('off')\nfig.tight_layout()", "Sherrington-Kirkpatrick model\nNext, we study instances of the Sherrington-Kirkpatrick (SK) model, 
defined on the complete graph with $w_{ij}$ randomly chosen to be $\\pm 1$. This is a canonical example of a frustrated spin glass and is most penalized by routing, which can be performed optimally using the linear swap networks at the cost of a linear increase in circuit depth.", "from recirq.qaoa.problems import get_all_sk_problems\n\nn_qubits = 17\nall_sk_problems = get_all_sk_problems(max_n_qubits=17, n_instances=10, rs=np.random.RandomState(5))\nsk_problem = all_sk_problems[n_qubits, instance_i]\n\nfig, ax = plt.subplots(figsize=(6,5))\npos = nx.circular_layout(sk_problem.graph)\nnx.draw_networkx(sk_problem.graph, pos=pos, with_labels=False, node_color=QRED)\nif False: # toggle edge labels\n edge_labels = {(i1, i2): f\"{weight:+d}\"\n for i1, i2, weight in sk_problem.graph.edges.data('weight')}\n nx.draw_networkx_edge_labels(sk_problem.graph, pos=pos, edge_labels=edge_labels)\nax.axis('off')\nfig.tight_layout()", "3-regular MaxCut\nFinally, we study instances of the MaxCut problem on 3-regular graphs. This is a prototypical discrete optimization problem with a low, fixed node degree but a high dimension which cannot be trivially mapped to a planar architecture. It more closely matches problems of industrial interest. 
For these problems, we use an automated routing algorithm to heuristically insert SWAP operations.", "from recirq.qaoa.problems import get_all_3_regular_problems\n\nn_qubits = 22\ninstance_i = 0\nthreereg_problems = get_all_3_regular_problems(max_n_qubits=22, n_instances=10, rs=np.random.RandomState(5))\nthreereg_problem = threereg_problems[n_qubits, instance_i]\n\nfig, ax = plt.subplots(figsize=(6,5))\npos = nx.spring_layout(threereg_problem.graph, seed=11)\nnx.draw_networkx(threereg_problem.graph, pos=pos, with_labels=False, node_color=QGOLD)\nif False: # toggle edge labels\n edge_labels = {(i1, i2): f\"{weight:+d}\"\n for i1, i2, weight in threereg_problem.graph.edges.data('weight')}\n nx.draw_networkx_edge_labels(threereg_problem.graph, pos=pos, edge_labels=edge_labels)\nax.axis('off')\nfig.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
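The cost operator defined in the cells above, $C = \sum_{j<k} w_{jk} Z_j Z_k$, is diagonal in the computational basis, so its value on any bitstring can be checked classically by mapping bit $b_j$ to the Pauli-$Z$ eigenvalue $z_j = 1 - 2b_j$. A minimal sketch of that bookkeeping (the three-node weights below are hypothetical, not one of the notebook's sampled instances):

```python
import itertools

import numpy as np


def cost_value(bits, weights):
    """Classical value of C = sum_{j<k} w_jk z_j z_k for one bitstring."""
    # Map bit b_j in {0, 1} to the Z eigenvalue z_j in {+1, -1}
    z = 1 - 2 * np.asarray(bits)
    n = len(z)
    # Pairs absent from the weights dict are treated as weight 0
    return sum(weights.get((j, k), 0) * z[j] * z[k]
               for j, k in itertools.combinations(range(n), 2))


# Hypothetical toy instance: w_{01} = +1, w_{12} = -1, all other weights 0
w = {(0, 1): 1, (1, 2): -1}
print(cost_value([0, 0, 0], w))  # (+1)(1)(1) + (-1)(1)(1) = 0
print(cost_value([1, 0, 0], w))  # (+1)(-1)(1) + (-1)(1)(1) = -2
```

QAOA then seeks the parameters $(\gammavector, \betavector)$ whose output distribution concentrates on bitstrings extremizing this classical value.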
Unidata/unidata-python-workshop
notebooks/Time_Series/Basic Time Series Plotting.ipynb
mit
[ "<a name=\"top\"></a>\n<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Basic Time Series Plotting</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://matplotlib.org/_images/date_demo.png\" alt=\"METAR\" style=\"height: 300px;\"></div>\n\nOverview:\n\nTeaching: 45 minutes\nExercises: 30 minutes\n\nQuestions\n\nHow can we obtain buoy data from the NDBC?\nHow are plots created in Python?\nWhat features does Matplotlib have for improving our time series plots?\nHow can multiple y-axes be used in a single plot?\n\nObjectives\n\n<a href=\"#loaddata\">Obtaining data</a>\n<a href=\"#basictimeseries\">Basic timeseries plotting</a>\n<a href=\"#multiy\">Multiple y-axes</a>\n\n<a name=\"loaddata\"></a>\nObtaining Data\nTo learn about time series analysis, we first need to find some data and get it into Python. In this case we're going to use data from the National Data Buoy Center. We'll use the pandas library for our data subset and manipulation operations after obtaining the data with siphon. \nEach buoy has many types of data available; you can read all about them in the NDBC Web Data Guide. There is a mechanism in siphon to see which data types are available for a given buoy.", "from siphon.simplewebservice.ndbc import NDBC\n\ndata_types = NDBC.buoy_data_types('46042')\nprint(data_types)", "In this case, we'll just stick with the standard meteorological data. The \"realtime\" data from NDBC contains approximately 45 days of data from each buoy. We'll retrieve that record for buoy 46042 and then do some cleaning of the data.", "df = NDBC.realtime_observations('46042')\n\ndf.tail()", "Let's get rid of the columns with all missing data. 
We could use the drop method and manually name all of the columns, but that would require us to know which are all NaN and that sounds like manual labor - something that programmers hate. Pandas has the dropna method that allows us to drop rows or columns where any or all values are NaN. In this case, let's drop all columns with all NaN values.", "df = df.dropna(axis='columns', how='all')\n\ndf.head()", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Use the realtime_observations method to retrieve supplemental data for buoy 41002. **Note** assign the data to something other than df or you'll have to rerun the data download cell above. We suggest using the name supl_obs.</li>\n </ul>\n</div>", "# Your code goes here\n# supl_obs =", "Solution", "# %load solutions/get_obs.py", "Finally, we need to trim down the data. The file contains 45 days' worth of observations. Let's look at the last week's worth of data.", "import pandas as pd\nidx = df.time >= (pd.Timestamp.utcnow() - pd.Timedelta(days=7))\ndf = df[idx]\ndf.head()", "We're almost ready, but now the index column is not that meaningful. It starts at a non-zero row, which is fine with our initial file, but let's re-zero the index so we have a nice clean data frame to start with.", "df.reset_index(drop=True, inplace=True)\ndf.head()", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"basictimeseries\"></a>\nBasic Timeseries Plotting\nMatplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. 
We're going to learn the basics of creating timeseries plots with matplotlib by plotting buoy wind, gust, temperature, and pressure data.", "# Convention for import of the pyplot interface\nimport matplotlib.pyplot as plt\n\n# Set-up to have matplotlib use its support for notebook inline plots\n%matplotlib inline", "We'll start by plotting the windspeed observations from the buoy.", "plt.rc('font', size=12)\nfig, ax = plt.subplots(figsize=(10, 6))\n\n# Specify how our lines should look\nax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')\n\n# Same as above\nax.set_xlabel('Time')\nax.set_ylabel('Speed (m/s)')\nax.set_title('Buoy Wind Data')\nax.grid(True)\nax.legend(loc='upper left');", "Our x axis labels look a little crowded - let's try only labeling each day in our time series.", "# Helpers to format and locate ticks for dates\nfrom matplotlib.dates import DateFormatter, DayLocator\n\n# Set the x-axis to do major ticks on the days and label them like '07/20'\nax.xaxis.set_major_locator(DayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%m/%d'))\n\nfig", "Now we can add wind gust speeds to the same plot as a dashed yellow line.", "# Use linestyle keyword to style our plot\nax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--',\n label='Wind Gust')\n# Redisplay the legend to show our new wind gust line\nax.legend(loc='upper left')\n\nfig", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Create your own figure and axes (<code>myfig, myax = plt.subplots(figsize=(10, 6))</code>) which plots temperature.</li>\n <li>Change the x-axis major tick labels to display the shortened month and date (i.e. 'Sep DD' where DD is the day number). 
Look at the\n <a href=\"https://docs.python.org/3.6/library/datetime.html#strftime-and-strptime-behavior\">\n table of formatters</a> for help.\n <li>Make sure you include a legend and labels!</li>\n <li><b>BONUS:</b> try changing the <code>linestyle</code>, e.g., a blue dashed line.</li>\n </ul>\n</div>", "# Your code goes here\n", "Solution\n<div class=\"alert alert-info\">\n <b>Tip</b>:\n If your figure goes sideways as you try multiple things, try running the notebook up to this point again\n by using the Cell -> Run All Above option in the menu bar.\n</div>", "# %load solutions/basic_plot.py", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"multiy\"></a>\nMultiple y-axes\nWhat if we wanted to plot another variable in vastly different units on our plot? <br/>\nLet's return to our wind data plot and add pressure.", "# plot pressure data on same figure\nax.plot(df.time, df.pressure, color='black', label='Pressure')\nax.set_ylabel('Pressure')\n\nax.legend(loc='upper left')\n\nfig", "That is less than ideal. We can't see detail in the data profiles! We can create a twin of the x-axis and have a secondary y-axis on the right side of the plot. We'll create a totally new figure here.", "fig, ax = plt.subplots(figsize=(10, 6))\naxb = ax.twinx()\n\n# Same as above\nax.set_xlabel('Time')\nax.set_ylabel('Speed (m/s)')\nax.set_title('Buoy Data')\nax.grid(True)\n\n# Plotting on the first y-axis\nax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')\nax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust')\nax.legend(loc='upper left');\n\n# Plotting on the second y-axis\naxb.set_ylabel('Pressure (hPa)')\naxb.plot(df.time, df.pressure, color='black', label='pressure')\n\nax.xaxis.set_major_locator(DayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%b %d'))\n", "We're closer, but the data are plotting over the legend and not included in the legend. 
That's because the legend is associated with our primary y-axis. We need to append that data from the second y-axis.", "fig, ax = plt.subplots(figsize=(10, 6))\naxb = ax.twinx()\n\n# Same as above\nax.set_xlabel('Time')\nax.set_ylabel('Speed (m/s)')\nax.set_title('Buoy 41056 Wind Data')\nax.grid(True)\n\n# Plotting on the first y-axis\nax.plot(df.time, df.wind_speed, color='tab:orange', label='Windspeed')\nax.plot(df.time, df.wind_gust, color='tab:olive', linestyle='--', label='Wind Gust')\n\n# Plotting on the second y-axis\naxb.set_ylabel('Pressure (hPa)')\naxb.plot(df.time, df.pressure, color='black', label='pressure')\n\nax.xaxis.set_major_locator(DayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%b %d'))\n\n# Handling of getting lines and labels from all axes for a single legend\nlines, labels = ax.get_legend_handles_labels()\nlines2, labels2 = axb.get_legend_handles_labels()\naxb.legend(lines + lines2, labels + labels2, loc='upper left');", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n Create your own plot that has the following elements:\n <ul>\n <li>A blue line representing the wave height measurements.</li>\n <li>A green line representing wind speed on a secondary y-axis</li>\n <li>Proper labels/title.</li>\n <li>**Bonus**: Make the wave height data plot as points only with no line. Look at the documentation for the linestyle and marker arguments.</li>\n </ul>\n</div>", "# Your code goes here\n", "Solution", "# %load solutions/adv_plot.py", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
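The twin-axis legend trick used at the end of the notebook above, collecting handles from both `ax` and its `twinx()` companion into a single legend, can be exercised standalone. This sketch uses synthetic data and the non-interactive Agg backend so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(10)
fig, ax = plt.subplots(figsize=(10, 6))
axb = ax.twinx()  # secondary y-axis sharing the same x-axis

ax.plot(t, 2.0 * t, color="tab:orange", label="speed")
axb.plot(t, 1013.0 - t, color="black", label="pressure")

# A legend drawn on either axis alone would miss the other's lines, so
# collect handles/labels from both axes and pass them to one legend call.
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = axb.get_legend_handles_labels()
axb.legend(lines + lines2, labels + labels2, loc="upper left")
print(labels + labels2)  # ['speed', 'pressure']
```

Drawing the combined legend on the twin axis keeps it on top, since `axb` is rendered after `ax`.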
jesford/cluster-lensing
fitting_a_model.ipynb
mit
[ "Model fitting with cluster-lensing & emcee", "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn; seaborn.set()\n\nfrom clusterlensing import ClusterEnsemble\n\nimport emcee\nimport corner\n\n% matplotlib inline\n\nimport matplotlib\nmatplotlib.rcParams[\"axes.labelsize\"] = 20\nmatplotlib.rcParams[\"legend.fontsize\"] = 12", "Generate a noisy measurement to fit", "logm_true = 14\noff_true = 0.3\n\nnbins = 10\n\nredshifts = [0.2]\nmass = [10**logm_true]\noffsets = [off_true]\nrbins = np.logspace(np.log10(0.1), np.log10(5), num = nbins)\n\ncdata = ClusterEnsemble(redshifts)\ncdata.m200 = mass\ncdata.calc_nfw(rbins=rbins, offsets=offsets)\ndsigma_true = cdata.deltasigma_nfw.mean(axis=0).value\n\n# add scatter with a stddev of 20% of data\nnoise = np.random.normal(scale=dsigma_true*0.2, size=nbins)\ny = dsigma_true + noise\nyerr = np.abs(dsigma_true/3) # 33% error bars\n\nplt.plot(rbins, dsigma_true, 'bo-', label='True $\Delta\Sigma(R)$')\nplt.plot(rbins, y, 'g^-', label='Noisy $\Delta\Sigma(R)$')\nplt.errorbar(rbins, y, yerr=yerr, color='g', linestyle='None')\nplt.xscale('log')\nplt.legend(loc='best')\nplt.show()", "Write down likelihood, prior, and posterior probabilities\nThe model parameters are the mass and centroid offsets. 
Redshift is assumed to be known.", "# probability of the data given the model\ndef lnlike(theta, z, rbins, data, stddev):\n logm, offsets = theta\n \n # calculate the model\n c = ClusterEnsemble(z)\n c.m200 = [10 ** logm]\n c.calc_nfw(rbins=rbins, offsets=[offsets])\n model = c.deltasigma_nfw.mean(axis=0).value\n \n diff = data - model\n lnlikelihood = -0.5 * np.sum(diff**2 / stddev**2)\n return lnlikelihood\n\n# uninformative prior\ndef lnprior(theta):\n logm, offset = theta\n if 10 < logm < 16 and 0.0 <= offset < 5.0:\n return 0.0\n else:\n return -np.inf\n\n# posterior probability\ndef lnprob(theta, z, rbins, data, stddev):\n lp = lnprior(theta)\n if not np.isfinite(lp):\n return -np.inf\n else:\n return lp + lnlike(theta, z, rbins, data, stddev)", "Sample the posterior using emcee", "ndim = 2\nnwalkers = 20\np0 = np.random.rand(ndim * nwalkers).reshape((nwalkers, ndim))\np0[:,0] = p0[:,0] + 13.5 # start somewhere close to true logm ~ 14\n\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, \n args=(redshifts, rbins, y, yerr), threads=8)\n\n# the MCMC chains take some time: about 49 minutes for the 500 samples below\ni_can_wait = False # or can you? 
Set to True to run the MCMC chains\n\nif i_can_wait:\n pos, prob, state = sampler.run_mcmc(p0, 500)", "Check walker positions for burn-in", "if i_can_wait:\n fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 6))\n axes[0].plot(sampler.chain[:, :, 0].T, color=\"k\", alpha=0.4)\n axes[0].axhline(logm_true, color=\"g\", lw=2)\n axes[0].set_ylabel(\"log-mass\")\n\n axes[1].plot(sampler.chain[:, :, 1].T, color=\"k\", alpha=0.4)\n axes[1].axhline(off_true, color=\"g\", lw=2)\n axes[1].set_ylabel(\"offset\")\n axes[1].set_xlabel(\"step number\")", "Model parameter results", "if i_can_wait:\n burn_in_step = 50 # based on a rough look at the walker positions above\n samples = sampler.chain[:, burn_in_step:, :].reshape((-1, ndim))\n\nelse:\n # read in a previously generated chain\n samples = np.loadtxt('samples.txt')\n\nfig = corner.corner(samples,\n labels=[\"$\\mathrm{log}M_{200}$\", \"$\\sigma_\\mathrm{off}$\"],\n truths=[logm_true, off_true])\nfig.savefig('cornerplot.png')\n\n# save the chain for later\nnp.savetxt('samples.txt', samples)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
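The prior/likelihood/posterior split in the notebook above is the standard emcee pattern: return `-inf` from the log-prior outside the allowed box so the expensive likelihood is never evaluated there. A self-contained sketch of the same pattern, with a hypothetical linear model standing in for the NFW profile calculation:

```python
import numpy as np


def lnprior(theta):
    """Flat (uninformative) prior inside a box; -inf outside."""
    logm, offset = theta
    if 10 < logm < 16 and 0.0 <= offset < 5.0:
        return 0.0
    return -np.inf


def lnlike(theta, x, data, stddev, model):
    """Gaussian log-likelihood up to an additive constant."""
    diff = data - model(theta, x)
    return -0.5 * np.sum(diff**2 / stddev**2)


def lnprob(theta, x, data, stddev, model):
    lp = lnprior(theta)
    if not np.isfinite(lp):
        return -np.inf  # skip the expensive model evaluation entirely
    return lp + lnlike(theta, x, data, stddev, model)


# Hypothetical stand-in model, just to exercise the pattern
line = lambda th, x: th[0] + th[1] * x
x = np.linspace(0, 1, 5)
data = line([14.0, 0.3], x)  # noiseless "observations"
print(lnprob([14.0, 0.3], x, data, np.ones(5), line))  # 0.0 (perfect fit)
print(lnprob([9.0, 0.3], x, data, np.ones(5), line))   # -inf (outside prior)
```

An `emcee.EnsembleSampler` would receive `lnprob` with `args=(x, data, stddev, model)`, exactly as the notebook does with its own arguments.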
antoniomezzacapo/qiskit-tutorial
community/hello_world/string_comparison.ipynb
apache-2.0
[ "<img src=\"../images/qiskit-heading.gif\" alt=\"Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook\" width=\"500 px\" align=\"left\">\nComparing Strings with Quantum Superposition\nThe latest version of this notebook is available on https://github.com/QISKit/qiskit-tutorial.\nFor more information about how to use the IBM Q Experience (QX), consult the tutorials, or check out the community.\n\nContributors\nRudy Raymond\nMotivation\nIf we can use quantum states to represent genetic codes, we may be able to compare them, and/or find similar genetic codes quickly. \nFor example, according to this site the starts of the genetic codes for the Yeast Mitochondrial, Protozoan Mitochondrial, and Bacterial Code are, respectively, as follows.", "YEAST = \"----------------------------------MM----------------------------\"\nPROTOZOAN = \"--MM---------------M------------MMMM---------------M------------\"\nBACTERIAL = \"---M---------------M------------MMMM---------------M------------\"", "Notice that each of the codes is represented by a bitstring of length 64. By comparing characters at the same position in the strings, we can see that Protozoan's is closer to Bacterial's than Yeast's. \nExploiting quantum superposition, we can create quantum states by using only 7 qubits such that each of the quantum states corresponds to the genetic code of Yeast, Protozoan, and Bacterial. We then compare the closeness of their genetic codes by comparing their quantum states, which is made possible by the reversibility of quantum circuits.\nUsing the reversibility of quantum circuits to test the similarity of quantum states works as follows. Assume that we can create a quantum superposition starting from the all-zero state by a quantum circuit. Then, by inverting that circuit and giving it the same quantum superposition as input, we will get exactly all-zero bits as the output. 
Now, when we give a similar quantum superposition as input to the inverted circuit, we can still get all-zero bits as the output with probability proportional to the similarity of the quantum states: the more similar, the more often we observe all-zero bits. \nThus, to decide which code (Yeast's or Bacterial's) is the most similar to the Protozoan, we can do the following:\n\nWe first prepare the quantum state that encodes the Protozoan's code.\nWe then use that quantum state as input to the inverted circuits that each prepare the quantum state of Yeast's and Bacterial's. Run and measure the circuits.\nOutput the name of the inverted circuit whose measurements result in more frequent observations of all-zero bits. \n\nQuantum Superposition for Bitstrings\nA qubit can be in a superposition of two basis states: \"0\" and \"1\" at the same time. Going further, two qubits can be in a superposition of four basis states: \"00\", \"01\", \"10\", and \"11\". In general, $n$ qubits can be in a superposition of $2^n$ (exponential in the number of qubits!) basis states. \nHere, we show a simple example to create quantum superpositions for bitstrings and use them to compare the similarity between two bitstrings. This tutorial makes use of the quantum state initialization function and circuit inversion. It also illustrates the power of loading data into quantum states. \nComparing bitstrings of length 64 with 7 qubits\nLet's say we have three genetic codes as above.\nYEAST = \"----------------------------------MM----------------------------\"\nPROTOZOAN = \"--MM---------------M------------MMMM---------------M------------\"\nBACTERIAL = \"---M---------------M------------MMMM---------------M------------\"\nLet's use 7 qubits to encode the above codes: the first 6 qubits for indexing the location in the code (because we have 64 positions that we number from 0 to 63), and the last qubit for the content of the code (we use \"0\" for \"-\" and \"1\" for \"M\"). 
Thus, numbering the position of the code from left to right, we can create quantum states for each of the code as below: \n\\begin{eqnarray}\n|YEAST \\rangle &=& \\frac{1}{8} \\left( |000000\\rangle |0\\rangle + |000001\\rangle |0\\rangle + |000010\\rangle |0\\rangle + |000011\\rangle |0\\rangle + \\ldots \\right) \\\n|PROTOZOAN \\rangle &=& \\frac{1}{8} \\left( |000000\\rangle |0\\rangle + |000001\\rangle |0\\rangle + |000010\\rangle |1\\rangle + |000011\\rangle |1\\rangle + \\ldots \\right) \\\n|BACTERIAL \\rangle &=& \\frac{1}{8} \\left( |000000\\rangle |0\\rangle + |000001\\rangle |0\\rangle + |000010\\rangle |0\\rangle + |000011\\rangle |1\\rangle + \\ldots \\right)\n\\end{eqnarray}\nThe first four codes of Yeast's are all \"-\", and therefore at the above all of the second registers of the corresponding state are \"0\". And so on. \nCreating quantum superposition for genetic codes\nBelow is the python function to create a quantum superposition for a given genetic code as above.", "import sys\nimport numpy as np\nimport math\nfrom qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister\nfrom qiskit import CompositeGate\nfrom qiskit import execute, register, available_backends\n\ndef encode_bitstring(bitstring, qr, cr, inverse=False):\n \"\"\"\n create a circuit for constructing the quantum superposition of the bitstring\n \"\"\"\n n = math.ceil(math.log2(len(bitstring))) + 1 #number of qubits\n assert n > 2, \"the length of bitstring must be at least 2\"\n \n qc = QuantumCircuit(qr, cr)\n \n #the probability amplitude of the desired state\n desired_vector = np.array([ 0.0 for i in range(2**n) ]) #initialize to zero\n amplitude = np.sqrt(1.0/2**(n-1))\n \n for i, b in enumerate(bitstring):\n pos = i * 2\n if b == \"1\" or b == \"M\":\n pos += 1\n desired_vector[pos] = amplitude\n if not inverse:\n qc.initialize(desired_vector, [ qr[i] for i in range(n) ] )\n qc.barrier(qr)\n else:\n qc.initialize(desired_vector, [ qr[i] for i in range(n) ] ).inverse() 
#invert the circuit\n for i in range(n):\n qc.measure(qr[i], cr[i])\n print()\n return qc", "We can now create quantum circuits to create the quantum states for the Yeast's, Protozoan's, and Bacterial's.", "n = math.ceil(math.log2(len(YEAST))) + 1 #number of qubits\nqr = QuantumRegister(n)\ncr = ClassicalRegister(n)\n\nqc_yeast = encode_bitstring(YEAST, qr, cr)\nqc_protozoan = encode_bitstring(PROTOZOAN, qr, cr)\nqc_bacterial = encode_bitstring(BACTERIAL, qr, cr)\n\ncircs = {\"YEAST\": qc_yeast, \"PROTOZOAN\": qc_protozoan, \"BACTERIAL\": qc_bacterial}", "Inverting a quantum circuit\nWe can easily invert a quantum circuit with the inverse() function. These inverted circuits are what we need to compute the closeness of the quantum states.", "inverse_qc_yeast = encode_bitstring(YEAST, qr, cr, inverse=True)\ninverse_qc_protozoan = encode_bitstring(PROTOZOAN, qr, cr, inverse=True)\ninverse_qc_bacterial = encode_bitstring(BACTERIAL, qr, cr, inverse=True)\n\ninverse_circs = {\"YEAST\": inverse_qc_yeast, \"PROTOZOAN\": inverse_qc_protozoan, \"BACTERIAL\": inverse_qc_bacterial}", "Comparing bitstrings\nWe can now compare how close the start of the Protozoan genetic code is to Yeast's and Bacterial's by performing the test.", "print(\"Available backends:\", available_backends())\n\nkey = \"PROTOZOAN\" #the name of the code used as key to find similar ones\n\n# use local simulator\nbackend = \"local_qasm_simulator\"\nshots = 1000\n\ncombined_circs = {}\ncount = {}\n\nmost_similar, most_similar_score = \"\", -1.0\n\nfor other_key in inverse_circs:\n if other_key == key:\n continue\n \n combined_circs[other_key] = circs[key] + inverse_circs[other_key] #combined circuits to look for similar codes\n job = execute(combined_circs[other_key], backend=backend,shots=shots)\n st = job.result().get_counts(combined_circs[other_key])\n if \"0\"*n in st:\n sim_score = st[\"0\"*n]/shots\n else:\n sim_score = 0.0\n \n print(\"Similarity score of\",key,\"and\",other_key,\"is\",sim_score)\n if 
most_similar_score < sim_score:\n most_similar, most_similar_score = other_key, sim_score\n\nprint(\"[ANSWER]\", key,\"is most similar to\", most_similar)", "We observe that the test can be used to determine which code is closer: Bacterial's is closer to Protozoan's than Yeast's. \nThere are many other genetic codes listed at bioinformatics.org which can be used as input strings. In general, DNA has four nucleotides: \"A\", \"C\", \"G\", and \"T\". Thus, instead of one qubit like in this notebook, two qubits are required to encode the nucleotides. However, the asymptotic number of quantum bits for encoding a whole sequence of length $N$ is still on the order of $\log{N}$, which is exponentially small. \nDeep Dive\nThe technique of using circuit inversion to measure how close two quantum states are has been used widely in the literature. For example, it is used for Quantum Kernel Estimation in Havlicek et al., 2018 for supervised learning. The idea of using quantum superposition to encode bitstrings appeared in Quantum Fingerprinting, where a quantum exponential advantage is shown for a communication task of comparing two bitstrings. \nThe intuition for why combining a circuit which creates one quantum state with the inverted circuit of creating another quantum state can be used to measure how close the two quantum states are is as follows. \nAll operations (except measurements) in quantum computers are unitary and hence distance-preserving. This means that if we apply the same operation (or circuit) to two states that are similar, the resulting states will also be similar. All these operations are also reversible; that means if we know a circuit $C$ to create a particular quantum state $|\phi\rangle$ from the all-zero state, we can also design the circuit $C'$ that transforms the quantum state $|\phi\rangle$ back to the all-zero state. 
Now, if we apply $C'$ to a quantum state $|\psi\rangle$ which is similar to $|\phi\rangle$, we will obtain a quantum state which is also similar to the all-zero state. The distance of the resulting state to the all-zero state is the same as the distance between $|\phi\rangle$ and $|\psi\rangle$. \nNotice that the overlap of two different quantum states can be very close to zero, making it difficult to detect the discrepancies. However, we can use encoding techniques, such as repetition codes, to guarantee that different quantum states are separated far enough. In general, we can exploit error-correcting codes, such as the Justesen code, or locality-sensitive hashing to encode bitstrings efficiently." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
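In the inversion test above, the probability of reading all zeros equals $|\langle\phi|\psi\rangle|^2$, and for the position-indexed encoding used in the notebook that overlap reduces to the squared fraction of matching characters. A small numpy sketch of the statevector arithmetic (length-4 toy strings rather than the 64-character codes; this checks the math, not the circuit):

```python
import numpy as np


def encode(code):
    """Amplitude 1/sqrt(L) on basis state 2*i + bit_i, where bit_i is
    1 for "M" and 0 for "-", mirroring the notebook's encoding."""
    L = len(code)
    v = np.zeros(2 * L)
    for i, ch in enumerate(code):
        v[2 * i + (1 if ch == "M" else 0)] = 1.0 / np.sqrt(L)
    return v


def allzero_probability(phi, psi):
    """If circuit C prepares phi from |0...0>, then C^{-1} applied to psi
    yields the all-zero outcome with probability |<phi|psi>|^2."""
    return abs(np.vdot(phi, psi)) ** 2


p = allzero_probability(encode("-MM-"), encode("-M--"))
print(p)  # 3 of 4 characters match: (3/4)**2 = 0.5625
```

Identical strings give probability 1, so the more similar the codes, the more all-zero outcomes the combined circuit reports, which is exactly the decision rule the notebook uses.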
mbakker7/ttim
notebooks/ttim_slugtest.ipynb
mit
[ "Slug test analysis in an unconfined aquifer\nThe data is taken from the AQTESOLVE website. \nButler (1998) presents results from a slug test in a partially penetrating well that is screened in unconsolidated alluvial deposits consisting of sand and gravel with interbedded clay. The aquifer has a thickness $H=47.87$ m. The depth to the top of the well screen is 16.77 m, and the screen of the well is 1.52 m long. The radius of the well is 0.125 m, and the radius of the casing is 0.064 m. The slug displacement is 0.671 m.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import fmin\nimport pandas as pd\nfrom ttim import *\n\n# problem definitions\nrw = 0.125 # well radius\nrc = 0.064 # well casing radius\nL = 1.52 # screen length\nzbot = -47.87 # aquifer thickness\nwelltop = -16.77 # top of screen\ndelh = 0.671 # slug displacement in the well\n#\nwellbot = welltop - L # bottom of screen\nQ = np.pi * rc**2 * delh # volume of slug\n\n# loading data\ndata = np.loadtxt('data/slugtest.txt') # time and drawdown\ntime, dd = data[:,0], data[:,1]\ntd = time/60/60/24 #time in days\nprint('minimum and maximum time:', td.min(), td.max())\n\ndd", "Flow is simulated with a quasi three-dimensional model consisting of Nlayers model layers. The top and bottom of the aquifer are impermeable.\nThe horizontal hydraulic conductivity $k$ and elastic storage $S_s$ are unknown. Phreatic storage and vertical anisotropy are not simulated. The variable p contains the two unknown parameters. The well is modeled with the Well element. 
The type is specified as slug, and the initially displaced volume is specified as $Q$.", "ml = Model3D(kaq=100, z=[0, -0.5, welltop, wellbot, zbot],\n Saq=1e-4, kzoverkh=1, tmin=1e-6, tmax=0.01) \nw = Well(ml, xw=0, yw=0, rw=rw, tsandQ=[(0.0, -Q)],\n layers=2, rc=rc, wbstype='slug')\nml.solve()\nprint('k:', ml.aq.kaq)\nprint('T: ', ml.aq.T)\nprint('c: ', ml.aq.c)\ncal = Calibrate(ml)\ncal.set_parameter(name='kaq0_3', initial=10)\ncal.set_parameter(name='Saq0_3', initial=1e-3)\ncal.series(name='obs1', x=0, y=0, layer=2, t=td, h=dd)\ncal.fit()\nprint('k:', ml.aq.kaq)\nprint('T: ', ml.aq.T)\nprint('c: ', ml.aq.c)\n\nhm = ml.head(0, 0, td, layers=2)\nplt.figure(figsize=(12, 6))\nplt.semilogx(time, dd / delh, 'ko', label='Observed')\nplt.semilogx(time, hm[0] / delh, 'b', label='TTim')\nplt.ylim([0, 1])\nplt.xlabel('time [s]')\nplt.ylabel('h / delh')\nplt.legend(loc='best')\nplt.title('TTim Slug Test Analysis');\n\nr = pd.DataFrame(columns=['Kr [m/day]','Ss [1/m]'],\n index=['TTim', 'AQTESOLV'])\nr.loc['TTim'] = cal.parameters['optimal'].values\nr.loc['AQTESOLV'] = [4.034, 0.000384]\nr", "Verify with fmin", "def sse(p, returnheads=False):\n ml = Model3D(kaq=p[0], z=[0, -0.5, welltop, wellbot, zbot],\n Saq=p[1], kzoverkh=1, tmin=1e-6, tmax=0.01) \n w = Well(ml, xw=0, yw=0, rw=rw, tsandQ=[(0.0, -Q)],\n layers=2, rc=rc, wbstype='slug')\n ml.solve(silent = '.')\n hm = ml.head(0, 0, td, 2)\n if returnheads: return hm\n se = np.sum((hm[0] - dd)**2)\n return se\n\npopt = fmin(sse, [3, 1e-4])\nprint('optimal parameters:', popt)\nprint('sse:', sse(popt))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
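The slug volume used above as the well's instantaneous extraction follows from the water column displaced inside the casing, Q = pi * rc**2 * delh. A standalone check with the Butler (1998) values (no ttim required):

```python
import numpy as np

rc = 0.064    # casing radius [m]
delh = 0.671  # initial slug displacement [m]

# Volume of the displaced water column inside the casing
Q = np.pi * rc**2 * delh
print(round(Q, 5))  # 0.00863 m^3
```

This is the volume passed (with a negative sign, as extraction) to the Well element's tsandQ argument in the notebook.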
ES-DOC/esdoc-jupyterhub
notebooks/mri/cmip6/models/sandbox-2/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: MRI\nSource ID: SANDBOX-2\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:19\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. 
Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. 
Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adaptive grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. 
Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
carlosclavero/PySimplex
Documentation/Tutorial librería Simplex.py.ipynb
gpl-3.0
[ "Simplex.py\nThis tutorial covers all of the methods provided by the Simplex.py library. Of course, applying many of them in the correct sequence can lead to the solution of a linear programming problem. However, obtaining a solution that way is much longer and more complex; it is far easier to use the SimplexSolver.py program.\nFor use with the library, an auxiliary class called rational has been created. This class represents rational numbers. Each object of this class has a numerator and a denominator, so to define an integer you must give it denominator 1. A rational object is defined as follows:\n rational(3,2) # This defines the number 3/2\nThe tutorial is divided into four parts, the same parts into which the library is divided. The first shows the methods created to perform operations with rationals (many of them are used simply to check the input parameters of other methods). The second covers operations with matrices and arrays (such as inverting a matrix), which had to be redefined so they can be used with the rational class. The third comprises the methods used to reach the solution via the Simplex method, and the fourth consists of the methods that produce the graphical solution. \nThe library's methods are presented below, with explanations and examples of each.\nNOTE 1: Whenever the problem's variables are mentioned, keep in mind that the first variable is variable 0, that is, x0. \nNOTE 2: The necessary \"imports\" are done in the first cell; to run any of the following cells without errors, you must first run the cell containing the \"imports\". 
If you run this in your own programming environment, you must import these two classes so the methods execute without errors (please consult in detail the installation manual found in the same location as this manual):\n from PySimplex import Simplex\nfrom PySimplex import rational \nimport numpy as np\nOperations with rational\nconvertStringToRational\nThis method receives a number in a string and returns the number as a rational. If it does not receive a string, it returns None. Examples:", "from PySimplex import Simplex\nfrom PySimplex import rational\nimport numpy as np\n\nnumber=\"2\"\nprint(Simplex.convertStringToRational(number))\n\nnumber=\"2/5\"\nprint(Simplex.convertStringToRational(number))\n\n# If it receives something that is not a string, it returns None\nnumber=2\nprint(Simplex.convertStringToRational(number))", "convertLineToRationalArray\nThis method receives a string containing a set of numbers separated by spaces and returns the numbers in a numpy array with rational elements. If it does not receive a string, it returns None. Examples:", "line=\"3 4 5\"\nprint(Simplex.printMatrix((np.asmatrix(Simplex.convertLineToRationalArray(line)))))\n\nline=\"3 4/5 5\"\nprint(Simplex.printMatrix((np.asmatrix(Simplex.convertLineToRationalArray(line)))))\n\n# If it is passed something that is not a string, it returns None\nprint(Simplex.convertLineToRationalArray(4))", "rationalToFloat\nThis method receives a rational object and returns its value as a float. It does this by dividing the numerator by the denominator. If a rational is not passed as the parameter, it returns None.", "a=rational(3,4)\nSimplex.rationalToFloat(a)\n\na=rational(3,1)\nSimplex.rationalToFloat(a)\n\n# If a rational is not provided, it returns None\na=3.0\nprint(Simplex.rationalToFloat(a))", "listPointsRationalToFloat\nThis method receives a list of points whose coordinates are rational and returns the same list of points but with the coordinates as floats. 
If a list of rationals is not provided, it returns None. Examples:", "rationalList=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2)\n ,rational(4,5)),(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]\nSimplex.listPointsRationalToFloat(rationalList)\n\n# If it receives something that is not a list of points with rational coordinates, it returns None\nrationalList=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]\nprint(Simplex.listPointsRationalToFloat(rationalList))", "isAListOfRationalPoints\nThis method receives a list and returns True if all elements are points (tuples) with rational coordinates, or False if any element is not a point with rational coordinates. If a list is not passed, it returns None. Examples:", "lis=[(rational(1,2),rational(5,7)),(rational(4,5),rational(4,6)),(rational(4,9),rational(9,8))]\nSimplex.isAListOfRationalPoints(lis)\n\nlis=[(rational(1,2),rational(5,7)),(4,rational(4,6)),(rational(4,9),rational(9,8))]\nSimplex.isAListOfRationalPoints(lis)\n\n# If it receives something that is not a list, it returns None\nlis=np.array([(rational(1,2),rational(5,7)),(4,rational(4,6)),(rational(4,9),rational(9,8))])\nprint(Simplex.isAListOfRationalPoints(lis))", "isAListOfPoints\nThis method receives a list and returns True if all elements are points (tuples), or False if any element is not a point. If a list is not passed, it returns None. 
Examples:", "# If all elements are points (tuples), it returns True\nlis=[(3,4),(5,6),(7,8),(8,10)]\nSimplex.isAListOfPoints(lis)\n\n# If it receives a list whose elements are not all points (tuples), it returns False\nlis=[3,5,6,(6,7)]\nSimplex.isAListOfPoints(lis)\n\n# If it receives something that is not a list, it returns None\nprint(Simplex.isAListOfPoints(3))", "isARationalMatrix\nThis method receives a numpy matrix or a two-dimensional numpy array and checks whether all of its elements are rational; in that case it returns True. Otherwise it returns False. If it does not receive a numpy matrix or array, it returns None. Examples:", "mat=np.matrix([[rational(1,2),rational(5,7)],[rational(5,8),rational(9,3)]])\nSimplex.isARationalMatrix(mat)\n\nmat=np.array([[rational(1,2),rational(5,7)],[rational(5,8),rational(9,3)]])\nSimplex.isARationalMatrix(mat)\n\nmat=np.matrix([[1,rational(5,7)],[rational(5,8),rational(9,3)]])\nSimplex.isARationalMatrix(mat)\n\n# If it receives something that is not a numpy matrix or array\nmat=[rational(1,2),rational(5,7)]\nprint(Simplex.isARationalMatrix(mat))", "isARationalArray\nThis method receives a numpy array and checks whether all of its elements are rational; in that case it returns True. Otherwise it returns False. If it does not receive a numpy matrix or array, it returns None. Examples:", "arr=np.array([rational(1,2),rational(5,7),rational(4,5)])\nSimplex.isARationalArray(arr)\n\narr=np.array([rational(1,2),6,rational(4,5)])\nSimplex.isARationalArray(arr)\n\n# If it receives something that is not a numpy matrix or array\narr=[rational(1,2),rational(5,7),rational(4,5)]\nprint(Simplex.isARationalArray(arr))", "Matrix operations\ndeterminant\nThis method receives a numpy matrix with rational components and returns the determinant of the matrix. The matrix must be square. If something that is not a square numpy matrix with rational elements is provided, it returns None. 
It also accepts a two-dimensional numpy array. Examples:", "matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\ndet=Simplex.determinant(matrix)\nprint(det)\n\n# If the matrix is not square, it returns None\nmatrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])\nprint(Simplex.determinant(matrix))\n\n# It also accepts a two-dimensional numpy array\nmatrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nprint(Simplex.determinant(matrix))\n\n# If it receives something that is not a square numpy matrix with rational elements, it returns None\nprint(Simplex.determinant(3))", "coFactorMatrix\nThis method receives a numpy matrix with rational components and returns the matrix of cofactors. The matrix must be square. If something that is not a square numpy matrix with rational elements is provided, it returns None. Examples:", "matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nm=Simplex.coFactorMatrix(matrix)\nprint(Simplex.printMatrix(m))\n\n# If the matrix is not square, it returns None\nmatrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])\nprint(Simplex.coFactorMatrix(matrix))\n\n# If it receives something that is not a square numpy matrix with rational elements, it returns None\nmatrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nprint(Simplex.coFactorMatrix(matrix))", "adjMatrix\nThis method receives a numpy matrix with rational components and returns the adjugate matrix. The matrix must be square. If something that is not a square numpy matrix with rational elements is provided, it returns None. 
Examples:", "matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nm=Simplex.adjMatrix(matrix)\nprint(Simplex.printMatrix(m))\n\n# If the matrix is not square, it returns None\nmatrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])\nprint(Simplex.adjMatrix(matrix))\n\n# If it receives something that is not a square numpy matrix with rational elements, it returns None\nmatrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nprint(Simplex.invertMatrix(matrix))", "invertMatrix\nThis method receives a numpy matrix with rational components and returns the inverse matrix. The matrix must be square. If something that is not a square numpy matrix with rational elements is provided, it returns None. Examples:", "matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nm=Simplex.invertMatrix(matrix)\nprint(Simplex.printMatrix(m))\n\n# If the matrix is not square, it returns None\nmatrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(5,4),rational(3,9)]])\nprint(Simplex.invertMatrix(matrix))\n\n# If it receives something that is not a square numpy matrix with rational elements, it returns None\nmatrix=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nprint(Simplex.invertMatrix(matrix))", "initializeMatrix\nThis method receives dimensions and returns a numpy matrix with rational elements of value 0. If the values provided are not integers, it returns None. Examples:", "m=Simplex.initializeMatrix(3, 2)\nprint(Simplex.printMatrix(m))\n\n# If something other than integers is provided, it returns None\nprint(Simplex.initializeMatrix(4.0,3.0))", "createRationalIdentityMatrix\nThis method receives a number and returns a numpy identity matrix with rational elements. If the value provided is not an integer, it returns None. 
Examples:", "m=Simplex.createRationalIdentityMatrix(3)\nprint(Simplex.printMatrix(m))\n\n# If something that is not an integer is provided, it returns None\nprint(Simplex.createRationalIdentityMatrix(4.0))", "multNumMatrix\nThis method receives a number as a rational and a numpy matrix with rational components, and returns the matrix resulting from multiplying the number by the given matrix. If something that is not a rational is provided as the number, or something that is not a numpy matrix with rational elements as the matrix, it returns None. Examples:", "matrix=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nnum= rational(3,4)\nm = Simplex.multNumMatrix(num, matrix)\nprint(Simplex.printMatrix(m))\n\n# If it receives something that is not a rational as the number, it returns None\nnum = 4\nprint(Simplex.multNumMatrix(num, matrix))", "twoMatrixEqual\nThis method receives two numpy matrices with rational components and returns True if they are equal, or False if they are not. If something that is not a numpy matrix with rational elements is provided, it returns None. 
Examples:", "matrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nmatrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nSimplex.twoMatrixEqual(matrix1, matrix2)\n\nmatrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nmatrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])\nSimplex.twoMatrixEqual(matrix1, matrix2)\n\n# If the dimensions are not equal, it returns False\nmatrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nmatrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nSimplex.twoMatrixEqual(matrix1, matrix2)\n\n# If it receives something that is not a numpy matrix with rational elements, it returns None\nprint(Simplex.twoMatrixEqual(matrix1, 3))", "printMatrix\nThis method receives a numpy matrix with rational components and converts it to string format. If something that is not a numpy matrix with rational elements is provided, it returns None. It also accepts a two-dimensional numpy array. Examples:", "matrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])\nprint(Simplex.printMatrix(matrix2))\n\n# It also accepts a two-dimensional numpy array\nmatrix2=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(9,6),rational(6,1)]])\nprint(Simplex.printMatrix(matrix2))\n\n# If it receives something that is not a numpy matrix or two-dimensional array with rational elements, it returns None\nprint(Simplex.printMatrix(matrix2))", "multMatrix\nThis method receives two numpy matrices with rational components and returns the matrix resulting from the product of the two given matrices. 
If the number of columns of the first matrix and the number of rows of the second are not equal, the matrices cannot be multiplied and it returns None. If something that is not a numpy matrix with rational elements is provided, it returns None. Examples:", "matrix1=np.matrix([[rational(4,7),rational(8,9),rational(2,5)],[rational(2,4),rational(3,4),rational(7,5)]])\nmatrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nm=Simplex.multMatrix(matrix1, matrix2)\nprint(Simplex.printMatrix(m))\n\n# If the number of columns of the first matrix and the number of rows of the second are not equal, it returns None\nmatrix1=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nmatrix2=np.matrix([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nprint(Simplex.multMatrix(matrix1, matrix2))\n\n# If it receives something that is not a numpy matrix with rational elements, it returns None\nmatrix1=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)]])\nmatrix2=np.array([[rational(4,7),rational(8,9)],[rational(2,4),rational(3,4)],[rational(4,6),rational(9,1)]])\nprint(Simplex.multMatrix(matrix1, matrix2))", "Simplex method\nvariablesNoiteration\nThis method is used to compute the variables that are not in the iteration. It receives as parameters a numpy matrix containing the problem's constraints and a numpy array containing the variables that are already in the iteration (these variables need not appear in order in the array). The method works with matrices of integer, float, and rational type. If the parameters provided are not correct, it returns None. If everything is correct, it returns a numpy array with the variables that are not in the iteration. 
Examples:", "matrix=np.matrix([[1,3,4,4,5],[12,45,67,78,9],[3,4,3,5,6]])\nvariablesIteration=np.array([1,3,4])\nSimplex.variablesNoiteration(matrix,variablesIteration)\n\nvariablesIteration=np.array([3,4,1])\nSimplex.variablesNoiteration(matrix,variablesIteration)\n\n# The method works with matrices with rational elements\nmatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],\n [rational(4,1),rational(6,4),rational(9,2)]])\nvariablesIteration=np.array([3,4,1])\nSimplex.variablesNoiteration(matrix,variablesIteration)\n\n# If something that is not a numpy matrix is passed as the first parameter, or something that is not a numpy array as the second,\n# it returns None\nprint(Simplex.variablesNoiteration(3,variablesIteration))", "calcMinNoNan\nThis method is used to find the minimum value of a set of values. It receives a numpy array with the values. The method selects the values that are rational and computes the minimum. If the parameters provided are not correct, it returns None. If everything is correct, it returns the minimum non-NaN value, or None if there are no rational values. Examples:", "setOfVal=np.array([rational(1,4),rational(4,7),rational(6,8),rational(6,4)])\nprint(Simplex.calcMinNoNan(setOfVal))\n\nsetOfVal=np.array([np.nan,rational(4,7),rational(6,8),rational(6,4)])\nprint(Simplex.calcMinNoNan(setOfVal))\n\n# If it is passed a set of values that are ALL non-rational, it returns None\nsetOfVal=np.array([np.nan,np.nan,np.nan,np.nan])\nprint(Simplex.calcMinNoNan(setOfVal))\n\n# If it is passed something that is not a numpy array, it returns None\nprint(Simplex.calcMinNoNan(2))", "calculateIndex\nThis method receives a numpy array and a value, and returns the position within the array of the first occurrence of that value. If the value does not appear in the array, it returns None. 
The method works with sets of integers and with sets of rational values. If the parameters are not valid, it returns None. Examples:", "array=np.array([3,4,5,6,7,2,3,6])\nvalue= 3\nSimplex.calculateIndex(array,value)\n\n# If the value is not in the array, it returns None\nvalue=78\nprint(Simplex.calculateIndex(array,value))\n\n# The method also works with rational\nvalue=rational(4,7)\narray=np.array([rational(1,4),rational(4,7),rational(6,8),rational(6,4)])\nSimplex.calculateIndex(array,value)\n\n# If the first parameter is not an array, or the second is not a number, it returns None\nprint(Simplex.calculateIndex(4,value))", "calculateBaseIteration\nThis method computes the base of the iteration and returns it as a numpy matrix. It receives the matrix containing all the problem constraints (without signs or resources), and the columns that belong to the iteration (which need not appear sorted in the array). The matrix may contain integer or rational values. If the parameters are not valid, it returns None.
Examples:", "totalMatrix=np.matrix([[1,2,3,4,5],[2,6,7,8,9],[6,3,4,5,6]])\ncolumnsOfIteration=np.array([1,2,0])\nSimplex.calculateBaseIteration(totalMatrix,columnsOfIteration)\n\n# The method also works with matrices of rational elements\ncolumnsOfIteration=np.array([1,2,0])\ntotalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),\n              rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),\n              rational(3,1)]])\nprint(Simplex.printMatrix(Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration))) \n\n\n# If more columns than the total matrix contains are passed, it returns None\ncolumnsOfIteration=np.array([0,1,2,3,4,5,6])\nprint(Simplex.calculateBaseIteration(totalMatrix,columnsOfIteration))\n\n# If the first parameter is not a numpy matrix, or the second is not a numpy array,\n# it returns None\nprint(Simplex.calculateBaseIteration(4,columnsOfIteration))", "showBase\nThis method receives a numpy matrix with rational elements, assumed to be the base of an iteration, together with the name to display for it, and prints it on screen with the name assigned to it within the iteration (\"B\"). If the parameters are not valid, it returns None. Examples:", "base=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],\n                [rational(4,1),rational(6,4),rational(9,2)]])\nSimplex.showBase(base,\"B\")\n\n# If the first parameter is not a numpy matrix with rational elements, or the second is not a string,\n# it returns None\nprint(Simplex.showBase(3,\"B\"))", "calculateIterationSolution\nThis method computes the solution of an iteration for its variables, and returns it as a numpy array.
To do so, it receives the base of the iteration as a numpy matrix, and the resources vector as a numpy array. The elements of the matrix and the array must be rational. If the parameters are not valid, it returns None. Examples:", "base=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),rational(1,3)],\n                [rational(4,1),rational(6,4),rational(9,2)]])\n\nresourcesVector=np.array([rational(2,1),rational(33,2),rational(52,8)])\nprint(Simplex.printMatrix(np.asmatrix(Simplex.calculateIterationSolution(base,resourcesVector))))\n\n# If the resources vector's length differs from the number of rows of the matrix, it returns None\nresourcesVector=np.array([rational(2,1),rational(33,2)])\nprint(Simplex.calculateIterationSolution(base,resourcesVector))\n\n# If the first parameter is not a numpy matrix of rational elements, or the second is not a numpy array\n# of rational elements, it returns None\nprint(Simplex.calculateIterationSolution(base,4))", "showSolution\nThis method receives the solution of an iteration, and prints it with the name assigned to it (\"x\"). The solution must be passed as a numpy column array with rational elements. If the parameters are not valid, it returns None. Examples:", "sol=np.array([[rational(2,2)],[rational(5,3)],[rational(6,1)],[rational(7,8)]])\nSimplex.showSolution(sol)\n\n# If it receives something that is not a numpy array with rational elements, it returns None\nsol=np.array([[2],[5],[6],[7]])\nprint(Simplex.showSolution(sol))", "calculateCB\nThis method computes the function vector for an iteration. It receives a numpy array with the columns of the iteration, and another numpy array with the complete function vector of the problem. If everything is correct, it returns, as a numpy array, the function vector for the given columns.
If the parameters are not valid, it returns None. Examples:", "columnsOfIteration=np.array([0,2,3])\nfunctionVector= np.array([0,1,2,3,5,5,6])\nSimplex.calculateCB(columnsOfIteration,functionVector)\n\n# The method also works with rational elements\ncolumnsOfIteration=np.array([0,2])\nfunctionVector= np.array([rational(0,1),rational(2,3),rational(5,5)])\nprint(Simplex.printMatrix(np.asmatrix(Simplex.calculateCB(columnsOfIteration,functionVector))))\n\n# If more columns than the function vector contains are passed, it returns None\ncolumnsOfIteration=np.array([0,1,2])\nfunctionVector= np.array([0,1])\nprint(Simplex.calculateCB(columnsOfIteration,functionVector))\n\n# If either parameter is not a numpy array, it returns None\nprint(Simplex.calculateCB([0,1],functionVector))", "showCB\nThis method receives a numpy array of rational elements containing the value of the function vector, and simply prints it with the corresponding name assigned to it (\"CB\"). If the parameters are not valid, it returns None. Examples:", "CBValue= np.array([rational(0,1),rational(2,3),rational(5,5)])\nSimplex.showCB(CBValue)\n\n# If it receives something that is not a numpy array of rational elements, it returns None\nCBValue= np.array([0,1,4,6])\nprint(Simplex.showCB(CBValue))", "calculateFunctionValueOfIteration\nThis method receives the solution of the iteration and the function vector for it, and returns a numpy matrix containing the value of the function for that iteration. The solution must be passed as a numpy column array (as the example shows). The function vector must be a numpy row array. Both arrays must have rational elements. If the parameters are not valid, it returns None.
Examples:", "# The solution must be passed as a column array\nsolution=np.array([[rational(2,1)],[rational(3,2)],[rational(2,5)]])\nCB = np.array([rational(0,1),rational(2,3),rational(5,5)])\nprint(Simplex.printMatrix(Simplex.calculateFunctionValueOfIteration(solution,CB)))\n\n# If the sizes of the two parameters differ, it returns None\nsolution=np.array([[rational(2,1)],[rational(3,2)],[rational(2,5)]])\nCB = np.array([rational(0,1),rational(5,5)])\nprint(Simplex.calculateFunctionValueOfIteration(solution,CB))\n\n# If either parameter is not a numpy array with rational elements, it returns None\nprint(Simplex.calculateFunctionValueOfIteration(solution,3))", "showFunctionValue\nThis method receives a numpy matrix containing the value of the function for the iteration, and prints it with its name (\"z\"). The method also works if the matrix has rational elements. If the parameters are not valid, it returns None. Examples:", "functionValue=np.matrix([34])\nSimplex.showFunctionValue(functionValue)\n\n# The method also works with rational matrices\nfunctionValue=np.matrix([rational(34,1)])\nSimplex.showFunctionValue(functionValue)\n\n# If it receives something that is not a numpy matrix, it returns None\nfunctionValue=np.matrix([34])\nprint(Simplex.showFunctionValue(4))", "calculateYValues\nThis method computes the y values for an iteration. To do so, it receives the base of the iteration as a numpy matrix, the total matrix containing all the problem constraints (without signs or resources) as a numpy matrix, and the variables that do not belong to the iteration as a numpy array. The elements of both matrices must be rational. If all the parameters are valid, it returns, as a numpy array, the value of each y for the iteration. Otherwise, it returns None.
Examples:", "variablesNoIteration=np.array([3,4])\niterationBase=np.matrix([[rational(6,7),rational(4,5),rational(3,1)],[rational(2,3),rational(7,6),\n              rational(1,3)], [rational(4,1),rational(6,4),rational(9,2)]])\ntotalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),\n              rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),\n              rational(3,1)]])\nprint(Simplex.printMatrix(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix)))\n\n# If the number of variables outside the iteration is greater than the total number of variables, it returns None\nvariablesNoIteration=np.array([0,1,2,3,4,5])\nprint(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix))\n\n# If the base has more or fewer rows than the total matrix, it returns None\nvariablesNoIteration=np.array([3,4])\niterationBase=np.matrix([[rational(6,7),rational(4,5),rational(3,1)], [rational(4,1),rational(6,4),rational(9,2)]])\ntotalMatrix=np.matrix([[rational(6,7),rational(4,5),rational(3,1),rational(5,3),rational(2,1)],[rational(2,3),rational(7,6),\n              rational(1,3),rational(2,5),rational(9,5)], [rational(4,1),rational(6,4),rational(9,2),rational(4,5),\n              rational(3,1)]])\nprint(Simplex.calculateYValues(variablesNoIteration,iterationBase,totalMatrix))\n\n# If the second and third parameters are not numpy matrices of rational elements, or the first is not a\n# numpy array, it returns None\n\nprint(Simplex.calculateYValues(variablesNoIteration,4,totalMatrix))", "showYValues\nThis method receives a numpy array containing the variables that do not belong to the iteration, and the y values in a numpy array with rational elements, and prints them with their names (\"y\" + variable number). If the parameters are not valid, it returns None.
Examples:", "variablesNoIteration=np.array([1,3])\ny = np.array([[rational(2,3),rational(4,6)],[rational(3,2),rational(4,1)]])\nSimplex.showYValues(variablesNoIteration,y)\n\n# If either parameter is not a numpy array (the second with rational elements), it\n# returns None\nprint(Simplex.showYValues(690,y))", "calculateZC\nThis method computes the values of the entry rule, and returns them as a numpy array. To do so, it receives the complete function vector as a numpy array; the variables that are not in the iteration as a numpy array; the function vector for the iteration as a numpy array; and finally, the y values for the iteration as a numpy array. All the arrays must have rational elements, except the one with the variables that are not in the iteration. If the parameters are not valid (see the examples), it returns None. Examples:", "functionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])\nvariablesNoIteration= np.array([0,2,3])\nCB = np.array([rational(2,1),rational(0,1)])\ny = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)\n              ,rational(-1,1)]])\nprint(Simplex.printMatrix(np.asmatrix(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))))\n\n# If any parameter is not a numpy array, it returns None\nprint(Simplex.calculateZC(89,variablesNoIteration,CB,y))\n\n# If the size of the function vector for the iteration (CB) is greater than the size of the y results, it returns None\nfunctionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])\nvariablesNoIteration= np.array([0,2,3])\nCB = np.array([rational(2,1),rational(0,1),rational(3,2),rational(2,1),rational(4,3)])\ny = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)\n              ,rational(-1,1)]])\nprint(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))\n\n# If there are more variables outside the iteration than variables in the total function vector, it returns None\nfunctionVector= np.array([rational(1,1),rational(3,1),rational(4,1),rational(5,1),rational(5,1)])\nvariablesNoIteration= np.array([0,1,2,3,4,5,6])\nCB = np.array([rational(2,1),rational(0,1)])\ny = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)\n              ,rational(-1,1)]])\nprint(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))\n\n# If the size of the function vector for the iteration is greater than that of the total function vector, it returns None:\nfunctionVector= np.array([rational(1,1),rational(3,1)])\nvariablesNoIteration= np.array([0,1,2,3,4,5,6])\nCB = np.array([rational(2,1),rational(0,1),rational(4,1),rational(5,1),rational(5,1)])\ny = np.array([[rational(2,1),rational(1,1)],[rational(-1,1),rational(-3,1)],[rational(1,1),rational(1,1)],[rational(0,1)\n              ,rational(-1,1)]])\nprint(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))\n\n# If something that is not a numpy array is passed, it returns None (the first, third and fourth parameters must\n# have rational elements)\nfunctionVector=np.array([3,-6,-3])\nprint(Simplex.calculateZC(functionVector,variablesNoIteration,CB,y))", "showZCValues\nThis method receives the entry-rule values (Z_C) in a numpy array, and the variables that do not belong to the iteration in another numpy array. If all the parameters are valid, it prints the entry-rule values with their associated names (\"Z_C\" + variable number). The method works with both rational and integer elements. If the parameters are not valid (see the examples), it returns None.
Examples:", "variablesNoIteration= np.array([0,2,3])\nZ_C=np.array([3,-6,-3])\nSimplex.showZCValues(variablesNoIteration,Z_C)\n\n# It also works with rational\nvariablesNoIteration= np.array([0,2,3])\nZ_C=np.array([rational(3,5),rational(-6,2),rational(-3,1)])\nSimplex.showZCValues(variablesNoIteration,Z_C)\n\n# If the number of entry-rule values differs from the number of values in the iteration, it returns None\nZ_C=np.array([3,-6])\nprint(Simplex.showZCValues(variablesNoIteration,Z_C))\n\n# If either parameter is not a numpy array, it returns None\nprint(Simplex.showZCValues(3,Z_C))", "thereIsAnotherIteration\nThis method receives the entry-rule values in a numpy array. It returns True if there is another iteration; -1 if there are infinitely many solutions; or False if there are no more iterations. The method works with both rational and integer elements. If the parameters are not valid (see the examples), it returns None. Examples:", "inputRuleValues=np.array([3,-6])\nSimplex.thereIsAnotherIteration(inputRuleValues)\n\ninputRuleValues=np.array([0,-6])\nSimplex.thereIsAnotherIteration(inputRuleValues)\n\ninputRuleValues=np.array([0,6])\nSimplex.thereIsAnotherIteration(inputRuleValues)\n\ninputRuleValues=np.array([1,6])\nSimplex.thereIsAnotherIteration(inputRuleValues)\n\n# The method also works with rational\ninputRuleValues=np.array([rational(1,3),rational(-2,3)])\nSimplex.thereIsAnotherIteration(inputRuleValues)\n\n# If it receives something that is not a numpy array, it returns None\nprint(Simplex.thereIsAnotherIteration(2))", "showNextIteration\nThis method prints an explanation of the result returned by the previous method.
If it receives True, it prints the explanation for when the problem is not finished and there are more iterations; if it receives False, the explanation for when the problem has finished; and if it receives -1, the explanation for when there are infinitely many solutions. If it receives anything else, it returns None. Examples:", "Simplex.showNextIteration(True)\n\nSimplex.showNextIteration(False)\n\nSimplex.showNextIteration(-1)\n\n# If it receives anything other than True, False or -1, it returns None\nprint(Simplex.showNextIteration(-2))", "calculateVarWhichEnter\nThis method receives a numpy array containing the variables that are not in the iteration, and another numpy array containing the entry-rule values. If the input parameters are valid, it returns the variable that must enter in the next iteration (the one with the minimum value). The method works with both rational and integer elements. If the parameters are not valid, it returns None. Examples:", "variablesNoIteration=np.array([0,2,3])\ninputRuleValues=np.array([3,-6,-3])\nSimplex.calculateVarWhichEnter(variablesNoIteration,inputRuleValues)\n\n# The method also works with rational elements\nvariablesNoIteration=np.array([0,2,3])\ninputRuleValues=np.array([rational(3,9),rational(-6,2),rational(-3,2)])\nSimplex.calculateVarWhichEnter(variablesNoIteration,inputRuleValues)\n\n# If either parameter is not a numpy array, it returns None\nprint(Simplex.calculateVarWhichEnter(variablesNoIteration,5))", "showVarWhichEnter\nThis method receives the entering variable and prints it, indicating that it is the variable that enters. If it does not receive a number, it returns None.
Examples:", "variableWhichEnter= 2\nSimplex.showVarWhichEnter(variableWhichEnter)\n\n# If the parameter is not a number, it returns None\nprint(Simplex.showVarWhichEnter(\"adsf\"))", "calculateExitValues\nThis method receives the entry-rule values in a numpy array, the y values in another numpy array, and the solution of the iteration in a numpy column array. All the array elements must be rational. If all the parameters are valid, it returns the exit-rule values. If the parameters are not valid (see the examples), it returns None. Examples:", "inputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])\nyValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),\n              rational(5,1),rational(6,1)]])\nsol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])\nSimplex.calculateExitValues(inputRuleValues,yValues,sol)\n\n# If the number of entry-rule values differs from the number of y value sets, it returns None\ninputRuleValues=np.array([rational(2,1),rational(-3,1)])\nyValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),\n              rational(5,1),rational(6,1)]])\nsol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])\nprint(Simplex.calculateExitValues(inputRuleValues,yValues,sol))\n\n# Likewise if there are fewer y value sets than entry-rule values, it returns None\ninputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])\nyValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),]])\nsol=np.array([[rational(1,1)],[rational(0,1)],[rational(-4,2)]])\nprint(Simplex.calculateExitValues(inputRuleValues,yValues,sol))\n\n# If the length of the solution is smaller than the number of values in some y set, it returns None\ninputRuleValues=np.array([rational(2,1),rational(-3,1),rational(-4,3)])\nyValues=np.array([[rational(2,1),rational(3,1),rational(4,1)],[rational(4,1),rational(6,1),rational(8,1),],[rational(3,1),\n              rational(5,1),rational(6,1)]])\nsol=np.array([[rational(1,1)],[rational(0,1)]])\nprint(Simplex.calculateExitValues(inputRuleValues,yValues,sol))\n\n# If any parameter is not a numpy array with rational elements, it returns None\nprint(Simplex.calculateExitValues(inputRuleValues,66,sol))", "showExitValues\nThis method receives the exit-rule values in a numpy array with rational elements, and prints them together with the name they are given (\"O\") and the criterion used to choose the exit value (min). If it does not receive a numpy array, it returns None. Examples:", "exitValues=np.array([rational(1,2),rational(-3,2),rational(0,1),rational(5,2)])\nSimplex.showExitValues(exitValues)\n\n# If it receives something that is not a numpy array with rational elements, it returns None\nexitValues=np.array([1,-3,0,5])\nprint(Simplex.showExitValues(exitValues))", "calculateO\nThis method computes the value of O for a set of exit values, received as a numpy array. This value is the minimum of the received values. The determination of which values have a negative or zero denominator is done in the calculateExitValues method, so here an array with rational and NaN values is received. If all the values are NaN, it returns None. If it does not receive a numpy array, it returns None.
Examples:", "exitValues=np.array([rational(1,3),rational(-3,2),rational(0,1),rational(5,4)])\nprint(Simplex.calculateO(exitValues))\n\n# If all the received values are NaN, they are skipped and it returns None\nexitValues=np.array([np.nan,np.nan,np.nan,np.nan])\nprint(Simplex.calculateO(exitValues))\n\n# If it receives something that is not a numpy array with rational or NaN elements, it returns None\nexitValues=np.array([-1,-3,-3,-5])\nprint(Simplex.calculateO(exitValues))", "showOValue\nThis method receives the value of O and simply prints it with its associated name (\"O\"). If it does not receive a number, it returns None. Examples:", "O = 3\nSimplex.showOValue(O)\n\nO = rational(3,4)\nSimplex.showOValue(O)\n\n# If the parameter is not a number, it returns None\nprint(Simplex.showOValue([4,3]))", "calculateVarWhichExit\nThis method receives, in a numpy array, the variables or columns that belong to the iteration (they must appear in the order used in the problem), and in another numpy array, the exit-rule values, which must be rational or NaN. If the parameters are valid, it returns the variable that exits in this iteration, or None if all the values are NaN. If it does not receive numpy arrays, it returns None.
Examples:", "outputRuleValues=np.array([rational(1,2),rational(-3,-2),rational(0,1),rational(5,7)])\ncolumnsOfIteration=np.array([0,2,3])\nSimplex.calculateVarWhichExit(columnsOfIteration,outputRuleValues)\n\n# If the exit-rule values are all negative or divided by 0, i.e. NaN is passed, it returns None\noutputRuleValues=np.array([np.nan,np.nan,np.nan,np.nan])\nprint(Simplex.calculateVarWhichExit(columnsOfIteration,outputRuleValues))\n\n# If either parameter is not a numpy array, it returns None\noutputRuleValues=np.array([1,-3,0,5])\nprint(Simplex.calculateVarWhichExit(4,outputRuleValues))", "showVarWhichExit\nThis method receives the exiting variable and prints it, together with an indication that it is the variable that will exit in this iteration. If it does not receive a number, it returns None. Examples:", "varWhichExit=4\nSimplex.showVarWhichExit(varWhichExit)\n\n# If the parameter is not a number, it returns None.\nprint(Simplex.showVarWhichExit(np.array([3,4])))", "showIterCol\nThis method receives a numpy array with the columns or variables of the iteration, and simply prints them, together with an indication that they are the variables of the iteration. If the parameters are not valid (see the examples), it returns None. Examples:", "columnsOfIteration=np.array([3,4,5])\nSimplex.showIterCol(columnsOfIteration)\n\n# If it receives something that is not a numpy array, it returns None\nprint(Simplex.showIterCol(3))", "solveIteration\nThis method receives the complete constraints matrix of the problem (without signs or resources) as a numpy matrix, followed by three numpy arrays containing the resources vector, the value of every variable in the function, and the columns or variables of the current iteration. The elements of the matrix, the resources and the function vector must be rational.
If all the parameters are valid, it prints the development of the iteration and finally returns the solution of the iteration, the value of the function for the iteration, the variable that would enter, the variable that would exit, and a value indicating whether there are more iterations (True), no more iterations (False), or infinitely many solutions (-1). If the parameters are not valid (see the examples), it returns None. Examples:", "totalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),\n              rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)\n              ,rational(9,1),rational(0,1), rational(1,1)]])\nfunctionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])\nb = np.array([rational(2,1),rational(4,1),rational(1,1)])\ncolumnsOfIteration=np.array([3,4,5])\nSimplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration)\n\n# If the number of resources (b) differs from the number of constraints, it returns None\ntotalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),\n              rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)\n              ,rational(9,1),rational(0,1), rational(1,1)]])\nfunctionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])\nb = np.array([[rational(2,1)],[rational(4,1)]])\ncolumnsOfIteration=np.array([3,4,5])\nprint(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))\n\n# If the function has a different number of variables than the constraints, it returns None\ntotalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),\n              rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)\n              ,rational(9,1),rational(0,1), rational(1,1)]])\nfunctionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1)])\nb = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])\ncolumnsOfIteration=np.array([3,4,5])\nprint(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))\n\n# If the number of columns or variables of the iteration does not match the number of constraints, it returns None\ntotalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),\n              rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)\n              ,rational(9,1),rational(0,1), rational(1,1)]])\nfunctionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])\nb = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])\ncolumnsOfIteration=np.array([3,4])\nprint(Simplex.solveIteration(totalMatrix,b,functionVector,columnsOfIteration))\n\n# If the first parameter is not a numpy matrix with rational elements, or the rest are not numpy arrays\n# with rational elements (except the iteration columns, which are integer values), it returns None.\ntotalMatrix= np.matrix([[rational(-1,1),rational(4,1),rational(5,1),rational(7,1),rational(0,1),rational(0,1)],[rational(4,1),\n              rational(6,1),rational(7,1),rational(0,1),rational(1,1),rational(0,1)],[rational(7,1),rational(-2,1),rational(-3,1)\n              ,rational(9,1),rational(0,1), rational(1,1)]])\nfunctionVector =np.array([rational(2,1),rational(-3,1),rational(5,1),rational(0,1),rational(0,1),rational(1,1)])\nb = np.array([[rational(2,1)],[rational(4,1)],[rational(1,1)]])\ncolumnsOfIteration=np.array([3,4,5])\nprint(Simplex.solveIteration(4,b,functionVector,columnsOfIteration))", 
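The decision logic used inside an iteration — the entry rule over Z_C and the min-ratio exit rule over the y column of the entering variable — can be sketched with Python's standard `fractions.Fraction` in place of the library's `rational` class. This is a minimal illustration of the rules described above, not the `Simplex.solveIteration` implementation:

```python
from fractions import Fraction as F

def entry_rule(z_c):
    # True: another iteration exists; False: optimal; -1: infinitely many solutions
    if any(v < 0 for v in z_c):
        return True
    return -1 if any(v == 0 for v in z_c) else False

def min_ratio(solution, y_column):
    # Exit rule: minimum of x_i / y_i over the entries with y_i > 0
    ratios = [x / y for x, y in zip(solution, y_column) if y > 0]
    return min(ratios) if ratios else None

z_c = [F(3), F(-6), F(-3)]          # entry-rule values
solution = [F(2), F(4), F(1)]       # current basic solution
y_enter = [F(1), F(2), F(0)]        # y column of the entering variable

print(entry_rule(z_c))              # True: a negative Z_C value exists
print(min_ratio(solution, y_enter)) # 2 = min(2/1, 4/2)
```

The return convention (True / False / -1) mirrors the one shown in the thereIsAnotherIteration examples.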
"identityColumnIsInMatrix\nThis method receives a numpy matrix with rational elements, and a number corresponding to the index of a column of the identity matrix. If all the parameters are valid, it returns the index of the column of the given matrix where that identity column is found. If the indicated identity column is not in the matrix, it returns None. If the parameters are not valid (see the examples), it also returns None. Examples:", "matrix=np.matrix([[rational(3,2),rational(0,1),rational(1,1)],[rational(3,5),rational(4,5),rational(0,1)],[rational(5,6),\n              rational(7,8),rational(0,1)]])\ncolumn=0\n'''We look for column 0 of the identity matrix: [[1],\n                                                 [0],\n                                                 [0]]'''\nSimplex.identityColumnIsInMatrix(matrix,column)\n\n# If the identity-matrix column is not in the matrix, it returns None\ncolumn=2\nprint(Simplex.identityColumnIsInMatrix(matrix,column))\n\n# If the requested column appears more than once, the first occurrence is returned\nmatrix=np.matrix([[rational(1,1),rational(0,1),rational(1,1)],[rational(0,1),rational(4,5),rational(0,1)],[rational(0,1),\n              rational(7,8),rational(0,1)]])\ncolumn=0\nSimplex.identityColumnIsInMatrix(matrix,column)\n\n# If a number greater than or equal to the number of columns of the matrix is passed, it returns None\nmatrix=np.matrix([[rational(1,1),rational(0,1),rational(1,1)],[rational(0,1),rational(4,5),rational(0,1)],[rational(0,1),\n              rational(7,8),rational(0,1)]])\ncolumn=4\nprint(Simplex.identityColumnIsInMatrix(matrix,column))\n\n# If the first parameter is not a numpy matrix with rational elements, or the second is not a number,\n# it returns None\nprint(Simplex.identityColumnIsInMatrix(matrix,\"[2,3]\"))", "variablesFirstIteration\nThis method receives a numpy matrix, which is the complete matrix of the problem and must have rational elements.
If all the parameters are valid, it computes which variables belong to the first iteration of the problem (i.e., where the columns of the identity matrix are located within the given matrix) and returns them in a numpy array. If one of the identity-matrix columns does not appear, None is returned in its position. If the parameters are not valid (see the examples), it returns None. Examples:", "totalMatrix=np.matrix([[rational(1,1),rational(2,1),rational(3,1),rational(4,1),rational(0,1)],[rational(0,1),rational(3,1),\n              rational(4,1),rational(7,1),rational(1,1)]])\nSimplex.variablesFirstIteration(totalMatrix)\n\n# If one of the identity-matrix columns does not appear, None is returned in its position\ntotalMatrix=np.matrix([[rational(1,1),rational(2,1),rational(3,1),rational(4,1),rational(0,1)],[rational(1,1),rational(3,1),\n              rational(4,1),rational(7,1),rational(1,1)]])\nSimplex.variablesFirstIteration(totalMatrix)\n\n# If an identity-matrix column appears more than once, only the first occurrence is returned\ntotalMatrix=np.matrix([[rational(1,1),rational(1,1),rational(3,1),rational(4,1),rational(0,1)],[rational(0,1),rational(0,1),\n              rational(4,1),rational(7,1),rational(1,1)]])\nSimplex.variablesFirstIteration(totalMatrix)\n\n# If it receives something that is not a numpy matrix of rational elements, it returns None\nprint(Simplex.variablesFirstIteration(4))", "calculateColumnsOfIteration\nThis method receives the variable that will enter in the next iteration, the variable that will exit in the next iteration, and, in a numpy array, the variables of the previous iteration. If the parameters are valid, it returns, in a numpy array, the variables of the current iteration. If the parameters are not valid (see the examples), it returns None.
Ejemplos:", "variableWhichEnters=4\nvariableWhichExits=3\npreviousVariables=np.array([1,3,5])\nSimplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,previousVariables)\n\n# Si se intenta sacar una variable que no está, no saca nada\nvariableWhichEnters=4\nvariableWhichExits=6\npreviousVariables=np.array([1,3,5])\nSimplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,previousVariables)\n\n# Si se mete algo que no es un array de numpy en el tercer parámetro,o algo que no es un número en los dos primeros, devuelve\n# None\nprint(Simplex.calculateColumnsOfIteration(variableWhichEnters,variableWhichExits,3))", "completeSolution\nEste método recibe las variables de la iteración en un array de numpy, el número total de variables del problema, y la solución de la iteración en un array de numpy, con todos sus elementos rational. Si todos los parámetros se introducen de forma correcta, devolverá la solución completa, es decir, el valor de cada una de las variables para dicha iteración. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "variablesOfLastIter=np.array([2,3,4])\nnumberOfVariables=6\niterationSolution=np.array([rational(4,1),rational(6,4),rational(7,3)])\nprint(Simplex.printMatrix(Simplex.completeSolution(variablesOfLastIter,numberOfVariables,iterationSolution)))\n\n# Si el número de variables de la última iteración es diferente que la longitud de la solución, devuelve None\nvariablesOfLastIter=np.array([3,4])\nnumberOfVariables=6\niterationSolution=np.array([rational(4,1),rational(6,4),rational(7,3)])\nprint(Simplex.completeSolution(variablesOfLastIter,numberOfVariables,iterationSolution))\n\n# Si recibe algo que no es un array de numpy en el primer y tercer parámetro(este debe ser de elementos rational), o algo que\n# no es un número en el segundo, devuelve None\nprint(Simplex.completeSolution(variablesOfLastIter,[9,9],iterationSolution))", "addIdentityColumns\nEste método recibe una matriz de numpy con elementos rational, y devuelve en una matriz de numpy, cuáles son las columnas de la matriz identidad que no tiene. En caso de que ya tenga todas las columnas de la matriz identidad, devuelve un array vacío. En caso de recibir algo que no sea una matriz de numpy, devuelve None. 
Ejemplos:", "matrixInitial=np.matrix([[rational(3,2),rational(4,3),rational(6,3)],[rational(6,9),rational(7,3),rational(8,5)],[rational(4,3),\n rational(5,4),rational(7,5)]])\nprint(Simplex.printMatrix(Simplex.addIdentityColumns(matrixInitial)))\n\n# Si ya hay alguna columna de la matriz identidad, devuelve solo las que faltan\nmatrixInitial=np.matrix([[rational(3,4),rational(1,1),rational(6,3)],[rational(6,4),rational(0,1),rational(8,9)],[rational(4,5),\n rational(0,1),rational(7,6)]])\nprint(Simplex.printMatrix(Simplex.addIdentityColumns(matrixInitial)))\n\n# Si ya están todas las columnas de la mtriz identidad, devuelve un array vacío\nmatrixInitial=np.matrix([[rational(0,1),rational(1,1),rational(0,1)],[rational(1,1),rational(0,1),rational(0,1)],[rational(0,1),\n rational(0,1),rational(1,1)]])\nSimplex.addIdentityColumns(matrixInitial)\n\n# Si se pasa algo que no es una matriz de numpy con elementos rational, devuelve None\nprint(Simplex.addIdentityColumns(4))", "isStringList\nEste método recibe una lista y comprueba si todos los elementos de la misma son strings, en ese caso devuelve True. Si algún elemento de la lista no es un string devuelve False.Se utiliza principalmente para comprobar que los parámetros de entrada de algunos métodos son correctos. En caso de no introducir una lista, devuelve None. Ejemplos:", "lis=[\"hola\",\"adios\",\"hasta luego\"]\nSimplex.isStringList(lis)\n\nlis=[\"hola\",4,\"hasta luego\"]\nSimplex.isStringList(lis)\n\n# Si recibe algo que no es una lista, devuelve None\nprint(Simplex.isStringList(4))", "calculateArtificialValueInFunction\nEste método calcula y devuelve el coeficiente de la variable artificial para la función objetivo. Aunque como sabemos este valor será infinito y se añadirá con coeficiente negativo, basta con que este valor sea superior a la suma de los valores absolutos de los coeficientes que ya están en el vector función. 
The method works both with integer values and with rational ones, but it always returns a rational. If it receives something that is not a numpy array, it returns None. Examples:", "array=np.array([2,3,4,5])\nprint(Simplex.calculateArtificialValueInFunction(array))\n\narray=np.array([2,3,4,-5])\nprint(Simplex.calculateArtificialValueInFunction(array))\n\narray=np.array([rational(2,5),rational(3,4),rational(4,9),rational(-5,7)])\nprint(Simplex.calculateArtificialValueInFunction(array))\n\n# If it receives something that is not a numpy array, it returns None\nprint(Simplex.calculateArtificialValueInFunction(4))", "addArtificialVariablesToFunctionVector\nThis method receives a numpy array with rational elements containing the coefficients of the objective function (the function vector), and a number, which is the number of artificial variables to add. If the parameters are entered correctly, it returns a numpy array containing the complete function vector, with the coefficients of the artificial variables already added. If the parameters are not valid (see examples), it returns None. Examples:", "vector=np.array([rational(3,1),rational(4,1),rational(5,1),rational(6,1)])\nnumOfArtificialVariables= 2\nprint(Simplex.printMatrix(np.asmatrix(Simplex.addArtificialVariablesToFunctionVector\n                                      (vector,numOfArtificialVariables))))\n\n# If the first parameter is not a numpy array with rational elements, or the second is not a number,\n# it returns None\nprint(Simplex.addArtificialVariablesToFunctionVector(vector,[2,3]))", "calculateWhichAreArtificialVariables\nThis method receives a numpy array containing the coefficients of the objective function, with the artificial variables included (in order), and a number representing how many artificial variables there are. If the parameters are correct, it returns which variables are the artificial ones. 
The method works both with rational elements and with integers. If the parameters are not valid (see examples), it returns None. Examples:", "vector=np.array([3,4,5,6,-20,-40])\nnumOfArtificialVariables= 2\nSimplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)\n\n# If the artificial variables have not been included, it assumes they are the last ones\nvector=np.array([3,4,5,6])\nnumOfArtificialVariables= 2\nSimplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)\n\nvector=np.array([rational(3,2),rational(4,4),rational(5,6),rational(6,9),rational(-20,1),rational(-40,1)])\nnumOfArtificialVariables= 2\nSimplex.calculateWhichAreArtificialVariables(vector,numOfArtificialVariables)\n\n# If the first parameter is not a numpy array, or the second is not a number, it returns None\nnumOfArtificialVariables= 2\nprint(Simplex.calculateWhichAreArtificialVariables(2,numOfArtificialVariables))", "checkValueOfArtificialVariables\nThis method receives a list containing the artificial variables of the problem and, in a numpy array with rational elements, its solution. If the parameters are entered correctly, the method checks whether any of the artificial variables takes a positive value and, in that case, returns them in a list (if this happens, the problem has no solution). This method is somewhat special, since it does not follow the behavior of the others: it receives the artificial variables counted starting from 0 (so in the first example, 4 and 5 are the last two), while the variables it returns are counted starting from 1. If the parameters are not valid (see examples), it returns None. 
Ejemplos:", "varArtificial=[4,5]\nsolution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(3,1)],[rational(9,1)]])\nSimplex.checkValueOfArtificialVariables(varArtificial,solution)\n\nvarArtificial=[4,5]\nsolution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(-3,1)],[rational(-9,1)]])\nSimplex.checkValueOfArtificialVariables(varArtificial,solution)\n\nvarArtificial=[4,5]\nsolution=np.array([[rational(34,2)],[rational(56,4)],[rational(7,8)],[rational(89,7)],[rational(0,1)],[rational(9,1)]])\nSimplex.checkValueOfArtificialVariables(varArtificial,solution)\n\n# Si recibe algo que no sea una lista en el primer parámetro o un array de numpy de elementos rational en el segundo, devuelve \n# None\nprint(Simplex.checkValueOfArtificialVariables(5,solution))", "omitComments\nEste método recibe una lista de strings, y lo que hace es eliminar aquellas ocurrencias que comiencen por el caracter \"//\" o \"#\". También en aquellas ocurrencias que estos caracteres aparezcan en cualquier parte de la cadena, elimina la subcadena a partir de estos caracteres. Devolverá la lista, ya con estas ocurrencias eliminadas. Se utiliza para eliminar comentarios. En caso de recibir algo que no sea una lista, devuelve None.Ejemplos:", "listOfstrings=[\"//hola\",\"2 3 4 <=4 //first\",\"#hola\",\"adios\"]\nSimplex.omitComments(listOfstrings)\n\n# En caso de no recibir una lista de strings, devuelve None\nprint(Simplex.omitComments([5,3]))", "proccessFile\nEste método recibe un archivo por parámetro, que debe contener un problema de programación lineal en el siguiente formato:\nY devuelve en este orden, la matriz de restricciones en una matriz numpy,el vector de recursos en un array de numpy, los signos de las restricciones en una lista de strings y un string que contiene la función objetivo a optimizar. Para ver como abrir un archivo consultar los ejemplos. 
If something that is not a file is passed, it returns None. Examples:", "# Enter here the path of the file to open\nfile = open('../Files/file2.txt','r')\nproblem=Simplex.proccessFile(file)\nprint(Simplex.printMatrix(problem[0]))\nprint(Simplex.printMatrix(np.asmatrix(problem[1])))\nprint(problem[2])\nprint(problem[3])\n\n# If something that is not a file is passed, it returns None\nprint(Simplex.proccessFile(4))", "convertFunctionToMax\nThis method receives a string containing the objective function of the problem in the following format:\nmax/min 2 -3\nThe method returns, in a numpy array of rational elements, the coefficients of the function in maximization form, since that is how it is used in the standard form; therefore, if a minimization function is given, the coefficients are returned with their signs flipped. If what it receives is not a string, it returns None. Example:", "function=\"max 2 -3\"\nprint(Simplex.printMatrix(np.asmatrix(Simplex.convertFunctionToMax(function))))\n\nfunction=\"min 2 -3\\n\"\nprint(Simplex.printMatrix(np.asmatrix(Simplex.convertFunctionToMax(function))))\n\n# If it receives something that is not a string, it returns None\nfunction=\"min 2 -3\\n\"\nprint(Simplex.convertFunctionToMax(3))", "invertSign\nThis method receives a string containing a sign (it must be <, <=, >, >= or =) and returns its opposite sign in another string. If it does not receive a string as a parameter, it returns None. Examples:", "previousSign=\"<\"\nSimplex.invertSign(previousSign)\n\npreviousSign=\">\"\nSimplex.invertSign(previousSign)\n\npreviousSign=\"<=\"\nSimplex.invertSign(previousSign)\n\npreviousSign=\">=\"\nSimplex.invertSign(previousSign)\n\npreviousSign=\"=\"\nSimplex.invertSign(previousSign)\n\n# If something that is not a string is given, it returns None\npreviousSign=3\nprint(Simplex.invertSign(previousSign))", "negativeToPositiveResources\nThis method is used to turn negative resources into positive ones, since negative resources must not occur. 
To do so, it performs the necessary transformations, returning a numpy matrix with rational elements containing the constraints, a numpy array with rational elements containing the resources, and a list of strings with the sign of each constraint, with all the changes already applied. The input parameters are the same as the outputs it produces, but before the transformations are applied, that is, a numpy matrix, a numpy array and a list of strings. For resources that are already positive, no transformation is performed; it simply returns what it receives. If the parameters are not valid (see examples), it returns None. Examples:", "matrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],\n                  [rational(3,1),rational(4,2),rational(6,4)]])\nresources=np.array([rational(1,4),rational(-4,1),rational(5,2)])\nsign=[\"<=\",\"<\",\">\"]\nstd=Simplex.negativeToPositiveResources(matrix,resources,sign)\nprint(Simplex.printMatrix(std[0]))\nprint(Simplex.printMatrix(np.asmatrix(std[1])))\nprint(std[2])\n\nmatrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],\n                  [rational(3,1),rational(4,2),rational(6,4)]])\nresources=np.array([rational(1,4),rational(4,1),rational(5,2)])\nsign=[\"<=\",\"<\",\">\"]\nstd=Simplex.negativeToPositiveResources(matrix,resources,sign)\nprint(Simplex.printMatrix(std[0]))\nprint(Simplex.printMatrix(np.asmatrix(std[1])))\nprint(std[2])\n\n# If the length of the resource vector differs from the number of rows of the matrix, it returns None\nmatrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],\n                  [rational(3,1),rational(4,2),rational(6,4)]])\nresources=np.array([rational(1,4),rational(-4,1)])\nsign=[\"<=\",\"<\",\">\"]\nstd=Simplex.negativeToPositiveResources(matrix,resources,sign)\nprint(Simplex.negativeToPositiveResources(matrix,resources,sign))\n\n# If the number of signs differs from the length of the resource vector or from the number of rows of the matrix,\n# it returns None\nmatrix=np.matrix([[rational(1,2),rational(2,3),rational(4,9)],[rational(4,3),rational(6,2),rational(7,4)],\n                  [rational(3,1),rational(4,2),rational(6,4)]])\nresources=np.array([rational(1,4),rational(-4,1),rational(5,2)])\nsign=[\"<=\",\"<\"]\nstd=Simplex.negativeToPositiveResources(matrix,resources,sign)\nprint(Simplex.negativeToPositiveResources(matrix,resources,sign))\n\n# If the first parameter is not a numpy matrix with rational elements, the second is not a numpy\n# array with rational elements, or the third is not a list of strings, it returns None\nresources=np.array([1,-4,5])\nsign=[\"<=\",\"<\",\">\"]\nprint(Simplex.negativeToPositiveResources(matrix,resources,sign))", "convertToStandardForm\nThis method receives a numpy matrix with rational elements containing the constraints of the problem, a numpy array with rational elements containing the resource vector, a list of strings containing the signs of the constraints, and a string containing the function in the format \"max/min 2 -3\". If all the parameters are correct, the method returns the parameters it received, transformed into standard form (the function is returned as a numpy array with rational elements, in its maximization form). If the parameters are not valid (see examples), it returns None. 
Ejemplos:", "matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])\nresources=np.array([rational(10,1),rational(15,1)])\nsign=[\"<=\",\">=\"]\nfunction=\"min -2 -3 -4 \"\nstd=Simplex.convertToStandardForm(matrix,resources,sign,function)\nprint(Simplex.printMatrix(std[0]))\nprint(Simplex.printMatrix(np.asmatrix(std[1])))\nprint(std[2])\nprint(Simplex.printMatrix(np.asmatrix(std[3])))\n\n# Si la longitud del vector de recursos, es diferente del número de filas de la matriz, devuelve None\nmatrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])\nresources=np.array([rational(10,1),rational(15,1),rational(52,1)])\nsign=[\"<=\",\">=\"]\nfunction=\"min -2 -3 -4 \"\nprint(Simplex.convertToStandardForm(matrix,resources,sign,function))\n\n# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz, \n# devuelve None\nmatrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])\nresources=np.array([rational(10,1),rational(15,1)])\nsign=[\"<=\",\">=\",\"=\"]\nfunction=\"min -2 -3 -4 \"\nprint(Simplex.convertToStandardForm(matrix,resources,sign,function))\n\n# Si se pasa por parámetro algo que no es una matriz de numpy con elementos rational en el primer parámetro, algo que no es un \n# array de numpy con elementos rational en el segundo,algo que no es una lista de strings en el tercero o algo que no es un\n# string en el cuarto,devuelve None\nmatrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])\nresources=np.array([rational(10,1),rational(15,1)])\nfunction=\"min -2 -3 -4 \"\nprint(Simplex.convertToStandardForm(matrix,resources,[4,0],function))", "showStandarForm\nEste método recibe una matriz de numpy con elementos rational que es la matriz de coeficientes, un array de numpy con elementos rational que es el vector 
de recursos y un array de numpy con elementos rational que es el vector de la función a optimizar. Todos los parámetros son introducidos en forma estándar y son mostrados, en un formato más visual. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "matrix=np.matrix([[rational(3,1),rational(2,1),rational(1,1)],[rational(2,1),rational(5,1),rational(3,1)]])\nresources=np.array([rational(10,1),rational(15,1)])\nfunction=np.array([rational(14,6),rational(25,2)])\nSimplex.showStandarForm(matrix,resources,function)\n\n# Si recibe algo que no es una matriz de numpy con elementos rational, en el primer parámetro, algo que no es un array de numpy \n# con elementos rational en el segundo y tercer parámetro, devuelve None\nfunction=np.array([3,4])\nprint(Simplex.showStandarForm(matrix,resources,function))", "solveProblem\nEste método resuelve el problema de programación lineal que se le pasa por parámetro. Para ello, recibe una matriz de numpy con elementos rational que contiene las restricciones, sin signos ni recursos, un array de numpy con elementos rational que contiene los recursos, una lista de strings, que contienen los signos de las restricciones, un string que contiene la función en el formato \"max/min 2 -3\" y un valor True o False, que determina si se quiere obtener también la solución del problema dual al introducido. El método devuelve en este orden la solución del problema(valor de las variables),el valor de la función objetivo para esa solución, una explicación del tipo de problema y el valor de las variables de la solución del problema dual, en caso de que se introduzca True, como último parámetro. No es necesario que se introduzca el problema en forma estándar puesto que el método ya realiza la transformación internamente.En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "# Si se pasa False no devuelve la solución dual\nmatrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1),rational(0,1)])\nsigns=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nsolutionOfDualProblem=False\nsol=Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem)\nprint(Simplex.printMatrix(np.asmatrix(sol[0])))\nprint(Simplex.printMatrix(sol[1]))\nprint(sol[2])\n\n# Si se pasa True devolverá la soución dual\nmatrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1),rational(0,1)])\nsigns=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nsolutionOfDualProblem=True\nsol=Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem)\nprint(Simplex.printMatrix(np.asmatrix(sol[0])))\nprint(Simplex.printMatrix(sol[1]))\nprint(sol[2])\nprint(Simplex.printMatrix(np.asmatrix(sol[3])))\n\n# Si la longitud del vector de recursos, es diferente del número de filas de la matriz, devuelve None\nmatrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1)])\nsigns=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nsolutionOfDualProblem=True\nprint(Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem))\n\n# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz, \n# devuelve None\nmatrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1),rational(0,1)])\nsign=[\"<=\",\"<=\",\">=\",\"=\"]\nfunction=\"max 2 1\"\nsolutionOfDualProblem=True\nprint(Simplex.solveProblem(matrix,resources,sign,function,solutionOfDualProblem))\n\n# Si se pasa por parámetro algo que no es 
una matriz de numpy con elementos rational en el primer parámetro, algo que no es un \n# array de numpy con elementos rational en el segundo,algo que no es una lista de strings en el tercero,algo que no es un string\n# en el cuarto o algo que no sea True o False en el quinto,devuelve None\nmatrix=np.matrix([[2,1],[1,-1],[5,2]])\nresources=np.array([18,8,4])\nsign=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nprint(Simplex.solveProblem(matrix,resources,sign,function,True))", "dualProblem\nEste método recibe un problema de programación lineal y devuelve el problema dual del pasado por parámetro. Para ello, recibe una matriz de numpy con elementos rational que contiene las restricciones, sin signos ni recursos, un array de numpy con elementos rational que contiene los recursos, una lista de strings, que contienen los signos de las restricciones y un string que contiene la función en el formato \"max/min 2 -3\". El método devuelve el problema dual en este orden una matriz de numpy que contiene las restricciones, sin signos ni recursos, un array de numpy que contiene los recursos, una lista de strings, que contienen los signos de las restricciones y un string que contiene la función en el formato \"max/min 2 -3\". No es necesario que se introduzca el problema en forma estándar(tampoco en forma simétrica de maximización) puesto que el método ya realiza la transformación internamente. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
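For a problem that is already in symmetric maximization form (max c·x subject to Ax <= b, x >= 0), the construction of the dual reduces to transposing the constraint matrix and swapping the roles of the cost and resource vectors: min b·y subject to Aᵀy >= c, y >= 0. The sketch below illustrates only this simple case, using Python's `fractions.Fraction` in place of the library's `rational` class; the function name is hypothetical and this is not the library's implementation (which also handles mixed signs):

```python
from fractions import Fraction

def dual_of_symmetric_max(A, b, c):
    """Dual of: max c.x  s.t.  A x <= b, x >= 0.

    Returns (A transposed, new resources, new costs) for the dual problem:
    min b.y  s.t.  A^T y >= c, y >= 0.
    """
    At = [list(col) for col in zip(*A)]  # transpose the constraint matrix
    return At, c, b                      # dual resources = primal costs, dual costs = primal resources

# Primal: max 2x1 + x2  s.t.  2x1 + x2 <= 18, x1 - x2 <= 8, 5x1 + 2x2 <= 0
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(-1)],
     [Fraction(5), Fraction(2)]]
b = [Fraction(18), Fraction(8), Fraction(0)]
c = [Fraction(2), Fraction(1)]
At, new_resources, new_costs = dual_of_symmetric_max(A, b, c)
print(At)  # [[2, 1, 5], [1, -1, 2]] as Fractions
```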
Ejemplos:", "matrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1),rational(0,1)])\nsign=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\ndual=Simplex.dualProblem(matrix,resources,sign,function)\nprint(Simplex.printMatrix(dual[0]))\nprint(Simplex.printMatrix(np.asmatrix(dual[1])))\nprint(dual[2])\nprint(dual[3])\n\n# Si la longitud del vector de recursos, es diferente del número de filas de la matriz, devuelve None\nmatrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1)])\nsign=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nprint(Simplex.dualProblem(matrix,resources,sign,function))\n\n# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz, \n# devuelve None\nmatrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1),rational(0,1)])\nsign=[\"<=\",\"<=\",\">=\",\"<=\"]\nfunction=\"max 2 1\"\nprint(Simplex.dualProblem(matrix,resources,sign,function))\n\n# Si se pasa por parámetro algo que no es una matriz de numpy con elementos rational en el primer parámetro, algo que no es un \n# array de numpy con elementos rational en el segundo,algo que no es una lista de strings en el tercero o algo que no es un \n# string en el cuarto \nmatrix=np.matrix([[2,1,4],[6,-4,-7],[8,12,9]])\nresources=np.array([[1],[8],[10]])\nsign=[\"<=\",\"<=\",\">=\"]\nfunction=\"min 3 10 0\"\nprint(Simplex.dualProblem(matrix,resources,sign,function))", "calculateSolutionOfDualProblem\nEste método recibe las columnas o variables de la última iteración del problema en un array de numpy, el vector de la función en su forma de maximización en un array de numpy, y la matriz inicial con las restricciones del problema, en una matriz de 
numpy. Es necesario que tanto la matriz como la función, se encuentren en la forma estándar. Si la introducción de parámetros es correcta, se devuelve la solución del problema dual, en un array de numpy. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "colsOfIteration=np.array([3,4,1])\ntotalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],\n [rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),\n rational(7,1),rational(1,1),rational(0,1)]])\nfunction=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1)])\nprint(Simplex.printMatrix(np.asmatrix(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,\n totalMatrix))))\n\n# Si se pasa un número mayor de columnas(variables) del que hay en la matriz o en la función devuelve None\ncolsOfIteration=np.array([3,4,1,5,6,2])\ntotalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],\n [rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),\n rational(7,1),rational(1,1),rational(0,1)]])\nfunction=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1)])\nprint(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))\n\n# Si el número de columnas(variables) de la función es mayor que el de la matriz, devuelve None\ncolsOfIteration=np.array([3,4,1])\ntotalMatrix = np.matrix([[rational(2,1),rational(3,1),rational(4,1),rational(0,1),rational(1,1)],\n [rational(3,1),rational(4,1),rational(7,1),rational(0,1),rational(0,1)],[rational(2,1),rational(6,1),\n rational(7,1),rational(1,1),rational(0,1)]])\nfunction=np.array([rational(3,1),rational(6,1),rational(-7,1),rational(0,1),rational(0,1),rational(7,1)])\nprint(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))\n\n# Si se pasa algo que no es un 
array de numpy en el primer o el segundo parámetro(este debe ser de elementos rational), o algo \n# que no es una matriz de numpy con elementos rational en el tercero, devuelve None\ncolsOfIteration=np.array([3,4,1])\ntotalMatrix = np.matrix([[2,3,4,0,1],[3,4,7,0,0],[2,6,7,1,0]])\nfunction=np.array([3,6,-7,0,0,4])\nprint(Simplex.calculateSolutionOfDualProblem(colsOfIteration,function,totalMatrix))", "Solución gráfica\nconvertToPlotFunction\nEste método transforma una restricción en una función para ser representada. Para ello, recibe un array de numpy que contiene la restricción(todos los coeficientes deben ser rational), sin signo ni recurso,un string que contiene el signo, un rational que es el recurso que contiene los recursos, y una variable que será el linespace para su representación. Además de devolver la función, devuelve un string, con la función. Si el valor de y en la restricción es 0, devuelve un rational, en lugar de una función. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "# Si se le pasa todo correcto, devuelve una función, y un string con la función\nlineOfMatrix=np.array([rational(3,4),rational(2,1)])\nsign=\"<=\"\nresource=rational(4,1)\nx = np.linspace(0, 10)\nSimplex.convertToPlotFunction(lineOfMatrix, sign, resource, x)\n\n# Si se le pasa una restricción con la segunda componente 0, devuelve un número\nlineOfMatrix=np.array([rational(3,4),rational(0,1)])\nsign=\"<=\"\nresource=rational(4,1)\nx = np.linspace(0, 10)\nSimplex.convertToPlotFunction(lineOfMatrix, sign, resource, x)\n\n# Si se le pasa una restricción que no tiene 2 componentes o tiene más de 2,devuelve None\nlineOfMatrix=np.array([rational(3,4)])\nprint(Simplex.convertToPlotFunction(lineOfMatrix, sign,\n resource, x))\n\n# Si se le pasa algo que no es un array de numpy de rational en el primer parámetro, algo que no es un string en el segundo, algo\n#que no es un rational en el tercero o algo que no es un array de numpy en el tercero,devuelve None\n\nprint(Simplex.convertToPlotFunction(lineOfMatrix, sign,\n 4, x))", "* showFunction*\nEste método recibe una función y la representa. Para ello recibe una función,o un número si la función es de tipo y=n, una variable que será el linespace para representarlo y un string que será la etiqueta que se le dará a la función. Es necesario después de ejecutar este método hacer plt.show(). En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "% matplotlib inline\nimport matplotlib.pyplot as plt\nfunction=lambda x: 3*x+1\nx=np.linspace(0, 10)\nlabel=\"3x+1 = 2\"\nSimplex.showFunction(function, x, label)\nplt.show()\n\n# Se le puede pasar un número si la función es de tipo y=n\nx=np.linspace(0, 10)\nlabel=\"3x+1 = 2\"\nSimplex.showFunction(4,x, label)\nplt.show()\n\n# Si se le pasa algo que no es una función o un número en el primer elemento, algo que no es un array de numpy en el segundo, o \n# algo que no es un string en el tercero, devuelve None\nprint(Simplex.showFunction(np.array([3,4,5]),x, label))", "* eliminateRepeatedPoints*\nEste método recibe una lista de puntos(en forma de tupla) y devuelve la misma lista, con los puntos repetidos eliminados. Con enteros y rational, funciona exactamente, no así con float si los números tienen muchos decimales, puesto que podría considerar por ejemplo 5.33333 y 5.33334 como dos números distintos, cuando podrían ser el mismo. En caso de no recibir una lista, devuelve None. Ejemplos:", "# Como vemos en este caso elimina un punto que está repetido\nseq=[(rational(2,1),rational(3,4)),(rational(6,1),rational(7,4)),(rational(2,1),rational(3,4)),(rational(5,2),rational(3,4)),]\nSimplex.eliminateRepeatedPoints(seq)\n\n# Con enteros funciona perfectamente\nseq=[(3,1),(4,5),(4,5),(2,1)]\nSimplex.eliminateRepeatedPoints(seq)\n\n# Con float no funciona exactamente\nseq=[(3.0,1.1),(4.0,5.0),(4.000001,5.0),(2.0,1.0)]\nSimplex.eliminateRepeatedPoints(seq)\n\n# Si no se introduce un lista, devuelve None\nprint(Simplex.eliminateRepeatedPoints(4))", "* eliminatePoints*\nEste método recibe dos listas, y devuelve una lista con los elementos de la primera lista que no están en la segunda. Se puede utilizar para eliminar puntos(tuplas) o cualquier elemento. Igual que el método anterior, con float no funciona exactamente.Si no recibe dos listas, devuelve None. 
Ejemplos:", "# Con enteros funciona perfectamente\nlist1=[(3,1),(4,5),(6,7)]\nlist2=[(2,5),(4,5),(4,8)]\nSimplex.eliminatePoints(list1, list2)\n\n# Con rational funciona perfectamente\nlist1=[rational(5,1),rational(2,5),rational(6,1)]\nlist2=[rational(8,7),rational(2,5),rational(10,8)]\nSimplex.eliminatePoints(list1, list2)\n\n# Con float no funciona exactamente\nlist1=[(3.0,1.0),(4.0,5.0),(6.0,7.0)]\nlist2=[(2.0,5.0),(4.000001,5.0),(4.0,8.0)]\nSimplex.eliminatePoints(list1, list2)\n\n# Si recibe algo que no sean dos listas, devuelve None\nprint(Simplex.eliminatePoints(3, list2))", "calculatePointOfSolution\nEst método recibe un array de numpy con los coeficientes de la función a optimizar(en forma de maximización),una lista de puntos cuyas coordenadas son rational, y un rational con el valor de la función objetivo optimizada. El método devuelve cuál es el punto que alcanza el valor pasado. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "functionVector=np.array([rational(2,1),rational(3,1)])\npoints=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]\nsolution = rational(19,4)\nSimplex.calculatePointOfSolution(functionVector, points, solution)\n\nfunctionVector=np.array([rational(2,1),rational(3,1)])\npoints=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]\nsolution = rational(18,3)\nprint(Simplex.calculatePointOfSolution(functionVector, points, solution))\n\n# Si recibe algo que no sea un array de numpy en el primer parámetro, una lista de puntos rational en el segundo, o un rational \n# en el tercero, devuelve None\nprint(Simplex.calculatePointOfSolution(functionVector, points, 3.0))", "calculateSolution\nEste método recibe una función a optimizar en un string, en el formato que se puede ver en los ejemplos. Recibe un conjunto de puntos cuyas coordenas son rational. 
El método devuelve el valor de la función optimizada, y cuál es el punto de los pasados que la optimiza. Si la lista no tiene puntos, devuelve None. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "function=\"max 2 3\"\npoints=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]\nsol=Simplex.calculateSolution(function, points)\nprint(sol[0])\nprint(sol[1])\n\nfunction=\"min 2 3\"\npoints=[(rational(4,2),rational(3,4)),(rational(5,4),rational(6,8)),(rational(1,4),rational(6,1))]\nsol=Simplex.calculateSolution(function, points)\nprint(sol[0])\nprint(sol[1])\n\n# Si la lista está vacía, devuelve None\nprint(Simplex.calculateSolution(function,[]))\n\n# Si recibe algo que no es un string en el primer parámetro o una lista de puntos rational en el segundo devuelve None\nprint(Simplex.calculateSolution(function, 4))", "intersectionPoint\nEste método calcula el punto de intersección entre dos restricciones de tipo \"=\". Recibe dos arrays de numpy, cuyos componentes deben ser rational, que contienen los coeficientes de las restricciones, y recibe también los recursos de cada restricción en dos rational. En caso de que no haya punto de intersección entre ellas, devuelve None. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "line1=np.array([rational(2,1),rational(3,4)])\nline2=np.array([rational(8,3),rational(7,9)])\nresource1=rational(3,1)\nresource2=rational(4,1)\npoint=Simplex.intersectionPoint(line1, line2, resource1, resource2)\nprint(\"(\"+str(point[0])+\",\"+str(point[1])+\")\")\n\n# Si no hay punto de intersección, devuelve None\nline1=np.array([rational(2,1),rational(3,4)])\nline2=np.array([rational(2,1),rational(3,4)])\nresource1=rational(3,1)\nresource2=rational(4,1)\nprint(Simplex.intersectionPoint(line1, line2, resource1, resource2))\n\n# Si se introduce algo que no es un array de rational de longitud 2 en los dos primeros parámetros, o algo que no es un rational,\n# en los dos últimos, devuelve None\nprint(Simplex.intersectionPoint(3, line2, resource1, resource2))", "eliminateNegativePoints\nEste método recibe una lista de puntos cuyas coordenadas son rational, y devuelve la lista, sin aquellos puntos con coordenadas negativas. Si recibe algo que no es una lista de puntos rational, devuelve None. Ejemplos:", "points=[(rational(4,2),rational(-3,4)),(rational(5,4),rational(6,-8)),(rational(1,4),rational(6,1))]\nSimplex.eliminateNegativePoints(points)\n\n# Si recibe algo que no es una lista de puntos rational, devuelve None\npoints=[(4,2),(6,-8),(6,1)]\nprint(Simplex.eliminateNegativePoints(points))", "calculateAllIntersectionPoints\nEste método recibe un array de arrays de numpy con todas las restricciones, sin signo ni recursos, y un array de numpy con los recursos de cada restricción. El método devuelve en una lista, todos los puntos de intersección entre las restricciones y de las restricciones con los ejes de coordenadas positivos. También añade el punto (0,0). En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])\nresources=np.array([rational(3,1),rational(2,1),rational(4,1)])\nSimplex.calculateAllIntersectionPoints(matrix, resources)\n\n# Si el número de restricciones es distinto del de recursos, devuelve None\nmatrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])\nresources=np.array([rational(3,1),rational(2,1)])\nprint(Simplex.calculateAllIntersectionPoints(matrix, resources))\n\n# Si recibe algo que no sea un array de numpy, con elementos rational, devuelve None\nprint(Simplex.calculateAllIntersectionPoints(matrix, 4))", "calculateNotBoundedIntersectionPoints\nEste método recibe un array de arrays de numpy con todas las restricciones, sin signo ni recursos, un array de numpy con los recursos de cada restricción y los máximos valores de x y de y que se van a representar, en dos rational. El método devuelve en una lista, los puntos de intersección entre las restricciones y los ejes imaginarios constituidos en los máximos puntos representados. Por ejemplo si se pasa constX=3 y constY=4, devolverá los puntos de intersección entre las restricciones y los ejes y=3 y x=4. También añade el punto de intersección entre los dos hipotéticos ejes (en el ejemplo anterior, el punto (4,3)). En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "matrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)],[rational(6,1),rational(0,1)]])\nresources=np.array([rational(3,1),rational(2,1),rational(4,1)])\nconstX= rational(10,1)\nconstY= rational(8,1)\nSimplex.calculateNotBoundedIntersectionPoints(matrix, resources, constX, constY)\n\nmatrix=np.array([[rational(3,4),rational(3,1)],[rational(4,5),rational(9,1)]])\nresources=np.array([rational(3,1),rational(2,1),rational(4,1)])\nconstX= rational(10,1)\nconstY= rational(8,1)\nprint(Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, constX, constY))\n\n# Si recibe algo que no sea un array de numpy, con elementos rational, en los dos primeros parámetros o algo que no sea un\n# rational en los dos últimos, devuelve None\nprint(Simplex.calculateNotBoundedIntersectionPoints(matrix, resources, np.array([rational(4,5)]), constY))", "checkIfIsSolution\nEste método recibe una restricción, con los coeficientes de la misma en un array de numpy, la solución a probar en una tupla, el signo en un string y el recurso en un número. El método devuelve True, si la solución satisface la restricción, o False si no la satisface. El método funciona con enteros y rational, perfectamente, pero con float, no es del todo exacto. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "# Si cumple la inecuación\ninecuation=np.array([3,4])\nsolution=(1,1)\nsign=\">=\"\nresource=6\nSimplex.checkIfIsSolution(inecuation, solution, sign, resource)\n\n# Con rational también funciona\ninecuation=np.array([rational(3,2),rational(4,3)])\nsolution=(rational(2,1),rational(1,1))\nsign=\"<=\"\nresource=rational(5,1)\nSimplex.checkIfIsSolution(inecuation, solution, sign, resource)\n\n# Si la inecuación no se cumple\ninecuation=np.array([3,4])\nsolution=(1,1)\nsign=\"=\"\nresource=6\nSimplex.checkIfIsSolution(inecuation, solution, sign, resource)\n\n# No funciona exactamente con float\ninecuation=np.array([3.0,4.0])\nsolution=(1.0,1.0)\nsign=\"=\"\nresource=7.00001\nSimplex.checkIfIsSolution(inecuation, solution, sign, resource)\n\n# Si se introduce algo que no se un array de numpy de longitud 2 en el primer parámetro, una tupla en el segundo, un string en el \n# tercero o un número en el último, devuelve None\nprint(Simplex.checkIfIsSolution(inecuation, solution, sign,np.array([3,4])))", "calculateFeasibleRegion\nEste método recibe un conjunto de puntos en una lista, un conjunto de restricciones en un array de numpy, sin signos ni recursos,un array de numpy con los recursos y una lista de string con los signos. El método devuelve la lista de puntos introducidos, que cumplen todas las restricciones, es decir pertenecen a la región factible. El método funciona tanto con rational, como con enteros, no siendo tan exacto con float. Si ningún punto pertenece a la región factible, devolverá una lista vacía. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "# El método funciona con valores rational, eliminando los puntos que no pertenecen a la región factible\npoints=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),\n    (rational(-30,1),rational(1,2))]\ninecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),\n            np.array([rational(8,1),rational(-7,1)])])\n\nresources=np.array([rational(50,1),rational(32,1),rational(40,1)])\nsign=[\"<=\",\"<=\",\"<=\"]\nSimplex.calculateFeasibleRegion(points, inecuations, resources, sign)\n\n# El método funciona con valores enteros, eliminando los puntos que no pertenecen a la región factible\npoints=[(0,5),(5,0),(10,12),(-30,1)] \ninecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])\nresources=np.array([50,32,40])\nsign=[\"<=\",\"<=\",\"<=\"]\nSimplex.calculateFeasibleRegion(points, inecuations, resources, sign)\n\n# El número de restricciones tiene que ser igual que el de signos y el de recursos\npoints=[(0,5),(5,0),(10,12),(-30,1)] \ninecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])\nresources=np.array([50,32])\nsign=[\"<=\",\"<=\",\"<=\"]\nprint(Simplex.calculateFeasibleRegion(points, inecuations, resources, sign))\n\n# Si se introduce algo que no es una lista, en el primer parámetro, un array de numpy en el segundo y tercer parámetro, o una \n# lista de strings, en el cuarto parámetro, devuelve None\ninecuations=np.matrix([np.array([2,1]),np.array([1,-1]),np.array([5,2])])\nprint(Simplex.calculateFeasibleRegion(points, inecuations, resources, sign))", "calculateMaxScale\nEste método recibe una lista de puntos, y devuelve el máximo valor de la coordenada x y de la coordenada y. Se utiliza para saber cuál es el punto máximo que se debe representar. En caso de no recibir una lista, devuelve None. 
Ejemplos:", "points=[(4,3),(5,6),(1,-2)]\nSimplex.calculateMaxScale(points)\n\npoints=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),\n (rational(-30,1),rational(1,2))]\nSimplex.calculateMaxScale(points)\n\npoints=[(4.6,3.7),(5.0,6.5),(1.2,-2.5)]\nSimplex.calculateMaxScale(points)\n\n# Si recibe algo que no es una lista, devuelve None\nprint(Simplex.calculateMaxScale(3))", "calculateMinScale\nEste método recibe una lista de puntos, y devuelve el mínimo valor de la coordenada x y de la coordenada y. Se utiliza para saber cuál es el punto mínimo que se debe representar. En caso de no recibir una lista, devuelve None. Ejemplos:", "points=[(4,3),(5,6),(1,-2)]\nSimplex.calculateMinScale(points)\n\npoints=[(rational(0,1),rational(5,1)),(rational(5,1),rational(0,1)),(rational(10,1),rational(12,1)),\n (rational(-30,1),rational(1,2))]\nSimplex.calculateMinScale(points)\n\npoints=[(4.6,3.7),(5.0,6.5),(1.2,-2.5)]\nSimplex.calculateMinScale(points)\n\n# Si recibe algo que no es una lista, devuelve None\nprint(Simplex.calculateMinScale(3))", "checkIfPointInFeasibleRegion\nEste método recibe un punto en una tupla, un conjunto de restricciones en un array de numpy, sin signos ni recursos,un array de numpy con los recursos y una lista de string con los signos. El método devuelve True, si el punto cumple todas las restricciones, es decir pertenece a la región factible, y False, si no pertenece. El método funciona tanto con rational, como con enteros, no siendo tan exacto con float. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "point=(rational(0,1),rational(5,1))\ninecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),\n            np.array([rational(8,1),rational(-7,1)])])\n\nresources=np.array([rational(50,1),rational(32,1),rational(40,1)])\nsign=[\"<=\",\"<=\",\"<=\"]\nSimplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)\n\npoint=(rational(-30,1),rational(1,2))\ninecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),\n            np.array([rational(8,1),rational(-7,1)])])\n\nresources=np.array([rational(50,1),rational(32,1),rational(40,1)])\nsign=[\"<=\",\"<=\",\"<=\"]\nSimplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)\n\n# El método funciona con valores enteros, eliminando los puntos que no pertenecen a la región factible\npoints=(0,5)\ninecuations=np.array([np.array([-7,10]),np.array([2,1]), np.array([8,-7])])\nresources=np.array([50,32,40])\nsign=[\"<=\",\"<=\",\"<=\"]\nSimplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign)\n\n# El número de restricciones tiene que ser igual que el de signos y el de recursos\npoints=(0,5)\ninecuations=np.array([np.array([-7,10]),np.array([2,1])])\nresources=np.array([50,32,40])\nsign=[\"<=\",\"<=\",\"<=\"]\nprint(Simplex.checkIfPointInFeasibleRegion(point, inecuations, resources, sign))\n\n# Si se introduce algo que no es una tupla, en el primer parámetro, un array de numpy en el segundo y tercer parámetro, o una \n# lista de strings, en el cuarto parámetro, devuelve None\nprint(Simplex.checkIfPointInFeasibleRegion(4, inecuations, resources, sign))", "calculateIntegerPoints\nEste método recibe un conjunto de restricciones en un array de numpy, sin signos ni recursos, un array de numpy con los recursos, una lista de string con los signos y dos tuplas, con el mínimo y el máximo punto a representar. 
El método devuelve una lista con todos los puntos enteros que pertenecen a esa región factible y que son menores que el punto máximo. Todos los elementos de las restricciones, recursos y de la tupla, deben ser rational. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "# Puntos calculados con rational\ninecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),\n np.array([rational(8,1),rational(-7,1)])])\n\nresources=np.array([rational(50,1),rational(32,1),rational(40,1)])\nsign=[\"<=\",\"<=\",\"<=\"]\nscale1=(rational(0,1),rational(0,1))\nscale=(rational(10,1),rational(10,1))\nSimplex.calculateIntegerPoints(inecuations, resources, sign, scale1,scale)\n\n# El número de restricciones tiene que ser igual que el de signos y el de recursos\ninecuations=np.array([np.array([rational(-7,1),rational(10,1)]),np.array([rational(2,1),rational(1,1)]),\n np.array([rational(8,1),rational(-7,1)])])\n\nresources=np.array([rational(50,1),rational(32,1),rational(40,1)])\nsign=[\"<=\",\"<=\"]\nscale=(rational(10,1),rational(10,1))\nprint(Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1, scale))\n\n# Si se introduce algo que no es un array de numpy de rational en el primer y segundo parámetro,una lista de strings, en el\n# tercer parámetro,o una tupla en el último parámetro devuelve None\nprint(Simplex.calculateIntegerPoints(inecuations, resources, sign, scale1, 4))", "centre\nEste método recibe una lista de puntos, y devuelve el punto que está en el centro del polígono que forman dichos puntos. Las coordenadas de los puntos deben ser rational. En caso de no pasar una lista de puntos rational, devuelve None. 
Ejemplos:", "points=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2),rational(4,5)),\n    (rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]\npoint=Simplex.centre(points)\nprint(\"(\"+str(point[0])+\",\"+str(point[1])+\")\")\n\n# Si recibe algo que no es una lista de puntos rational, devuelve None\npoints=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]\nprint(Simplex.centre(points))", "isThePoint\nEste método recibe una lista de puntos, cuyas coordenadas son rational, un valor, que es el cálculo de la distancia al centro, y el centro de los puntos de la lista. El método devuelve el punto de la lista cuya distancia al centro es el valor introducido. Si ningún punto cumple la distancia, devuelve None. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "listPoints=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2)\n    ,rational(4,5)),(rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]\nM = (1.811574074074074,1.1288359788359787)\nvalue = 2.7299657524245156\npoint=Simplex.isThePoint(listPoints, value, M)\nprint(\"(\"+str(point[0])+\",\"+str(point[1])+\")\")\n\n# En caso de no recibir una lista de puntos rational, en el primer parámetro, un número en el segundo o una tupla en el tercero, \n# devuelve None(ver si coge float en el centro)\nprint(Simplex.isThePoint(listPoints, value, 4))", "calculateOrder\nEste método recibe una lista de puntos, cuyas coordenadas son rational, y devuelve la misma lista de puntos, pero ordenada en sentido horario. En caso de no introducir una lista de rational, devuelve None. 
Ejemplos:", "listPoints=[(rational(4,5),rational(1,2)),(rational(4,2),rational(3,1)),(rational(8,3),rational(3,5)),(rational(7,2),\n    rational(4,5)), (rational(7,9),rational(4,9)),(rational(9,8),rational(10,7))]\nSimplex.calculateOrder(listPoints)\n\n# Si recibe algo que no es una lista de puntos con coordenadas rational\nlistPoints=[(4.0,5.0),(4.0,3.0),(8.0,5.0),(7.0,4.0),(7.0,9.0),(10.0,4.0)]\nprint(Simplex.calculateOrder(listPoints))", "pointIsInALine\nEste método recibe un punto en una tupla, una restricción sin signos ni recursos en un array de numpy, y el recurso, como un número. El método devuelve True, si el punto está sobre la línea que representa la restricción en el plano, en otro caso devuelve False. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "# Si el punto está en la línea, devuelve True\npoint = (3,4)\nline = np.array([3,2])\nresource = 17\nSimplex.pointIsInALine(point, line, resource)\n\n# El método funciona con rational\npoint = (rational(3,1),rational(4,2))\nline = np.array([rational(3,3),rational(2,1)])\nresource = rational(7,1)\nSimplex.pointIsInALine(point, line, resource)\n\n# Si el punto no está en la línea, devuelve False\npoint = (3,4)\nline = np.array([3,2])\nresource = 10\nSimplex.pointIsInALine(point, line, resource)\n\n# El método no funciona exactamente con float\npoint = (3.0,4.0)\nline = np.array([3.0,2.0])\nresource = 17.00001\nSimplex.pointIsInALine(point, line, resource)\n\n# En caso de no recibir una tupla,en el primer parámetro, un array de numpy en el segundo o un número en el tercero, devuelve \n# None\nprint(Simplex.pointIsInALine(point, 3, resource))", "deleteLinePointsOfList\nEste método recibe un conjunto de puntos en una lista, un array de numpy con un conjunto de restricciones sin signos, ni recursos, y un array de numpy con los recursos de las restricciones. 
El método devuelve la lista de puntos, pero sin aquellos puntos que están en la línea que representa alguna de las restricciones introducidas. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. Ejemplos:", "# Elimina el último punto que está en una línea\nlistPoints=[(rational(3,1),rational(5,7)),(rational(5,8),rational(6,2)),(rational(4,6),rational(8,9)),(rational(8,1),\n rational(2,1))]\nmatrix=np.array([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1),rational(0,1)])\nSimplex.deleteLinePointsOfList(listPoints, matrix, resources)\n\n# Si recibe algo que no es una lista de puntos con coordenadas rational,o algo que no es un array de numpy con elementos rational\n# en el segundo y tercer parámetro,devuelve None\nprint(Simplex.deleteLinePointsOfList(listPoints, 4, resources))", "showProblemSolution\nEste método resuelve el problema de programación lineal que se le pasa por parámetro, de manera gráfica. Para ello, recibe una matriz de numpy que contiene las restricciones, sin signos ni recursos, un array de numpy que contiene los recursos, una lista de strings, que contienen los signos de las restricciones, un string que contiene la función en el formato \"max/min 2 -3\" y un valor False o un nombre, que determina si se quiere guardar la imagen en el archivo con el nombre indicado. El método muestra la solución gráfica, siempre que el problema tenga solo 2 variables, en otro caso devuelve None. No es necesario que se introduzca el problema en forma estándar. En caso de que los parámetros introducidos no sean correctos(ver ejemplos), devolverá None. 
Ejemplos:", "%matplotlib inline\nmatrix=np.matrix([[rational(2,1),rational(1,1)],[rational(1,1),rational(-1,1)],[rational(5,1),rational(2,1)]])\nresources=np.array([rational(18,1),rational(8,1),rational(0,1)])\nsigns=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nsave= False\nSimplex.showProblemSolution(matrix, resources, signs, function, save)\n\n# Si el número de signos es diferente a la longitud del vector de recursos o diferente del número de filas de la matriz, \n# devuelve None\nmatrix=np.matrix([[2,1],[1,-1],[5,2]])\nresources=np.array([[18],[8]])\nsigns=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nsave=False\nprint(Simplex.showProblemSolution(matrix, resources, signs, function, save))\n\n# Si se pasa por parámetro algo que no es una matriz de numpy en el primer parámetro con elementos rational, algo que no es un \n# array de numpy con elementos rational en el segundo, algo que no es una lista de strings en el tercero, algo que no es un string\n# en el cuarto o algo que no sea False o un string en el quinto, devuelve None\n\nmatrix=np.matrix([[2,1],[1,-1],[5,2]])\nresources=np.array([[18],[8],[4]])\nsigns=[\"<=\",\"<=\",\">=\"]\nfunction=\"max 2 1\"\nprint(Simplex.showProblemSolution(matrix, resources, signs, function, False))", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rvperry/phys202-2015-work
assignments/assignment12/FittingModelsEx02.ipynb
mit
[ "Fitting Models Exercise 2\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt", "Fitting a decaying oscillation\nFor this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays:\n\ntdata: an array of time values\nydata: an array of y values\ndy: the absolute uncertainties (standard deviations) in y\n\nYour job is to fit the following model to this data:\n$$ y(t) = A e^{-\\lambda t} \\cos(\\omega t + \\delta) $$\nFirst, import the data using NumPy and make an appropriately styled error bar plot of the raw data.", "data=np.load('decay_osc.npz')\ntdata=data['tdata']\nydata=data['ydata']\ndy=data['dy']\n\ntdata,ydata,dy\n\nplt.plot(tdata,ydata)\n\nplt.errorbar?\n\nplt.errorbar(tdata,ydata,dy,fmt='k.')\n\nassert True # leave this to grade the data import and raw data plot", "Now, use curve_fit to fit this model and determine the estimates and uncertainties for the parameters:\n\nPrint the parameter estimates and uncertainties.\nPlot the raw data and best fit model.\nYou will likely have to pass an initial guess to curve_fit to get a good fit.\nTreat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True.", "def model(t,A,o,l,d):\n    return A*np.exp(-l*t)*np.cos(o*t)+d\n\ntheta_best,theta_cov=opt.curve_fit(model,tdata,ydata,np.array((6,1,1,0)),dy,absolute_sigma=True)\nprint('A = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))\nprint('omega = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))\nprint('lambda = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))\nprint('delta = {0:.3f} +/- {1:.3f}'.format(theta_best[3], np.sqrt(theta_cov[3,3])))\n\ntfit=np.linspace(0,20,100)\nA,o,l,d=theta_best\nyfit=A*np.exp(-l*tfit)*np.cos(o*tfit)+d\nplt.plot(tfit,yfit)\nplt.plot(tdata,ydata,'k.')\nplt.xlabel('time')\nplt.ylabel('y')\nplt.title('Decaying Oscillator')\nplt.axhline(0,color='lightgray')\n\nassert True 
# leave this cell for grading the fit; should include a plot and printout of the parameters+errors" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
spulido99/Programacion
Camilo/Taller 1.ipynb
mit
[ "Taller 1: Básico de Python\n\nFunciones\nListas\nDiccionarios\n\nEste taller es para resolver problemas básicos de python. Manejo de listas, diccionarios, etc.\nEl taller debe ser realizado en un Notebook de Jupyter en la carpeta de cada uno. Debe haber commits con el avance del taller. Debajo de cada pregunta hay una celda para el código.\nBásico de Python\n1. Qué versión de python está corriendo?", "import platform\nplatform.python_version()", "2. Calcule el área de un círculo de radio 5", "r = 5\n\na = (r**2) * 3.141596\n\nprint a", "3. Escriba código que imprima todos los colores que están en color_list_1 y no están presentes en color_list_2\nResultado esperado : \n{'Black', 'White'}", "color_list_1 = set([\"White\", \"Black\", \"Red\"])\ncolor_list_2 = set([\"Red\", \"Green\"])\n\nprint color_list_1\n\nprint color_list_1 - color_list_2\n\n    # Resultado = []\n    # for i in color_list_1:\n    #     if not color_list_1[i] in color_list_2:\n    #         Resultado += color_list_1[i]\n    #     else:\n    #         pass\n    # print Resultado", "4. Imprima una línea por cada carpeta que compone el Path donde se está ejecutando python\ne.g. C:/User/sergio/code/programación\nSalida Esperada:\n+ User\n+ sergio\n+ code\n+ programacion", "import os\nwkd = os.getcwd()\n\nwkd.split(\"/\")\n \n\n\n", "Manejo de Listas\n5. Imprima la suma de números de my_list", "my_list = [5,7,8,9,17]\n\nprint my_list\n\nsuma = 0\n\nfor i in my_list:\n    \n    suma += i\n    \nprint suma\n\n    ", "6. Inserte un elemento_a_insertar antes de cada elemento de my_list", "elemento_a_insertar = 'E'\nmy_list = [1, 2, 3, 4]", "La salida esperada es una lista así: [E, 1, E, 2, E, 3, E, 4]", "print my_list\nprint elemento_a_insertar\n\nmy_list.insert(0, elemento_a_insertar)\nmy_list.insert(2, elemento_a_insertar)\nmy_list.insert(4, elemento_a_insertar)\nmy_list.insert(6, elemento_a_insertar)\n\nprint my_list", "7. 
Separe my_list en una lista de listas cada N elementos", "N = 3\nmy_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']", "Salida Esperada: [['a', 'd', 'g', 'j', 'm'], ['b', 'e', 'h', 'k', 'n'], ['c', 'f', 'i', 'l']]", "\n#new_list = [i**2 for i in range(5)] # lambda functions () to apply a function to each variable in a list and create another\n#print new_list\n\n# function zip to pair lists of the same length. function enumerate.\n\nx = [4,2,5,6]\ny = [5,3,1,6]\n\nz = zip(x,y)\n\nprint z\n\n\nN= 3\nnew_list = [[] for _ in range(N)]\nfor i, item in enumerate(my_list):\n    new_list[i % N].append(item)\nprint new_list\n", "8. Encuentra la lista dentro de list_of_lists que la suma de sus elementos sea la mayor", "list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]", "Salida Esperada: [10, 11, 12]", "print max(list_of_lists)", "Manejo de Diccionarios\n9. Cree un diccionario que para cada número de 1 a N de llave tenga como valor N al cuadrado", "N = 5", "Salida Esperada: {1:1, 2:4, 3:9, 4:16, 5:25}", "Dict = {}\n\nDict[1] = 1**2 \nDict[2] = 2**2 \nDict[3] = 3**2 \nDict[4] = 4**2 \nDict[5] = 5**2 \n\n\nprint Dict\n\nN=5\nD = {}\nfor i in range(1, N+1):\n    D[i] = i**2\nprint D\n", "10. Concatene los diccionarios en dictionary_list para crear uno nuevo", "dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]", "Salida Esperada: {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}", "new_dic = {}\nfor i in range(len(dictionary_list)):\n    new_dic.update(dictionary_list[i])\n\nprint new_dic\n\n\nDicc = {}\nfor i in dictionary_list:\n    for k in i:\n        Dicc[k] = i[k]\nprint Dicc", "11. 
Añada un nuevo valor \"cuadrado\" con el valor de \"numero\" de cada diccionario elevado al cuadrado", "dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]", "Salida Esperada: [{'numero': 10, 'cantidad': 5, 'cuadrado': 100} , {'numero': 12, 'cantidad': 3, 'cuadrado': 144}, {'numero': 5, 'cantidad': 45, 'cuadrado': 25}]", "\nfor i in range(0,len(dictionary_list)):\n    \n    n = dictionary_list[i]['numero']\n    sqr = n**2\n    dictionary_list[i]['cuadrado'] = sqr\n    \nprint dictionary_list", "Manejo de Funciones\n12. Defina y llame una función que reciba 2 parametros y solucione el problema 3", "def loca(list1,list2):\n    print list1 - list2\n    \nloca(color_list_1, color_list_2)", "13. Defina y llame una función que reciba de parametro una lista de listas y solucione el problema 8", "def marx(lista):\n    return max(lista)\n\nprint marx(list_of_lists)", "14. Defina y llame una función que reciba un parametro N y resuelva el problema 9", "\ndef dic(N):\n    Dict ={}\n    for i in range(1,N+1):\n        Dict[i] = i**2\n    return Dict\n\nprint dic(4)\n    " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NORCatUofC/rain
nexrad-etl/Validate NEXRAD with Weather Underground.ipynb
mit
[ "import numpy as np\nimport pandas as pd\nimport os, re, boto3\nfrom botocore.handlers import disable_signing\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Processing Test\n\nConsolidating the returned CSVs into one is relatively painless\nMain issue is that for some reason the time is still in GMT, and needs 5 hours in milliseconds subtracted from the epoch\nValidating against Weather Underground read from O'Hare", "s3_client = boto3.client('s3')\nresource = boto3.resource('s3')\n# Disable signing for anonymous requests to public bucket\nresource.meta.client.meta.events.register('choose-signer.s3.*', disable_signing)\n\ndef file_list(client, bucket, prefix=''):\n paginator = client.get_paginator('list_objects')\n for result in client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter='/')['Contents']:\n yield result.get('Key')\n\ngen_s3_files = list(file_list(s3_client, 'nexrad-etl', prefix='test-aug3/'))\n\nfor i, f in enumerate(gen_s3_files):\n s3_client.download_file('nexrad-etl',f,'test-aug3/nexrad{}.csv'.format(i))\n\nfolder_files = os.listdir(os.path.join(os.getcwd(), 'test-aug3'))\nnexrad_df_list = list()\nfor f in folder_files:\n if f.endswith('.csv'):\n try:\n nexrad_df_list.append(pd.read_csv('test-aug3/{}'.format(f)))\n except:\n #print(f)\n pass\nprint(len(nexrad_df_list))\n\nmerged_nexrad = pd.concat(nexrad_df_list)\nmerged_nexrad['timestamp'] = pd.to_datetime(((merged_nexrad['timestamp'] / 1000) - (5*3600*1000)), unit='ms')\n#merged_nexrad['timestamp'] = pd.to_datetime(merged_nexrad['timestamp'] / 1000, unit='ms')\nmerged_nexrad = merged_nexrad.set_index(pd.DatetimeIndex(merged_nexrad['timestamp']))\nmerged_nexrad = merged_nexrad.sort_values('timestamp')\nmerged_nexrad = merged_nexrad.fillna(0.0)\n# Get diff between previous two reads\nmerged_nexrad['diff'] = merged_nexrad['timestamp'].diff()\nmerged_nexrad = merged_nexrad[1:]\nprint(merged_nexrad.shape)\n\nmerged_nexrad.index.min()\n\nmerged_nexrad['diff'] = (merged_nexrad['diff'] / 
np.timedelta64(1, 'm')).astype(float) / 60\nmerged_nexrad.head()\n\naug_day_ohare = merged_nexrad['2016-08-12'][['timestamp','60666','diff']]\naug_day_ohare.head()\n\naug_day_ohare['60666'] = (aug_day_ohare['60666']*aug_day_ohare['diff'])/25.4\naug_day_ohare.head()", "NEXRAD at O'Hare Zip 60666", "# Checking against Weather Underground read for O'Hare on this day\nprint(aug_day_ohare['60666'].sum())\naug_day_ohare['60666'].plot()", "Wunderground", "wunderground = pd.read_csv('test-aug3/aug-12.csv')\nwunderground['PrecipitationIn'] = wunderground['PrecipitationIn'].fillna(0.0)\nwunderground['TimeCDT'] = pd.to_datetime(wunderground['TimeCDT'])\nwunderground = wunderground.set_index(pd.DatetimeIndex(wunderground['TimeCDT']))\nwund_hour = wunderground['PrecipitationIn'].resample('1H').max()\nprint(wund_hour.sum())\nwund_hour.plot()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mtchem/Twitter-Politics
similarity_analysis.ipynb
mit
[ "Comparing President Trump's Tweets and Executive Office Activity using NLP\n\nThis notebook compares the documents published by the Executive Office of the President (of the United States of America) from January 20, 2017, to December 8th, 2017, with his tweets during the same time period. The data wrangling steps can be found in this GitHub repo (https://github.com/mtchem/Twitter-Politics/blob/master/Data_Wrangle.ipynb)", "# imports\nimport pandas as pd\nimport numpy as np\nimport itertools\n# imports for cosine similarity with NMF\nfrom sklearn.decomposition import NMF\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.preprocessing import normalize\nfrom sklearn.feature_extraction import text \n# imports for data visualization\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n# special matplotlib argument for in notebook improved plots\nfrom matplotlib import rcParams\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")", "Part 1: Data Wrangle\nLoad and transform the data for analysis", "# load federal document data from pickle file\nfed_reg_data = r'data/fed_reg_data.pickle'\nfed_data = pd.read_pickle(fed_reg_data)\n# load twitter data from pickle file\ntwitter_file_path = r'data/twitter_01_20_17_to_3-2-18.pickle'\ntwitter_data = pd.read_pickle(twitter_file_path)\n\n# Change the index (date) to a column\nfed_data['date'] = fed_data.index\ntwitter_data['date'] = twitter_data.index", "Combine data for analysis\n<p> Create a dataframe that contains:\n<ul>\n <li> Each document, from both data sets, as a string </li>\n <li> The date the text was published </li>\n <li> A label for the type of document (0= twitter doc, 1= federal doc) </li>\n</ul>\n</p>", "# keep text strings and rename columns\nfed = fed_data[['str_text', 'date']].rename({'str_text': 'texts'}, axis = 'columns')\ntweet = twitter_data[['text', 'date']].rename({'text': 'texts'}, axis = 'columns')\n\n# Add a label for the type of document (Tweet = 0, Fed = 
1)\ntweet['label'] = 0\nfed['label'] = 1\n\n# concatenate the dataframes\ncomb_text = pd.concat([fed,tweet])\n\n# Re-index so that each doc has a unique id_number\ncomb_text = comb_text.reset_index()\ncomb_text['ID'] = range(0,len(comb_text))\n\n# Look at the dataframe to make sure it works\ncomb_text = comb_text[['texts','date','label', 'ID']]\ncomb_text.head(3)", "Transform text data into a word-frequency array\n<p> Computers cannot understand a text like humans, so in order to analyze text data, I first need to make every word a feature (column) in an array, where each document (row) is represented by a weighted* frequency of each word (column) it contains. An example text and array are shown below.\n</p>\n\n<p> Using Scikit Learn to create a word-frequency array:\n<ul>\n <li> Define a list of stop words (nonsense or non-meaningful words, such as 'the', 'a', 'of', 'q34fqwer3'). </li>\n <li> Instantiate a tf-idf object (term frequency-inverse document frequency reweighting) that removes the stop words and filters any word that appears in more than 99% of the documents</li>\n <li> Create a matrix representation of the documents </li>\n <li> Create a list of the words each feature (column) represents </li>\n <li> Print a list of the excluded words </li>\n</ul>\n</p>\n\n*Weighting the word frequencies ensures that very frequently used domain-specific words are considered less important during the analysis", "# nonsense words, and standard words like proclamation and dates\nmore_stop = set(['presidential', 'documents', 'therfore','i','donald', 'j', 'trump', 'president', 'order', \n 'authority', 'vested', 'articles','january','february','march','april','may','june','july','august','september','october',\n 'november','december','jan','feb','mar','apr','jun','jul','aug','sep','oct','nov','dec',\n '2017','2018','act','agencies','agency','wh','rtlwanjjiq','pmgil08opp','blkgzkqemw','qcdljff3wn','erycjgj23r 
','fzep1e9mo7','m0hmpbuz6c','rdo6jt2pip','kyv866prde','aql4jlvndh',\n 'tx5snacaas','t0eigo6lp8','jntoth0mol','8b8aya7v1s', 'x25t9tqani','q7air0bum2','ypfvhtq8te','ejxevz3a1r','1zo6zc2pxt',\n 'strciewuws','lhos4naagl','djlzvlq6tj', 'theplumlinegs', '3eyf3nir4b','cbewjsq1a3','lvmjz9ax0u',\n 'dw0zkytyft','sybl47cszn','6sdcyiw4kt','¼ï','yqf6exhm7x','cored8rfl2','6xjxeg1gss','dbvwkddesd',\n 'ncmsf4fqpr','twunktgbnb','ur0eetseno','ghqbca7yii','cbqrst4ln4','c3zikdtowc','6snvq0dzxn','ekfrktnvuy',\n 'k2jakipfji','œthe ','p1fh8jmmfa','vhmv7qoutk','mkuhbegzqs','ajic3flnki','mvjbs44atr',\n 'wakqmkdpxa','e0bup1k83z','ðÿ','ºðÿ','µðÿ','eqmwv1xbim','hlz48rlkif','td0rycwn8c','vs4mnwxtei','75wozgjqop',\n 'e1q36nkt8g','u8inojtf6d','rmq1a5bdon','5cvnmhnmuh','pdg7vqqv6m','s0s6xqrjsc','5cvnmhnmuh','wlxkoisstg',\n 'tmndnpbj3m','dnzrzikxhd','4qckkpbtcr','x8psdeb2ur','fejgjt4xp9','evxfqavnfs','aty8r3kns2','pdg7vqqv6m','nqhi7xopmw',\n 'lhos4naagl','32tfova4ov','zkyoioor62','np7kyhglsv','km0zoaulyh','kwvmqvelri','pirhr7layt',\n 'v3aoj9ruh4','https','cg4dzhhbrv','qojom54gy8','75wozgjqop','aty8r3kns2','nxrwer1gez','rvxcpafi2a','vb0ao3s18d',\n 'qggwewuvek','ddi1ywi7yz','r5nxc9ooa4','6lt9mlaj86','1jb53segv4','vhmv7qoutk','i7h4ryin3h',\n 'aql4jlvndh','yfv0wijgby','nonhjywp4j','zomixteljq','iqum1rfqso','2nl6slwnmh','qejlzzgjdk',\n 'p3crvve0cy','s0s6xqrjsc','gkockgndtc','2nl6slwnmh','zkyoioor62','clolxte3d4','iqum1rfqso',\n 'msala9poat','p1f12i9gvt','mit2lj7q90','qejlzzgjdk','pjldxy3hd9','vjzkgtyqb9','b2nqzj53ft',\n 'tpz7eqjluh','enyxyeqgcp','avlrroxmm4','2kuqfkqbsx','kwvmqvelri','œi','9lxx1iqo7m','vdtiyl0ua7',\n 'dmhl7xieqv','3jbddn8ymj','gysxxqazbl','ðÿž','tx5snacaas','4igwdl4kia','kqdbvxpekk','1avysamed4',\n 'cr4i8dvunc','bsp5f3pgbz','rlwst30gud','rlwst30gud','g4elhh9joh', '2017', 'January', 'kuqizdz4ra', \n 'nvdvrrwls4','ymuqsvvtsb', 'rgdu9plvfk','bk7sdv9phu','b5qbn6llze','xgoqphywrt ','hscs4y9zjk ',\n 'soamdxxta8','erycjgj23r','ryyp51mxdq','gttk3vjmku','j882zbyvkj','9pfqnrsh1z','ubbsfohmm7',\n 
'xshsynkvup','xwofp9z9ir','1iw7tvvnch','qeeknfuhue','riqeibnwk2','seavqk5zy5','7ef6ac6kec',\n 'htjhrznqkj','8vsfl9mzxx','xgoqphywrt','zd0fkfvhvx','apvbu2b0jd','mstwl628xe','4hnxkr3ehw','mjij7hg3eu',\n '1majwrga3d','x6fuuxxyxe','6eqfmrzrnv','h1zi5xrkeo','kju0moxchk','trux3wzr3u','suanjs6ccz',\n 'ecf5p4hjfz','m5ur4vv6uh','8j7y900vgk','7ef6ac6kec','d0aowhoh4x','aqqzmt10x7','zauqz4jfwv',\n 'bmvjz1iv2a','gtowswxinv','1w3lvkpese','8n4abo9ihp','f6jo60i0ul','od7l8vpgjq','odlz2ndrta',\n '9tszrcc83j','6ocn9jfmag','qyt4bchvur','wkqhymcya3','tp4bkvtobq','baqzda3s2e','March','April',\n 'op2xdzxvnc','d7es6ie4fy','proclamation','hcq9kmkc4e','rf9aivvb7g','sutyxbzer9','s0t3ctqc40','aw0av82xde'])\n# defines all stop words\nmy_stop = text.ENGLISH_STOP_WORDS.union(more_stop)\n\n# Instantiate TfidfVectorizer to remove common english words, and any word used in 99% of the documents\ntfidf = TfidfVectorizer(stop_words = my_stop , max_df = 0.99)\n\n# create matrix representation of all documents\ntext_mat = tfidf.fit_transform(comb_text.texts)\n\n# make a list of feature words\nwords = tfidf.get_feature_names()", "Excluded Words\n<p> \n Below is a printed list of all of the excluded words. I include this because I am not a political scientist or a linguist. 
What I consider to be nonsense may be important, and you may want to modify this list.\n</p>", "# print excluded words from the matrix features\nprint(tfidf.get_stop_words())", "Part 2: Analysis\nUse unsupervised machine learning to analyze both President Trump's tweets and official presidential actions, and explore any correlation between the two\n\nPart 2A: Determine the document's topics\n<p> Model the documents with non-negative matrix factorization (NMF):\n<ul>\n <li> Instantiate an NMF model with 260 components (1/10th the number of documents), initialized with Nonnegative Double Singular Value Decomposition (NNDSVD, better for sparseness)</li>\n <li> Fit the model (learn the NMF model for the tf-idf matrix)</li>\n <li> Transform the model, which applies the fit to the matrix </li>\n <li> Make a dataframe with the NMF components for each word </li>\n</ul>\n</p>", "# instantiate model\nNMF_model = NMF(n_components=260 , init = 'nndsvd')\n\n# fit the model\nNMF_model.fit(text_mat)\n\n# transform the text frequency matrix using the fitted NMF model\nnmf_features = NMF_model.transform(text_mat)\n\n# create a dataframe with words as columns, NMF components as rows\ncomponents_df = pd.DataFrame(NMF_model.components_, columns = words)\n", "Part 2B: Find the top 5 topic words (components) for each document\n<p> Using the components dataframe, create a dictionary with components as keys, and top words as values:\n<ul>\n <li> Make an empty dictionary and loop through each row of NMF components</li>\n <li> Add to the dictionary where the key is the NMF component and the value is the topic words for that component (the column names with the largest component values)</li>\n\n</ul>\n</p>", "# create dictionary with the key = component, value = top 5 words\ntopic_dict = {}\nfor i in range(0,260):\n component = components_df.iloc[i, :]\n topic_dict[i] = component.nlargest()\n\n# look at a few of the component topics\nprint(topic_dict[0].index)\nprint(topic_dict[7].index)", "Part 2C: 
Cosine Similarity\n<p> The informal and non-regular grammar used in tweets makes a direct comparison with documents published by the Executive Office, which uses formal vocabulary and grammar, difficult. Therefore, I will use the metric cosine similarity, which compares the distance between feature vectors, instead of direct word comparison. Higher cosine similarities between two documents indicate greater topic similarity.\n</p>\n\n<p>Calculating cosine similarities of NMF features:\n<ul>\n <li> Normalize NMF features (calculated in part 2A)</li>\n <li> Create a dataframe where each row contains the normalized NMF features for a document and its ID number</li>\n <li> Look at each row (decomposed document) and calculate its cosine similarity to all other documents' normalized NMF features </li>\n <li> Create a dictionary where the key is the document ID, and the value is a pandas series of the 5 most similar documents (including itself)</li>", "# normalize previously found nmf features\nnorm_features = normalize(nmf_features)\n\n#dataframe of document's NMF features, where rows are documents and columns are NMF components\ndf_norms = pd.DataFrame(norm_features)\n\n# initialize empty dictionary\nsimilarity_dict= {}\n# loop through each row of the df_norms dataframe\nfor i in range(len(norm_features)):\n # isolate one row, by ID number\n row = df_norms.loc[i]\n # calculate the top cosine similarities\n top_sim = (df_norms.dot(row)).nlargest()\n # append results to dictionary\n similarity_dict[i] = (top_sim.index, top_sim) ", "Part 3: Use the cosine similarity results to explore how (or if) President Trump's tweets and official actions correlate\n\nPart 3A: Find Twitter documents that have at least one federal document in their top 5 cosine similarity scores (and vice versa)\n<p> Using the results of part 2C, find which types of documents are the most similar, then sum the labels (0=twitter, 1= federal document). 
If similar documents are a mix of tweets and federal documents, then the sum of their values will be either 1, 2, 3, or 4.\n<ul>\n <li> Create a dataframe with the document ID number as the index and the document type label (tweet = 0, fed_doc = 1)</li>\n <li> Loop through each document in the dataframe and use the similarity dictionary to find the list of most similar document ID numbers and the sum of the similarity scores</li>\n <li> For each list of similar documents, sum the value of the document type labels. If the sum value is 1, 2, 3, or 4, that means there are both tweets and federal documents in the group</li>\n\n</ul>\n\n</p>", "# dataframe with document ID and labels\ndoc_label_df = comb_text[['label', 'ID']].copy().set_index('ID')\n\n# initialize list for the sum of all similar documents' labels\nlabel_sums =[]\nsimilarity_score_sum = []\n# loop through all of the documents\nfor doc_num in doc_label_df.index:\n # sum the similarity scores\n similarity_sum = similarity_dict[doc_num][1].sum()\n similarity_score_sum.append(similarity_sum)\n \n \n #find the list of similar documents\n similar_doc_ID_list = list(similarity_dict[doc_num][0]) \n # loop through labels\n s_label = 0\n for ID_num in similar_doc_ID_list:\n # sum the label values for each similar document\n s_label = s_label + doc_label_df.loc[ID_num].label\n \n # append the sum of the labels for ONE document\n label_sums.append(s_label)\n\n \n\n# add the similarity score sum to dataframe as separate column\ndoc_label_df['similarity_score_sum'] = similarity_score_sum\n\n# add the similar document's summed label value to the dataframe as a separate column\ndoc_label_df['sum_of_labels'] = label_sums \n", "Part 3B: Look at the topics of tweets that have similar federal documents (and vice versa)\n<p> Isolate documents with mixed types of similar documents and high similarity scores\n<ul>\n <li> Filter dataframe to include only top_similar_label_sums with a value of 1, 2, 3, or 4</li>\n <li> Filter again to 
only include groups with high combined similarity scores</li>\n <li> Remove any duplicate groups </li>\n\n</ul>\n\n</p>", "# Filter dataframe for federal documents with similar tweets, and vice versa\ndf_filtered = doc_label_df[doc_label_df['sum_of_labels'] != 0][doc_label_df['sum_of_labels'] != 5].copy().reset_index()\n\n# Make sure it worked\nprint(df_filtered.head())\nprint(len(df_filtered))\n\n# Look at the ones that have all top 5 documents with a cosine similarity score of 0.9 or above. \n#The sum of scores needs to be 4.6 or higher\nsimilar_score_min = 4.6\nhighly_similar = df_filtered[df_filtered.similarity_score_sum >= similar_score_min]", "Remove duplicate highly similar groups", "# create a list of all the group lists\ndoc_groups = []\nfor doc_id in highly_similar.ID:\n doc_groups.append(sorted(list(similarity_dict[doc_id][0])))\n\n# make the interior lists tuples, then make a set of them\nunique_groups = set([tuple(x) for x in doc_groups])\n\nunique_groups", "Part 3C: Manually look at the documents. Are they similar?\nComponents = 100 , Highly similar score = 4.9\n<p> Four of the 5 unique groups are basically the same \n <ul> {(58, 80, 105, 149, 1139),\n (58, 80, 126, 149, 1139),\n (58, 80, 126, 185, 1139),\n (58, 80, 149, 185, 1139),\n (131, 170, 478, 479, 2044)}\n </ul>\n\n Those components (58, 80, 105, 126, 149, 185, 1139) are all about national emergencies. The fifth group is about national security and national emergencies\n\n</p>\n\nComponents = 260 , Highly similar cutoff score = 4.6\n6 unique groups can be further distilled to one set (27, 28, 229, 248, 196, 203, 2576, 2546, 204, 1151, 1892)", "print(comb_text.texts.loc[1892])\nprint(comb_text.texts.loc[27])", "Conclusion\n<p>\n There do seem to be some general similarities between President Trump's tweets and official federal action. However, the topics are quite vague. 
For example, tweets about specific White House officials are grouped with the federal documents that define who is on different committees in the Executive Office. \n</p>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
julienchastang/unidata-python-workshop
notebooks/Surface_Data/Surface Data with Siphon and MetPy.ipynb
mit
[ "<a name=\"top\"></a>\n<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Working with Surface Observations in Siphon and MetPy</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://weather-geek.net/images/metar_what.png\" alt=\"METAR\" style=\"height: 200px;\"></div>\n\nOverview:\n\nTeaching: 20 minutes\nExercises: 20 minutes\n\nQuestions\n\nWhat's the best way to get surface station data from a THREDDS data server?\nWhat's the best way to make a station plot of data?\nHow can I request a time series of data for a single station?\n\nObjectives\n\n<a href=\"#ncss\">Use the netCDF Subset Service (NCSS) to request a portion of the data</a>\n<a href=\"#stationplot\">Download data for a single time across stations and create a station plot</a>\n<a href=\"#timeseries\">Request a time series of data and plot</a>\n\n<a name=\"ncss\"></a>\n1. Using NCSS to get point data", "from siphon.catalog import TDSCatalog\n\n# copied from the browser url box\nmetar_cat_url = ('http://thredds.ucar.edu/thredds/catalog/'\n 'irma/metar/catalog.xml?dataset=irma/metar/Metar_Station_Data_-_Irma_fc.cdmr')\n\n# Parse the xml\ncatalog = TDSCatalog(metar_cat_url)\n\n# what datasets are here?\nprint(list(catalog.datasets))\n\nmetar_dataset = catalog.datasets['Feature Collection']", "Once we've grabbed the \"Feature Collection\" dataset, we can request a subset of the data:", "# Can safely ignore the warnings\nncss = metar_dataset.subset()", "What variables do we have available?", "ncss.variables", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"stationplot\"></a>\n2. 
Making a station plot\n\nMake new NCSS query\nRequest data closest to a time", "from datetime import datetime\n\nquery = ncss.query()\nquery.lonlat_box(north=34, south=24, east=-80, west=-90)\nquery.time(datetime(2017, 9, 10, 12))\nquery.variables('temperature', 'dewpoint', 'altimeter_setting',\n 'wind_speed', 'wind_direction', 'sky_coverage')\nquery.accept('csv')\n\n# Get the data\ndata = ncss.get_data(query)\ndata", "Now we need to pull apart the data and perform some modifications, like converting winds to components and convert sky coverage percent to codes (octets) suitable for plotting.", "import numpy as np\n\nimport metpy.calc as mpcalc\nfrom metpy.units import units\n\n# Since we used the CSV data, this is just a dictionary of arrays\nlats = data['latitude']\nlons = data['longitude']\ntair = data['temperature']\ndewp = data['dewpoint']\nalt = data['altimeter_setting']\n\n# Convert wind to components\nu, v = mpcalc.wind_components(data['wind_speed'] * units.knots, data['wind_direction'] * units.degree)\n\n# Need to handle missing (NaN) and convert to proper code\ncloud_cover = 8 * data['sky_coverage'] / 100.\ncloud_cover[np.isnan(cloud_cover)] = 10\ncloud_cover = cloud_cover.astype(np.int)\n\n# For some reason these come back as bytes instead of strings\nstid = np.array([s.tostring().decode() for s in data['station']])", "Create the map using cartopy and MetPy!\nOne way to create station plots with MetPy is to create an instance of StationPlot and call various plot methods, like plot_parameter, to plot arrays of data at locations relative to the center point.\nIn addition to plotting values, StationPlot has support for plotting text strings, symbols, and plotting values using custom formatting.\nPlotting symbols involves mapping integer values to various custom font glyphs in our custom weather symbols font. MetPy provides mappings for converting WMO codes to their appropriate symbol. 
The sky_cover function below is one such mapping.", "%matplotlib inline\n\nimport cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nimport matplotlib.pyplot as plt\n\nfrom metpy.plots import StationPlot, sky_cover\n\n# Set up a plot with map features\nfig = plt.figure(figsize=(12, 12))\nproj = ccrs.Stereographic(central_longitude=-95, central_latitude=35)\nax = fig.add_subplot(1, 1, 1, projection=proj)\nax.add_feature(cfeature.STATES, edgecolor='black')\nax.coastlines(resolution='50m')\nax.gridlines()\n\n# Create a station plot pointing to an Axes to draw on as well as the location of points\nstationplot = StationPlot(ax, lons, lats, transform=ccrs.PlateCarree(),\n fontsize=12)\nstationplot.plot_parameter('NW', tair, color='red')\n\n# Add wind barbs\nstationplot.plot_barb(u, v)\n\n# Plot the sky cover symbols in the center. We give it the integer code values that\n# should be plotted, as well as a mapping class that can convert the integer values\n# to the appropriate font glyph.\nstationplot.plot_symbol('C', cloud_cover, sky_cover)", "Notice how there are so many overlapping stations? There's a utility in MetPy to help with that: reduce_point_density. 
This returns a mask we can apply to data to filter the points.", "# Project points so that we're filtering based on the way the stations are laid out on the map\nproj = ccrs.Stereographic(central_longitude=-95, central_latitude=35)\nxy = proj.transform_points(ccrs.PlateCarree(), lons, lats)\n\n# Reduce point density so that there's only one point within a 200km circle\nmask = mpcalc.reduce_point_density(xy, 200000)", "Now we just plot with arr[mask] for every arr of data we use in plotting.", "# Set up a plot with map features\nfig = plt.figure(figsize=(12, 12))\nax = fig.add_subplot(1, 1, 1, projection=proj)\nax.add_feature(cfeature.STATES, edgecolor='black')\nax.coastlines(resolution='50m')\nax.gridlines()\n\n# Create a station plot pointing to an Axes to draw on as well as the location of points\nstationplot = StationPlot(ax, lons[mask], lats[mask], transform=ccrs.PlateCarree(),\n fontsize=12)\nstationplot.plot_parameter('NW', tair[mask], color='red')\nstationplot.plot_barb(u[mask], v[mask])\nstationplot.plot_symbol('C', cloud_cover[mask], sky_cover)", "More examples for MetPy Station Plots:\n- MetPy Examples\n- MetPy Symbol list\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Modify the station plot (reproduced below) to include dewpoint, altimeter setting, as well as the station id. The station id can be added using the `plot_text` method on `StationPlot`.</li>\n <li>Re-mask the data to be a bit more finely spaced, say: 75km</li>\n <li>Bonus Points: Use the `formatter` argument to `plot_parameter` to only plot the 3 significant digits of altimeter setting. 
(Tens, ones, tenths)</li>\n </ul>\n</div>", "# Use reduce_point_density\n\n# Set up a plot with map features\nfig = plt.figure(figsize=(12, 12))\nax = fig.add_subplot(1, 1, 1, projection=proj)\nax.add_feature(cfeature.STATES, edgecolor='black')\nax.coastlines(resolution='50m')\nax.gridlines()\n\n# Create a station plot pointing to an Axes to draw on as well as the location of points\n\n# Plot dewpoint\n\n# Plot altimeter setting--formatter can take a function that formats values\n\n# Plot station id\n\n# %load solutions/reduce_density.py\n", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"timeseries\"></a>\n3. Time Series request and plot\n\nLet's say we want the past two days' worth of data...\n...for Miami (i.e. the lat/lon)\n...for the variables altimeter setting, temperature, dewpoint, wind direction, and wind speed", "from datetime import timedelta\n\n# define the time range we are interested in\nend_time = datetime(2017, 9, 12, 0)\nstart_time = end_time - timedelta(days=2)\n\n# build the query\nquery = ncss.query()\nquery.lonlat_point(-80.25, 25.8)\nquery.time_range(start_time, end_time)\nquery.variables('altimeter_setting', 'temperature', 'dewpoint',\n                'wind_direction', 'wind_speed')\nquery.accept('csv')", "Let's get the data!", "data = ncss.get_data(query)\n\nprint(list(data.keys()))", "What station did we get?", "station_id = data['station'][0].tostring()\nprint(station_id)", "That indicates that we have a Python bytes object, containing the 0-255 values corresponding to 'K', 'M', 'I', 'A'. We can decode those bytes into a string:", "station_id = station_id.decode('ascii')\nprint(station_id)", "Let's get the time into datetime objects. 
We see we have an array with byte strings in it, like station id above.", "data['time']", "So we can use a list comprehension to turn this into a list of date time objects:", "time = [datetime.strptime(s.decode('ascii'), '%Y-%m-%dT%H:%M:%SZ') for s in data['time']]", "Now for the obligatory time series plot...", "from matplotlib.dates import DateFormatter, AutoDateLocator\n\nfig, ax = plt.subplots(figsize=(10, 6))\nax.plot(time, data['wind_speed'], color='tab:blue')\n\nax.set_title(f'Site: {station_id} Date: {time[0]:%Y/%m/%d}')\nax.set_xlabel('Hour of day')\nax.set_ylabel('Wind Speed')\nax.grid(True)\n\n# Improve on the default ticking\nlocator = AutoDateLocator()\nhoursFmt = DateFormatter('%H')\nax.xaxis.set_major_locator(locator)\nax.xaxis.set_major_formatter(hoursFmt)", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Pick a different location</li>\n <li>Plot temperature and dewpoint together on the same plot</li>\n </ul>\n</div>", "# Your code goes here\n\n\n# %load solutions/time_series.py", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
development/tutorials/settings.ipynb
gpl-3.0
[ "Advanced: Settings\nThe Bundle also contains a few Parameters that provide settings for that Bundle. Note that these are not system-wide and only apply to the current Bundle. They are, however, maintained when saving and loading a Bundle.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.4,<2.5\"", "As always, let's do imports and initialize a logger and a new Bundle.", "import phoebe\nfrom phoebe import u # units\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()", "Accessing Settings\nSettings are found with their own context in the Bundle and can be accessed through the get_setting method", "b.get_setting()", "or via filtering/twig access", "b['setting']", "and can be set as any other Parameter in the Bundle\nAvailable Settings\nNow let's look at each of the available settings and what they do\nphoebe_version\nphoebe_version is a read-only parameter in the settings to store the version of PHOEBE used.\ndict_set_all\ndict_set_all is a BooleanParameter (defaults to False) that controls whether attempting to set a value to a ParameterSet via dictionary access will set all the values in that ParameterSet (if True) or raise an error (if False)", "b['dict_set_all@setting']\n\nb['teff@component']", "In our default binary there are temperature ('teff') parameters for each of the components ('primary' and 'secondary'). If we were to do:\nb['teff@component'] = 6000\nthis would raise an error. 
Under-the-hood, this is simply calling:\nb.set_value('teff@component', 6000)\nwhich of course would also raise an error.\nIn order to set both temperatures to 6000, you would either have to loop over the components or call the set_value_all method:", "b.set_value_all('teff@component', 4000)\nprint(b['value@teff@primary@component'], b['value@teff@secondary@component'])", "If you want dictionary access to use set_value_all instead of set_value, you can enable this parameter", "b['dict_set_all@setting'] = True\nb['teff@component'] = 8000\nprint(b['value@teff@primary@component'], b['value@teff@secondary@component'])", "Now let's disable this so it doesn't confuse us while looking at the other options", "b.set_value_all('teff@component', 6000)\nb['dict_set_all@setting'] = False", "dict_filter\ndict_filter is a Parameter that accepts a dictionary. This dictionary will then always be sent to the filter call which is done under-the-hood during dictionary access.", "b['incl']", "In our default binary, there are several inclination parameters - one for each component ('primary', 'secondary', 'binary') and one with the constraint context (to keep the inclinations aligned).\nThis can be inconvenient... if you want to set the value of the binary's inclination, you must always provide extra information (like '@component').\nInstead, we can always have the dictionary access search in the component context by doing the following", "b['dict_filter@setting'] = {'context': 'component'}\n\nb['incl']", "Now we no longer see the constraint parameters.\nAll parameters are always accessible with method access:", "b.filter(qualifier='incl')", "Now let's reset this option... 
keeping in mind that we no longer have access to the 'setting' context through twig access, we'll have to use methods to clear the dict_filter", "b.set_value('dict_filter@setting', {})", "run_checks_compute (/figure/solver/solution)\nThe run_checks_compute option allows setting the default compute option(s) sent to b.run_checks, including warnings in the logger raised by interactive checks (see phoebe.interactive_checks_on).\nSimilar options also exist for checks at the figure, solver, and solution level.", "b['run_checks_compute@setting']\n\nb.add_dataset('lc')\nb.add_compute('legacy')\nprint(b.run_checks())\n\nb['run_checks_compute@setting'] = ['phoebe01']\n\nprint(b.run_checks())", "auto_add_figure, auto_remove_figure\nThe auto_add_figure and auto_remove_figure determine whether new figures are automatically added to the Bundle when new datasets, distributions, etc are added. This is False by default within Python, but True by default within the UI clients.", "b['auto_add_figure']\n\nb['auto_add_figure'].description\n\nb['auto_remove_figure']\n\nb['auto_remove_figure'].description", "web_client, web_client_url\nThe web_client and web_client_url settings determine whether the client is opened in a web-browser or with the installed desktop client whenever calling b.ui or b.ui_figures. For more information, see the UI from Jupyter tutorial.", "b['web_client']\n\nb['web_client'].description\n\nb['web_client_url']\n\nb['web_client_url'].description" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/en-snapshot/guide/estimator.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Estimators\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/estimator\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/guide/estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\n\nWarning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.\n\nThis document introduces tf.estimator—a high-level TensorFlow\nAPI. 
Estimators encapsulate the following actions:\n\nTraining\nEvaluation\nPrediction\nExport for serving\n\nTensorFlow implements several pre-made Estimators. Custom estimators are still supported, but mainly as a backwards compatibility measure. Custom estimators should not be used for new code. All Estimators—pre-made or custom ones—are classes based on the tf.estimator.Estimator class.\nFor a quick example, try Estimator tutorials. For an overview of the API design, check the white paper.\nSetup", "!pip install -U tensorflow_datasets\n\nimport tempfile\nimport os\n\nimport tensorflow as tf\nimport tensorflow_datasets as tfds", "Advantages\nSimilar to a tf.keras.Model, an estimator is a model-level abstraction. The tf.estimator provides some capabilities currently still under development for tf.keras. These are:\n\nParameter server based training\nFull TFX integration\n\nEstimators Capabilities\nEstimators provide the following benefits:\n\nYou can run Estimator-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimator-based models on CPUs, GPUs, or TPUs without recoding your model.\nEstimators provide a safe distributed training loop that controls how and when to: \nLoad data\nHandle exceptions\nCreate checkpoint files and recover from failures\nSave summaries for TensorBoard\n\n\n\nWhen writing an application with Estimators, you must separate the data input pipeline from the model. This separation simplifies experiments with different datasets.\nUsing pre-made Estimators\nPre-made Estimators enable you to work at a much higher conceptual level than the base TensorFlow APIs. You no longer have to worry about creating the computational graph or sessions since Estimators handle all the \"plumbing\" for you. Furthermore, pre-made Estimators let you experiment with different model architectures by making only minimal code changes. 
tf.estimator.DNNClassifier, for example, is a pre-made Estimator class that trains classification models based on dense, feed-forward neural networks.\nA TensorFlow program relying on a pre-made Estimator typically consists of the following four steps:\n1. Write an input function\nFor example, you might create one function to import the training set and another function to import the test set. Estimators expect their inputs to be formatted as a pair of objects:\n\nA dictionary in which the keys are feature names and the values are Tensors (or SparseTensors) containing the corresponding feature data\nA Tensor containing one or more labels\n\nThe input_fn should return a tf.data.Dataset that yields pairs in that format. \nFor example, the following code builds a tf.data.Dataset from the Titanic dataset's train.csv file:", "def train_input_fn():\n titanic_file = tf.keras.utils.get_file(\"train.csv\", \"https://storage.googleapis.com/tf-datasets/titanic/train.csv\")\n titanic = tf.data.experimental.make_csv_dataset(\n titanic_file, batch_size=32,\n label_name=\"survived\")\n titanic_batches = (\n titanic.cache().repeat().shuffle(500)\n .prefetch(tf.data.AUTOTUNE))\n return titanic_batches", "The input_fn is executed in a tf.Graph and can also directly return a (features_dict, labels) pair containing graph tensors, but this is error prone outside of simple cases like returning constants.\n2. Define the feature columns.\nEach tf.feature_column identifies a feature name, its type, and any input pre-processing. \nFor example, the following snippet creates three feature columns.\n\nThe first uses the age feature directly as a floating-point input. 
\nThe second uses the class feature as a categorical input.\nThe third uses the embark_town as a categorical input, but uses the hashing trick to avoid the need to enumerate the options, and to set the number of options.\n\nFor further information, check the feature columns tutorial.", "age = tf.feature_column.numeric_column('age')\ncls = tf.feature_column.categorical_column_with_vocabulary_list('class', ['First', 'Second', 'Third']) \nembark = tf.feature_column.categorical_column_with_hash_bucket('embark_town', 32)", "3. Instantiate the relevant pre-made Estimator.\nFor example, here's a sample instantiation of a pre-made Estimator named LinearClassifier:", "model_dir = tempfile.mkdtemp()\nmodel = tf.estimator.LinearClassifier(\n model_dir=model_dir,\n feature_columns=[embark, cls, age],\n n_classes=2\n)", "For more information, you can go to the linear classifier tutorial.\n4. Call a training, evaluation, or inference method.\nAll Estimators provide train, evaluate, and predict methods.", "model = model.train(input_fn=train_input_fn, steps=100)\n\nresult = model.evaluate(train_input_fn, steps=10)\n\nfor key, value in result.items():\n print(key, \":\", value)\n\nfor pred in model.predict(train_input_fn):\n for key, value in pred.items():\n print(key, \":\", value)\n break", "Benefits of pre-made Estimators\nPre-made Estimators encode best practices, providing the following benefits:\n\nBest practices for determining where different parts of the computational graph should run, implementing strategies on a single machine or on a\n cluster.\nBest practices for event (summary) writing and universally useful\n summaries.\n\nIf you don't use pre-made Estimators, you must implement the preceding features yourself.\nCustom Estimators\nThe heart of every Estimator—whether pre-made or custom—is its model function, model_fn, which is a method that builds graphs for training, evaluation, and prediction. 
When you are using a pre-made Estimator, someone else has already implemented the model function. When relying on a custom Estimator, you must write the model function yourself.\n\nNote: A custom model_fn will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies. You should plan to migrate away from tf.estimator with custom model_fn. The alternative APIs are tf.keras and tf.distribute. If you still need an Estimator for some part of your training you can use the tf.keras.estimator.model_to_estimator converter to create an Estimator from a keras.Model.\n\nCreate an Estimator from a Keras model\nYou can convert existing Keras models to Estimators with tf.keras.estimator.model_to_estimator. This is helpful if you want to modernize your model code, but your training pipeline still requires Estimators. \nInstantiate a Keras MobileNet V2 model and compile the model with the optimizer, loss, and metrics to train with:", "keras_mobilenet_v2 = tf.keras.applications.MobileNetV2(\n input_shape=(160, 160, 3), include_top=False)\nkeras_mobilenet_v2.trainable = False\n\nestimator_model = tf.keras.Sequential([\n keras_mobilenet_v2,\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Dense(1)\n])\n\n# Compile the model\nestimator_model.compile(\n optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])", "Create an Estimator from the compiled Keras model. 
The initial model state of the Keras model is preserved in the created Estimator:", "est_mobilenet_v2 = tf.keras.estimator.model_to_estimator(keras_model=estimator_model)", "Treat the derived Estimator as you would with any other Estimator.", "IMG_SIZE = 160 # All images will be resized to 160x160\n\ndef preprocess(image, label):\n image = tf.cast(image, tf.float32)\n image = (image/127.5) - 1\n image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))\n return image, label\n\ndef train_input_fn(batch_size):\n data = tfds.load('cats_vs_dogs', as_supervised=True)\n train_data = data['train']\n train_data = train_data.map(preprocess).shuffle(500).batch(batch_size)\n return train_data", "To train, call Estimator's train function:", "est_mobilenet_v2.train(input_fn=lambda: train_input_fn(32), steps=50)", "Similarly, to evaluate, call the Estimator's evaluate function:", "est_mobilenet_v2.evaluate(input_fn=lambda: train_input_fn(32), steps=10)", "For more details, please refer to the documentation for tf.keras.estimator.model_to_estimator.\nSaving object-based checkpoints with Estimator\nEstimators by default save checkpoints with variable names rather than the object graph described in the Checkpoint guide. tf.train.Checkpoint will read name-based checkpoints, but variable names may change when moving parts of a model outside of the Estimator's model_fn. For forwards compatibility saving object-based checkpoints makes it easier to train a model inside an Estimator and then use it outside of one.", "import tensorflow.compat.v1 as tf_compat\n\ndef toy_dataset():\n inputs = tf.range(10.)[:, None]\n labels = inputs * 5. 
+ tf.range(5.)[None, :]\n return tf.data.Dataset.from_tensor_slices(\n dict(x=inputs, y=labels)).repeat().batch(2)\n\nclass Net(tf.keras.Model):\n \"\"\"A simple linear model.\"\"\"\n\n def __init__(self):\n super(Net, self).__init__()\n self.l1 = tf.keras.layers.Dense(5)\n\n def call(self, x):\n return self.l1(x)\n\ndef model_fn(features, labels, mode):\n net = Net()\n opt = tf.keras.optimizers.Adam(0.1)\n ckpt = tf.train.Checkpoint(step=tf_compat.train.get_global_step(),\n optimizer=opt, net=net)\n with tf.GradientTape() as tape:\n output = net(features['x'])\n loss = tf.reduce_mean(tf.abs(output - features['y']))\n variables = net.trainable_variables\n gradients = tape.gradient(loss, variables)\n return tf.estimator.EstimatorSpec(\n mode,\n loss=loss,\n train_op=tf.group(opt.apply_gradients(zip(gradients, variables)),\n ckpt.step.assign_add(1)),\n # Tell the Estimator to save \"ckpt\" in an object-based format.\n scaffold=tf_compat.train.Scaffold(saver=ckpt))\n\ntf.keras.backend.clear_session()\nest = tf.estimator.Estimator(model_fn, './tf_estimator_example/')\nest.train(toy_dataset, steps=10)", "tf.train.Checkpoint can then load the Estimator's checkpoints from its model_dir.", "opt = tf.keras.optimizers.Adam(0.1)\nnet = Net()\nckpt = tf.train.Checkpoint(\n step=tf.Variable(1, dtype=tf.int64), optimizer=opt, net=net)\nckpt.restore(tf.train.latest_checkpoint('./tf_estimator_example/'))\nckpt.step.numpy() # From est.train(..., steps=10)", "SavedModels from Estimators\nEstimators export SavedModels through tf.Estimator.export_saved_model.", "input_column = tf.feature_column.numeric_column(\"x\")\n\nestimator = tf.estimator.LinearClassifier(feature_columns=[input_column])\n\ndef input_fn():\n return tf.data.Dataset.from_tensor_slices(\n ({\"x\": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)\nestimator.train(input_fn)", "To save an Estimator you need to create a serving_input_receiver. 
This function builds a part of a tf.Graph that parses the raw data received by the SavedModel. \nThe tf.estimator.export module contains functions to help build these receivers.\nThe following code builds a receiver, based on the feature_columns, that accepts serialized tf.Example protocol buffers, which are often used with tf-serving.", "tmpdir = tempfile.mkdtemp()\n\nserving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n tf.feature_column.make_parse_example_spec([input_column]))\n\nestimator_base_path = os.path.join(tmpdir, 'from_estimator')\nestimator_path = estimator.export_saved_model(estimator_base_path, serving_input_fn)", "You can also load and run that model, from python:", "imported = tf.saved_model.load(estimator_path)\n\ndef predict(x):\n example = tf.train.Example()\n example.features.feature[\"x\"].float_list.value.extend([x])\n return imported.signatures[\"predict\"](\n examples=tf.constant([example.SerializeToString()]))\n\nprint(predict(1.5))\nprint(predict(3.5))", "tf.estimator.export.build_raw_serving_input_receiver_fn allows you to create input functions which take raw tensors rather than tf.train.Examples.\nUsing tf.distribute.Strategy with Estimator (Limited support)\ntf.estimator is a distributed training TensorFlow API that originally supported the async parameter server approach. tf.estimator now supports tf.distribute.Strategy. If you're using tf.estimator, you can change to distributed training with very few changes to your code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs. This support in Estimator is, however, limited. Check out the What's supported now section below for more details.\nUsing tf.distribute.Strategy with Estimator is slightly different than in the Keras case. 
Instead of using strategy.scope, now you pass the strategy object into the RunConfig for the Estimator.\nYou can refer to the distributed training guide for more information.\nHere is a snippet of code that shows this with a premade Estimator LinearRegressor and MirroredStrategy:", "mirrored_strategy = tf.distribute.MirroredStrategy()\nconfig = tf.estimator.RunConfig(\n train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)\nregressor = tf.estimator.LinearRegressor(\n feature_columns=[tf.feature_column.numeric_column('feats')],\n optimizer='SGD',\n config=config)", "Here, you use a premade Estimator, but the same code works with a custom Estimator as well. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This is another difference from Keras where you use the same strategy for both training and eval.\nNow you can train and evaluate this Estimator with an input function:", "def input_fn():\n dataset = tf.data.Dataset.from_tensors(({\"feats\":[1.]}, [1.]))\n return dataset.repeat(1000).batch(10)\nregressor.train(input_fn=input_fn, steps=10)\nregressor.evaluate(input_fn=input_fn, steps=10)", "Another difference to highlight here between Estimator and Keras is the input handling. In Keras, each batch of the dataset is split automatically across the multiple replicas. In Estimator, however, you do not perform automatic batch splitting, nor automatically shard the data across different workers. You have full control over how you want your data to be distributed across workers and devices, and you must provide an input_fn to specify how to distribute your data.\nYour input_fn is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to one replica on that worker, thereby consuming N batches for N replicas on 1 worker. In other words, the dataset returned by the input_fn should provide batches of size PER_REPLICA_BATCH_SIZE. 
And the global batch size for a step can be obtained as PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync.\nWhen performing multi-worker training, you should either split your data across the workers, or shuffle with a random seed on each. You can check an example of how to do this in the Multi-worker training with Estimator tutorial.\nAnd similarly, you can use multi worker and parameter server strategies as well. The code remains the same, but you need to use tf.estimator.train_and_evaluate, and set TF_CONFIG environment variables for each binary running in your cluster.\n<a name=\"estimator_support\"></a>\nWhat's supported now?\nThere is limited support for training with Estimator using all strategies except TPUStrategy. Basic training and evaluation should work, but a number of advanced features such as v1.train.Scaffold do not. There may also be a number of bugs in this integration and there are no plans to actively improve this support (the focus is on Keras and custom training loop support). If at all possible, you should prefer to use tf.distribute with those APIs instead.\n| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |\n|:--------------- |:------------------ |:------------- |:----------------------------- |:------------------------ |:------------------------- |\n| Estimator API | Limited support | Not supported | Limited support | Limited support | Limited support |\nExamples and tutorials\nHere are some end-to-end examples that show how to use various strategies with Estimator:\n\nThe Multi-worker Training with Estimator tutorial shows how you can train with multiple workers using MultiWorkerMirroredStrategy on the MNIST dataset.\nAn end-to-end example of running multi-worker training with distribution strategies in tensorflow/ecosystem using Kubernetes templates. 
It starts with a Keras model and converts it to an Estimator using the tf.keras.estimator.model_to_estimator API.\nThe official ResNet50 model, which can be trained using either MirroredStrategy or MultiWorkerMirroredStrategy." ]
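The per-replica batching contract described in the Estimator guide above is easy to get wrong; a minimal arithmetic sketch (the helper names are ours, not part of tf.estimator or tf.distribute):

```python
# Under tf.distribute, each worker's input_fn must yield batches of
# PER_REPLICA_BATCH_SIZE; the global batch consumed per step is that
# size multiplied by the number of replicas kept in sync.

def global_batch_size(per_replica_batch_size: int, num_replicas_in_sync: int) -> int:
    """Examples consumed per training step across all replicas."""
    return per_replica_batch_size * num_replicas_in_sync

def per_replica_batch(global_batch: int, num_replicas_in_sync: int) -> int:
    """Batch size the input_fn should produce to reach a target global batch."""
    if global_batch % num_replicas_in_sync != 0:
        raise ValueError("global batch must divide evenly across replicas")
    return global_batch // num_replicas_in_sync

# Example: 4 replicas on one worker, 16 examples per replica -> 64 per step.
assert global_batch_size(16, 4) == 64
assert per_replica_batch(64, 4) == 16
```

In real code the replica count comes from `strategy.num_replicas_in_sync` on the `tf.distribute.Strategy` object passed into the `RunConfig`.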
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sarvex/tensorflow
tensorflow/lite/examples/experimental_new_converter/Keras_LSTM_fusion_Codelab.ipynb
apache-2.0
[ "Overview\nThis CodeLab demonstrates how to build a fused TFLite LSTM model for MNIST recognition using Keras, and how to convert it to TensorFlow Lite.\nThe CodeLab is very similar to the Keras LSTM CodeLab. However, we're creating fused LSTM ops rather than the unfused version.\nAlso note: We're not trying to build the model to be a real world application, but only demonstrate how to use TensorFlow Lite. You can build a much better model using CNN models. For a more canonical LSTM codelab, please see here.\nStep 0: Prerequisites\nIt's recommended to try this feature with the newest TensorFlow nightly pip build.", "!pip install tf-nightly", "Step 1: Build the MNIST LSTM model.", "import numpy as np\nimport tensorflow as tf\n\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Input(shape=(28, 28), name='input'),\n tf.keras.layers.LSTM(20, time_major=False, return_sequences=True),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')\n])\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\nmodel.summary()", "Step 2: Train & Evaluate the model.\nWe will train the model using MNIST data.", "# Load MNIST dataset.\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\nx_train = x_train.astype(np.float32)\nx_test = x_test.astype(np.float32)\n\n# Change this to True if you want to test the flow rapidly.\n# Train with a small dataset and only 1 epoch. 
The model will work poorly\n# but this provides a fast way to test if the conversion works end to end.\n_FAST_TRAINING = False\n_EPOCHS = 5\nif _FAST_TRAINING:\n _EPOCHS = 1\n _TRAINING_DATA_COUNT = 1000\n x_train = x_train[:_TRAINING_DATA_COUNT]\n y_train = y_train[:_TRAINING_DATA_COUNT]\n\nmodel.fit(x_train, y_train, epochs=_EPOCHS)\nmodel.evaluate(x_test, y_test, verbose=0)", "Step 3: Convert the Keras model to TensorFlow Lite model.", "run_model = tf.function(lambda x: model(x))\n# This is important, let's fix the input size.\nBATCH_SIZE = 1\nSTEPS = 28\nINPUT_SIZE = 28\nconcrete_func = run_model.get_concrete_function(\n tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))\n\n# model directory.\nMODEL_DIR = \"keras_lstm\"\nmodel.save(MODEL_DIR, save_format=\"tf\", signatures=concrete_func)\n\nconverter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)\ntflite_model = converter.convert()", "Step 4: Check the converted TensorFlow Lite model.\nNow load the TensorFlow Lite model and use the TensorFlow Lite python interpreter to verify the results.", "# Run the model with TensorFlow to get expected results.\nTEST_CASES = 10\n\n# Run the model with TensorFlow Lite\ninterpreter = tf.lite.Interpreter(model_content=tflite_model)\ninterpreter.allocate_tensors()\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\nfor i in range(TEST_CASES):\n expected = model.predict(x_test[i:i+1])\n interpreter.set_tensor(input_details[0][\"index\"], x_test[i:i+1, :, :])\n interpreter.invoke()\n result = interpreter.get_tensor(output_details[0][\"index\"])\n\n # Assert if the result of TFLite model is consistent with the TF model.\n np.testing.assert_almost_equal(expected, result)\n print(\"Done. 
The result of TensorFlow matches the result of TensorFlow Lite.\")\n\n # Please note: the TFLite fused LSTM kernel is stateful, so we need to reset\n # the states.\n # Clean up internal states.\n interpreter.reset_all_variables()", "Step 5: Let's inspect the converted TFLite model.\nLet's check the model; you can see the LSTM will be in its fused format." ]
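Before inspecting the converted model's ops, a cheap structural sanity check is possible with the standard library alone: TFLite models are FlatBuffers whose file identifier is "TFL3", stored at byte offset 4. A minimal sketch (the helper name is ours, not a TFLite API):

```python
# TFLite models are FlatBuffers; bytes 4..8 hold the schema file
# identifier "TFL3" (the first 4 bytes are the root table offset).

TFLITE_FILE_IDENTIFIER = b"TFL3"

def looks_like_tflite(model_bytes: bytes) -> bool:
    """Cheap check that a buffer carries the TFLite FlatBuffer identifier."""
    return len(model_bytes) >= 8 and model_bytes[4:8] == TFLITE_FILE_IDENTIFIER

# e.g. after `tflite_model = converter.convert()`:
#     assert looks_like_tflite(tflite_model)
```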
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
amueller/pydata-amsterdam-2016
Preprocessing and Pipelines.ipynb
cc0-1.0
[ "Preprocessing and Pipelines", "from sklearn.datasets import load_digits\nfrom sklearn.cross_validation import train_test_split\ndigits = load_digits()\nX_train, X_test, y_train, y_test = train_test_split(digits.data,\n digits.target)", "When cross-validating pipelines that include scaling, we need to estimate the mean and standard deviation separately for each fold.\nTo do that, we build a pipeline.", "from sklearn.pipeline import Pipeline, make_pipeline\nfrom sklearn.svm import SVC\nfrom sklearn.preprocessing import StandardScaler\n\nstandard_scaler = StandardScaler()\nstandard_scaler.fit(X_train)\nX_train_scaled = standard_scaler.transform(X_train)\nsvm = SVC().fit(X_train_scaled, y_train)\n\n#pipeline = Pipeline([(\"scaler\", StandardScaler()),\n# (\"svm\", SVC())])\n# short version:\npipeline = make_pipeline(StandardScaler(), SVC())\n\npipeline.fit(X_train, y_train)\n\npipeline.score(X_test, y_test)\n\npipeline.predict(X_test)", "Cross-validation with a pipeline", "from sklearn.cross_validation import cross_val_score\ncross_val_score(pipeline, X_train, y_train)", "Grid Search with a pipeline", "import numpy as np\nfrom sklearn.grid_search import GridSearchCV\n\nparam_grid = {'svc__C': 10. ** np.arange(-3, 3),\n 'svc__gamma' : 10. ** np.arange(-3, 3)\n }\n\ngrid_pipeline = GridSearchCV(pipeline, param_grid=param_grid) \n\ngrid_pipeline.fit(X_train, y_train)\n\ngrid_pipeline.score(X_test, y_test)", "Exercise\nMake a pipeline out of the StandardScaler and KNeighborsClassifier and search over the number of neighbors.", "# %load solutions/pipeline_knn.py" ]
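The exercise's solution file isn't shown above; one possible solution, sketched against modern scikit-learn module paths (sklearn.model_selection has since replaced the sklearn.cross_validation and sklearn.grid_search modules this notebook imports):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# make_pipeline names each step after its lowercased class name, so the
# KNN parameter is addressed as kneighborsclassifier__n_neighbors.
knn_pipeline = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {'kneighborsclassifier__n_neighbors': [1, 3, 5, 7, 9]}

grid = GridSearchCV(knn_pipeline, param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
test_score = grid.score(X_test, y_test)
```

Because the scaler sits inside the pipeline, it is refit on each cross-validation fold's training portion, which is exactly the leakage-avoiding behavior the notebook motivates.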
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tiagoantao/biopython-notebook
notebooks/07 - Blast.ipynb
mit
[ "Source of the materials: Biopython cookbook (Adapted)\n<font color='red'>\nNew status: Draft</font>\nBLAST\nRunning BLAST over the Internet\nSaving blast output\nRunning BLAST locally\nParsing BLAST output\nThe BLAST record class\nParsing plain-text BLAST output\nHey, everybody loves BLAST right? I mean, geez, how can it get any\neasier to do comparisons between one of your sequences and every other\nsequence in the known world? But, of course, this section isn’t about\nhow cool BLAST is, since we already know that. It is about the problem\nwith BLAST – it can be really difficult to deal with the volume of data\ngenerated by large runs, and to automate BLAST runs in general.\nFortunately, the Biopython folks know this only too well, so they’ve\ndeveloped lots of tools for dealing with BLAST and making things much\neasier. This section details how to use these tools and do useful things\nwith them.\nDealing with BLAST can be split up into two steps, both of which can be\ndone from within Biopython. Firstly, running BLAST for your query\nsequence(s), and getting some output. Secondly, parsing the BLAST output\nin Python for further analysis.\nYour first introduction to running BLAST was probably via the NCBI\nweb-service. In fact, there are lots of ways you can run BLAST, which\ncan be categorised in several ways. The most important distinction is\nrunning BLAST locally (on your own machine), and running BLAST remotely\n(on another machine, typically the NCBI servers). We’re going to start\nthis chapter by invoking the NCBI online BLAST service from within a\nPython script.\nNOTE: The following Chapter [chapter:searchio] describes\nBio.SearchIO, an experimental module in Biopython. We intend this to\nultimately replace the older Bio.Blast module, as it provides a more\ngeneral framework handling other related sequence searching tools as\nwell. 
However, until that is declared stable, for production code please\ncontinue to use the Bio.Blast module for dealing with NCBI BLAST.\nRunning BLAST over the Internet\nWe use the function qblast() in the Bio.Blast.NCBIWWW module to call\nthe online version of BLAST. This has three non-optional arguments:\n\n\nThe first argument is the blast program to use for the search, as a\n lower case string. The options and descriptions of the programs are\n available at\n https://blast.ncbi.nlm.nih.gov/Blast.cgi. Currently\n qblast only works with blastn, blastp, blastx, tblastn and tblastx.\n\n\nThe second argument specifies the databases to search against.\n Again, the options for this are available on the NCBI web pages at\n http://www.ncbi.nlm.nih.gov/BLAST/blast_databases.shtml.\n\n\nThe third argument is a string containing your query sequence. This\n can either be the sequence itself, the sequence in fasta format, or\n an identifier like a GI number.\n\n\nThe qblast function also takes a number of other optional arguments which\nare basically analogous to the different parameters you can set on the\nBLAST web page. We’ll just highlight a few of them here:\n\nThe argument url_base sets the base URL for running BLAST over the internet. By default it connects to the NCBI, but one can use this to connect to an instance of NCBI BLAST running in the cloud. Please refer to the documentation for the qblast function for further details.\n\nThe qblast function can return the BLAST results in various\n formats, which you can choose with the optional format_type\n keyword: \"HTML\", \"Text\", \"ASN.1\", or \"XML\". 
The default is\n \"XML\", as that is the format expected by the parser, described in\n section [sec:parsing-blast] below.\n\n\nThe argument expect sets the expectation or e-value threshold.\n\n\nFor more about the optional BLAST arguments, we refer you to the NCBI’s\nown documentation, or that built into Biopython:", "from Bio.Blast import NCBIWWW\nhelp(NCBIWWW.qblast)", "Note that the default settings on the NCBI BLAST website are not quite\nthe same as the defaults on QBLAST. If you get different results, you’ll\nneed to check the parameters (e.g., the expectation value threshold and\nthe gap values).\nFor example, if you have a nucleotide sequence you want to search\nagainst the nucleotide database (nt) using BLASTN, and you know the GI\nnumber of your query sequence, you can use:", "from Bio.Blast import NCBIWWW\nresult_handle = NCBIWWW.qblast(\"blastn\", \"nt\", \"8332116\")", "Alternatively, if we have our query sequence already in a FASTA\nformatted file, we just need to open the file and read in this record as\na string, and use that as the query argument:", "from Bio.Blast import NCBIWWW\nfasta_string = open(\"data/m_cold.fasta\").read()\nresult_handle = NCBIWWW.qblast(\"blastn\", \"nt\", fasta_string)", "We could also have read in the FASTA file as a SeqRecord and then\nsupplied just the sequence itself:", "from Bio.Blast import NCBIWWW\nfrom Bio import SeqIO\nrecord = SeqIO.read(\"data/m_cold.fasta\", format=\"fasta\")\nresult_handle = NCBIWWW.qblast(\"blastn\", \"nt\", record.seq)", "Supplying just the sequence means that BLAST will assign an identifier\nfor your sequence automatically. 
You might prefer to use the SeqRecord\nobject’s format method to make a FASTA string (which will include the\nexisting identifier):", "from Bio.Blast import NCBIWWW\nfrom Bio import SeqIO\nrecord = SeqIO.read(\"data/m_cold.fasta\", format=\"fasta\")\nresult_handle = NCBIWWW.qblast(\"blastn\", \"nt\", record.format(\"fasta\"))", "This approach makes more sense if you have your sequence(s) in a\nnon-FASTA file format which you can extract using Bio.SeqIO (see\nChapter 5 - Sequence Input and Output.)\nWhatever arguments you give the qblast() function, you should get back\nyour results in a handle object (by default in XML format). The next\nstep would be to parse the XML output into Python objects representing\nthe search results (Section [sec:parsing-blast]), but you might want\nto save a local copy of the output file first. I find this especially\nuseful when debugging my code that extracts info from the BLAST results\n(because re-running the online search is slow and wastes the NCBI\ncomputer time).\nSaving blast output\nWe need to be a bit careful since we can use result_handle.read() to\nread the BLAST output only once – calling result_handle.read() again\nreturns an empty string.", "with open(\"data/my_blast.xml\", \"w\") as out_handle:\n out_handle.write(result_handle.read())\n result_handle.close()", "After doing this, the results are in the file my_blast.xml and the\noriginal handle has had all its data extracted (so we closed it).\nHowever, the parse function of the BLAST parser (described\nin [sec:parsing-blast]) takes a file-handle-like object, so we can\njust open the saved file for input:", " result_handle = open(\"data/my_blast.xml\")", "Now that we’ve got the BLAST results back into a handle again, we are\nready to do something with them, so this leads us right into the parsing\nsection (see Section [sec:parsing-blast] below). 
You may want to jump\nahead to that now ….\nRunning BLAST locally\nIntroduction\nRunning BLAST locally (as opposed to over the internet, see\nSection [sec:running-www-blast]) has at least two major advantages:\n\n\nLocal BLAST may be faster than BLAST over the internet;\n\n\nLocal BLAST allows you to make your own database to search for\n sequences against.\n\n\nDealing with proprietary or unpublished sequence data can be another\nreason to run BLAST locally. You may not be allowed to redistribute the\nsequences, so submitting them to the NCBI as a BLAST query would not be\nan option.\nUnfortunately, there are some major drawbacks too – installing all the\nbits and getting it set up right takes some effort:\n\n\nLocal BLAST requires command line tools to be installed.\n\n\nLocal BLAST requires (large) BLAST databases to be set up (and\n potentially kept up to date).\n\n\nTo further confuse matters there are several different BLAST packages\navailable, and there are also other tools which can produce imitation\nBLAST output files, such as BLAT.\nStandalone NCBI BLAST+\nThe “new” NCBI\nBLAST+\nsuite was released in 2009. This replaces the old NCBI “legacy” BLAST\npackage (see below).\nThis section will show briefly how to use these tools from within\nPython. If you have already read or tried the alignment tool examples in\nSection [sec:alignment-tools] this should all seem quite\nstraightforward. First, we construct a command line string (as you would\ntype in at the command line prompt if running standalone BLAST by hand).\nThen we can execute this command from within Python.\nFor example, taking a FASTA file of gene nucleotide sequences, you might\nwant to run a BLASTX (translation) search against the non-redundant (NR)\nprotein database. 
Assuming you (or your systems administrator) has\ndownloaded and installed the NR database, you might run:\n```\nblastx -query opuntia.fasta -db nr -out opuntia.xml -evalue 0.001 -outfmt 5\n```\nThis should run BLASTX against the NR database, using an expectation\ncut-off value of $0.001$ and produce XML output to the specified file\n(which we can then parse). On my computer this takes about six minutes -\na good reason to save the output to a file so you can repeat any\nanalysis as needed.\nFrom within Biopython we can use the NCBI BLASTX wrapper from the\nBio.Blast.Applications module to build the command line string, and\nrun it:", "from Bio.Blast.Applications import NcbiblastxCommandline\nhelp(NcbiblastxCommandline)\n\nblastx_cline = NcbiblastxCommandline(query=\"opuntia.fasta\", db=\"nr\", evalue=0.001,\noutfmt=5, out=\"opuntia.xml\")\nblastx_cline\n\nprint(blastx_cline)\n\n# stdout, stderr = blastx_cline()", "In this example there shouldn’t be any output from BLASTX to the\nterminal, so stdout and stderr should be empty. You may want to check\nthe output file opuntia.xml has been created.\nAs you may recall from earlier examples in the tutorial, the\nopuntia.fasta contains seven sequences, so the BLAST XML output should\ncontain multiple results. Therefore use Bio.Blast.NCBIXML.parse() to\nparse it as described below in Section [sec:parsing-blast].\nOther versions of BLAST\nNCBI BLAST+ (written in C++) was first released in 2009 as a replacement\nfor the original NCBI “legacy” BLAST (written in C) which is no longer\nbeing updated. There were a lot of changes – the old version had a\nsingle core command line tool blastall which covered multiple\ndifferent BLAST search types (which are now separate commands in\nBLAST+), and all the command line options were renamed. Biopython’s\nwrappers for the NCBI “legacy” BLAST tools have been deprecated and will\nbe removed in a future release. 
To try to avoid confusion, we do not\ncover calling these old tools from Biopython in this tutorial.\nYou may also come across Washington University\nBLAST (WU-BLAST), and its successor, Advanced\nBiocomputing BLAST (AB-BLAST, released in\n2009, not free/open source). These packages include the command line\ntools wu-blastall and ab-blastall, which mimicked blastall from\nthe NCBI “legacy” BLAST suite. Biopython does not currently provide\nwrappers for calling these tools, but should be able to parse any\nNCBI-compatible output from them.\nParsing BLAST output\nAs mentioned above, BLAST can generate output in various formats, such\nas XML, HTML, and plain text. Originally, Biopython had parsers for\nBLAST plain text and HTML output, as these were the only output formats\noffered at the time. Unfortunately, the BLAST output in these formats\nkept changing, each time breaking the Biopython parsers. Our HTML BLAST\nparser has been removed, but the plain text BLAST parser is still\navailable (see Section [sec:parsing-blast-deprecated]). Use it at your\nown risk; it may or may not work, depending on which BLAST version\nyou’re using.\nAs keeping up with changes in BLAST became a hopeless endeavor,\nespecially with users running different BLAST versions, we now recommend\nparsing the output in XML format, which can be generated by recent\nversions of BLAST. Not only is the XML output more stable than the plain\ntext and HTML output, it is also much easier to parse automatically,\nmaking Biopython a whole lot more stable.\nYou can get BLAST output in XML format in various ways. 
For the parser,\nit doesn’t matter how the output was generated, as long as it is in the\nXML format.\n\n\nYou can use Biopython to run BLAST over the internet, as described\n in section [sec:running-www-blast].\n\n\nYou can use Biopython to run BLAST locally, as described\n in section [sec:running-local-blast].\n\n\nYou can do the BLAST search yourself on the NCBI site through your\n web browser, and then save the results. You need to choose XML as\n the format in which to receive the results, and save the final BLAST\n page you get (you know, the one with all of the\n interesting results!) to a file.\n\n\nYou can also run BLAST locally without using Biopython, and save the\n output in a file. Again, you need to choose XML as the format in\n which to receive the results.\n\n\nThe important point is that you do not have to use Biopython scripts to\nfetch the data in order to be able to parse it. Doing things in one of\nthese ways, you then need to get a handle to the results. In Python, a\nhandle is just a nice general way of describing input to any info source\nso that the info can be retrieved using read() and readline()\nfunctions (see Section [sec:appendix-handles]).\nIf you followed the code above for interacting with BLAST through a\nscript, then you already have result_handle, the handle to the BLAST\nresults. For example, using a GI number to do an online search:", "from Bio.Blast import NCBIWWW\nresult_handle = NCBIWWW.qblast(\"blastn\", \"nt\", \"8332116\")", "If instead you ran BLAST some other way, and have the BLAST output (in\nXML format) in the file my_blast.xml, all you need to do is to open\nthe file for reading:", "result_handle = open(\"data/my_blast.xml\")", "Now that we’ve got a handle, we are ready to parse the output. The code\nto parse it is really quite small. 
If you expect a single BLAST result\n(i.e., you used a single query):", "from Bio.Blast import NCBIXML\nblast_record = NCBIXML.read(result_handle)", "or, if you have lots of results (i.e., multiple query sequences):", "from Bio.Blast import NCBIXML\nblast_records = NCBIXML.parse(result_handle)", "Just like Bio.SeqIO and Bio.AlignIO (see\nChapters [chapter:Bio.SeqIO] and [chapter:Bio.AlignIO]), we have a\npair of input functions, read and parse, where read is for when\nyou have exactly one object, and parse is an iterator for when you can\nhave lots of objects – but instead of getting SeqRecord or\nMultipleSeqAlignment objects, we get BLAST record objects.\nTo be able to handle the situation where the BLAST file may be huge,\ncontaining thousands of results, NCBIXML.parse() returns an iterator.\nIn plain English, an iterator allows you to step through the BLAST\noutput, retrieving BLAST records one by one for each BLAST search\nresult:", "from Bio.Blast import NCBIXML\nblast_records = NCBIXML.parse(result_handle)\nblast_record = next(blast_records)\nprint(blast_record.database_sequences)\n# ... do something with blast_record", "Or, you can use a for-loop. Note though that you can step through the BLAST records only once. Usually, from each BLAST record you would save the information that you are interested in. If you want to save all returned BLAST records, you can convert the iterator into a list:", "for blast_record in blast_records:\n    pass  # do something with blast_record here\n\n# or, instead of looping, keep every record in memory:\nblast_records = list(blast_records)", "Now you can access each BLAST record in the list with an index as usual. If your BLAST file is huge though, you may run into memory problems trying to save them all in a list.\nUsually, you’ll be running one BLAST search at a time. 
Then, all you need to do is to pick up the first (and only) BLAST record in blast_records:", "from Bio.Blast import NCBIXML\nblast_records = NCBIXML.parse(result_handle)\nblast_record = next(blast_records)", "I guess by now you’re wondering what is in a BLAST record.\nThe BLAST record class\nA BLAST Record contains everything you might ever want to extract from\nthe BLAST output. Right now we’ll just show an example of how to get\nsome info out of the BLAST report, but if you want something in\nparticular that is not described here, look at the info on the record\nclass in detail, and take a gander into the code or automatically\ngenerated documentation – the docstrings have lots of good info about\nwhat is stored in each piece of information.\nTo continue with our example, let’s just print out some summary info\nabout all hits in our blast report with an e-value below a particular threshold.\nThe following code does this:", "E_VALUE_THRESH = 0.04\n\nfrom Bio.Blast import NCBIXML\nresult_handle = open(\"data/my_blast.xml\", \"r\")\nblast_records = NCBIXML.parse(result_handle)\nblast_record = next(blast_records)\n\nfor alignment in blast_record.alignments:\n    for hsp in alignment.hsps:\n        if hsp.expect < E_VALUE_THRESH:\n            print(\"****Alignment****\")\n            print(\"sequence:\", alignment.title)\n            print(\"length:\", alignment.length)\n            print(\"e value:\", hsp.expect)\n            print(hsp.query[0:75] + \"...\")\n            print(hsp.match[0:75] + \"...\")\n            print(hsp.sbjct[0:75] + \"...\")", "This will print out summary reports like the following:\n****Alignment****\nsequence: >gb|AF283004.1|AF283004 Arabidopsis thaliana cold acclimation protein WCOR413-like protein alpha form mRNA, complete cds\nlength: 783 \ne value: 0.034 \ntacttgttgatattggatcgaacaaactggagaaccaacatgctcacgtcacttttagtcccttacatattcctc...\n||||||||| | ||||||||||| || |||| || || |||||||| |||||| | | |||||||| ||| | |...\ntacttgttggtgttggatcgaaccaattggaagacgaatatgctcacatcacttctcattccttacatcttcttc...\nBasically, you can do anything you want to with the info in the BLAST\nreport once you have parsed it. 
This will, of course, depend on what you\nwant to use it for, but hopefully this helps you get started on doing\nwhat you need to do!\nAn important consideration for extracting information from a BLAST\nreport is the type of objects that the information is stored in. In\nBiopython, the parsers return Record objects, either Blast or\nPSIBlast depending on what you are parsing. These objects are defined\nin Bio.Blast.Record and are quite complete.\nHere are my attempts at UML class diagrams for the Blast and\nPSIBlast record classes. If you are good at UML and see\nmistakes/improvements that can be made, please let me know. The Blast\nclass diagram is shown in the next figure.\n\nThe PSIBlast record object is similar, but has support for the rounds\nthat are used in the iteration steps of PSIBlast. The class diagram for\nPSIBlast is shown in the next figure.\n\nDealing with PSI-BLAST\nYou can run the standalone version of PSI-BLAST (the legacy NCBI command line tool blastpgp, or its replacement psiblast) using the wrappers in the Bio.Blast.Applications module. At the time of writing, the NCBI does not appear to support tools running a PSI-BLAST search via the internet. Note that the Bio.Blast.NCBIXML parser can read the XML output from current versions of PSI-BLAST, but information like which sequences in each iteration are new or reused isn’t present in the XML file. \nDealing with RPS-BLAST\nYou can run the standalone version of RPS-BLAST (either the legacy NCBI command line tool rpsblast, or its replacement with the same name) using the wrappers in the Bio.Blast.Applications module. At the time of writing, the NCBI does not appear to support tools running an RPS-BLAST search via the internet. You can use the Bio.Blast.NCBIXML parser to read the XML output from current versions of RPS-BLAST." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
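The e-value filtering pattern used in the Biopython notebook above can be illustrated without Biopython installed. The hit triples below are hypothetical stand-ins for the alignment/HSP objects that `Bio.Blast.NCBIXML` would actually return; only the threshold logic is the point of this sketch:

```python
# Hypothetical (title, alignment_length, e_value) triples standing in for
# the alignment/HSP objects a real BLAST XML parse would yield.
hits = [
    ("gb|AF283004.1| cold acclimation protein WCOR413-like mRNA", 783, 0.034),
    ("gb|XX000001.1| unrelated hypothetical protein", 510, 1.7),
    ("gb|XX000002.1| another weak hit", 420, 0.21),
]

E_VALUE_THRESH = 0.04

# Keep only hits whose e-value beats the threshold (lower is better).
significant = [h for h in hits if h[2] < E_VALUE_THRESH]

for title, length, expect in significant:
    print("****Alignment****")
    print("sequence:", title)
    print("length:", length)
    print("e value:", expect)
```

With real BLAST output, the comparison `hsp.expect < E_VALUE_THRESH` from the notebook plays the role of the list-comprehension filter here.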
wuafeing/Python3-Tutorial
02 strings and text/02.06 search replace case insensitive.ipynb
gpl-3.0
[ "Previous\n2.6 Case-Insensitive Search and Replace in Strings\nProblem\nYou need to search for and replace text in a string in a case-insensitive manner.\nSolution\nTo perform case-insensitive text operations, you need to supply the re.IGNORECASE flag to the relevant operations in the re module. For example:", "import re\ntext = \"UPPER PYTHON, lower python, Mixed Python\"\nre.findall(\"python\", text, flags = re.IGNORECASE)\n\nre.sub(\"python\", \"snake\", text, flags = re.IGNORECASE)", "The last example reveals a small limitation: the replacement text does not automatically match the case of the matched text. To fix this, you may need a helper function like the following:", "def matchcase(word):\n    def replace(m):\n        text = m.group()\n        if text.isupper():\n            return word.upper()\n        elif text.islower():\n            return word.lower()\n        elif text[0].isupper():\n            return word.capitalize()\n        else:\n            return word\n    return replace", "Here is how to use the function above:", "re.sub(\"python\", matchcase(\"snake\"), text, flags=re.IGNORECASE)", "Translator's note: matchcase('snake') returns a callback function (its argument must be a match object); as mentioned in the previous section, sub() can accept a callback function in addition to a replacement string.\nDiscussion\nFor ordinary case-insensitive matching, simply passing the re.IGNORECASE flag is usually sufficient. Note, however, that this may not be enough for certain kinds of Unicode matching that involve case folding; see Section 2.10 for more details.\nNext" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
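The pieces of the recipe above can be combined into one self-contained script; the example string is the same one used in the notebook cells:

```python
import re

def matchcase(word):
    """Return a re.sub callback that mimics the case of the matched text."""
    def replace(m):
        text = m.group()
        if text.isupper():
            return word.upper()
        elif text.islower():
            return word.lower()
        elif text[0].isupper():
            return word.capitalize()
        else:
            return word
    return replace

text = "UPPER PYTHON, lower python, Mixed Python"
result = re.sub("python", matchcase("snake"), text, flags=re.IGNORECASE)
print(result)  # UPPER SNAKE, lower snake, Mixed Snake
```

The callback runs once per match, so each occurrence gets the casing of the text it replaces.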
jinzishuai/learn2deeplearn
google_dl_udacity/lesson3/3_regularization.ipynb
gpl-3.0
[ "Deep Learning\nAssignment 3\nPreviously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.\nThe goal of this assignment is to explore regularization techniques.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nfrom six.moves import cPickle as pickle", "First reload the data we generated in 1_notmnist.ipynb.", "pickle_file = '../notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)", "Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.", "image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == 
np.argmax(labels, 1))\n / predictions.shape[0])", "Problem 1\nIntroduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.", "graph = tf.Graph()\nwith graph.as_default():\n...\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))+ \\\n tf.scalar_mul(beta, tf.nn.l2_loss(weights1)+tf.nn.l2_loss(weights2))\n \n", "summary\nWith\npython\nbatch_size = 128\nnum_hidden_nodes = 1024\nbeta = 1e-3\nnum_steps = 3001\nResults\n* Test accuracy: 88.5% with beta=0.000000 (no L2 regularization)\n* Test accuracy: 86.7% with beta=0.000010\n* Test accuracy: 88.8% with beta=0.000100\n* Test accuracy: 92.6% with beta=0.001000\n* Test accuracy: 89.7% with beta=0.010000\n* Test accuracy: 82.2% with beta=0.100000\n* Test accuracy: 10.0% with beta=1.000000\n\nProblem 2\nLet's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?", " offset = 0 #offset = (step * batch_size) % (train_labels.shape[0] - batch_size)", "With\npython\nbatch_size = 128\nnum_hidden_nodes = 1024\nbeta = 1e-3\nnum_steps = 3001\nResults\n* Original Test accuracy: 92.6% with beta=0.001000\n* With offset = 0: Test accuracy: 67.5% with beta=0.001000\n\nProblem 3\nIntroduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. 
TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.\nWhat happens to our extreme overfitting case?", " keep_rate = 0.5\n dropout = tf.nn.dropout(activated_hidden_layer, keep_rate) # dropout is applied after activation\n logits = tf.matmul(dropout, weights2) + biases2 ", "Vary keep_rate:\n* Test accuracy: 92.7% with beta=0.001000, keep_rate =1.000000\n* Test accuracy: 92.3% with beta=0.001000, keep_rate =0.800000\n* Test accuracy: 91.8% with beta=0.001000, keep_rate =0.600000\n* Test accuracy: 90.7% with beta=0.001000, keep_rate =0.400000\n* Test accuracy: 87.0% with beta=0.001000, keep_rate =0.200000\nVary beta while keeping keep_rate=0.5\n* Test accuracy: 91.7% with beta=0.001000, keep_rate =0.500000\n* Test accuracy: 87.6% with beta=0.000100, keep_rate =0.500000 \n* Test accuracy: 89.5% with beta=0.010000, keep_rate =0.500000\nNote that keep_rate cannot be set to be 0: range (0, 1]\nWorst Case offset=0: Significant Improvement\n\nNormal: Test accuracy: 91.7% with beta=0.001000, keep_rate =0.500000\noffset = 0 without dropout: Test accuracy: 67.5% with beta=0.001000 (keep_rate =1)\noffset = 0 with dropout: Test accuracy: 74.6% with beta=0.001000, keep_rate =0.500000\n\n\nProblem 4\nTry to get the best performance you can using a multi-layer model! 
The best reported test accuracy using a deep network is 97.1%.\nOne avenue you can explore is to add multiple layers.\nAnother one is to use learning rate decay:\nglobal_step = tf.Variable(0) # count the number of steps taken.\nlearning_rate = tf.train.exponential_decay(0.5, global_step, ...)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)\n\n\nFixed Learning Rate\nbatch_size = 128\nnum_hidden_nodes1 = 1024\nnum_hidden_nodes2 = 1024\nbeta = 0.001\nnum_steps = 3001\nkeep_rate = 0.5\nlearning_rate=1e-3\n* Test accuracy: 89.1% with beta=0.001000, keep_rate =0.500000, learning_rate=0.001000\n* Test accuracy: 83.4% with beta=0.001000, keep_rate =0.500000, learning_rate=0.010000\n* learning_rate = 0.1: blow up with NaN\n* learning_rate = 0.5 (all runs in problem 1-3 are with 0.5): blow up with NaN\n* learning_rate = 1e-4: very slow\nLearning Rate Decay\n\nlearning_rate = tf.train.exponential_decay(0.01, global_step, 100, 0.95): Test accuracy: 85.5% with beta=0.001000, keep_rate =0.500000\nlearning_rate = tf.train.exponential_decay(0.005, global_step, 100, 0.95): Test accuracy: 88.9% with beta=0.001000, keep_rate =0.500000\nlearning_rate = tf.train.exponential_decay(0.001, global_step, 100, 0.95): Test accuracy: 89.3% with beta=0.001000, keep_rate =0.500000\nlearning_rate = tf.train.exponential_decay(0.001, global_step, 100, 0.5): Test accuracy: 85.4% with beta=0.001000, keep_rate =0.500000\nlearning_rate = tf.train.exponential_decay(0.01, global_step, 100, 0.5): Test accuracy: 88.0% with beta=0.001000, keep_rate =0.500000\n\nMore Data without Learning Rate Decay\nbatch_size = 128\nnum_hidden_nodes1 = 1024\nnum_hidden_nodes2 = 1024\nbeta = 0.001\nnum_steps = 30001\nkeep_rate = 0.5\nlearning_rate = 1e-3\n* 30k steps: Test accuracy: 86.8% with beta=0.001000, keep_rate =0.500000\nchange size of layers\nbatch_size = 128\nnum_hidden_nodes1 = 256\nnum_hidden_nodes2 = 512\nbeta = 0.001\nnum_steps = 3001\nkeep_rate = 
0.5\nlearning_rate=1e-3\n* Test accuracy: 86.1% with beta=0.001000, keep_rate =0.500000\n* Test accuracy: 85.7% with beta=0.001000, keep_rate =1.000000\nForum user mentioned 4 hidden layer solution to get to 97.3%\nhttps://discussions.udacity.com/t/assignment-3-3-how-to-implement-dropout/45730/24\n\nI was able to get an accuracy of 97.3% using a 4 hidden layer network 1024x1024x305x75 and 95k steps. The trick was to use good weight initialization (sqrt(2/n)) and lower dropout rate (I used 0.75). The code is here https://discussions.udacity.com/t/assignment-4-problem-2/46525/26?u=endri.deliu. With conv nets you get even higher.\n\nprob3.4_endri.deliu.py runs (after fixing a few compilation problems due to python-3) as follows to get 96.7%:\n```\nInitialized\nMinibatch loss at step 0 : 2.4214315\nMinibatch accuracy: 33.6%\nValidation accuracy: 21.9%\nMinibatch loss at step 500 : 0.74792475\nMinibatch accuracy: 85.2%\nValidation accuracy: 85.1%\nMinibatch loss at step 1000 : 0.6289795\nMinibatch accuracy: 85.9%\nValidation accuracy: 86.6%\nMinibatch loss at step 1500 : 0.45435938\nMinibatch accuracy: 90.6%\nValidation accuracy: 87.2%\nMinibatch loss at step 2000 : 0.64454144\nMinibatch accuracy: 83.6%\nValidation accuracy: 87.9%\nMinibatch loss at step 2500 : 0.47072983\nMinibatch accuracy: 85.2%\nValidation accuracy: 88.7%\nMinibatch loss at step 3000 : 0.33217508\nMinibatch accuracy: 93.8%\nValidation accuracy: 88.8%\n...\nMinibatch loss at step 92500 : 0.14325579\nMinibatch accuracy: 98.4%\nValidation accuracy: 92.6%\nMinibatch loss at step 93000 : 0.07832281\nMinibatch accuracy: 98.4%\nValidation accuracy: 92.7%\nMinibatch loss at step 93500 : 0.056985322\nMinibatch accuracy: 99.2%\nValidation accuracy: 92.7%\nMinibatch loss at step 94000 : 0.097948775\nMinibatch accuracy: 99.2%\nValidation accuracy: 92.7%\nMinibatch loss at step 94500 : 0.08198348\nMinibatch accuracy: 97.7%\nValidation accuracy: 92.6%\nMinibatch loss at step 95000 : 0.10525039\nMinibatch 
accuracy: 98.4%\nValidation accuracy: 92.6%\n\nTest accuracy: 96.7%\n```\nFull output is at output_endri.deliu.txt.\nAnother run with only 3000 steps has a result of 93.8%\nIts setup\n```python\nbatch_size = 128\nhidden_layer1_size = 1024\nhidden_layer2_size = 305\nhidden_lastlayer_size = 75\nuse_multilayers = True\nregularization_meta=0.03 #Note that this is not used in the code (commented out)\n...\nnum_steps = 95001\n```\nAnalysis\n\n4 hidden layer network 1024x1024x305x75 in spite of the above definition of only 3 hidden layer sizes, since the hidden_layer1_size is used twice.\nlearning rate decay is used: learning_rate = tf.train.exponential_decay(0.3, global_step, 3500, 0.86, staircase=True) \nHe uses the n=weight_matrix.shape[0] to calculate the initial distribution using stddev=np.sqrt(2/n)\ndropout is used\nkeep_prob=75% for training\nkeep_prob=100% for validation and testing\n\nMy Own 6-Layer Code\nprob3.4_6layers.py:\npython\nbatch_size = 128\nnum_hidden_nodes1 = 1024\nnum_hidden_nodes2 = 1024\nnum_hidden_nodes3 = 305\nnum_hidden_nodes4 = 75\nbeta = 0.03\nnum_steps = 30001\nkeep_rate = 0.75\nresults:\n```\nInitialized\nMinibatch loss at step 0: 58.998505. learning_rate=0.300000\nMinibatch accuracy: 11.7%\nMinibatch loss at step 500: 1.461278. learning_rate=0.300000\nMinibatch accuracy: 78.1%\n...\nMinibatch loss at step 30000: 1.107867. learning_rate=0.089765\nMinibatch accuracy: 85.9%\n\nTest accuracy: 88.5% with beta=0.030000, keep_rate =0.750000\n```\nRemove Regularization: Better Results\n```\nInitialized\nMinibatch loss at step 0: 2.484716. learning_rate=0.300000\nMinibatch accuracy: 12.5%\nMinibatch loss at step 500: 0.748225. learning_rate=0.300000\nMinibatch accuracy: 77.3%\nMinibatch loss at step 1000: 0.730464. learning_rate=0.300000\nMinibatch accuracy: 78.1%\nMinibatch loss at step 1500: 0.463169. learning_rate=0.300000\nMinibatch accuracy: 85.9%\nMinibatch loss at step 2000: 0.601513. 
learning_rate=0.300000\nMinibatch accuracy: 79.7%\nMinibatch loss at step 2500: 0.561515. learning_rate=0.300000\nMinibatch accuracy: 82.0%\nMinibatch loss at step 3000: 0.287524. learning_rate=0.300000\nMinibatch accuracy: 90.6%\n\nTest accuracy: 93.8% with beta=0.000000, keep_rate =0.750000\n```\nThis is as good as Endri.Deliu's code. Note that without regularization, the initial loss is much smaller.\nMy Best Result: 96.7%\nprob3.4_6layers.py:\n```python\nbatch_size = 128\nnum_hidden_nodes1 = 1024\nnum_hidden_nodes2 = 1024\nnum_hidden_nodes3 = 305\nnum_hidden_nodes4 = 75\nbeta = 0\nnum_steps = 95001\nkeep_rate = 0.75\n```\nresults:\n```\nMinibatch loss at step 94000: 0.077660. learning_rate=0.005944\nMinibatch accuracy: 97.7%\nMinibatch loss at step 94500: 0.097502. learning_rate=0.005112\nMinibatch accuracy: 97.7%\nMinibatch loss at step 95000: 0.100003. learning_rate=0.005112\nMinibatch accuracy: 96.1%\n\nTest accuracy: 96.7% with beta=0.000000, keep_rate =0.750000\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
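As a sanity check on the penalty term used in Problem 1 of the notebook above: tf.nn.l2_loss(t) computes sum(t**2) / 2, so the regularized loss is the data loss plus beta times that sum over all weight matrices. A dependency-free sketch of the same arithmetic (the flattened weight values below are made-up toy numbers, not values from the assignment):

```python
def l2_loss(weights):
    # Same convention as tf.nn.l2_loss: sum of squares divided by two.
    return sum(w * w for w in weights) / 2.0

def regularized_loss(data_loss, weight_matrices, beta):
    penalty = sum(l2_loss(w) for w in weight_matrices)
    return data_loss + beta * penalty

w1 = [1.0, 2.0, 3.0, 4.0]   # toy stand-in for a flattened weights1
w2 = [2.0, 2.0]             # toy stand-in for a flattened weights2

# l2_loss(w1) = 30/2 = 15.0 and l2_loss(w2) = 8/2 = 4.0,
# so total = 0.5 + 1e-3 * 19.0, i.e. about 0.519
total = regularized_loss(0.5, [w1, w2], beta=1e-3)
print(total)
```

Sweeping beta, as the notebook's results table does, just rescales the penalty term relative to the fixed data loss.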
lahoffm/aclu-bail-reform
src/visualization/Fulton County Data Viz.ipynb
mit
[ "This notebook does some basic data visualizations of booking data from the Fulton County jail from the beginning of 2017 to the present (November 26th).", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\n\ntime_columns = ['inmate_dob',\n 'booking_timestamp',\n 'release_timestamp',\n 'court_date']\n\nindex_col = 'inmate_id'\n\nscrape1 = pd.read_csv(\"fulton_2017-11-26_09-15-47.csv\", \n parse_dates=time_columns,\n index_col=index_col)\nscrape2 = pd.read_csv(\"fulton_2017-11-26_10-34-04.csv\",\n parse_dates=time_columns,\n index_col=index_col)", "Change the above cell to refer to the file locations on your computer (the reason it is two files is that I encountered a previously unseen error halfway through, and had to put a new try/except into the code and restart the scraping).", "len(scrape1)\n\nlen(scrape2)\n\ndf = pd.concat([scrape1,scrape2])\n\nlen(df)\n\ndf.columns\n\ndf['days_jailed'] = df.release_timestamp - df.booking_timestamp\n\ndf['days_jailed_np'] = df.days_jailed.dt.days\n\ndf.loc[df['days_jailed_np']>7,'days_jailed_np'] = 7\n\nsns.distplot(df['days_jailed_np'].dropna())", "This gives us the overall distribution of time imprisoned for everyone in our dataset who has been released.", "df.groupby('inmate_race').agg({'days_jailed_np' : np.mean}).plot(kind='bar')", "This gives us mean time in prison by race.", "ax = sns.violinplot(data=df, x='inmate_race', y='days_jailed_np', cut=0, scale='width')\nfor tick in ax.get_xticklabels():\n tick.set_rotation(45)", "This is a violin plot, which gives us a breakdown of how the distribution of days in jail varies by race. Unfortunately I can't figure out how to set the category labels nicely." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
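The days_jailed derivation in the notebook above (release timestamp minus booking timestamp, capped at 7 days for the plot) can be sketched without pandas; the timestamps below are invented for illustration:

```python
from datetime import datetime

def days_jailed(booking, release, cap=7):
    """Whole days between booking and release, capped for plotting.

    Returns None when there is no release timestamp yet, mirroring
    the NaN rows that dropna() removes in the notebook.
    """
    if release is None:
        return None
    days = (release - booking).days
    return min(days, cap)

booking = datetime(2017, 11, 1, 9, 30)
release = datetime(2017, 11, 4, 16, 0)
print(days_jailed(booking, release))                  # 3
print(days_jailed(booking, datetime(2017, 11, 20)))   # 7 (capped)
```

The cap is purely a plotting decision: without it, a handful of very long stays would stretch the x-axis of the distribution plot.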
tanyaschlusser/stats-via-python
notebooks/input_output.ipynb
mit
[ "Reading Excel files\nThis notebook demonstrates how to read and manipulate data from\nExcel using Pandas:\n\nInput / Output\nsummaries\nplotting\n\nFirst, import the Pandas library:", "# The library for handling tabular data is called 'pandas'\n# Everyone shortens this to 'pd' for convenience.\nimport pandas as pd", "Get IRS data on businesses\nThe IRS website has some aggregated statistics on business returns in Excel files. We will use the Selected Income and Tax Items for Selected Years.\nThe original data is from the file linked here:\nhttps://www.irs.gov/pub/irs-soi/14intaba.xls,\nbut I cleaned it up by hand to remove footnotes and reformat the column and row headers. You can get the cleaned file in this repository data/14intaba_cleaned.xls.\nIt looks like this:\n<img src=\"img/screenshot-14intaba.png\" width=\"100%\"/>\nRead the data!\nWe will use the read_excel function inside of the Pandas library (accessed using pd.read_excel) to get the data. We need to:\n\nskip the first 2 rows\nsplit out the 'Current dollars' and 'Constant 1990 dollars' subsets\nuse the left two columns to split out the number of returns and their dollar amounts\n\nWhen referring to files on your computer from Jupyter, the path you use is relative to the current Jupyter notebook. My directory looks like this:\n.\n|-- notebooks\n |-- input_output.ipynb\n |-- data\n |- 14intaba_cleaned.xls\nso, the relative path from the notebook input_output.ipynb to the dataset 14intaba_cleaned.xls is:\ndata/14intaba_cleaned.xls", "raw = pd.read_excel('data/14intaba_cleaned.xls', skiprows=2)", "Look at the last 3 rows\nThe function pd.read_excel returns an object called a 'Data Frame', that is defined inside of the Pandas library. It has associated functions that access and manipulate the data inside. 
For example:", "# Look at the last 3 rows\nraw.tail(3)", "Split out the 'Current dollars' and 'Constant 1990 dollars'\nThere are two sets of data — for the actual dollars for each variable, and also for constant dollars (accounting for inflation). We will split the raw dataset into two and then index the rows by the units (whether they're number of returns or amount paid/claimed).\nThe columns we care about are Variable, Units, and the current or constant dollars from each year. (You can view them all with raw.columns.)\nWe can subset the dataset with the columns we want using raw.ix[:, &lt;desired_cols&gt;].\nThere are a lot of commands in this section...we will do a better job explaining later. For now, ['braces', 'denote', 'a list'], you can add lists, and you can write a shorthand for loop inside of a list (that's called a \"list comprehension\").", "index_cols = ['Units', 'Variable']\ncurrent_dollars_cols = index_cols + [\n c for c in raw.columns if c.startswith('Current')\n]\nconstant_dollars_cols = index_cols + [\n c for c in raw.columns if c.startswith('Constant')\n]\n\ncurrent_dollars_data = raw[current_dollars_cols][9:]\ncurrent_dollars_data.set_index(keys=index_cols, inplace=True)\n\nconstant_dollars_data = raw[constant_dollars_cols][9:]\nconstant_dollars_data.set_index(keys=index_cols, inplace=True)\n\nyears = [int(c[-4:]) for c in constant_dollars_data.columns]\nconstant_dollars_data.columns = years", "Statistics\nPandas provides methods for statistical summaries. The describe method gives an overall summary. dropna(axis=1) deletes columns containing null values. If it were axis=0 it would be deleting rows.", "per_entry = (\n constant_dollars_data.transpose()['Amount (thousand USD)'] * 1000 /\n constant_dollars_data.transpose()['Number of returns']\n)\nper_entry.dropna(axis=1).describe().round()", "Plot\nThe library that provides plot functions is called Matplotlib. 
To show the plots in this notebook you need to use the \"magic method\" %matplotlib inline. It should be used at the beginning of the notebook for clarity.", "# This should always be at the beginning of the notebook,\n# like all magic statements and import statements.\n# It's only here because I didn't want to describe it earlier.\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (10, 12)", "The per-entry data\nThe data are (I think) for every form filed, not really per capita, but since we're not interpreting it for anything important we can conflate the two.\nPer capita income (Blue line) rose a lot with the tech bubble, then sunk with its crash, and then followed the housing bubble and crash. It also looks like small business income (Red dashed line) hasn't really come back since the crash, but that unemployment (Magenta dots) has gone down.", "styles = ['b-', 'g-.', 'r--', 'c-', 'm:']\naxes = per_entry[[\n 'Total income',\n 'Total social security benefits (not in income)',\n 'Business or profession net income less loss',\n 'Total payments',\n 'Unemployment compensation']].plot(style=styles)\nplt.suptitle('Average USD per return (when stated)')", "Also with log-y\nWe can see the total social security benefits payout (Green dot dash) increase as the baby boomers come of age, and we see the unemployment compensation (Magenta dots) spike after the 2008 crisis and then fall off.", "styles = ['b-', 'r--', 'g-.', 'c-', 'm:']\naxes = constant_dollars_data.transpose()['Amount (thousand USD)'][[\n 'Total income',\n 'Total payments',\n 'Total social security benefits (not in income)',\n 'Business or profession net income less loss',\n 'Unemployment compensation']].plot(logy=True, style=styles)\nplt.legend(bbox_to_anchor=(1, 1),\n bbox_transform=plt.gcf().transFigure)\nplt.suptitle('Total USD (constant 1990 basis)')", "We did it!\nThat was I/O with a little statistical summarization and plotting. ❤ Yay." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
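The column-selection idiom in the notebook above — building the list of wanted columns with a prefix test, then recovering the year from the last four characters of each header — works on plain lists too. The column names below only mimic the shape of the cleaned spreadsheet's headers, not its actual contents:

```python
# Column names shaped like the cleaned IRS spreadsheet's headers.
columns = [
    "Units", "Variable",
    "Current 2000", "Current 2014",
    "Constant 1990", "Constant 2014",
]
index_cols = ["Units", "Variable"]

current_cols = index_cols + [c for c in columns if c.startswith("Current")]
constant_cols = index_cols + [c for c in columns if c.startswith("Constant")]

# int(c[-4:]) pulls the trailing 4-digit year out of each header.
years = [int(c[-4:]) for c in constant_cols if c not in index_cols]
print(current_cols)  # ['Units', 'Variable', 'Current 2000', 'Current 2014']
print(years)         # [1990, 2014]
```

In the notebook, the resulting lists are passed to the DataFrame subset and to set_index; here they are just plain Python values.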
neurodata/ndparse
examples/isbi2012_train.ipynb
apache-2.0
[ "Example: Training a Classifier\nThis notebook shows how one trains a deep learning model to classify a subset of the ISBI 2012 data set. This assumes you have access to the ISBI 2012 data, which is available as a download from the ISBI challenge website or via an ndparse database call (see example below). \nNote that this example provides reasonable but not state-of-the-art results. You will also need to install Keras (this script was tested with version 1.1.0) along with a suitable backend (we use Theano).\n\nStep 1: Set up Python environment", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport sys, os, copy, logging, socket, time\n\nimport numpy as np\nimport pylab as plt\n\n#from ndparse.algorithms import nddl as nddl\n#import ndparse as ndp\nsys.path.append('..'); import ndparse as ndp\n\ntry:\n logger\nexcept NameError:\n # do this precisely once\n logger = logging.getLogger(\"train_model\")\n logger.setLevel(logging.DEBUG)\n ch = logging.StreamHandler()\n ch.setFormatter(logging.Formatter('[%(asctime)s:%(name)s:%(levelname)s] %(message)s'))\n logger.addHandler(ch)", "Step 2: Load Training Data", "print(\"Running on system: %s\" % socket.gethostname())\n\n\nif True:\n # Using a local copy of data volume\n #inDir = '/Users/graywr1/code/bio-segmentation/data/ISBI2012/'\n inDir = '/home/pekalmj1/Data/EM_2012'\n X = ndp.nddl.load_cube(os.path.join(inDir, 'train-volume.tif'))\n Y = ndp.nddl.load_cube(os.path.join(inDir, 'train-labels.tif'))\n\n \n \n# show some details. Note that data tensors are assumed to have dimensions:\n# (#slices, #channels, #rows, #columns)\n#\nprint('Train data shape is: %s %s' % (str(X.shape), str(Y.shape)))\nplt.imshow(X[0,0,...], interpolation='none', cmap='bone')\nplt.title('train volume, slice 0')\nplt.gca().axes.get_xaxis().set_ticks([])\nplt.gca().axes.get_yaxis().set_ticks([])\nplt.show()", "Step 3: Training", "# Note that for demonstration purposes we use an artificially low \n# number of training slices and epochs. 
For actual training, \n# you would use more data and train for longer.\n\ntrain_slices = np.arange(2) # e.g. change to np.arange(25)\nvalid_slices = np.arange(25,30)\nn_epochs = 1\n\ntic = time.time()\nmodel = ndp.nddl.train_model(X[train_slices,...], np.squeeze(Y[train_slices, ...]),\n X[valid_slices,...], np.squeeze(Y[valid_slices, ...]), \n nEpochs=n_epochs, log=logger)\n\nprint(\"Time to train: %0.2f sec\" % (time.time() - tic))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
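The (#slices, #channels, #rows, #columns) tensor convention assumed by the notebook above can be sketched with synthetic NumPy data. This is a minimal illustration only: the array below is a stand-in for the real ISBI volume that `ndp.nddl.load_cube` would return, and its 30x1x512x512 shape is an assumption for the sake of the example.

```python
import numpy as np

# Synthetic stand-in for the ISBI training volume: 30 grayscale slices,
# each 512x512, in the assumed (#slices, #channels, #rows, #columns) layout.
X = np.zeros((30, 1, 512, 512), dtype=np.float32)

# Select training/validation slices the same way the notebook does.
train_slices = np.arange(2)       # demo-sized; e.g. np.arange(25) for real training
valid_slices = np.arange(25, 30)

X_train = X[train_slices, ...]
X_valid = X[valid_slices, ...]

print(X_train.shape)  # (2, 1, 512, 512)
print(X_valid.shape)  # (5, 1, 512, 512)
```

Indexing with an integer array along the first axis keeps the remaining (channel, row, column) axes intact, which is why the slice subsets can be passed straight to `train_model`.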
GoogleCloudPlatform/healthcare
datathon/nusdatathon18/tutorials/ddsm_ml_tutorial.ipynb
apache-2.0
[ "2018 NUS-MIT Datathon Tutorial: Machine Learning on CBIS-DDSM\nGoal\nIn this colab, we are going to train a simple convolutional neural network (CNN) with TensorFlow, which can be used to classify the mammographic images based on breast density.\nThe network we are going to build is adapted from the official TensorFlow tutorial.\nCBIS-DDSM\nThe dataset we are going to work with is CBIS-DDSM. Quote from their website:\n\n\"This CBIS-DDSM (Curated Breast Imaging Subset of DDSM) is an updated and standardized version of the Digital Database for Screening Mammography (DDSM).\"\n\nCBIS-DDSM differs from the original DDSM dataset in that it converted images to DICOM format, which is easier to work with.\nNote that although this tutorial focuses on the CBIS-DDSM dataset, most of it can be easily applied to The International Skin Imaging Collaboration (ISIC) dataset as well. More details will be provided in the Datasets section below.\nSetup\nTo be able to run the code cells in this tutorial, you need to create a copy of this Colab notebook by clicking the \"File\" > \"Save a copy in Drive...\" menu.\nYou can share your copy with your teammates by clicking on the \"SHARE\" button on the top-right corner of your Colab notebook copy. Everyone with \"Edit\" permission is able to modify the notebook at the same time, so it is a great way for team collaboration.\nFirst, let's import the modules needed to complete the tutorial. You can run the following cell by clicking on the triangle button when you hover over the [ ] space on the top-left corner of the code cell below.", "import numpy as np\nimport os\nimport pandas as pd\nimport random\nimport tensorflow as tf\n\nfrom google.colab import auth\nfrom google.cloud import storage\nfrom io import BytesIO\n# The next import is used to print out pretty pandas dataframes\nfrom IPython.display import display, HTML\nfrom PIL import Image", "Next, we need to authenticate ourselves to Google Cloud Platform. 
If you are running the code cell below for the first time, a link will show up, which leads to a web page for authentication and authorization. Log in with your credentials and make sure the permissions it requests are appropriate. After clicking the Allow button, you will be redirected to another web page which has a verification code displayed. Copy the code and paste it in the input field below.", "auth.authenticate_user()", "At the same time, let's set the project we are going to use throughout the tutorial.", "project_id = 'nus-datathon-2018-team-00'\nos.environ[\"GOOGLE_CLOUD_PROJECT\"] = project_id", "Optional: In this Colab we can opt to use a GPU to train our model by clicking \"Runtime\" in the top menu, then clicking \"Change runtime type\" and selecting \"GPU\" as the hardware accelerator. You can verify that the GPU is working with the following code cell.", "# Should output something like '/device:GPU:0'.\ntf.test.gpu_device_name()", "Dataset\nWe have already extracted the images from the DICOM files to separate folders on GCS, and some preprocessing was also done with the raw images (if you need custom preprocessing, please consult our tutorial on image preprocessing).\nThe folders ending with _demo contain subsets of training and test images. Specifically, the demo training dataset has 100 images, with 25 images for each breast density category (1 - 4). There are 20 images in the test dataset, which were selected randomly. All the images were first padded to 5251x7111 (largest width and height among the selected images) and then resized to 95x128 to fit in memory and save training time. Both training and test images are \"Cranial-Caudal\" only.\nThe ISIC dataset is organized in a slightly different way: the images are in JPEG format and each image comes with a JSON file containing metadata information. 
In order to make this tutorial work for ISIC, you will need to first pad and resize the images (we provide a script to do that here), and extract the labels from the JSON files based on your interests.\nTraining\nBefore coding up our neural network, let's create a few helper methods to make loading data from Google Cloud Storage (GCS) easier.", "client = storage.Client()\n\nbucket_name = 'datathon-cbis-ddsm-colab'\nbucket = client.get_bucket(bucket_name)\n\ndef load_images(folder):\n  images = []\n  labels = []\n  # The image name is in format: <LABEL>_Calc_{Train,Test}_P_<Patient_ID>_{Left,Right}_CC.\n  for label in [1, 2, 3, 4]:\n    blobs = bucket.list_blobs(prefix=(\"%s/%s_\" % (folder, label)))\n\n    for blob in blobs:\n      byte_stream = BytesIO()\n      blob.download_to_file(byte_stream)\n      byte_stream.seek(0)\n\n      img = Image.open(byte_stream)\n      images.append(np.array(img, dtype=np.float32))\n      labels.append(label-1) # Minus 1 to fit in [0, 4).\n\n  return np.array(images), np.array(labels, dtype=np.int32)\n\ndef load_train_images():\n  return load_images('small_train_demo')\n\ndef load_test_images():\n  return load_images('small_test_demo')", "Let's create a model function, which will be passed to an estimator that we will create later. The model has an architecture of 6 layers:\n\nConvolutional Layer: Applies 32 5x5 filters, with ReLU activation function\nPooling Layer: Performs max pooling with a 2x2 filter and stride of 2\nConvolutional Layer: Applies 64 5x5 filters, with ReLU activation function\nPooling Layer: Same setup as #2\nDense Layer: 1,024 neurons, with dropout regularization rate of 0.25\nLogits Layer: 4 neurons, one for each breast density category, i.e. [0, 4)\n\nNote that you can change the parameters on the right (or inline) to tune the neural network. 
It is highly recommended to check out the original TensorFlow tutorial to get a deeper understanding of the network we are building here.", "KERNEL_SIZE = 5 #@param\nDROPOUT_RATE = 0.25 #@param\n\ndef cnn_model_fn(features, labels, mode):\n  \"\"\"Model function for CNN.\"\"\"\n\n  # Input Layer.\n  # Reshape to 4-D tensor: [batch_size, height, width, channels]\n  # DDSM images are grayscale, which have 1 channel.\n  input_layer = tf.reshape(features[\"x\"], [-1, 95, 128, 1])\n\n  # Convolutional Layer #1.\n  # Input Tensor Shape: [batch_size, 95, 128, 1]\n  # Output Tensor Shape: [batch_size, 95, 128, 32]\n  conv1 = tf.layers.conv2d(\n      inputs=input_layer,\n      filters=32,\n      kernel_size=KERNEL_SIZE,\n      padding=\"same\",\n      activation=tf.nn.relu)\n\n  # Pooling Layer #1.\n  # Input Tensor Shape: [batch_size, 95, 128, 32]\n  # Output Tensor Shape: [batch_size, 47, 64, 32]\n  pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)\n\n  # Convolutional Layer #2.\n  # Input Tensor Shape: [batch_size, 47, 64, 32]\n  # Output Tensor Shape: [batch_size, 47, 64, 64]\n  conv2 = tf.layers.conv2d(\n      inputs=pool1,\n      filters=64,\n      kernel_size=KERNEL_SIZE,\n      padding=\"same\",\n      activation=tf.nn.relu)\n\n  # Pooling Layer #2.\n  # Input Tensor Shape: [batch_size, 47, 64, 64]\n  # Output Tensor Shape: [batch_size, 23, 32, 64]\n  pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)\n\n  # Flatten tensor into a batch of vectors\n  # Input Tensor Shape: [batch_size, 23, 32, 64]\n  # Output Tensor Shape: [batch_size, 23 * 32 * 64]\n  pool2_flat = tf.reshape(pool2, [-1, 23 * 32 * 64])\n\n  # Dense Layer.\n  # Input Tensor Shape: [batch_size, 23 * 32 * 64]\n  # Output Tensor Shape: [batch_size, 1024]\n  dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)\n\n  # Dropout operation.\n  # 0.75 probability that element will be kept.\n  dropout = tf.layers.dropout(inputs=dense, rate=DROPOUT_RATE,\n      training=(mode == tf.estimator.ModeKeys.TRAIN))\n\n  # Logits 
Layer.\n  # Input Tensor Shape: [batch_size, 1024]\n  # Output Tensor Shape: [batch_size, 4]\n  logits = tf.layers.dense(inputs=dropout, units=4)\n\n  predictions = {\n    # Generate predictions (for PREDICT and EVAL mode)\n    \"classes\": tf.argmax(input=logits, axis=1),\n    # Add `softmax_tensor` to the graph. It is used for PREDICT and by the\n    # `logging_hook`.\n    \"probabilities\": tf.nn.softmax(logits, name=\"softmax_tensor\")\n  }\n  if mode == tf.estimator.ModeKeys.PREDICT:\n    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)\n\n  # Loss Calculation.\n  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)\n\n  if mode == tf.estimator.ModeKeys.TRAIN:\n    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)\n    train_op = optimizer.minimize(\n        loss=loss,\n        global_step=tf.train.get_global_step())\n    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)\n\n  # Add evaluation metrics (for EVAL mode).\n  eval_metric_ops = {\n      \"accuracy\": tf.metrics.accuracy(\n          labels=labels, predictions=predictions[\"classes\"])}\n  return tf.estimator.EstimatorSpec(\n      mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)", "Now that we have a model function, the next step is to feed it to an estimator for training. 
Here we are creating a main function, as required by TensorFlow.", "BATCH_SIZE = 20 #@param\nSTEPS = 1000 #@param\n\nartifacts_bucket_name = 'nus-datathon-2018-team-00-shared-files'\n# Append a random number to avoid collision.\nartifacts_path = \"ddsm_model_%s\" % random.randint(0, 1000)\nmodel_dir = \"gs://%s/%s\" % (artifacts_bucket_name, artifacts_path)\n\ndef main(_):\n  # Load training and test data.\n  train_data, train_labels = load_train_images()\n  eval_data, eval_labels = load_test_images()\n\n  # Create the Estimator.\n  ddsm_classifier = tf.estimator.Estimator(\n      model_fn=cnn_model_fn,\n      model_dir=model_dir)\n\n  # Set up logging for predictions.\n  # Log the values in the \"Softmax\" tensor with label \"probabilities\".\n  tensors_to_log = {\"probabilities\": \"softmax_tensor\"}\n  logging_hook = tf.train.LoggingTensorHook(\n      tensors=tensors_to_log, every_n_iter=50)\n\n  # Train the model.\n  train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(\n      x={\"x\": train_data},\n      y=train_labels,\n      batch_size=BATCH_SIZE,\n      num_epochs=None,\n      shuffle=True)\n  ddsm_classifier.train(\n      input_fn=train_input_fn,\n      steps=STEPS,\n      hooks=[logging_hook])\n\n  # Evaluate the model and print results.\n  eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(\n      x={\"x\": eval_data},\n      y=eval_labels,\n      num_epochs=1,\n      shuffle=False)\n  eval_results = ddsm_classifier.evaluate(input_fn=eval_input_fn)\n  print(eval_results)", "Finally, here comes the exciting moment. We are going to train and evaluate the model we just built! 
Run the following code cell and pay attention to the accuracy printed at the end of the logs.\nNote that if this is not the first time you run the following cell, to avoid weird errors like \"NaN loss during training\", please run the following command to remove the temporary files.", "# Remove temporary files.\nartifacts_bucket = client.get_bucket(artifacts_bucket_name)\nartifacts_bucket.delete_blobs(artifacts_bucket.list_blobs(prefix=artifacts_path))\n\n# Set logging level.\ntf.logging.set_verbosity(tf.logging.INFO)\n\n# Start training, this will call the main method defined above behind the scenes.\n# The whole training process will take ~5 mins.\ntf.app.run()", "As you can see, the result doesn't look too good. This is expected given how little data we use for training and how simple our network is.\nNow for those of you who are interested, let's move on to using Cloud Machine Learning Engine to train a model on the whole dataset with a standalone GPU and a TPU respectively. Please continue with the instructions here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
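The tensor shapes in the model-function comments above follow from "same"-padded convolutions (spatial size preserved) and 2x2 max pooling with stride 2 and the default "valid" padding (spatial size roughly halved, rounded down). A quick sketch of that arithmetic for the 95x128 DDSM inputs:

```python
def pool_out(size, pool=2, stride=2):
    # 'valid' pooling output size: floor((size - pool) / stride) + 1
    return (size - pool) // stride + 1

h, w = 95, 128                        # input spatial size
h1, w1 = pool_out(h), pool_out(w)     # after pool1: 47 x 64
h2, w2 = pool_out(h1), pool_out(w1)   # after pool2: 23 x 32

flat = h2 * w2 * 64                   # flattened size feeding the dense layer
print(h1, w1, h2, w2, flat)           # 47 64 23 32 47104
```

This is why `pool2_flat` is reshaped to `[-1, 23 * 32 * 64]` in the model function: 23 and 32 are the spatial dimensions after two rounds of pooling, and 64 is the channel count from the second convolution.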
pdamodaran/yellowbrick
examples/Sangarshanan/comparing_corpus_visualizers.ipynb
apache-2.0
[ "Comparing Corpus Visualizers on Yellowbrick", "##### Import all the necessary libraries\n\nfrom yellowbrick.text import TSNEVisualizer\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom yellowbrick.text import UMAPVisualizer\nfrom yellowbrick.datasets import load_hobbies", "UMAP vs T-SNE\nUniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data:\n\nThe data is uniformly distributed on a Riemannian manifold;\nThe Riemannian metric is locally constant (or can be approximated as such);\nThe manifold is locally connected.\n\nFrom these assumptions it is possible to model the manifold with a fuzzy topological structure. The embedding is found by searching for a low dimensional projection of the data that has the closest possible equivalent fuzzy topological structure.", "corpus = load_hobbies()", "Writing a Function to Quickly Visualize a Corpus\nWhich can then be used for rapid comparison", "def visualize(dim_reduction,encoding,corpus,labels = True,alpha=0.7,metric=None):\n    if 'tfidf' in encoding.lower():\n        encode = TfidfVectorizer()\n    if 'count' in encoding.lower():\n        encode = CountVectorizer()\n    docs = encode.fit_transform(corpus.data)\n    if labels is True:\n        labels = corpus.target\n    else:\n        labels = None\n    if 'umap' in dim_reduction.lower():\n        if metric is None:\n            viz = UMAPVisualizer()\n        else:\n            viz = UMAPVisualizer(metric=metric)\n    if 't-sne' in dim_reduction.lower():\n        viz = TSNEVisualizer(alpha = alpha)\n    viz.fit(docs,labels)\n    viz.poof()", "Quickly Comparing Plots by Controlling\n\nThe Dimensionality Reduction technique used \nThe Encoding Technique used \nThe dataset to be visualized \nWhether to differentiate Labels or not \nSet the alpha parameter\nSet the metric for UMAP", 
"visualize('t-sne','tfidf',corpus)\n\nvisualize('t-sne','count',corpus,alpha = 0.5)\n\nvisualize('t-sne','tfidf',corpus,labels =False)\n\nvisualize('umap','tfidf',corpus)\n\nvisualize('umap','tfidf',corpus,labels = False)\n\nvisualize('umap','count',corpus,metric= 'cosine')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
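One subtlety in the `visualize` function above: if `encoding` matches neither "tfidf" nor "count" (or `dim_reduction` matches neither "umap" nor "t-sne"), the local variable is never assigned and the call fails with an `UnboundLocalError`. A small sketch of the same keyword dispatch with an explicit error instead; `choose_encoder` is a hypothetical helper name, and it returns the vectorizer class name as a string (rather than an instance) purely to keep the sketch self-contained:

```python
def choose_encoder(encoding):
    # Case-insensitive keyword match, mirroring the checks in visualize().
    name = encoding.lower()
    if 'tfidf' in name:
        return 'TfidfVectorizer'
    if 'count' in name:
        return 'CountVectorizer'
    # visualize() silently falls through here; failing fast is clearer.
    raise ValueError('unknown encoding: %r' % encoding)

print(choose_encoder('TfIdf'))  # TfidfVectorizer
print(choose_encoder('count'))  # CountVectorizer
```

In real use the helper would return `TfidfVectorizer()` or `CountVectorizer()` instances, and an analogous helper could dispatch `dim_reduction` to `UMAPVisualizer` or `TSNEVisualizer`.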
ES-DOC/esdoc-jupyterhub
notebooks/noaa-gfdl/cmip6/models/gfdl-esm4/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: NOAA-GFDL\nSource ID: GFDL-ESM4\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: CMIP5:GFDL-CM3 \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:34\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm4', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: troposphere\") \nDOC.set_value(\"mesosphere\") \nDOC.set_value(\"stratosphere\") \nDOC.set_value(\"whole atmosphere\") \n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"Lumped higher hydrocarbon species and oxidation products, parameterized source of Cly and Bry in stratosphere, short-lived species not advected\") \n", "1.5. 
Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(82) \n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. 
Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Operator splitting\") \n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(30) \n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(30) \n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Anthropogenic\") \nDOC.set_value(\"Other: bare ground\") \nDOC.set_value(\"Sea surface\") \nDOC.set_value(\"Vegetation\") \n", "10.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"CO, CH2O, NO, C3H6, isoprene, C2H6, C2H4, C4H10, terpenes, C3H8, acetone, CH3OH, C2H5OH, H2, SO2, NH3\") \n", "10.4. 
Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"DMS\") \n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Aircraft\") \nDOC.set_value(\"Biomass burning\") \nDOC.set_value(\"Lightning\") \nDOC.set_value(\"Other: volcanoes\") \n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"CO, CH2O, NO, C3H6, isoprene, C2H6, C2H4, C4H10, terpenes, C3H8, acetone, CH3OH, C2H5OH, H2, SO2, NH3\") \n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"CH4, N2O\") \n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Bry\") \nDOC.set_value(\"Cly\") \nDOC.set_value(\"H2O\") \nDOC.set_value(\"HOx\") \nDOC.set_value(\"NOy\") \nDOC.set_value(\"Other: sox\") \nDOC.set_value(\"Ox\") \nDOC.set_value(\"VOCs\") \nDOC.set_value(\"isoprene\") \n", "13.3. Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(157) \n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(21) \n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(19) \n", "13.9. 
Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \nDOC.set_value(\"Bry\") \nDOC.set_value(\"Cly\") \nDOC.set_value(\"NOy\") \n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \nDOC.set_value(\"NAT (Nitric acid trihydrate)\") \nDOC.set_value(\"Polar stratospheric ice\") \n", "14.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(3) \n", "14.5. 
Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"3\") \n", "15.3. 
Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \nDOC.set_value(\"Sulphate\") \n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \nDOC.set_value(True) \n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \nDOC.set_value(39) \n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \nDOC.set_value(\"Offline (with clouds)\") \n", "17.2. Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
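Every fill-in cell above follows the same two-call protocol: `DOC.set_id(...)` selects the CMIP6 specialization property being documented, after which one or more `DOC.set_value(...)` calls record the answer(s) (several calls are allowed for properties with cardinality 0.N). The `DOC` object itself is created by the pyesdoc setup cell of the full notebook, which is not part of this excerpt; the class below is a hypothetical stand-in written only to illustrate that calling pattern, not the real ES-DOC API.

```python
# Hypothetical mock of the ES-DOC notebook's DOC helper: it mimics only the
# set_id / set_value calling pattern used in the cells above.
class MockDoc:
    def __init__(self):
        self._current_id = None
        self.properties = {}

    def set_id(self, property_id):
        # Select which CMIP6 specialization property subsequent values attach to
        self._current_id = property_id
        self.properties.setdefault(property_id, [])

    def set_value(self, value):
        # Properties with cardinality 0.N may receive several values in a row
        self.properties[self._current_id].append(value)

DOC = MockDoc()
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
DOC.set_value(True)
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
DOC.set_value("HOx")
DOC.set_value("NOy")
```

The real `DOC` additionally validates values against the controlled vocabularies listed in the `# Valid Choices:` comments; the mock skips that step.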
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gaufung/Data_Analytics_Learning_Note
python-statatics-tutorial/basic-theme/python-language/Collections.ipynb
mit
[ "Collections module", "from collections import *", "1 Counter\nA Counter is a dict subclass for counting hashable objects. It is an unordered collection where elements are stored as dictionary keys and their counts are stored as dictionary values.\n1.1 construction", "c1 = Counter()\nc2 = Counter('gaufung')\nc3 = Counter({'red':4,'blue':10})\nc4 = Counter(cats=4,dogs=5)", "1.2 using key", "c = Counter(['dog', 'cat'])\nc['fox']", "1.3 delete key\nSetting a count to zero does not remove an element from a counter. Use del to remove it entirely:", "c['dog'] = 0\ndel c['dog']", "1.4 elements\nReturn an iterator over elements repeating each as many times as its count.", "c = Counter(a=4, b=2, c=0, d=-2)\nprint list(c.elements())", "1.5 most_common\nReturn a list of the n most common elements and their counts from the most common to the least.", "Counter('abracadabra').most_common(3)", "1.6 subtract([iterable-or-mapping])\nElements are subtracted from an iterable or from another mapping (or counter).", "c = Counter(a=4, b=2, c=0, d=-2)\nd = Counter(a=1, b=2, c=3, d=4)\nc.subtract(d)\nc", "2 deque\nDeques are a generalization of stacks and queues,Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction.\n\n\nappend(x)\nAdd x to the right side of the deque.\n\n\nappendleft(x)\nAdd x to the left side of the deque.\n\n\nclear()\nRemove all elements from the deque leaving it with length 0.\n\n\ncount(x)\nCount the number of deque elements equal to x.\n\n\nextend(iterable) \nExtend the right side of the deque by appending elements from the iterable argument.\n\n\nextendleft(iterable)\nExtend the left side of the deque by appending elements from iterable. Note, the series of left appends results in reversing the order of elements in the iterable argument.\n\n\npop()\nRemove and return an element from the right side of the deque. 
If no elements are present, raises an IndexError.\n\n\npopleft()\nRemove and return an element from the left side of the deque. If no elements are present, raises an IndexError.\n\n\nremove(value)\nRemove the first occurrence of value. If not found, raises a ValueError.\n\n\nreverse()\nReverse the elements of the deque in-place and then return None.\n\n\nrotate(n)\nRotate the deque n steps to the right. If n is negative, rotate to the left. Rotating one step to the right is equivalent to: d.appendleft(d.pop()).\n\n\nmaxlen\nMaximum size of a deque or None if unbounded.\n\n\n3 defaultdict\nA dict subclass that supplies missing values for keys not yet present", "s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)]\nd = defaultdict(list)\nfor k, v in s:\n d[k].append(v)\nd.items()", "4 namedtuple\nNamed tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. They can be used wherever regular tuples are used, and they add the ability to access fields by name instead of position index.\nnamedtuple(typename, field_names[, verbose=False][, rename=False])", "Point = namedtuple('Point', ['x', 'y'], verbose=True)\n\np = Point(11, y=22)\np", "5 OrderedDict\nOrdered dictionaries are just like regular dictionaries but they remember the order that items were inserted." ]
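The deque method list and the OrderedDict note above can be made concrete with a short demo. This sketch exercises only behavior documented in the standard library; the values in the comments follow from stepping through each call.

```python
from collections import deque, OrderedDict

d = deque([1, 2, 3])
d.appendleft(0)          # deque([0, 1, 2, 3])
d.append(4)              # deque([0, 1, 2, 3, 4])
d.rotate(1)              # one step right: deque([4, 0, 1, 2, 3])
first = d.popleft()      # returns 4; deque is now [0, 1, 2, 3]
d.extendleft([-1, -2])   # left appends reverse order: [-2, -1, 0, 1, 2, 3]

bounded = deque(maxlen=3)    # when full, old items fall off the opposite end
for i in range(5):
    bounded.append(i)        # ends as deque([2, 3, 4], maxlen=3)

od = OrderedDict([('a', 1), ('b', 2)])
od['c'] = 3                  # insertion order is remembered: a, b, c
od.move_to_end('a')          # reorder in place: b, c, a
```

Note the `extendleft` result: because each element is left-appended in turn, the iterable's order is reversed, exactly as the method description above warns.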
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
econ-ark/HARK
examples/ConsIndShockModel/IndShockConsumerType.ipynb
apache-2.0
[ "IndShockConsumerType Documentation\nConsumption-Saving model with Idiosyncratic Income Shocks", "# Initial imports and notebook setup, click arrow to show\nfrom HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType\nfrom HARK.utilities import plot_funcs_der, plot_funcs\nimport matplotlib.pyplot as plt\nimport numpy as np\nmystr = lambda number : \"{:.4f}\".format(number)", "The module HARK.ConsumptionSaving.ConsIndShockModel concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks are fully transitory or fully permanent.\nConsIndShockModel includes:\n1. A very basic \"perfect foresight\" model with no uncertainty.\n2. A model with risk over transitory and permanent income shocks.\n3. The model described in (2), with an interest rate for debt that differs from the interest rate for savings.\nThis notebook provides documentation for the second of these models.\n$\newcommand{\CRRA}{\rho}$\n$\newcommand{\DiePrb}{\mathsf{D}}$\n$\newcommand{\PermGroFac}{\Gamma}$\n$\newcommand{\Rfree}{\mathsf{R}}$\n$\newcommand{\DiscFac}{\beta}$\nStatement of idiosyncratic income shocks model\nSuppose we want to solve a model like the one analyzed in BufferStockTheory, which has all the same features as the perfect foresight consumer, plus idiosyncratic shocks to income each period. Agents with this kind of model are represented by the class IndShockConsumerType.\nSpecifically, this type of consumer receives two income shocks at the beginning of each period: a completely transitory shock $\newcommand{\tShkEmp}{\theta}{\tShkEmp_t}$ and a completely permanent shock $\newcommand{\pShk}{\psi}{\pShk_t}$. Moreover, the agent is subject to a borrowing limit: the ratio of end-of-period assets $A_t$ to permanent income $P_t$ must be greater than $\underline{a}$. 
As with the perfect foresight problem, this model is stated in terms of normalized variables, dividing all real variables by $P_t$:\n\begin{eqnarray}\nv_t(m_t) &=& \max_{c_t} {~} u(c_t) + \DiscFac (1-\DiePrb_{t+1}) \mathbb{E}_{t} \left[ (\PermGroFac_{t+1}\psi_{t+1})^{1-\CRRA} v_{t+1}(m_{t+1}) \right], \\\na_t &=& m_t - c_t, \\\na_t &\geq& \text{$\underline{a}$}, \\\nm_{t+1} &=& \Rfree/(\PermGroFac_{t+1} \psi_{t+1}) a_t + \theta_{t+1}, \\\n(\psi_{t+1},\theta_{t+1}) &\sim& F_{t+1}, \\\n\mathbb{E}[\psi]=\mathbb{E}[\theta] &=& 1, \\\nu(c) &=& \frac{c^{1-\rho}}{1-\rho}.\n\end{eqnarray}\nSolution method for IndShockConsumerType\nWith the introduction of (non-trivial) risk, the idiosyncratic income shocks model has no closed form solution and must be solved numerically. The function solveConsIndShock solves the one period problem for the IndShockConsumerType class. To do so, HARK uses the original version of the endogenous grid method (EGM) first described here <cite data-cite=\"6202365/HQ6H9JEI\"></cite>; see also the SolvingMicroDSOPs lecture notes. 
\nBriefly, the transition equation for $m_{t+1}$ can be substituted into the problem definition; the second term of the reformulated maximand represents \"end of period value of assets\" $\\mathfrak{v}_t(a_t)$ (\"Gothic v\"):\n\\begin{eqnarray}\nv_t(m_t) &=& \\max_{c_t} {~} u(c_t) + \\underbrace{\\DiscFac (1-\\DiePrb_{t+1}) \\mathbb{E}_{t} \\left[ (\\PermGroFac_{t+1}\\psi_{t+1})^{1-\\CRRA} v_{t+1}(\\Rfree/(\\PermGroFac_{t+1} \\psi_{t+1}) a_t + \\theta_{t+1}) \\right]}_{\\equiv \\mathfrak{v}_t(a_t)}.\n\\end{eqnarray}\nThe first order condition with respect to $c_t$ is thus simply:\n\\begin{eqnarray}\nu^{\\prime}(c_t) - \\mathfrak{v}'_t(a_t) = 0 \\Longrightarrow c_t^{-\\CRRA} = \\mathfrak{v}'_t(a_t) \\Longrightarrow c_t = \\mathfrak{v}'_t(a_t)^{-1/\\CRRA},\n\\end{eqnarray}\nand the marginal value of end-of-period assets can be computed as:\n\\begin{eqnarray}\n\\mathfrak{v}'_t(a_t) = \\DiscFac (1-\\DiePrb_{t+1}) \\mathbb{E}_{t} \\left[ \\Rfree (\\PermGroFac_{t+1}\\psi_{t+1})^{-\\CRRA} v'_{t+1}(\\Rfree/(\\PermGroFac_{t+1} \\psi_{t+1}) a_t + \\theta_{t+1}) \\right].\n\\end{eqnarray}\nTo solve the model, we choose an exogenous grid of $a_t$ values that spans the range of values that could plausibly be achieved, compute $\\mathfrak{v}'_t(a_t)$ at each of these points, calculate the value of consumption $c_t$ whose marginal utility is consistent with the marginal value of assets, then find the endogenous $m_t$ gridpoint as $m_t = a_t + c_t$. The set of $(m_t,c_t)$ gridpoints is then interpolated to construct the consumption function.\nExample parameter values to construct an instance of IndShockConsumerType\nIn order to create an instance of IndShockConsumerType, the user must specify parameters that characterize the (age-varying) distribution of income shocks $F_{t+1}$, the artificial borrowing constraint $\\underline{a}$, and the exogenous grid of end-of-period assets-above-minimum for use by EGM, along with all of the parameters for the perfect foresight model. 
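The EGM recipe just described can be sketched in a few lines of NumPy. This is an illustrative toy, not HARK's implementation: it assumes an equiprobable discretization with hypothetical shock values, treats the next-period consumption function as a plain Python callable, and uses made-up parameter values rather than any calibrated defaults.

```python
import numpy as np

# Toy parameters (hypothetical values chosen for illustration only)
CRRA, DiscFac, Rfree, LivPrb, PermGroFac = 2.0, 0.96, 1.03, 0.98, 1.01
aGrid = np.linspace(0.001, 20.0, 48)   # exogenous end-of-period asset grid
psi = np.array([0.9, 1.0, 1.1])        # permanent shocks (mean one)
theta = np.array([0.8, 1.0, 1.2])      # transitory shocks (mean one)
prob = 1.0 / (len(psi) * len(theta))   # equiprobable discretization

def egm_step(cFunc_next):
    """One backward EGM step: from c_{t+1}(m) to gridpoints of c_t(m)."""
    vP = np.zeros_like(aGrid)
    for p in psi:
        for th in theta:
            mNext = Rfree / (PermGroFac * p) * aGrid + th
            cNext = cFunc_next(mNext)
            # marginal value of end-of-period assets, shock by shock;
            # by the envelope condition, v'_{t+1}(m) = u'(c_{t+1}(m))
            vP += prob * Rfree * (PermGroFac * p) ** (-CRRA) * cNext ** (-CRRA)
    vP *= DiscFac * LivPrb
    cGrid = vP ** (-1.0 / CRRA)   # invert the FOC: u'(c_t) = v'_t(a_t)
    mGrid = aGrid + cGrid         # endogenous market-resources gridpoints
    return mGrid, cGrid

# Terminal period: consume everything, so c_{t+1}(m) = m
mGrid, cGrid = egm_step(lambda m: m)
```

Interpolating the resulting $(m_t, c_t)$ pairs would give the period-$t$ consumption function, which could then be fed back into `egm_step` to iterate backward.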
The table below presents the complete list of parameter values required to instantiate an IndShockConsumerType, along with example values.\n| Parameter | Description | Code | Example value | Time-varying? |\n| :---: | --- | --- | --- | --- |\n| $\\DiscFac$ |Intertemporal discount factor | $\\texttt{DiscFac}$ | $0.96$ | |\n| $\\CRRA$|Coefficient of relative risk aversion | $\\texttt{CRRA}$ | $2.0$ | |\n| $\\Rfree$ | Risk free interest factor | $\\texttt{Rfree}$ | $1.03$ | |\n| $1 - \\DiePrb_{t+1}$ |Survival probability | $\\texttt{LivPrb}$ | $[0.98]$ | $\\surd$ |\n|$\\PermGroFac_{t+1}$|Permanent income growth factor|$\\texttt{PermGroFac}$| $[1.01]$ | $\\surd$ |\n| $\\sigma_\\psi$| Standard deviation of log permanent income shocks | $\\texttt{PermShkStd}$ | $[0.1]$ |$\\surd$ |\n| $N_\\psi$| Number of discrete permanent income shocks | $\\texttt{PermShkCount}$ | $7$ | |\n| $\\sigma_\\theta$| Standard deviation of log transitory income shocks | $\\texttt{TranShkStd}$ | $[0.2]$ | $\\surd$ |\n| $N_\\theta$| Number of discrete transitory income shocks | $\\texttt{TranShkCount}$ | $7$ | |\n| $\\mho$ | Probability of being unemployed and getting $\\theta=\\underline{\\theta}$ | $\\texttt{UnempPrb}$ | $0.05$ | |\n| $\\underline{\\theta}$| Transitory shock when unemployed | $\\texttt{IncUnemp}$ | $0.3$ | |\n| $\\mho^{Ret}$ | Probability of being \"unemployed\" when retired | $\\texttt{UnempPrbRet}$ | $0.0005$ | |\n| $\\underline{\\theta}^{Ret}$| Transitory shock when \"unemployed\" and retired | $\\texttt{IncUnempRet}$ | $0.0$ | |\n| $(none)$ | Period of the lifecycle model when retirement begins | $\\texttt{T_retire}$ | $0$ | |\n| $(none)$ | Minimum value in assets-above-minimum grid | $\\texttt{aXtraMin}$ | $0.001$ | |\n| $(none)$ | Maximum value in assets-above-minimum grid | $\\texttt{aXtraMax}$ | $20.0$ | |\n| $(none)$ | Number of points in base assets-above-minimum grid | $\\texttt{aXtraCount}$ | $48$ | |\n| $(none)$ | Exponential nesting factor for base assets-above-minimum 
grid | $\\texttt{aXtraNestFac}$ | $3$ | |\n| $(none)$ | Additional values to add to assets-above-minimum grid | $\\texttt{aXtraExtra}$ | $None$ | |\n| $\\underline{a}$| Artificial borrowing constraint (normalized) | $\\texttt{BoroCnstArt}$ | $0.0$ | |\n| $(none)$|Indicator for whether $\\texttt{vFunc}$ should be computed | $\\texttt{vFuncBool}$ | $True$ | |\n| $(none)$ |Indicator for whether $\\texttt{cFunc}$ should use cubic splines | $\\texttt{CubicBool}$ | $False$ | |\n|$T$| Number of periods in this type's \"cycle\" |$\\texttt{T_cycle}$| $1$ | |\n|(none)| Number of times the \"cycle\" occurs |$\\texttt{cycles}$| $0$ | |", "IdiosyncDict={\n # Parameters shared with the perfect foresight model\n \"CRRA\": 2.0, # Coefficient of relative risk aversion\n \"Rfree\": 1.03, # Interest factor on assets\n \"DiscFac\": 0.96, # Intertemporal discount factor\n \"LivPrb\" : [0.98], # Survival probability\n \"PermGroFac\" :[1.01], # Permanent income growth factor\n \n # Parameters that specify the income distribution over the lifecycle\n \"PermShkStd\" : [0.1], # Standard deviation of log permanent shocks to income\n \"PermShkCount\" : 7, # Number of points in discrete approximation to permanent income shocks\n \"TranShkStd\" : [0.2], # Standard deviation of log transitory shocks to income\n \"TranShkCount\" : 7, # Number of points in discrete approximation to transitory income shocks\n \"UnempPrb\" : 0.05, # Probability of unemployment while working\n \"IncUnemp\" : 0.3, # Unemployment benefits replacement rate\n \"UnempPrbRet\" : 0.0005, # Probability of \"unemployment\" while retired\n \"IncUnempRet\" : 0.0, # \"Unemployment\" benefits when retired\n \"T_retire\" : 0, # Period of retirement (0 --> no retirement)\n \"tax_rate\" : 0.0, # Flat income tax rate (legacy parameter, will be removed in future)\n \n # Parameters for constructing the \"assets above minimum\" grid\n \"aXtraMin\" : 0.001, # Minimum end-of-period \"assets above minimum\" value\n \"aXtraMax\" : 20, # 
Maximum end-of-period \"assets above minimum\" value\n \"aXtraCount\" : 48, # Number of points in the base grid of \"assets above minimum\"\n \"aXtraNestFac\" : 3, # Exponential nesting factor when constructing \"assets above minimum\" grid\n \"aXtraExtra\" : [None], # Additional values to add to aXtraGrid\n \n # A few other parameters\n \"BoroCnstArt\" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of-period assets\n \"vFuncBool\" : True, # Whether to calculate the value function during solution \n \"CubicBool\" : False, # Preference shocks currently only compatible with linear cFunc\n \"T_cycle\" : 1, # Number of periods in the cycle for this agent type \n \n # Parameters only used in simulation\n \"AgentCount\" : 10000, # Number of agents of this type\n \"T_sim\" : 120, # Number of periods to simulate\n \"aNrmInitMean\" : -6.0, # Mean of log initial assets\n \"aNrmInitStd\" : 1.0, # Standard deviation of log initial assets\n \"pLvlInitMean\" : 0.0, # Mean of log initial permanent income\n \"pLvlInitStd\" : 0.0, # Standard deviation of log initial permanent income\n \"PermGroFacAgg\" : 1.0, # Aggregate permanent income growth factor\n \"T_age\" : None, # Age after which simulated agents are automatically killed\n}", "The distribution of permanent income shocks is specified as mean one lognormal, with an age-varying (underlying) standard deviation. The distribution of transitory income shocks is also mean one lognormal, but with an additional point mass representing unemployment; the transitory shocks are adjusted so that the distribution is still mean one. The continuous distributions are discretized with an equiprobable distribution.\nOptionally, the user can specify the period when the individual retires and escapes essentially all income risk as T_retire; this can be turned off by setting the parameter to $0$. 
In retirement, all permanent income shocks are turned off, and the only transitory shock is an \"unemployment\" shock, likely with small probability; this prevents the retired problem from degenerating into a perfect foresight model.\nThe grid of assets above minimum $\\texttt{aXtraGrid}$ is specified by its minimum and maximum level, the number of gridpoints, and the extent of exponential nesting. The greater the (integer) value of $\\texttt{aXtraNestFac}$, the more dense the gridpoints will be at the bottom of the grid (and more sparse near the top); setting $\\texttt{aXtraNestFac}$ to $0$ will generate an evenly spaced grid of $a_t$.\nThe artificial borrowing constraint $\\texttt{BoroCnstArt}$ can be set to None to turn it off.\nIt is not necessary to compute the value function in this model, and it is not computationally free to do so. You can choose whether the value function should be calculated and returned as part of the solution of the model with $\\texttt{vFuncBool}$. The consumption function will be constructed as a piecewise linear interpolation when $\\texttt{CubicBool}$ is False, and will be a piecewise cubic spline interpolator if True.\nSolving and examining the solution of the idiosyncratic income shocks model\nThe cell below creates an infinite horizon instance of IndShockConsumerType and solves its model by calling its solve method.", "IndShockExample = IndShockConsumerType(**IdiosyncDict)\nIndShockExample.cycles = 0 # Make this type have an infinite horizon\nIndShockExample.solve()", "After solving the model, we can examine an element of this type's $\\texttt{solution}$:", "print(vars(IndShockExample.solution[0]))", "The single-period solution to an idiosyncratic shocks consumer's problem has all of the same attributes as in the perfect foresight model, with a couple additions. 
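The multi-exponential spacing controlled by $\texttt{aXtraNestFac}$, described above, can be illustrated with a short sketch. This is a guess at the general idea — repeatedly apply $\log(1+x)$, space points evenly in the transformed coordinates, then map back — and may differ in detail from HARK's actual grid constructor; the function name `make_nested_grid` is made up for this illustration.

```python
import numpy as np

def make_nested_grid(gmin, gmax, count, nest_fac):
    """Grid on [gmin, gmax] that is evenly spaced after nest_fac rounds of
    log(1+x); higher nest_fac packs points more densely near gmin."""
    lo, hi = gmin, gmax
    for _ in range(nest_fac):
        lo, hi = np.log(1.0 + lo), np.log(1.0 + hi)
    grid = np.linspace(lo, hi, count)       # even spacing in nested-log space
    for _ in range(nest_fac):
        grid = np.exp(grid) - 1.0           # undo the nesting
    return grid

flat = make_nested_grid(0.001, 20.0, 48, 0)    # nest_fac = 0: evenly spaced
dense = make_nested_grid(0.001, 20.0, 48, 3)   # nest_fac = 3: dense near bottom
```

Comparing `np.diff(flat)` with `np.diff(dense)` shows the effect: the nested grid's first gaps are far smaller than the even grid's, while its gaps near the top are much larger.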
The solution can include the marginal marginal value of market resources function $\\texttt{vPPfunc}$, but this is only constructed if $\\texttt{CubicBool}$ is True, so that the MPC can be accurately computed; when it is False, then $\\texttt{vPPfunc}$ merely returns NaN everywhere.\nThe solveConsIndShock function calculates steady state market resources and stores it in the attribute $\\texttt{mNrmSS}$. This represents the steady state level of $m_t$ if this period were to occur indefinitely, but with income shocks turned off. This is relevant in a \"one period infinite horizon\" model like we've specified here, but is less useful in a lifecycle model.\nLet's take a look at the consumption function by plotting it, along with its derivative (the MPC):", "print('Consumption function for an idiosyncratic shocks consumer type:')\nplot_funcs(IndShockExample.solution[0].cFunc,IndShockExample.solution[0].mNrmMin,5)\nprint('Marginal propensity to consume for an idiosyncratic shocks consumer type:')\nplot_funcs_der(IndShockExample.solution[0].cFunc,IndShockExample.solution[0].mNrmMin,5)", "The lower part of the consumption function is linear with a slope of 1, representing the constrained part of the consumption function where the consumer would like to consume more by borrowing-- his marginal utility of consumption exceeds the marginal value of assets-- but he is prevented from doing so by the artificial borrowing constraint.\nThe MPC is a step function, as the $\\texttt{cFunc}$ itself is a piecewise linear function; note the large jump in the MPC where the borrowing constraint begins to bind.\nIf you want to look at the interpolation nodes for the consumption function, these can be found by \"digging into\" attributes of $\\texttt{cFunc}$:", "print('mNrmGrid for unconstrained cFunc is ',IndShockExample.solution[0].cFunc.functions[0].x_list)\nprint('cNrmGrid for unconstrained cFunc is ',IndShockExample.solution[0].cFunc.functions[0].y_list)\nprint('mNrmGrid for borrowing 
constrained cFunc is ',IndShockExample.solution[0].cFunc.functions[1].x_list)\nprint('cNrmGrid for borrowing constrained cFunc is ',IndShockExample.solution[0].cFunc.functions[1].y_list)", "The consumption function in this model is an instance of LowerEnvelope1D, a class that takes an arbitrary number of 1D interpolants as arguments to its initialization method. When called, a LowerEnvelope1D evaluates each of its component functions and returns the lowest value. Here, the two component functions are the unconstrained consumption function-- how the agent would consume if the artificial borrowing constraint did not exist for just this period-- and the borrowing constrained consumption function-- how much he would consume if the artificial borrowing constraint is binding. \nThe actual consumption function is the lower of these two functions, pointwise. We can see this by plotting the component functions on the same figure:", "plot_funcs(IndShockExample.solution[0].cFunc.functions,-0.25,5.)", "Simulating the idiosyncratic income shocks model\nIn order to generate simulated data, an instance of IndShockConsumerType needs to know how many agents there are that share these particular parameters (and are thus ex ante homogeneous), the distribution of states for newly \"born\" agents, and how many periods to simulate. 
These simulation parameters are described in the table below, along with example values.\n| Description | Code | Example value |\n| :---: | --- | --- |\n| Number of consumers of this type | $\\texttt{AgentCount}$ | $10000$ |\n| Number of periods to simulate | $\\texttt{T_sim}$ | $120$ |\n| Mean of initial log (normalized) assets | $\\texttt{aNrmInitMean}$ | $-6.0$ |\n| Stdev of initial log (normalized) assets | $\\texttt{aNrmInitStd}$ | $1.0$ |\n| Mean of initial log permanent income | $\\texttt{pLvlInitMean}$ | $0.0$ |\n| Stdev of initial log permanent income | $\\texttt{pLvlInitStd}$ | $0.0$ |\n| Aggregate productivity growth factor | $\\texttt{PermGroFacAgg}$ | $1.0$ |\n| Age after which consumers are automatically killed | $\\texttt{T_age}$ | $None$ |\nHere, we will simulate 10,000 consumers for 120 periods. All newly born agents will start with permanent income of exactly $P_t = 1.0 = \\exp(\\texttt{pLvlInitMean})$, as $\\texttt{pLvlInitStd}$ has been set to zero; they will have essentially zero assets at birth, as $\\texttt{aNrmInitMean}$ is $-6.0$; assets will be less than $1\\%$ of permanent income at birth.\nThese example parameter values were already passed as part of the parameter dictionary that we used to create IndShockExample, so it is ready to simulate. We need to set the track_vars attribute to indicate the variables for which we want to record a history.", "IndShockExample.track_vars = ['aNrm','mNrm','cNrm','pLvl']\nIndShockExample.initialize_sim()\nIndShockExample.simulate()", "We can now look at the simulated data in aggregate or at the individual consumer level. 
Like in the perfect foresight model, we can plot average (normalized) market resources over time, as well as average consumption:", "plt.plot(np.mean(IndShockExample.history['mNrm'],axis=1))\nplt.xlabel('Time')\nplt.ylabel('Mean market resources')\nplt.show()\n\nplt.plot(np.mean(IndShockExample.history['cNrm'],axis=1))\nplt.xlabel('Time')\nplt.ylabel('Mean consumption')\nplt.show()", "We could also plot individual consumption paths for some of the consumers-- say, the first five:", "plt.plot(IndShockExample.history['cNrm'][:,0:5])\nplt.xlabel('Time')\nplt.ylabel('Individual consumption paths')\nplt.show()", "Other example specifications of idiosyncratic income shocks consumers\n$\\texttt{IndShockConsumerType}$-- and $\\texttt{HARK}$ in general-- can also represent models that are not infinite horizon. \nLifecycle example\nSuppose we wanted to represent consumers with a lifecycle-- parameter values that differ by age, with a finite end point beyond which the individual cannot survive. This can be done very easily by simply specifying the time-varying attributes $\\texttt{PermGroFac}$, $\\texttt{LivPrb}$, $\\texttt{PermShkStd}$, and $\\texttt{TranShkStd}$ as Python lists specifying the sequence of periods these agents will experience, from beginning to end.\nIn the cell below, we define a parameter dictionary for a rather short ten period lifecycle, with arbitrarily chosen parameters. 
For a more realistically calibrated (and much longer) lifecycle model, see the SolvingMicroDSOPs REMARK.", "LifecycleDict={ # Click arrow to expand this fairly large parameter dictionary\n # Parameters shared with the perfect foresight model\n \"CRRA\": 2.0, # Coefficient of relative risk aversion\n \"Rfree\": 1.03, # Interest factor on assets\n \"DiscFac\": 0.96, # Intertemporal discount factor\n \"LivPrb\" : [0.99,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.1],\n \"PermGroFac\" : [1.01,1.01,1.01,1.02,1.02,1.02,0.7,1.0,1.0,1.0],\n \n # Parameters that specify the income distribution over the lifecycle\n \"PermShkStd\" : [0.1,0.2,0.1,0.2,0.1,0.2,0.1,0,0,0],\n \"PermShkCount\" : 7, # Number of points in discrete approximation to permanent income shocks\n \"TranShkStd\" : [0.3,0.2,0.1,0.3,0.2,0.1,0.3,0,0,0],\n \"TranShkCount\" : 7, # Number of points in discrete approximation to transitory income shocks\n \"UnempPrb\" : 0.05, # Probability of unemployment while working\n \"IncUnemp\" : 0.3, # Unemployment benefits replacement rate\n \"UnempPrbRet\" : 0.0005, # Probability of \"unemployment\" while retired\n \"IncUnempRet\" : 0.0, # \"Unemployment\" benefits when retired\n \"T_retire\" : 7, # Period of retirement (0 --> no retirement)\n \"tax_rate\" : 0.0, # Flat income tax rate (legacy parameter, will be removed in future)\n \n # Parameters for constructing the \"assets above minimum\" grid\n \"aXtraMin\" : 0.001, # Minimum end-of-period \"assets above minimum\" value\n \"aXtraMax\" : 20, # Maximum end-of-period \"assets above minimum\" value\n \"aXtraCount\" : 48, # Number of points in the base grid of \"assets above minimum\"\n \"aXtraNestFac\" : 3, # Exponential nesting factor when constructing \"assets above minimum\" grid\n \"aXtraExtra\" : [None], # Additional values to add to aXtraGrid\n \n # A few other parameters\n \"BoroCnstArt\" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of-period assets\n \"vFuncBool\" : True, # Whether to calculate the 
value function during solution \n \"CubicBool\" : False, # Preference shocks currently only compatible with linear cFunc\n \"T_cycle\" : 10, # Number of periods in the cycle for this agent type \n \n # Parameters only used in simulation\n \"AgentCount\" : 10000, # Number of agents of this type\n \"T_sim\" : 120, # Number of periods to simulate\n \"aNrmInitMean\" : -6.0, # Mean of log initial assets\n \"aNrmInitStd\" : 1.0, # Standard deviation of log initial assets\n \"pLvlInitMean\" : 0.0, # Mean of log initial permanent income\n \"pLvlInitStd\" : 0.0, # Standard deviation of log initial permanent income\n \"PermGroFacAgg\" : 1.0, # Aggregate permanent income growth factor\n \"T_age\" : 11, # Age after which simulated agents are automatically killed \n}", "In this case, we have specified a ten period model in which retirement happens in period $t=7$. Agents in this model are more likely to die as they age, and their permanent income drops by 30\\% at retirement. Let's make and solve this lifecycle example, then look at the $\\texttt{solution}$ attribute.", "LifecycleExample = IndShockConsumerType(**LifecycleDict)\nLifecycleExample.cycles = 1 # Make this consumer live a sequence of periods -- a lifetime -- exactly once\nLifecycleExample.solve()\nprint('First element of solution is',LifecycleExample.solution[0])\nprint('Solution has', len(LifecycleExample.solution),'elements.')", "This was supposed to be a ten period lifecycle model-- why does our consumer type have eleven elements in its $\\texttt{solution}$? It would be more precise to say that this specification has ten non-terminal periods. The solution to the 11th and final period in the model would be the same for every set of parameters: consume $c_t = m_t$, because there is no future. 
In a lifecycle model, the terminal period is assumed to exist; the $\\texttt{LivPrb}$ parameter does not need to end with a $0.0$ in order to guarantee that survivors die.\nWe can quickly plot the consumption functions in each period of the model:", "print('Consumption functions across the lifecycle:')\nmMin = np.min([LifecycleExample.solution[t].mNrmMin for t in range(LifecycleExample.T_cycle)])\nLifecycleExample.unpack('cFunc') # This makes all of the cFuncs accessible in the attribute cFunc\nplot_funcs(LifecycleExample.cFunc,mMin,5)", "\"Cyclical\" example\nWe can also model consumers who face an infinite horizon, but who do not face the same problem in every period. Consider someone who works as a ski instructor: they make most of their income for the year in the winter, and make very little money in the other three seasons.\nWe can represent this type of individual as a four period, infinite horizon model in which expected \"permanent\" income growth varies greatly across seasons.", "CyclicalDict = { # Click the arrow to expand this parameter dictionary\n # Parameters shared with the perfect foresight model\n \"CRRA\": 2.0, # Coefficient of relative risk aversion\n \"Rfree\": 1.03, # Interest factor on assets\n \"DiscFac\": 0.96, # Intertemporal discount factor\n \"LivPrb\" : 4*[0.98], # Survival probability\n \"PermGroFac\" : [1.082251, 2.8, 0.3, 1.1],\n \n # Parameters that specify the income distribution over the lifecycle\n \"PermShkStd\" : [0.1,0.1,0.1,0.1],\n \"PermShkCount\" : 7, # Number of points in discrete approximation to permanent income shocks\n \"TranShkStd\" : [0.2,0.2,0.2,0.2],\n \"TranShkCount\" : 7, # Number of points in discrete approximation to transitory income shocks\n \"UnempPrb\" : 0.05, # Probability of unemployment while working\n \"IncUnemp\" : 0.3, # Unemployment benefits replacement rate\n \"UnempPrbRet\" : 0.0005, # Probability of \"unemployment\" while retired\n \"IncUnempRet\" : 0.0, # \"Unemployment\" benefits when retired\n 
\"T_retire\" : 0, # Period of retirement (0 --> no retirement)\n \"tax_rate\" : 0.0, # Flat income tax rate (legacy parameter, will be removed in future)\n \n # Parameters for constructing the \"assets above minimum\" grid\n \"aXtraMin\" : 0.001, # Minimum end-of-period \"assets above minimum\" value\n \"aXtraMax\" : 20, # Maximum end-of-period \"assets above minimum\" value\n \"aXtraCount\" : 48, # Number of points in the base grid of \"assets above minimum\"\n \"aXtraNestFac\" : 3, # Exponential nesting factor when constructing \"assets above minimum\" grid\n \"aXtraExtra\" : [None], # Additional values to add to aXtraGrid\n \n # A few other parameters\n \"BoroCnstArt\" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of-period assets\n \"vFuncBool\" : True, # Whether to calculate the value function during solution \n \"CubicBool\" : False, # Preference shocks currently only compatible with linear cFunc\n \"T_cycle\" : 4, # Number of periods in the cycle for this agent type \n \n # Parameters only used in simulation\n \"AgentCount\" : 10000, # Number of agents of this type\n \"T_sim\" : 120, # Number of periods to simulate\n \"aNrmInitMean\" : -6.0, # Mean of log initial assets\n \"aNrmInitStd\" : 1.0, # Standard deviation of log initial assets\n \"pLvlInitMean\" : 0.0, # Mean of log initial permanent income\n \"pLvlInitStd\" : 0.0, # Standard deviation of log initial permanent income\n \"PermGroFacAgg\" : 1.0, # Aggregate permanent income growth factor\n \"T_age\" : None, # Age after which simulated agents are automatically killed \n}", "This consumer type's parameter dictionary is nearly identical to the original infinite horizon type we made, except that each of the time-varying parameters now has four values, rather than just one. Most of these have the same value in each period except for $\\texttt{PermGroFac}$, which varies greatly over the four seasons. 
Note that the product of the four \"permanent\" income growth factors is almost exactly 1.0-- this type's income does not grow on average in the long run!\nLet's make and solve this consumer type, then plot his quarterly consumption functions:", "CyclicalExample = IndShockConsumerType(**CyclicalDict)\nCyclicalExample.cycles = 0 # Make this consumer type have an infinite horizon\nCyclicalExample.solve()\n\nCyclicalExample.unpack('cFunc')\nprint('Quarterly consumption functions:')\nmMin = min([X.mNrmMin for X in CyclicalExample.solution])\nplot_funcs(CyclicalExample.cFunc,mMin,5)", "The very low green consumption function corresponds to the quarter in which the ski instructors make most of their income. They know that they are about to experience a 70% drop in \"permanent\" income, so they do not consume much relative to their income this quarter. In the other three quarters, normalized consumption is much higher, as current \"permanent\" income is low relative to future expectations. In level, the consumption chosen in each quarter is much more similar" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
karlstroetmann/Artificial-Intelligence
Python/4 Automatic Theorem Proving/Parser.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open (\"../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)", "A Simple Parser for Term Rewriting\nThis file implements a parser for terms and equations. It uses the parser generator Ply. To install Ply, change the cell below into a code cell and execute it. If the package ply is already installed, this command will only produce a message that the package is already installed.\n!conda install -y -c anaconda ply\nSpecification of the Scanner\nThe scanner that is implemented below recognizes numbers, variable names, function names, and various operator symbols. Both variable names and function names start with a letter; a name that is immediately followed by an opening parenthesis ( is treated as a function name, while all other names are treated as variables.", "import ply.lex as lex\n\ntokens = [ 'NUMBER', 'VAR', 'FCT', 'BACKSLASH' ]", "The token Number specifies a natural number. Syntactically, numbers are treated as function symbols.", "def t_NUMBER(t):\n r'0|[1-9][0-9]*'\n return t", "Variables start with a letter, followed by letters, digits, and underscores. They must be followed by a character that is not an opening parenthesis (.", "def t_VAR(t):\n r'[a-zA-Z][a-zA-Z0-9_]*(?=[^(a-zA-Z0-9_])'\n return t", "Function names start with a letter, followed by letters, digits, and underscores. \nThey have to be followed by an opening parenthesis (.", "def t_FCT(t):\n r'[a-zA-Z][a-zA-Z0-9_]*(?=[(])'\n return t\n\ndef t_BACKSLASH(t):\n r'\\\\'\n return t", "Single line comments are supported and work as in C.", "def t_COMMENT(t):\n r'//[^\\n]*'\n t.lexer.lineno += t.value.count('\\n')\n pass", "The arithmetic operators and a few other symbols are supported.", "literals = ['+', '-', '*', '/', '\\\\', '%', '^', '(', ')', ';', '=', ',']", "White space, i.e. space characters, tabulators, and carriage returns are ignored.", "t_ignore = ' \\t\\r'", "Syntactically, newline characters are ignored. However, we still need to keep track of them in order to know which line we are in. 
This information is needed later for error messages.", "def t_newline(t):\n r'\\n'\n t.lexer.lineno += 1\n return", "Given a token, the function find_column returns the column where token starts.\nThis is possible, because token.lexer.lexdata stores the string that is given to the scanner and token.lexpos is the number of characters that precede token.", "def find_column(token):\n program = token.lexer.lexdata\n line_start = program.rfind('\\n', 0, token.lexpos) + 1\n return (token.lexpos - line_start) + 1", "The function t_error is called for any token t that can not be scanned by the lexer. In this case, t.value[0] is the first character that can not be recognized by the scanner.", "def t_error(t):\n column = find_column(t)\n print(f\"Illegal character '{t.value[0]}' in line {t.lineno}, column {column}.\")\n t.lexer.skip(1)", "The next assignment is necessary to make the lexer think that the code given above is part of some file.", "__file__ = 'main'\n\nlexer = lex.lex()\n\ndef test_scanner(file_name):\n with open(file_name, 'r') as handle:\n program = handle.read() \n print(program)\n lexer.input(program)\n lexer.lineno = 1\n return [t for t in lexer]", "for t in test_scanner('Examples/quasigroup.eqn'):\n print(t)\nSpecification of the Parser\nWe will use the following grammar to specify the language that our parser can recognize:\n```\naxioms\n : equation\n | axioms equation \n ;\nequation \n : term '=' term\n ;\nterm: term '+' term \n | term '-' term \n | term '*' term \n | term '/' term \n | term '\\' term\n | term '%' term\n | term '^' term\n | '(' term ')' \n | FCT '(' term_list ')' \n | FCT \n | VAR\n ;\nterm_list\n : / epsilon /\n | term\n | term ',' ne_term_list\n ;\nne_term_list\n : term\n | term ',' ne_term_list\n ;\n```\nWe will use precedence declarations to resolve the ambiguity that is inherent in this grammar.", "import ply.yacc as yacc", "The start variable of our grammar is axioms.", "start = 'axioms'\n\nprecedence = (\n ('nonassoc', '='),\n 
('left', '+', '-'),\n ('left', '*', '/', 'BACKSLASH', '%'),\n ('right', '^')\n)\n\ndef p_axioms_one(p):\n \"axioms : equation\"\n p[0] = ('axioms', p[1])\n \ndef p_axioms_more(p):\n \"axioms : axioms equation\"\n p[0] = p[1] + (p[2],)\n\ndef p_equation(p):\n \"equation : term '=' term ';'\"\n p[0] = ('=', p[1], p[3])\n\ndef p_term_plus(p):\n \"term : term '+' term\"\n p[0] = ('+', p[1], p[3])\n \ndef p_term_minus(p):\n \"term : term '-' term\"\n p[0] = ('-', p[1], p[3])\n \ndef p_term_times(p):\n \"term : term '*' term\"\n p[0] = ('*', p[1], p[3])\n \ndef p_term_divide(p):\n \"term : term '/' term\"\n p[0] = ('/', p[1], p[3])\n \ndef p_term_backslash(p):\n \"term : term BACKSLASH term\"\n p[0] = ('\\\\', p[1], p[3])\n \ndef p_term_modulo(p):\n \"term : term '%' term\"\n p[0] = ('%', p[1], p[3])\n \ndef p_term_power(p):\n \"term : term '^' term\"\n p[0] = ('^', p[1], p[3])\n \ndef p_term_group(p):\n \"term : '(' term ')'\"\n p[0] = p[2]\n\ndef p_term_fct_call(p):\n \"term : FCT '(' term_list ')'\"\n p[0] = (p[1],) + p[3][1:]\n\ndef p_term_number(p):\n \"term : NUMBER\"\n p[0] = (p[1],)\n\ndef p_term_id(p):\n \"term : VAR\"\n p[0] = ('$var', p[1])\n\ndef p_term_list_empty(p):\n \"term_list :\"\n p[0] = ('.',)\n \ndef p_term_list_one(p):\n \"term_list : term\"\n p[0] = ('.', p[1]) \n\ndef p_term_list_more(p):\n \"term_list : term ',' ne_term_list\"\n p[0] = ('.', p[1]) + p[3][1:] \n\ndef p_ne_term_list_one(p):\n \"ne_term_list : term\"\n p[0] = ('.', p[1]) \n \ndef p_ne_term_list_more(p):\n \"ne_term_list : term ',' ne_term_list\"\n p[0] = ('.', p[1]) + p[3][1:] \n\ndef p_error(p):\n if p:\n column = find_column(p)\n print(f'Syntax error at token \"{p.value}\" in line {p.lineno}, column {column}.')\n else:\n print('Syntax error at end of input.')", "Setting the optional argument write_tables to False is required to prevent an obscure bug where the parser generator tries to read an empty parse table. 
As we have used precedence declarations to resolve all shift/reduce conflicts, the action table should contain no conflict.", "parser = yacc.yacc(write_tables=False, debug=True)", "!cat parser.out\nThe notebook AST-2-Dot.ipynb provides the function tuple2dot. This function can be used to visualize the abstract syntax tree that is generated by the function yacc.parse.", "%run AST-2-Dot.ipynb", "The function test_parse takes a file_name as its sole argument. The file is read and parsed. \nThe resulting parse tree is visualized using graphviz. It is important to reset the\nattribute lineno of the scanner, for otherwise error messages will not have the correct line numbers.", "def test_parse(file_name):\n lexer.lineno = 1\n with open(file_name, 'r') as handle:\n program = handle.read() \n ast = yacc.parse(program)\n print(ast)\n return tuple2dot(ast)", "!cat Examples/quasigroup.eqn\ntest_parse('Examples/quasigroup.eqn')\nThe function parse_file reads the given file, parses its contents, and returns the list of equations found in the file.", "def parse_file(file_name):\n lexer.lineno = 1\n with open(file_name, 'r') as handle:\n program = handle.read() \n AST = yacc.parse(program)\n if AST:\n _, *L = AST\n return L\n return None", "parse_file('Examples/group-theory.eqn')", "def parse_equation(s):\n lexer.lineno = 1\n AST = yacc.parse(s + ';')\n if AST:\n _, *L = AST\n return L[0]\n return None", "parse_equation('i(x) * x = 1')", "def parse_term(s):\n lexer.lineno = 1\n AST = yacc.parse(s + '= 1;')\n if AST:\n _, *L = AST\n return L[0][1]\n return None", "parse_term('i(x) * x')", "def to_str(t):\n if isinstance(t, set):\n return '{' + ', '.join({ f'{to_str(eq)}' for eq in t }) + '}'\n if isinstance(t, list):\n return '[' + ', '.join([ f'{to_str(eq)}' for eq in t ]) + ']'\n if isinstance(t, dict):\n return '{' + ', '.join({ f'{k}: {to_str(v)}' for k, v in t.items() }) + '}'\n if isinstance(t, str):\n return t\n if t[0] == '$var':\n return t[1]\n if len(t) == 3 and t[0] in ['=']:\n _, 
lhs, rhs = t\n return f'{to_str(lhs)} = {to_str(rhs)}'\n if t[0] == '\\\\':\n op, lhs, rhs = t\n return to_str_paren(lhs) + ' \\\\ ' + to_str_paren(rhs)\n if len(t) == 3 and t[0] in ['+', '-', '*', '/', '%', '^']:\n op, lhs, rhs = t\n return f'{to_str_paren(lhs)} {op} {to_str_paren(rhs)}'\n f, *Args = t\n if Args == []:\n return f\n return f'{f}({to_str_list(Args)})'\n\ndef to_str_paren(t):\n if isinstance(t, str):\n return t\n if t[0] == '$var':\n return t[1]\n if len(t) == 3:\n op, lhs, rhs = t\n return f'({to_str_paren(lhs)} {op} {to_str_paren(rhs)})'\n f, *Args = t\n if Args == []:\n return f\n return f'{f}({to_str_list(Args)})'\n\ndef to_str_list(TL):\n if TL == []:\n return ''\n t, *Ts = TL\n if Ts == []:\n return f'{to_str(t)}'\n return f'{to_str(t)}, {to_str_list(Ts)}'" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
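The to_str machinery in the notebook above operates on plain Python tuples, so it can be exercised without running PLY at all by building the ASTs by hand. A minimal self-contained sketch of the pretty-printing idea — this simplified to_str handles only variables, equations, binary operators, and function application, as an illustration rather than the notebook's full implementation:

```python
# Minimal pretty printer for tuple-based ASTs as produced by a
# yacc-style parser: variables are ('$var', name); everything else
# is (operator_or_function, arg1, arg2, ...).

BINARY_OPS = {'+', '-', '*', '/', '%', '^', '\\'}

def to_str(t):
    if isinstance(t, str):
        return t
    if t[0] == '$var':
        return t[1]
    if t[0] == '=':
        _, lhs, rhs = t
        return f'{to_str(lhs)} = {to_str(rhs)}'
    if len(t) == 3 and t[0] in BINARY_OPS:
        op, lhs, rhs = t
        return f'{to_str_paren(lhs)} {op} {to_str_paren(rhs)}'
    f, *args = t
    return f if not args else f'{f}({", ".join(to_str(a) for a in args)})'

def to_str_paren(t):
    # Parenthesize nested binary operators to make precedence explicit.
    if not isinstance(t, str) and len(t) == 3 and t[0] in BINARY_OPS:
        op, lhs, rhs = t
        return f'({to_str_paren(lhs)} {op} {to_str_paren(rhs)})'
    return to_str(t)

# The tuple AST for the equation  i(x) * x = 1
ast = ('=', ('*', ('i', ('$var', 'x')), ('$var', 'x')), '1')
print(to_str(ast))   # → i(x) * x = 1
```

This mirrors what parse_equation('i(x) * x = 1') would hand back, but with the tuple written out explicitly.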
WillRhB/PythonLesssons
Untitled1.ipynb
mit
[ "Getting Data from the Web\nhttp://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/var/year/iso3.ext\nvar is either tas or pr; ext is usually csv; iso3 is the ISO standard 3-letter code for the country of interest (capitals). \nLook up country codes online. \ne.g. http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/year/GBR.csv\nUse (https://datahelpdesk.worldbank.org/knowledgebase/articles/902061-climate-data-api and http://www.nationsonline.org/oneworld/country_code_list.htm)", "# The Python requests library lets us get data straight from a URL \nimport requests \n\nurl = \"http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/year/GBR.csv\"\nresponse = requests.get(url) # requests.get fetches the URL and returns a response object\n\nif response.status_code != 200:\n print ('Failed to get data: ', response.status_code)\nelse: \n print ('First 100 characters of data are: ')\n print (response.text[:100])", "To do:\nGet temperature for Guatemala\nFetch rainfall for Afghanistan between 1980 and 1999", "url = 'http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/year/GTM.csv'\nresponse = requests.get(url)\n\nif response.status_code != 200:\n print ('Failed to get data: ', response.status_code)\nelse: \n print ('First 100 characters of data are: ')\n print (response.text[:100])\n\nurl = 'http://climatedataapi.worldbank.org/climateweb/rest/v1/country/annualavg/pr/1980/1999/AFG.csv'\nresponse = requests.get(url)\n\nif response.status_code != 200:\n print ('Failed to get data: ', response.status_code)\nelse: \n print ('First 100 characters of data are: ')\n print (response.text[:100])\n\n# Create a csv file: test01.csv\nwith open('test01.csv', 'w') as writer:\n writer.write('1901, 12.3\\n1902, 45.6\\n1903, 78.9\\n')\n\nwith open('test01.csv', 'r') as reader: \n for line in reader: \n print (len(line))\n\nwith open ('test01.csv', 'r') as reader: \n for line in reader: \n fields = line.split (',')\n print (fields)\n\n# We need to get rid of the hidden newline \\n\nwith open ('test01.csv', 'r') as reader: \n for line in reader: \n fields = line.strip().split (',')\n print (fields)", "Using the csv library instead", "import csv\n\nwith open ('test01.csv', 'r') as rawdata: \n csvdata = csv.reader(rawdata)\n for record in csvdata: \n print (record)\n\nurl = 'http://climatedataapi.worldbank.org/climateweb/rest/v1/country/cru/tas/year/GTM.csv'\nresponse = requests.get(url)\n\nif response.status_code != 200:\n print ('Failed to get data: ', response.status_code)\nelse: \n wrapper = csv.reader(response.text.strip().split('\\n'))\n for record in wrapper: \n if record[0] != 'year' : \n year = int(record[0])\n value = float(record[1])\n print (year, value)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
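The last cell above feeds response.text.strip().split('\n') to csv.reader. An equivalent pattern wraps the text in io.StringIO, which also behaves correctly if a quoted field ever contains a newline. A sketch against canned data, so no network access is needed — the payload values here are made up for illustration:

```python
import csv
import io

# Stand-in for response.text from the climate data API (made-up values).
payload = "year,data\n1901,12.3\n1902,45.6\n1903,78.9\n"

records = []
for record in csv.reader(io.StringIO(payload)):
    if record and record[0] != 'year':        # skip the header row
        records.append((int(record[0]), float(record[1])))

print(records)   # → [(1901, 12.3), (1902, 45.6), (1903, 78.9)]
```

With a live response, the same loop runs over csv.reader(io.StringIO(response.text)).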
UltronAI/Deep-Learning
CS231n/assignment3/StyleTransfer-PyTorch.ipynb
mit
[ "Style Transfer\nIn this notebook we will implement the style transfer technique from \"Image Style Transfer Using Convolutional Neural Networks\" (Gatys et al., CVPR 2015).\nThe general idea is to take two images, and produce a new image that reflects the content of one but the artistic \"style\" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself.\nThe deep network we use as a feature extractor is SqueezeNet, a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency.\nHere's an example of the images you'll be able to produce by the end of this notebook:\n\nSetup", "import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport torchvision\nimport torchvision.transforms as T\nimport PIL\n\nimport numpy as np\n\nfrom scipy.misc import imread\nfrom collections import namedtuple\nimport matplotlib.pyplot as plt\n\nfrom cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD\n%matplotlib inline", "We provide you with some helper functions to deal with images, since for this part of the assignment we're dealing with real JPEGs, not CIFAR-10 data.", "def preprocess(img, size=512):\n transform = T.Compose([\n T.Scale(size),\n T.ToTensor(),\n T.Normalize(mean=SQUEEZENET_MEAN.tolist(),\n std=SQUEEZENET_STD.tolist()),\n T.Lambda(lambda x: x[None]),\n ])\n return transform(img)\n\ndef deprocess(img):\n transform = T.Compose([\n T.Lambda(lambda x: x[0]),\n T.Normalize(mean=[0, 0, 0], std=[1.0 / s for s in SQUEEZENET_STD.tolist()]),\n T.Normalize(mean=[-m for m in SQUEEZENET_MEAN.tolist()], std=[1, 1, 1]),\n T.Lambda(rescale),\n T.ToPILImage(),\n ])\n return transform(img)\n\ndef rescale(x):\n low, high = x.min(), x.max()\n x_rescaled = (x - low) / (high - low)\n return x_rescaled\n\ndef 
rel_error(x,y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef features_from_img(imgpath, imgsize):\n img = preprocess(PIL.Image.open(imgpath), size=imgsize)\n img_var = Variable(img.type(dtype))\n return extract_features(img_var, cnn), img_var\n\n# Older versions of scipy.misc.imresize yield different results\n# from newer versions, so we check to make sure scipy is up to date.\ndef check_scipy():\n import scipy\n vnum = tuple(int(n) for n in scipy.__version__.split('.')[:2])\n assert vnum >= (0, 16), \"You must install SciPy >= 0.16.0 to complete this notebook.\"\n\ncheck_scipy()\n\nanswers = np.load('style-transfer-checks.npz')\n", "As in the last assignment, we need to set the dtype to select either the CPU or the GPU", "dtype = torch.FloatTensor\n# Uncomment the following line if you're on a machine with a GPU set up for PyTorch!\n# dtype = torch.cuda.FloatTensor \n\n# Load the pre-trained SqueezeNet model.\ncnn = torchvision.models.squeezenet1_1(pretrained=True).features\ncnn.type(dtype)\n\n# We don't want to train the model any further, so we don't want PyTorch to waste computation \n# computing gradients on parameters we're never going to update.\nfor param in cnn.parameters():\n param.requires_grad = False\n\n# We provide this helper code which takes an image, a model (cnn), and returns a list of\n# feature maps, one per layer.\ndef extract_features(x, cnn):\n \"\"\"\n Use the CNN to extract features from the input image x.\n \n Inputs:\n - x: A PyTorch Variable of shape (N, C, H, W) holding a minibatch of images that\n will be fed to the CNN.\n - cnn: A PyTorch model that we will use to extract features.\n \n Returns:\n - features: A list of features for the input images x extracted using the cnn model.\n features[i] is a PyTorch Variable of shape (N, C_i, H_i, W_i); recall that features\n from different layers of the network may have different numbers of channels (C_i) and\n spatial dimensions (H_i, W_i).\n \"\"\"\n features = []\n 
prev_feat = x\n for i, module in enumerate(cnn._modules.values()):\n next_feat = module(prev_feat)\n features.append(next_feat)\n prev_feat = next_feat\n return features", "Computing Loss\nWe're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below.\nContent loss\nWe can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent not on the parameters of the model, but instead on the pixel values of our original image.\nLet's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\\ell$), that has feature maps $A^\\ell \\in \\mathbb{R}^{1 \\times C_\\ell \\times H_\\ell \\times W_\\ell}$. $C_\\ell$ is the number of filters/channels in layer $\\ell$, $H_\\ell$ and $W_\\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\\ell \\in \\mathbb{R}^{N_\\ell \\times M_\\ell}$ be the feature map for the current image and $P^\\ell \\in \\mathbb{R}^{N_\\ell \\times M_\\ell}$ be the feature map for the content source image where $M_\\ell=H_\\ell\\times W_\\ell$ is the number of elements in each feature map. Each row of $F^\\ell$ or $P^\\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. 
Finally, let $w_c$ be the weight of the content loss term in the loss function.\nThen the content loss is given by:\n$L_c = w_c \\times \\sum_{i,j} (F_{ij}^{\\ell} - P_{ij}^{\\ell})^2$", "def content_loss(content_weight, content_current, content_original):\n \"\"\"\n Compute the content loss for style transfer.\n \n Inputs:\n - content_weight: Scalar giving the weighting for the content loss.\n - content_current: features of the current image; this is a PyTorch Tensor of shape\n (1, C_l, H_l, W_l).\n - content_target: features of the content image, Tensor with shape (1, C_l, H_l, W_l).\n \n Returns:\n - scalar content loss\n \"\"\"\n pass\n", "Test your content loss. You should see errors less than 0.001.", "def content_loss_test(correct):\n content_image = 'styles/tubingen.jpg'\n image_size = 192\n content_layer = 3\n content_weight = 6e-2\n \n c_feats, content_img_var = features_from_img(content_image, image_size)\n \n bad_img = Variable(torch.zeros(*content_img_var.data.size()))\n feats = extract_features(bad_img, cnn)\n \n student_output = content_loss(content_weight, c_feats[content_layer], feats[content_layer]).data.numpy()\n error = rel_error(correct, student_output)\n print('Maximum error is {:.3f}'.format(error))\n\ncontent_loss_test(answers['cl_out'])", "Style loss\nNow we can tackle the style loss. For a given layer $\\ell$, the style loss is defined as follows:\nFirst, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. 
There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.\nGiven a feature map $F^\\ell$ of shape $(1, C_\\ell, M_\\ell)$, the Gram matrix has shape $(1, C_\\ell, C_\\ell)$ and its elements are given by:\n$$G_{ij}^\\ell = \\sum_k F^{\\ell}_{ik} F^{\\ell}_{jk}$$\nAssuming $G^\\ell$ is the Gram matrix from the feature map of the current image, $A^\\ell$ is the Gram matrix from the feature map of the source style image, and $w_\\ell$ a scalar weight term, then the style loss for the layer $\\ell$ is simply the weighted Euclidean distance between the two Gram matrices:\n$$L_s^\\ell = w_\\ell \\sum_{i, j} \\left(G^\\ell_{ij} - A^\\ell_{ij}\\right)^2$$\nIn practice we usually compute the style loss at a set of layers $\\mathcal{L}$ rather than just a single layer $\\ell$; then the total style loss is the sum of style losses at each layer:\n$$L_s = \\sum_{\\ell \\in \\mathcal{L}} L_s^\\ell$$\nBegin by implementing the Gram matrix computation below:", "def gram_matrix(features, normalize=True):\n \"\"\"\n Compute the Gram matrix from features.\n \n Inputs:\n - features: PyTorch Variable of shape (N, C, H, W) giving features for\n a batch of N images.\n - normalize: optional, whether to normalize the Gram matrix\n If True, divide the Gram matrix by the number of neurons (H * W * C)\n \n Returns:\n - gram: PyTorch Variable of shape (N, C, C) giving the\n (optionally normalized) Gram matrices for the N input images.\n \"\"\"\n pass\n", "Test your Gram matrix code. 
You should see errors less than 0.001.", "def gram_matrix_test(correct):\n style_image = 'styles/starry_night.jpg'\n style_size = 192\n feats, _ = features_from_img(style_image, style_size)\n student_output = gram_matrix(feats[5].clone()).data.numpy()\n error = rel_error(correct, student_output)\n print('Maximum error is {:.3f}'.format(error))\n \ngram_matrix_test(answers['gm_out'])", "Next, implement the style loss:", "# Now put it together in the style_loss function...\ndef style_loss(feats, style_layers, style_targets, style_weights):\n \"\"\"\n Computes the style loss at a set of layers.\n \n Inputs:\n - feats: list of the features at every layer of the current image, as produced by\n the extract_features function.\n - style_layers: List of layer indices into feats giving the layers to include in the\n style loss.\n - style_targets: List of the same length as style_layers, where style_targets[i] is\n a PyTorch Variable giving the Gram matrix the source style image computed at\n layer style_layers[i].\n - style_weights: List of the same length as style_layers, where style_weights[i]\n is a scalar giving the weight for the style loss at layer style_layers[i].\n \n Returns:\n - style_loss: A PyTorch Variable holding a scalar giving the style loss.\n \"\"\"\n # Hint: you can do this with one for loop over the style layers, and should\n # not be very much code (~5 lines). You will need to use your gram_matrix function.\n pass\n", "Test your style loss implementation. 
The error should be less than 0.001.", "def style_loss_test(correct):\n content_image = 'styles/tubingen.jpg'\n style_image = 'styles/starry_night.jpg'\n image_size = 192\n style_size = 192\n style_layers = [1, 4, 6, 7]\n style_weights = [300000, 1000, 15, 3]\n \n c_feats, _ = features_from_img(content_image, image_size) \n feats, _ = features_from_img(style_image, style_size)\n style_targets = []\n for idx in style_layers:\n style_targets.append(gram_matrix(feats[idx].clone()))\n \n student_output = style_loss(c_feats, style_layers, style_targets, style_weights).data.numpy()\n error = rel_error(correct, student_output)\n print('Error is {:.3f}'.format(error))\n\n \nstyle_loss_test(answers['sl_out'])", "Total-variation regularization\nIt turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or \"total variation\" in the pixel values. \nYou can compute the \"total variation\" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regularization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$:\n$L_{tv} = w_t \\times \\sum_{c=1}^3\\sum_{i=1}^{H-1} \\sum_{j=1}^{W-1} \\left( (x_{i,j+1, c} - x_{i,j,c})^2 + (x_{i+1, j,c} - x_{i,j,c})^2 \\right)$\nIn the next cell, fill in the definition for the TV loss term. 
To receive full credit, your implementation should not have any loops.", "def tv_loss(img, tv_weight):\n \"\"\"\n Compute total variation loss.\n \n Inputs:\n - img: PyTorch Variable of shape (1, 3, H, W) holding an input image.\n - tv_weight: Scalar giving the weight w_t to use for the TV loss.\n \n Returns:\n - loss: PyTorch Variable holding a scalar giving the total variation loss\n for img weighted by tv_weight.\n \"\"\"\n # Your implementation should be vectorized and not require any loops!\n pass\n", "Test your TV loss implementation. Error should be less than 0.001.", "def tv_loss_test(correct):\n content_image = 'styles/tubingen.jpg'\n image_size = 192\n tv_weight = 2e-2\n\n content_img = preprocess(PIL.Image.open(content_image), size=image_size)\n content_img_var = Variable(content_img.type(dtype))\n \n student_output = tv_loss(content_img_var, tv_weight).data.numpy()\n error = rel_error(correct, student_output)\n print('Error is {:.3f}'.format(error))\n \ntv_loss_test(answers['tv_out'])", "Now we're ready to string it all together (you shouldn't have to modify this function):", "def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,\n style_layers, style_weights, tv_weight, init_random = False):\n \"\"\"\n Run style transfer!\n \n Inputs:\n - content_image: filename of content image\n - style_image: filename of style image\n - image_size: size of smallest image dimension (used for content loss and generated image)\n - style_size: size of smallest style image dimension\n - content_layer: layer to use for content loss\n - content_weight: weighting on content loss\n - style_layers: list of layers to use for style loss\n - style_weights: list of weights to use for each layer in style_layers\n - tv_weight: weight of total variation regularization term\n - init_random: initialize the starting image to uniform random noise\n \"\"\"\n \n # Extract features for the content image\n content_img = 
preprocess(PIL.Image.open(content_image), size=image_size)\n content_img_var = Variable(content_img.type(dtype))\n feats = extract_features(content_img_var, cnn)\n content_target = feats[content_layer].clone()\n\n # Extract features for the style image\n style_img = preprocess(PIL.Image.open(style_image), size=style_size)\n style_img_var = Variable(style_img.type(dtype))\n feats = extract_features(style_img_var, cnn)\n style_targets = []\n for idx in style_layers:\n style_targets.append(gram_matrix(feats[idx].clone()))\n\n # Initialize output image to content image or noise\n if init_random:\n img = torch.Tensor(content_img.size()).uniform_(0, 1)\n else:\n img = content_img.clone().type(dtype)\n\n # We do want the gradient computed on our image!\n img_var = Variable(img, requires_grad=True)\n\n # Set up optimization hyperparameters\n initial_lr = 3.0\n decayed_lr = 0.1\n decay_lr_at = 180\n\n # Note that we are optimizing the pixel values of the image by passing\n # in the img_var Torch variable, whose requires_grad flag is set to True\n optimizer = torch.optim.Adam([img_var], lr=initial_lr)\n \n f, axarr = plt.subplots(1,2)\n axarr[0].axis('off')\n axarr[1].axis('off')\n axarr[0].set_title('Content Source Img.')\n axarr[1].set_title('Style Source Img.')\n axarr[0].imshow(deprocess(content_img.cpu()))\n axarr[1].imshow(deprocess(style_img.cpu()))\n plt.show()\n plt.figure()\n \n for t in range(200):\n if t < 190:\n img.clamp_(-1.5, 1.5)\n optimizer.zero_grad()\n\n feats = extract_features(img_var, cnn)\n \n # Compute loss\n c_loss = content_loss(content_weight, feats[content_layer], content_target)\n s_loss = style_loss(feats, style_layers, style_targets, style_weights)\n t_loss = tv_loss(img_var, tv_weight) \n loss = c_loss + s_loss + t_loss\n \n loss.backward()\n\n # Perform gradient descent on our image values\n if t == decay_lr_at:\n optimizer = torch.optim.Adam([img_var], lr=decayed_lr)\n optimizer.step()\n\n if t % 100 == 0:\n print('Iteration 
{}'.format(t))\n plt.axis('off')\n plt.imshow(deprocess(img.cpu()))\n plt.show()\n print('Iteration {}'.format(t))\n plt.axis('off')\n plt.imshow(deprocess(img.cpu()))\n plt.show()", "Generate some pretty pictures!\nTry out style_transfer on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook.\n\nThe content_image is the filename of content image.\nThe style_image is the filename of style image.\nThe image_size is the size of smallest image dimension of the content image (used for content loss and generated image).\nThe style_size is the size of smallest style image dimension.\nThe content_layer specifies which layer to use for content loss.\nThe content_weight gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).\nstyle_layers specifies a list of which layers to use for style loss. \nstyle_weights specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.\ntv_weight specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content. 
\n\nBelow the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around with them and see how the resulting image changes.", "# Composition VII + Tubingen\nparams1 = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/composition_vii.jpg',\n 'image_size' : 192,\n 'style_size' : 512,\n 'content_layer' : 3,\n 'content_weight' : 5e-2, \n 'style_layers' : (1, 4, 6, 7),\n 'style_weights' : (20000, 500, 12, 1),\n 'tv_weight' : 5e-2\n}\n\nstyle_transfer(**params1)\n\n# Scream + Tubingen\nparams2 = {\n 'content_image':'styles/tubingen.jpg',\n 'style_image':'styles/the_scream.jpg',\n 'image_size':192,\n 'style_size':224,\n 'content_layer':3,\n 'content_weight':3e-2,\n 'style_layers':[1, 4, 6, 7],\n 'style_weights':[200000, 800, 12, 1],\n 'tv_weight':2e-2\n}\n\nstyle_transfer(**params2)\n\n# Starry Night + Tubingen\nparams3 = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/starry_night.jpg',\n 'image_size' : 192,\n 'style_size' : 192,\n 'content_layer' : 3,\n 'content_weight' : 6e-2,\n 'style_layers' : [1, 4, 6, 7],\n 'style_weights' : [300000, 1000, 15, 3],\n 'tv_weight' : 2e-2\n}\n\nstyle_transfer(**params3)", "Feature Inversion\nThe code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations).\nNow, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. 
You're starting with total noise, but you should end up with something that looks quite a bit like your original image.\n(Similarly, you could do \"texture synthesis\" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) \n[1] Aravindh Mahendran, Andrea Vedaldi, \"Understanding Deep Image Representations by Inverting them\", CVPR 2015", "# Feature Inversion -- Starry Night + Tubingen\nparams_inv = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/starry_night.jpg',\n 'image_size' : 192,\n 'style_size' : 192,\n 'content_layer' : 3,\n 'content_weight' : 6e-2,\n 'style_layers' : [1, 4, 6, 7],\n 'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss\n 'tv_weight' : 2e-2,\n 'init_random': True # we want to initialize our image to be random\n}\n\nstyle_transfer(**params_inv)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
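The gram_matrix cell in the notebook above is deliberately left as a `pass` stub (it is the assignment exercise), but the formula can be sanity-checked outside PyTorch. A NumPy sketch of the same computation — gram_matrix_np is a name introduced here, and this is one possible vectorization, not the official assignment solution:

```python
import numpy as np

def gram_matrix_np(features, normalize=True):
    """Gram matrices for features of shape (N, C, H, W).

    Implements G_ij = sum_k F_ik F_jk, where F is the feature map
    reshaped to (C, H*W); optionally normalized by H * W * C.
    """
    n, c, h, w = features.shape
    f = features.reshape(n, c, h * w)      # (N, C, M) with M = H*W
    gram = f @ f.transpose(0, 2, 1)        # batched F F^T -> (N, C, C)
    if normalize:
        gram = gram / (h * w * c)
    return gram

# Sanity check against an explicit double loop on a tiny input.
x = np.arange(24, dtype=float).reshape(1, 2, 3, 4)
g = gram_matrix_np(x, normalize=False)
f = x.reshape(1, 2, 12)
expected = np.array([[[np.dot(f[0, i], f[0, j]) for j in range(2)]
                      for i in range(2)]])
print(np.allclose(g, expected))   # → True
```

The PyTorch version asked for in the notebook follows the same reshape-and-batched-matmul shape logic, with `torch.bmm` in place of the batched `@`.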
zscore/pavement_analysis
src/Snapping_Readings_OSRM.ipynb
mit
[ "import collections\nimport functools\nfrom imposm.parser import OSMParser\nimport json\nfrom matplotlib import collections as mc\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport matplotlib.cm as cmx\nfrom numpy import nan\nimport numpy as np\nimport pandas as pd\nimport pyproj\nimport requests\nimport scipy as sp\nimport rtree\n# import seaborn as sb\nfrom scipy import signal\n# import shapely\nimport shapely.geometry\n%pylab inline\n\nimport data_munging", "Ride Report Method\nHere, we use the match method from the OSRM API with the code modified to return only the endpoints of segments. This allows us to aggregate over OSM segments since the node IDs are uniquely associated with a lat/lon pair given sufficient precision in the returned coordinates. The API recommends not using every single value for the match method, but I'm giving them regardless because it's easier to code. Down-sampling the ride might actually help to smooth some of the rides. (or perhaps not if we accidentally get a jagged part).\nCurrently, I am unsure how to mark up OSM data with bumpiness information, as we have \ndata that look like this in the raw OSM file:\n&lt;way id=\"23642309\" version=\"25\" timestamp=\"2013-12-26T23:03:24Z\" changeset=\"19653154\" uid=\"28775\" user=\"StellanL\"&gt;\n &lt;nd ref=\"258965973\"/&gt;\n &lt;nd ref=\"258023463\"/&gt;\n &lt;nd ref=\"736948618\"/&gt;\n &lt;nd ref=\"258023391\"/&gt;\n &lt;nd ref=\"736948622\"/&gt;\n &lt;nd ref=\"930330659\"/&gt;\n &lt;nd ref=\"736861978\"/&gt;\n &lt;nd ref=\"930330542\"/&gt;\n &lt;nd ref=\"930330544\"/&gt;\n &lt;nd ref=\"929808660\"/&gt;\n &lt;nd ref=\"736934948\"/&gt;\n &lt;nd ref=\"930330644\"/&gt;\n &lt;nd ref=\"736871567\"/&gt;\n &lt;nd ref=\"619628331\"/&gt;\n &lt;nd ref=\"740363293\"/&gt;\n &lt;nd ref=\"931468900\"/&gt;\n &lt;tag k=\"name\" v=\"North Wabash Avenue\"/&gt;\n &lt;tag k=\"highway\" v=\"tertiary\"/&gt;\n &lt;tag k=\"loc_ref\" v=\"44 E\"/&gt;\n &lt;/way&gt;\"\nMy tentative 
idea is to match up the lat/lons with OSM id using IMPOSM, then find the nd refs in the original data and add a property that contains bumpiness information.", "rides, readings = data_munging.read_raw_data()\nreadings = data_munging.clean_readings(readings)\nreadings = data_munging.add_proj_to_readings(readings, data_munging.NAD83)", "If using a Dockerized OSRM instance, you can get the IP address by linking up to the Docker container running OSRM and pinging it. Usually though, the url here will be correct since it is the default.", "digital_ocean_url = 'http://162.243.23.60/osrm-chi-vanilla/'\nlocal_docker_url = 'http://172.17.0.2:5000/'\nurl = local_docker_url\nnearest_request = url + 'nearest?loc={0},{1}'\nmatch_request = url + 'match?loc={0},{1}&t={2}&loc={3},{4}&t={5}'\n\ndef readings_to_match_str(readings):\n data_str = '&loc={0},{1}&t={2}'\n output_str = ''\n elapsed_time = 0\n for i, reading in readings.iterrows():\n elapsed_time += 1\n new_str = data_str.format(str(reading['start_lat']), str(reading['start_lon']), str(elapsed_time))\n output_str += new_str\n return url + 'match?' + output_str[1:]", "This is a small example of how everything should work for troubleshooting and other purposes.", "test_request = readings_to_match_str(readings.loc[readings['ride_id'] == 128, :])\nprint(test_request)\n\nmatched_ride = requests.get(test_request).json()\n\nsnapped_points = pd.DataFrame(matched_ride['matchings'][0]['matched_points'], columns=['lat', 'lon'])\n\nax = snapped_points.plot(x='lon', y='lat', kind='scatter')\nreadings.loc[readings['ride_id'] == 128, :].plot(x='start_lon', y='start_lat', kind='scatter', ax=ax)\nfig = plt.gcf()\nfig.set_size_inches(18.5, 10.5)\nplt.show()\n\na_reading = readings.loc[0, :]\ntest_match_request = match_request.format(a_reading['start_lat'],\n a_reading['start_lon'], \n 0,\n a_reading['end_lat'],\n a_reading['end_lon'],\n 1)\n# This does not work because OSRM does not accept floats as times. 
\n# test_map_request = map_request.format(*tuple(a_reading[['start_lat', 'start_lon', 'start_time',\n# 'end_lat', 'end_lon', 'end_time']]))\n\ntest_nearest_request = nearest_request.format(a_reading['start_lat'], a_reading['start_lon'])\n\nosrm_response = requests.get(test_match_request).json()\nosrm_response['matchings'][0]['matched_points']\n\nosrm_response = requests.get(test_nearest_request).json()\nosrm_response['mapped_coordinate']\n\nreadings['snapped_lat'] = 0\nreadings['snapped_lon'] = 0\n\nchi_readings = data_munging.filter_readings_to_chicago(readings)\nchi_rides = list(set(chi_readings.ride_id))\n\n# This is a small list of rides that I think are bad based upon their graphs.\n# I currently do not have an automatic way to update this.\nbad_rides = [128, 129, 5.0, 7.0, 131, 133, 34, 169]\ngood_chi_rides = [i for i in chi_rides if i not in bad_rides]\n\nfor ride_id in chi_rides:\n if ride_id in bad_rides:\n print(ride_id)\n try:\n print('num readings: ' + str(sum(readings['ride_id'] == ride_id)))\n except:\n print('we had some issues here.')\n\nall_snapped_points = []\nreadings['snapped_lat'] = np.NaN\nreadings['snapped_lon'] = np.NaN\nfor ride_id in chi_rides:\n if pd.notnull(ride_id):\n ax = readings.loc[readings['ride_id'] == ride_id, :].plot(x='start_lon', y='start_lat')\n try:\n matched_ride = requests.get(readings_to_match_str(readings.loc[readings['ride_id'] == ride_id, :])).json() \n readings.loc[readings['ride_id'] == ride_id, ['snapped_lat', 'snapped_lon']] = matched_ride['matchings'][0]['matched_points']\n readings.loc[readings['ride_id'] == ride_id, :].plot(x='snapped_lon', y='snapped_lat', ax=ax)\n except:\n print('could not snap')\n plt.title('Plotting Ride ' + str(ride_id))\n fig = plt.gcf()\n fig.set_size_inches(18.5, 10.5)\n plt.show()\n\nax = readings.loc[readings['ride_id'] == 2, :].plot(x='snapped_lon', y='snapped_lat', style='r-')\nfor ride_id in good_chi_rides:\n print(ride_id)\n try:\n# readings.loc[readings['ride_id'] == ride_id, 
:].plot(x='start_lon', y='start_lat', ax=ax)\n readings.loc[readings['ride_id'] == ride_id, :].plot(x='snapped_lon', y='snapped_lat', ax=ax, style='b-')\n except:\n print('bad')\nax = readings.loc[readings['ride_id'] == 2, :].plot(x='snapped_lon', y='snapped_lat', style='r-', ax=ax)\nfig = plt.gcf()\nfig.set_size_inches(36, 36)\nplt.show()\n\n# This code goes through a ride backwards in order to figure out what two endpoints \n# the bicycle was going between.\nreadings['next_snapped_lat'] = np.NaN\nreadings['next_snapped_lon'] = np.NaN\nfor ride_id in chi_rides:\n next_lat_lon = (np.NaN, np.NaN)\n for index, row in reversed(list(readings.loc[readings['ride_id'] == ride_id, :].iterrows())):\n readings.loc[index, ['next_snapped_lat', 'next_snapped_lon']] = next_lat_lon\n if (row['snapped_lat'], row['snapped_lon']) != next_lat_lon:\n next_lat_lon = (row['snapped_lat'], row['snapped_lon'])\n\nclean_chi_readings = readings.loc[[ride_id in chi_rides for ride_id in readings['ride_id']], :]\n\nclean_chi_readings.to_csv(data_munging.data_dir + 'clean_chi_readings.csv')\n\nclean_chi_readings = pd.read_csv(data_munging.data_dir + 'clean_chi_readings.csv')\n\nroad_bumpiness = collections.defaultdict(list)\nfor index, reading in clean_chi_readings.iterrows():\n if reading['gps_mph'] < 30 and reading['gps_mph'] > 3:\n osm_segment = [(reading['snapped_lat'], reading['snapped_lon']),\n (reading['next_snapped_lat'], reading['next_snapped_lon'])]\n osm_segment = sorted(osm_segment)\n if all([lat_lon != (np.NaN, np.NaN) for lat_lon in osm_segment]):\n road_bumpiness[tuple(osm_segment)].append(reading['abs_mean_over_speed'])\n\n# sorted_road_bumpiness = sorted(road_bumpiness.items(), key=lambda i: len(i[1]), reverse=True)\n\ntotal_road_readings = dict((osm_segment, len(road_bumpiness[osm_segment])) for osm_segment in road_bumpiness)\n\nagg_road_bumpiness = dict((osm_segment, np.mean(road_bumpiness[osm_segment])) for osm_segment in road_bumpiness)\n\nagg_path = data_munging.data_dir + 
'agg_road_bumpiness.txt'", "This section here functions as a shortcut if you just want to load up the aggregate bumpiness instead of \nhaving to calculate all of it", "with open(agg_path, 'w') as f:\n f.write(str(agg_road_bumpiness))\n\nwith open(agg_path, 'r') as f:\n agg_road_bumpiness = f.read()\n\nagg_road_bumpiness = eval(agg_road_bumpiness)\n\ndef osm_segment_is_null(osm_segment):\n return (pd.isnull(osm_segment[0][0])\n or pd.isnull(osm_segment[0][1])\n or pd.isnull(osm_segment[1][0])\n or pd.isnull(osm_segment[1][1]))\n\nagg_road_bumpiness = dict((osm_segment, agg_road_bumpiness[osm_segment]) for osm_segment in agg_road_bumpiness if not osm_segment_is_null(osm_segment))\n\n# This is where we filter out all osm segments that are too long\n\ndef find_seg_dist(lat_lon):\n return data_munging.calc_dist(lat_lon[0][1], lat_lon[0][0], lat_lon[1][1], lat_lon[1][0])\n\nseg_dist = dict()\nfor lat_lon in agg_road_bumpiness:\n seg_dist[lat_lon] = data_munging.calc_dist(lat_lon[0][1], lat_lon[0][0], lat_lon[1][1], lat_lon[1][0])\n\nwith open('../dat/chi_agg_info.csv', 'w') as f:\n f.write('lat_lon_tuple|agg_road_bumpiness|total_road_readings|seg_dist\\n')\n for lat_lon in agg_road_bumpiness:\n if data_munging.calc_dist(lat_lon[0][1], lat_lon[0][0], lat_lon[1][1], lat_lon[1][0]) < 200:\n f.write(str(lat_lon) + '|' + str(agg_road_bumpiness[lat_lon])\n + '|' + str(total_road_readings[lat_lon])\n + '|' + str(seg_dist[lat_lon]) + '\\n')\n\nseg_dist[lat_lon]\n\nnp.max(agg_road_bumpiness.values())\n\nplt.hist(agg_road_bumpiness.values())\n\nimport matplotlib.colors as colors\n\nplasma = cm = plt.get_cmap('plasma')\ncNorm = colors.Normalize(vmin=0, vmax=1.0)\nscalarMap = cmx.ScalarMappable(norm=cNorm, cmap=plasma)\n\nfor osm_segment, bumpiness in agg_road_bumpiness.items():\n# lat_lon = osm_segment\n# color = (1, 0, 0) if data_munging.calc_dist(lat_lon[0][1], lat_lon[0][0], lat_lon[1][1], lat_lon[1][0]) > 100 else (0, 1, 0)\n plt.plot([osm_segment[0][1], osm_segment[1][1]],\n 
[osm_segment[0][0], osm_segment[1][0]],\n# color=color)\n color=scalarMap.to_rgba(bumpiness))\nfig = plt.gcf()\nfig.set_size_inches(24, 48)\nplt.show()\n\nfiltered_agg_bumpiness = dict((lat_lon, agg_road_bumpiness[lat_lon])\n for lat_lon in agg_road_bumpiness if find_seg_dist(lat_lon) < 200)\n\nwith open(data_dir + 'filtered_chi_road_bumpiness.txt', 'w') as f:\n f.write(str(filtered_agg_bumpiness))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Neuroglycerin/neukrill-net-work
notebooks/model_run_and_result_analyses/Revisiting alexnet based experiment with 64 inputs (large).ipynb
mit
[ "from pylearn2.utils.serial import load as load_model\nfrom pylearn2.gui.get_weights_report import get_weights_report\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport os.path\nimport io\nfrom IPython.display import display, Image\n\nmodel = load_model(os.path.expandvars('${DATA_DIR}/plankton/models/alexnet_based_-_the_return_64_inputs_experiment_recent.pkl'))\n\nprint('## Model structure summary\\n')\nprint(model)\nparams = model.get_params() \nn_params = {p.name : p.get_value().size for p in params}\ntotal_params = sum(n_params.values())\nprint('\\n## Number of parameters\\n')\nprint(' ' + '\\n '.join(['{0} : {1} ({2:.1f}%)'.format(k, v, 100.*v/total_params) \n for k, v in sorted(n_params.items(), key=lambda x: x[0])]))\nprint('\\nTotal : {0}'.format(total_params))", "Plot train and valid set NLL", "tr = np.array(model.monitor.channels['valid_y_y_1_nll'].time_record) / 3600.\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(111)\nax1.plot(model.monitor.channels['valid_y_y_1_nll'].val_record)\nax1.plot(model.monitor.channels['train_y_y_1_nll'].val_record)\nax1.set_xlabel('Epochs')\nax1.legend(['Valid', 'Train'])\nax1.set_ylabel('NLL')\nax1.set_ylim(0., 5.)\nax1.grid(True)\nax2 = ax1.twiny()\nax2.set_xticks(np.arange(0,tr.shape[0],20))\nax2.set_xticklabels(['{0:.2f}'.format(t) for t in tr[::20]])\nax2.set_xlabel('Hours')\n\nplt.plot(model.monitor.channels['train_term_1_l1_penalty'].val_record)\nplt.plot(model.monitor.channels['train_term_2_weight_decay'].val_record)\n\npv = get_weights_report(model=model)\nimg = pv.get_img()\nimg = img.resize((4*img.size[0], 4*img.size[1]))\nimg_data = io.BytesIO()\nimg.save(img_data, format='png')\ndisplay(Image(data=img_data.getvalue(), format='png'))\n\nplt.plot(model.monitor.channels['learning_rate'].val_record)", "Plot ratio of update norms to parameter norms across epochs for different layers", "h1_W_up_norms = np.array([float(v) for v in 
model.monitor.channels['mean_update_h1_W_kernel_norm_mean'].val_record])\nh1_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h1_kernel_norms_mean'].val_record])\nplt.plot(h1_W_norms / h1_W_up_norms)\n#plt.ylim(0,1000)\nplt.show()\nplt.plot(model.monitor.channels['valid_h1_kernel_norms_mean'].val_record)\nplt.plot(model.monitor.channels['valid_h1_kernel_norms_max'].val_record)\n\nh2_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h2_W_kernel_norm_mean'].val_record])\nh2_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h2_kernel_norms_mean'].val_record])\nplt.plot(h2_W_norms / h2_W_up_norms)\nplt.show()\nplt.plot(model.monitor.channels['valid_h2_kernel_norms_mean'].val_record)\nplt.plot(model.monitor.channels['valid_h2_kernel_norms_max'].val_record)\n\nh3_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h3_W_kernel_norm_mean'].val_record])\nh3_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h3_kernel_norms_mean'].val_record])\nplt.plot(h3_W_norms / h3_W_up_norms)\nplt.show()\nplt.plot(model.monitor.channels['valid_h3_kernel_norms_mean'].val_record)\nplt.plot(model.monitor.channels['valid_h3_kernel_norms_max'].val_record)\n\nh4_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h4_W_kernel_norm_mean'].val_record])\nh4_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h4_kernel_norms_mean'].val_record])\nplt.plot(h4_W_norms / h4_W_up_norms)\nplt.show()\nplt.plot(model.monitor.channels['valid_h4_kernel_norms_mean'].val_record)\nplt.plot(model.monitor.channels['valid_h4_kernel_norms_max'].val_record)\n\nh5_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h5_W_kernel_norm_mean'].val_record])\nh5_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h5_kernel_norms_mean'].val_record])\nplt.plot(h5_W_norms / 
h5_W_up_norms)\nplt.show()\nplt.plot(model.monitor.channels['valid_h5_kernel_norms_mean'].val_record)\nplt.plot(model.monitor.channels['valid_h5_kernel_norms_max'].val_record)\n\nh6_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_h6_W_col_norm_mean'].val_record])\nh6_W_norms = np.array([float(v) for v in model.monitor.channels['valid_h6_col_norms_mean'].val_record])\nplt.plot(h6_W_norms / h6_W_up_norms)\nplt.show()\nplt.plot(model.monitor.channels['valid_h6_col_norms_mean'].val_record)\nplt.plot(model.monitor.channels['valid_h6_col_norms_max'].val_record)\n\ny_W_up_norms = np.array([float(v) for v in model.monitor.channels['mean_update_softmax_W_col_norm_mean'].val_record])\ny_W_norms = np.array([float(v) for v in model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record])\nplt.plot(y_W_norms / y_W_up_norms)\nplt.show()\nplt.plot(model.monitor.channels['valid_y_y_1_col_norms_mean'].val_record)\nplt.plot(model.monitor.channels['valid_y_y_1_col_norms_max'].val_record)" ]
[ "code", "markdown", "code", "markdown", "code" ]
syednasar/datascience
deeplearning/intro-to-tensorflow/intro_to_tensorflow.ipynb
mit
[ "<h1 align=\"center\">TensorFlow Neural Network Lab</h1>\n\n<img src=\"image/notmnist.png\">\nIn this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href=\"http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html\">notMNIST</a>, consists of images of a letter from A to J in differents font.\nThe above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!\nTo start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print \"All modules imported\".", "import hashlib\nimport os\nimport pickle\nfrom urllib.request import urlretrieve\n\nimport numpy as np\nfrom PIL import Image\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelBinarizer\nfrom sklearn.utils import resample\nfrom tqdm import tqdm\nfrom zipfile import ZipFile\n\nprint('All modules imported.')", "The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. 
You'll be using a subset of this data, 15,000 images for each label (A-J).", "def download(url, file):\n \"\"\"\n Download file from <url>\n :param url: URL to file\n :param file: Local file path\n \"\"\"\n if not os.path.isfile(file):\n print('Downloading ' + file + '...')\n urlretrieve(url, file)\n print('Download Finished')\n\n# Download the training and test dataset.\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')\n\n# Make sure the files aren't corrupted\nassert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\\\n 'notMNIST_train.zip file is corrupted. Remove the file and try again.'\nassert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\\\n 'notMNIST_test.zip file is corrupted. Remove the file and try again.'\n\n# Wait until you see that all files have been downloaded.\nprint('All files downloaded.')\n\ndef uncompress_features_labels(file):\n \"\"\"\n Uncompress features and labels from a zip file\n :param file: The zip file to extract the data from\n \"\"\"\n features = []\n labels = []\n\n with ZipFile(file) as zipf:\n # Progress Bar\n filenames_pbar = tqdm(zipf.namelist(), unit='files')\n \n # Get features and labels from all files\n for filename in filenames_pbar:\n # Check if the file is a directory\n if not filename.endswith('/'):\n with zipf.open(filename) as image_file:\n image = Image.open(image_file)\n image.load()\n # Load image data as 1 dimensional array\n # We're using float32 to save on memory space\n feature = np.array(image, dtype=np.float32).flatten()\n\n # Get the the letter from the filename. 
This is the letter of the image.\n label = os.path.split(filename)[1][0]\n\n features.append(feature)\n labels.append(label)\n return np.array(features), np.array(labels)\n\n# Get the features and labels from the zip files\ntrain_features, train_labels = uncompress_features_labels('notMNIST_train.zip')\ntest_features, test_labels = uncompress_features_labels('notMNIST_test.zip')\n\n# Limit the amount of data to work with a docker container\ndocker_size_limit = 150000\ntrain_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)\n\n# Set flags for feature engineering. This will prevent you from skipping an important step.\nis_features_normal = False\nis_labels_encod = False\n\n# Wait until you see that all features and labels have been uncompressed.\nprint('All features and labels uncompressed.')", "<img src=\"image/Mean Variance - Image.png\" style=\"height: 75%;width: 75%; position: relative; right: 5%\">\nProblem 1\nThe first problem involves normalizing the features for your training and test data.\nImplement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. 
After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.\nSince the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.\nMin-Max Scaling:\n$\nX'=a+{\\frac {\\left(X-X_{\\min }\\right)\\left(b-a\\right)}{X_{\\max }-X_{\\min }}}\n$\nIf you're having trouble solving problem 1, you can view the solution here.", "# Problem 1 - Implement Min-Max scaling for grayscale image data\ndef normalize_grayscale(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n # TODO: Implement Min-Max scaling for grayscale image data\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Test Cases\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),\n [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,\n 0.125098039216, 0.128235294118, 0.13137254902, 0.9],\n decimal=3)\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),\n [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,\n 0.896862745098, 0.9])\n\nif not is_features_normal:\n train_features = normalize_grayscale(train_features)\n test_features = normalize_grayscale(test_features)\n is_features_normal = True\n\nprint('Tests Passed!')\n\nif not is_labels_encod:\n # Turn labels into numbers and apply One-Hot Encoding\n encoder = LabelBinarizer()\n encoder.fit(train_labels)\n train_labels = encoder.transform(train_labels)\n test_labels = encoder.transform(test_labels)\n\n # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32\n train_labels = train_labels.astype(np.float32)\n test_labels = test_labels.astype(np.float32)\n is_labels_encod = 
True\n\nprint('Labels One-Hot Encoded')\n\nassert is_features_normal, 'You skipped the step to normalize the features'\nassert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'\n\n# Get randomized datasets for training and validation\ntrain_features, valid_features, train_labels, valid_labels = train_test_split(\n train_features,\n train_labels,\n test_size=0.05,\n random_state=832289)\n\nprint('Training features and labels randomized and split.')\n\n# Save the data for easy access\npickle_file = 'notMNIST.pickle'\nif not os.path.isfile(pickle_file):\n print('Saving data to pickle file...')\n try:\n with open('notMNIST.pickle', 'wb') as pfile:\n pickle.dump(\n {\n 'train_dataset': train_features,\n 'train_labels': train_labels,\n 'valid_dataset': valid_features,\n 'valid_labels': valid_labels,\n 'test_dataset': test_features,\n 'test_labels': test_labels,\n },\n pfile, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nprint('Data cached in pickle file.')", "Checkpoint\nAll your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. 
Just run the code block below and it will load all the data and modules required to proceed.", "%matplotlib inline\n\n# Load the modules\nimport pickle\nimport math\n\nimport numpy as np\nimport tensorflow as tf\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\n# Reload the data\npickle_file = 'notMNIST.pickle'\nwith open(pickle_file, 'rb') as f:\n pickle_data = pickle.load(f)\n train_features = pickle_data['train_dataset']\n train_labels = pickle_data['train_labels']\n valid_features = pickle_data['valid_dataset']\n valid_labels = pickle_data['valid_labels']\n test_features = pickle_data['test_dataset']\n test_labels = pickle_data['test_labels']\n del pickle_data # Free up memory\n\nprint('Data and modules loaded.')", "Problem 2\nNow it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.\n<img src=\"image/network_diagram.png\" style=\"height: 40%;width: 40%; position: relative; right: 10%\">\nFor the input here the images have been flattened into a vector of $28 \\times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. 
\nFor the neural network to train on your data, you need the following <a href=\"https://www.tensorflow.org/resources/dims_types.html#data-types\">float32</a> tensors:\n - features\n - Placeholder tensor for feature data (train_features/valid_features/test_features)\n - labels\n - Placeholder tensor for label data (train_labels/valid_labels/test_labels)\n - weights\n - Variable Tensor with random numbers from a truncated normal distribution.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal\">tf.truncated_normal() documentation</a> for help.\n - biases\n - Variable Tensor with all zeros.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#zeros\"> tf.zeros() documentation</a> for help.\nIf you're having trouble solving problem 2, review \"TensorFlow Linear Function\" section of the class. If that doesn't help, the solution for this problem is available here.", "# All the pixels in the image (28 * 28 = 784)\nfeatures_count = 784\n# All the labels\nlabels_count = 10\n\n# TODO: Set the features and labels tensors\n# features = \n# labels = \n\n# TODO: Set the weights and biases tensors\n# weights = \n# biases = \n\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n\n#Test Cases\nfrom tensorflow.python.ops.variables import Variable\n\nassert features._op.name.startswith('Placeholder'), 'features must be a placeholder'\nassert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'\nassert isinstance(weights, Variable), 'weights must be a TensorFlow variable'\nassert isinstance(biases, Variable), 'biases must be a TensorFlow variable'\n\nassert features._shape == None or (\\\n features._shape.dims[0].value is None and\\\n features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'\nassert labels._shape == None or (\\\n labels._shape.dims[0].value is None and\\\n labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'\nassert 
weights._variable._shape == (784, 10), 'The shape of weights is incorrect'\nassert biases._variable._shape == (10), 'The shape of biases is incorrect'\n\nassert features._dtype == tf.float32, 'features must be type float32'\nassert labels._dtype == tf.float32, 'labels must be type float32'\n\n# Feed dicts for training, validation, and test session\ntrain_feed_dict = {features: train_features, labels: train_labels}\nvalid_feed_dict = {features: valid_features, labels: valid_labels}\ntest_feed_dict = {features: test_features, labels: test_labels}\n\n# Linear Function WX + b\nlogits = tf.matmul(features, weights) + biases\n\nprediction = tf.nn.softmax(logits)\n\n# Cross entropy\ncross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)\n\n# Training loss\nloss = tf.reduce_mean(cross_entropy)\n\n# Create an operation that initializes all variables\ninit = tf.global_variables_initializer()\n\n# Test Cases\nwith tf.Session() as session:\n session.run(init)\n session.run(loss, feed_dict=train_feed_dict)\n session.run(loss, feed_dict=valid_feed_dict)\n session.run(loss, feed_dict=test_feed_dict)\n biases_data = session.run(biases)\n\nassert not np.count_nonzero(biases_data), 'biases must be zeros'\n\nprint('Tests Passed!')\n\n# Determine if the predictions are correct\nis_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))\n# Calculate the accuracy of the predictions\naccuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))\n\nprint('Accuracy function created.')", "<img src=\"image/Learn Rate Tune - Image.png\" style=\"height: 70%;width: 70%\">\nProblem 3\nBelow are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. 
For each configuration, choose the option that gives the best acccuracy.\nParameter configurations:\nConfiguration 1\n* Epochs: 1\n* Learning Rate:\n * 0.8\n * 0.5\n * 0.1\n * 0.05\n * 0.01\nConfiguration 2\n* Epochs:\n * 1\n * 2\n * 3\n * 4\n * 5\n* Learning Rate: 0.2\nThe code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.\nIf you're having trouble solving problem 3, you can view the solution here.", "# Change if you have memory restrictions\nbatch_size = 128\n\n# TODO: Find the best parameters for each configuration\n# epochs = \n# learning_rate = \n\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Gradient Descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) \n\n# The accuracy measured against the validation set\nvalidation_accuracy = 0.0\n\n# Measurements use for graphing loss and accuracy\nlog_batch_step = 50\nbatches = []\nloss_batch = []\ntrain_acc_batch = []\nvalid_acc_batch = []\n\nwith tf.Session() as session:\n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer and get loss\n _, l = session.run(\n [optimizer, loss],\n feed_dict={features: batch_features, labels: batch_labels})\n\n # Log every 50 batches\n if not batch_i % log_batch_step:\n # Calculate Training and Validation accuracy\n training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\n # Log batches\n previous_batch = batches[-1] if batches else 0\n 
batches.append(log_batch_step + previous_batch)\n loss_batch.append(l)\n train_acc_batch.append(training_accuracy)\n valid_acc_batch.append(validation_accuracy)\n\n # Check accuracy against Validation data\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\nloss_plot = plt.subplot(211)\nloss_plot.set_title('Loss')\nloss_plot.plot(batches, loss_batch, 'g')\nloss_plot.set_xlim([batches[0], batches[-1]])\nacc_plot = plt.subplot(212)\nacc_plot.set_title('Accuracy')\nacc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')\nacc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')\nacc_plot.set_ylim([0, 1.0])\nacc_plot.set_xlim([batches[0], batches[-1]])\nacc_plot.legend(loc=4)\nplt.tight_layout()\nplt.show()\n\nprint('Validation accuracy at {}'.format(validation_accuracy))", "Test\nYou're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. 
You should have a test accuracy of at least 80%.", "### DON'T MODIFY ANYTHING BELOW ###\n# The accuracy measured against the test set\ntest_accuracy = 0.0\n\nwith tf.Session() as session:\n \n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer\n _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})\n\n # Check accuracy against Test data\n test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)\n\n\nassert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)\nprint('Nice Job! Test Accuracy is {}'.format(test_accuracy))", "Multiple layers\nGood job! You built a one layer TensorFlow network! However, you might want to build more than one layer. This is deep learning after all! In the next section, you will start to satisfy your need for more layers." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gwsb-istm-6212-fall-2016/syllabus-and-schedule
projects/project-01/solution/problem-01-solution.ipynb
cc0-1.0
[ "Problem 01 - solution\nThe following are my approach to solving these problems. Note that there may be more than one approach to each, and we could even debate the exact solutions.\nProblem 1 - Word Counts\nPart A - Part A. Characters in Little Women\nHow many times are each of the following characters mentioned by name in the text of Little Women: Jo, Beth, Meg, Amy", "!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/women.txt\n\n!cat women.txt | grep -oE '\\w{{2,}}' \\\n | grep -e \"Jo\\|Beth\\|Meg\\|Amy\" \\\n | tr '[:upper:]' '[:lower:]' \\\n | sort | uniq -c | sort -rn", "As we can see from the output of the command above, by the straight wording of the question, there are exactly 1,355 mentions of Jo, 683 of Meg, 645 of Amy, and 459 of Beth in Little Women. If we were to assume that diminutive or nickname forms might count as well, we might add mentions of \"Megs\", \"Bethy\", and \"Meggy\" to these counts, for example. For the purposes of this solution, however, I assume that these are not required, because the text might need to be consulted directly by someone familiar with the novel to determine which nicknames are valid, which seems beyond the scope of the assignment.\nPart B - Juliet and Romeo in Romeo and Juliet\nHow many times do each of the characters Juliet and Romeo have speaking lines in Romeo and Juliet? Keep in mind that this is the text of a play.", "!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/romeo.txt", "First we must recall -- as the problem highlights -- that this text is that of a play. Because of this, we cannot simply count mentions of \"Romeo,\" as we might accidentally inflate the count due to mentions of this character, for example, by other characters in their speaking lines. 
Instead, we must first look for a patter that indicates Romeo's speaking lines specifically.", "!cat romeo.txt | grep \"Rom\" | head -25", "In this brief sample, we can see title lines and metadata that include mention of Romeo, and both stage directions (\"Enter Romeo\") and spoken lines that include his name. What stands out, though, is that lines spoken by Romeo appear to be delineated by \"Rom.\", so we can search for this specific pattern. Let's verify that the same should hold true for mentions of Juliet.", "!cat romeo.txt | grep \"Jul\" | head -25", "We see that the pattern seems to hold for both. I will assume that matches of the exact characters \"Rom.\" and \"Jul.\" indicate the start of a speaking line for one or the other characters, and will explicitly count only those lines.", "!cat romeo.txt | grep -w \"Rom\\.\" \\\n | grep -oE '\\w{{2,}}\\.' \\\n | grep \"Rom\" \\\n | sort | uniq -c | sort -rn\n\n!cat romeo.txt | grep -w \"Jul\\.\" \\\n | grep -oE '\\w{{2,}}\\.' \\\n | grep \"Jul\" \\\n | sort | uniq -c | sort -rn", "The two pipelines above indicate that Romeo has 163 speaking lines, while Juliet has only 117. To match the specific case with a trailing ., the first regular expressions in both above cases use the -w flag to denote a word match and the escape sequence \\. to match the literal trailing period. The second regular expressions include this literal at the end of the match sequence as well, with the trailing literal period in '\\w{{2,}}\\.' 
requiring that the match include the period at the end.\nProblem 2 - Capital Bikeshare\nPart A - Station counts\nWhich 10 Capital Bikeshare stations were the most popular departing stations in Q1 2016?", "!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/2016q1.csv.zip\n\n!unzip 2016q1.csv.zip\n\n!head -5 2016q1.csv | csvlook\n\n!csvcut -n 2016q1.csv\n\n!csvcut -c5 2016q1.csv | tail -n +2 | csvsort | uniq -c | sort -rn | head -10", "As we can see in the above results, the top ten starting stations in this time period were led by Columbus Circle / Union Station with over 13,000 rides, followed by Dupont Circle and the Lincoln Memorial and the rest as listed.\nIn the pipeline above, tail -n +2 ensures we skip the header line before the sort process begins.\nWhich 10 were the most popular destination stations in Q1 2016?", "!csvcut -c7 2016q1.csv | tail -n +2 | csvsort | uniq -c | sort -rn | head -10", "The above results show us very similar numbers for destination stations during the same time period, with the first four stations unchanged and led again by Union Station with over 13,000 rides. 
Thomas Circle appears to be a more prominent start station than end station, as does Eastern Market, which does not even make the top ten destination stations.\nPart B - bike counts\nFor the most popular departure station, which 10 bikes were used most in trips departing from there?\nIn this part, we will use csvgrep to select only the required stations - Union Station, in both cases.", "!csvgrep -c5 -m \"Columbus Circle / Union Station\" 2016q1.csv | head", "We can further limit the columns used to cut down on the data flowing through the pipe.", "!csvcut -c5,8 2016q1.csv | csvgrep -c1 -m \"Columbus Circle / Union Station\" | head\n\n!csvcut -c5,8 2016q1.csv \\\n | csvgrep -c1 -m \"Columbus Circle / Union Station\" \\\n | csvcut -c2 \\\n | tail -n +2 \\\n | sort | uniq -c | sort -rn | head -12", "Above are the most commonly used bikes in trips departing from Union Station, led by bike number W22227. As we might expect it appears that the distribution seems rather uniform. Note that because several bikes had exactly 15 trips starting from Union Station, the list includes the top twelve bikes, rather than the top ten.\nWhich 10 bikes were used most in trips ending at the most popular destination station?", "!csvcut -c7,8 2016q1.csv \\\n | csvgrep -c1 -m \"Columbus Circle / Union Station\" \\\n | csvcut -c2 \\\n | tail -n +2 \\\n | sort | uniq -c | sort -rn | head -15", "Above are the most commonly used bikes in trips arriving at Union Station, let by bike number W00485. It is interesting to note that bike W22227, the top departing bike, is in second place, but bike W00485, the top arriving bike, does not appear in the top ten departing bikes. In any case these also seem at first glance to be uniformly distributed. 
Again, the list is expanded, this time to fifteen bikes, to account for the tie at exactly fifteen trips.\nProblem 3 - Filters\nPart A - split and lowercase filters\nWrite a Python filter than replaces grep -oE '\\w{2,}' to split lines of text into one word per line, and write an additional Python filter to replace tr '[:upper:]' '[:lower:]' to transform text into lower case.\nWith your two new filters, repeat the original pipeline, and substitute your new filters as appropriate. You should obtain the same results.", "!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/simplefilter.py\n\n!cp simplefilter.py split.py", "The file split.py is modified from the template to split lines of text into one word per line. To demonstrate this, we can compare the original pipeline with a new pipeline with split.py substituting for the first grep command.", "!cat women.txt \\\n | grep -oE '\\w{{1,}}' \\\n | tr '[:upper:]' '[:lower:]' \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -10", "We can ignore the broken pipe and related errors as the output appears to be correct.\nNext, we repeat the pipeline with split.py substituted:", "!chmod +x split.py", "Examining the filter script below, the key line, #14, removes trailing newlines, splits tokens by the space (' '), and removes words that are not entirely alphabetical.", "!grep -n '' split.py\n\n!cat women.txt \\\n | ./split.py \\\n | tr '[:upper:]' '[:lower:]' \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -10", "Almost the exact words listed appear in nearly the same order, but with lower counts for each. We can examine the output of each command to see if there are obvious differences:", "!cat women.txt | grep -oE '\\w{{2,}}' | head -25\n\n!cat women.txt | ./split.py | head -25", "We can see straight away on the first few lines that there is a difference. Let's look at the text itself:", "!head -3 women.txt", "Three obvious issues jump out. 
First, the initial \"The\" is elided; it is not clear why. Next, \"Women\" is removed, perhaps due to the trailing comma, which will cause the token to fail the isalpha() test. Also, \"Alcott\" is removed, perhaps because of its position at the end of the line.\nWe can update the filter to use Python's regular expression module and a similar expression, \\w{1,}, to find all matches more intelligently. Here the regular expression is prepared in line 13 and used in line 18.", "!grep -n '' split.py\n\n!cat women.txt | ./split.py | head -25", "This looks much better. We can try the full pipeline again:", "!cat women.txt \\\n | ./split.py \\\n | tr '[:upper:]' '[:lower:]' \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -10", "This looks to be an exact match.", "!cp simplefilter.py lowercase.py", "The filter lowercase.py is modified from the template to lowercase incoming lines of text.", "!chmod +x lowercase.py\n\n!grep -n '' lowercase.py", "Note that the only line aside from the comments that changes in the above script is line #12, which adds the lower() call to the print statement.", "!head women.txt | ./lowercase.py", "This looks correct, so we'll first attempt to replace the original pipeline's use of tr with lowercase.py:", "!cat women.txt \\\n | grep -oE '\\w{{1,}}' \\\n | ./lowercase.py \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -10", "Looks good so far; we are seeing the exact same counts. To address the problem's challenge, we finally replace both filters at once.", "!cat women.txt \\\n | ./split.py \\\n | ./lowercase.py \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -10", "This completes Problem 3 - Part A.\nPart B - stop words\nWrite a Python filter that removes at least ten common words of English text, commonly known as \"stop words\".
Sources of English stop word lists are readily available online, or you may generate your own list from the text.\nWe begin by acquiring a common list of English stop words, gathered from the site http://www.textfixer.com/resources/common-english-words.txt as linked from the Wikipedia page on stop words.", "!wget http://www.textfixer.com/resources/common-english-words.txt\n\n!head common-english-words.txt", "Next we copy the template filter script as before, renaming it appropriately.", "!cp simplefilter.py stopwords.py\n\n!chmod +x stopwords.py\n\n!grep -n '' stopwords.py", "The key changes in stopwords.py from the template are line #13, which imports the list of stopwords, and line #20, which checks whether an incoming word is in the stopword list. Note also that in line #19 the removal of a trailing newline occurs before checking for stopwords.\nThe assumption that incoming text will already be split into one word per line and lowercased is stated explicitly in the first comment, lines #6-7.", "!head women.txt | ./split.py | ./lowercase.py | ./stopwords.py", "This appears to be correct. Let's put it all together:", "!cat women.txt \\\n | ./split.py \\\n | ./lowercase.py \\\n | ./stopwords.py \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -25", "This would seem to be correct - we see the names we looked for earlier appearing near the top of the list, and common stop words are indeed removed - however the list starts with odd \"words\": \"t\", \"s\", \"m\", and \"ll\". Is it possible that these are occurrences of contractions? We can check a few different ways. First, let's see if our split.py is causing the problem:", "!cat women.txt \\\n | grep -oE '\\w{{1,}}' \\\n | ./lowercase.py \\\n | ./stopwords.py \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -25", "No, the results are exactly the same. Instead, we'll need to look for occurrences of \"t\" and \"s\" by themselves.
The --context option to grep might help us here, pointing out surrounding text that we can then search for in the source.", "!cat women.txt \\\n | ./split.py \\\n | ./lowercase.py \\\n | grep --context=2 -oE '^t$' \\\n | head -20\n\n!grep -i \"we haven't got\" women.txt", "Aha, it does appear that the occurrences of a bare \"t\" are from contractions. Let's repeat with \"s\", which might occur in possessives.", "!cat women.txt \\\n | ./split.py \\\n | ./lowercase.py \\\n | grep --context=2 -oE '^s$' \\\n | head -20\n\n!grep -i \"amy's valley\" women.txt", "There we have it - the counts from above were correct, and we could eliminate \"t\" and \"s\" from consideration with a grep -v, and we can further assume that the \"ll\" and \"m\" occurrences are also from contractions, so we'll remove them as well.", "!cat women.txt \\\n | ./split.py \\\n | ./lowercase.py \\\n | ./stopwords.py \\\n | grep -vE '^(s|t|m|ll)$' \\\n | sort \\\n | uniq -c \\\n | sort -rn \\\n | head -25", "Here we have a final count. It is interesting to note that these counts of character names (Jo, Meg, etc.)
are slightly different from before, perhaps due to punctuation handling, but it seems beyond the scope of the question to answer it precisely.\nExtra credit - parallel stop words\nUse GNU parallel to count the 25 most common words across all 109 texts in the zip file provided, with stop words removed.", "!wget https://raw.githubusercontent.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/master/projects/project-01/texts.zip\n\n!unzip -l texts.zip | head -5\n\n!mkdir all-texts\n\n!unzip -d all-texts texts.zip\n\n!time ls all-texts/*.txt \\\n | parallel --eta -j+0 \"grep -oE '\\w{1,}' {} | tr '[:upper:]' '[:lower:]' | grep -vE '^(s|t|m|l|ll|d)$' | ./stopwords.py >> all-words.txt\"", "In the above line, I've lowered the minimum word length to one character, removed common contractions, and piped the overall result through the new stopwords.py Python filter.", "!wc -l all-words.txt\n\n!time sort all-words.txt | uniq -c | sort -rn | head -25", "Above we have the top 25 words across all 109 texts, with common English stop words removed. It is easy to imagine that a broader set of stop words would remove many more common words like \"one\", \"up\", \"down\", and \"here\", but that is the tradeoff of choosing any single list." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/fast-and-lean-data-science/fairing_train.ipynb
apache-2.0
[ "import glob, re, os\nimport logging\nimport fairing\nGCP_PROJECT = fairing.cloud.gcp.guess_project_name()\nDOCKER_REGISTRY = 'gcr.io/{}'.format(GCP_PROJECT) # every Google Cloud Platform project comes with a private Docker registry\nbase_image = \"{}/{}\".format(DOCKER_REGISTRY, \"fairing:latest\")\nlogging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR) # suppress nagging bug about a library incompatibility\nprint(base_image)", "Authenticate with the docker registry first\nbash\ngcloud auth configure-docker\nIf using TPUs please also authorize Cloud TPU to access your project as described here.\nSet up your output bucket", "BUCKET = \"gs://\" # your bucket here\nassert re.search(r'gs://.+', BUCKET), 'A GCS bucket is required to store your results.'", "Build a base image to work with fairing", "!cat Dockerfile\n\n!docker build . -t {base_image}\n\n!docker push {base_image}", "Start an AI Platform job", "additional_files = '' # If your code requires additional files, you can specify them here (or include everything in the current folder with glob.glob('./**', recursive=True))\n# If your code does not require any dependencies or config changes, you can directly start from an official Tensorflow docker image\n#fairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image='gcr.io/deeplearning-platform-release/tf-gpu.1-13')\n\n# base image\nfairing.config.set_builder('docker', registry=DOCKER_REGISTRY, base_image=base_image)\n# AI Platform job hardware config\nfairing.config.set_deployer('gcp', job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}})\n# input and output notebooks\nfairing.config.set_preprocessor('full_notebook',\n notebook_file=\"05K_MNIST_TF20Keras_Tensorboard_playground.ipynb\",\n input_files=additional_files,\n output_file=os.path.join(BUCKET, 'fairing-output', 'mnist-001.ipynb'))\n\n\n# GPU settings for single K80, single p100 respectively\n# job_config={'trainingInput': {'scaleTier': 
'BASIC_GPU'}}\n# job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'standard_p100'}}\n\n# These job_config settings for TPUv2\n#job_config={'trainingInput': {'scaleTier': 'BASIC_GPU'}}\n#job_config={'trainingInput': {'scaleTier': 'CUSTOM', 'masterType': 'n1-standard-8', 'workerType': 'cloud_tpu', 'workerCount': 1,\n# 'workerConfig': {'accelerator_config': {'type': 'TPU_V2','count': 8}}}})\n# On AI Platform, TPUv3 support is alpha and available to whitelisted customers only\n\nfairing.config.run()", "License\n\nauthor: Martin Gorner<br>\ntwitter: @martin_gorner\n\nCopyright 2019 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nThis is not an official Google product but sample code provided for an educational purpose" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
turbomanage/training-data-analyst
courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nMNIST Estimator to TPUEstimator\nThis notebook will show you how to port an Estimator model to TPUEstimator.\nAll the lines that had to be changed in the porting are marked with a \"TPU REFACTORING\" comment.\nYou do the porting only once. TPUEstimator then works on both TPU and GPU with the use_tpu=False flag.\nImports", "import os, re, math, json, shutil, pprint, datetime\nimport PIL.Image, PIL.ImageFont, PIL.ImageDraw # \"pip3 install Pillow\" or \"pip install Pillow\" if needed\nimport numpy as np\nimport tensorflow as tf\nfrom matplotlib import pyplot as plt\nfrom tensorflow.python.platform import tf_logging\nprint(\"Tensorflow version \" + tf.__version__)", "Parameters", "BATCH_SIZE = 32 #@param {type:\"integer\"}\nBUCKET = 'gs://' #@param {type:\"string\"}\n\nassert re.search(r'gs://.+', BUCKET), 'You need a GCS bucket for your Tensorboard logs. Head to http://console.cloud.google.com/storage and create one.'\n\ntraining_images_file = 'gs://mnist-public/train-images-idx3-ubyte'\ntraining_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'\nvalidation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'\nvalidation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'\n\n#@title visualization utilities [RUN ME]\n\"\"\"\nThis cell contains helper functions used for visualization\nand downloads only. You can skip reading it. 
There is very\nlittle useful Keras/Tensorflow code here.\n\"\"\"\n\n# Matplotlib config\nplt.rc('image', cmap='gray_r')\nplt.rc('grid', linewidth=0)\nplt.rc('xtick', top=False, bottom=False, labelsize='large')\nplt.rc('ytick', left=False, right=False, labelsize='large')\nplt.rc('axes', facecolor='F8F8F8', titlesize=\"large\", edgecolor='white')\nplt.rc('text', color='a8151a')\nplt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts\nMATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), \"mpl-data/fonts/ttf\")\n\n# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)\ndef dataset_to_numpy_util(training_dataset, validation_dataset, N):\n \n # get one batch from each: 10000 validation digits, N training digits\n unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())\n v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()\n t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()\n \n # Run once, get one batch. 
Session.run returns numpy results\n with tf.Session() as ses:\n (validation_digits, validation_labels,\n training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])\n \n # these were one-hot encoded in the dataset\n validation_labels = np.argmax(validation_labels, axis=1)\n training_labels = np.argmax(training_labels, axis=1)\n \n return (training_digits, training_labels,\n validation_digits, validation_labels)\n\n# create digits from local fonts for testing\ndef create_digits_from_local_fonts(n):\n font_labels = []\n img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1\n font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)\n font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)\n d = PIL.ImageDraw.Draw(img)\n for i in range(n):\n font_labels.append(i%10)\n d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)\n font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)\n font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])\n return font_digits, font_labels\n\n# utility to display a row of digits with their predictions\ndef display_digits(digits, predictions, labels, title, n):\n plt.figure(figsize=(13,3))\n digits = np.reshape(digits, [n, 28, 28])\n digits = np.swapaxes(digits, 0, 1)\n digits = np.reshape(digits, [28, 28*n])\n plt.yticks([])\n plt.xticks([28*x+14 for x in range(n)], predictions)\n for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):\n if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red\n plt.imshow(digits)\n plt.grid(None)\n plt.title(title)\n \n# utility to display multiple rows of digits, sorted by unrecognized/recognized status\ndef display_top_unrecognized(digits, predictions, labels, n, lines):\n idx = 
np.argsort(predictions==labels) # sort order: unrecognized first\n for i in range(lines):\n display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],\n \"{} sample validation digits out of {} with bad predictions in red and sorted first\".format(n*lines, len(digits)) if i==0 else \"\", n)\n \n# utility to display training and validation curves\ndef display_training_curves(training, validation, title, subplot):\n if subplot%10==1: # set up the subplots on the first call\n plt.subplots(figsize=(10,10), facecolor='#F0F0F0')\n plt.tight_layout()\n ax = plt.subplot(subplot)\n ax.grid(linewidth=1, color='white')\n ax.plot(training)\n ax.plot(validation)\n ax.set_title('model '+ title)\n ax.set_ylabel(title)\n ax.set_xlabel('epoch')\n ax.legend(['train', 'valid.'])", "Colab-only auth for this notebook and the TPU", "IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence\nif IS_COLAB_BACKEND:\n from google.colab import auth\n auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets", "TPU detection", "#TPU REFACTORING: detect the TPU\ntry: # TPU detection\n tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # Picks up a connected TPU on Google's Colab, ML Engine, Kubernetes and Deep Learning VMs accessed through the 'ctpu up' utility\n #tpu = tf.contrib.cluster_resolver.TPUClusterResolver('MY_TPU_NAME') # If auto-detection does not work, you can pass the name of the TPU explicitly (tip: on a VM created with \"ctpu up\" the TPU has the same name as the VM)\n print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])\n USE_TPU = True\nexcept ValueError:\n tpu = None\n print(\"Running on GPU or CPU\")\n USE_TPU = False", "tf.data.Dataset: parse files and prepare training and validation datasets\nPlease read the best practices for building input pipelines with tf.data.Dataset", 
"def read_label(tf_bytestring):\n label = tf.decode_raw(tf_bytestring, tf.uint8)\n label = tf.reshape(label, [])\n label = tf.one_hot(label, 10)\n return label\n \ndef read_image(tf_bytestring):\n image = tf.decode_raw(tf_bytestring, tf.uint8)\n image = tf.cast(image, tf.float32)/256.0\n image = tf.reshape(image, [28*28])\n return image\n \ndef load_dataset(image_file, label_file):\n imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)\n imagedataset = imagedataset.map(read_image, num_parallel_calls=16)\n labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)\n labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)\n dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))\n return dataset \n \ndef get_training_dataset(image_file, label_file, batch_size):\n dataset = load_dataset(image_file, label_file)\n dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset\n dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)\n dataset = dataset.repeat() # Mandatory for TPU for now\n dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed\n dataset = dataset.prefetch(-1) # prefetch next batch while training (-1: autotune prefetch buffer size)\n return dataset\n\n#TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too\n# def get_validation_dataset(image_file, label_file):\ndef get_validation_dataset(image_file, label_file, batch_size):\n dataset = load_dataset(image_file, label_file)\n dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset\n #TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too\n # dataset = dataset.batch(10000, 
drop_remainder=True) # 10000 items in eval dataset, all in one batch\n dataset = dataset.batch(batch_size, drop_remainder=True)\n dataset = dataset.repeat() # Mandatory for TPU for now\n return dataset\n\n# instantiate the datasets\ntraining_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)\nvalidation_dataset = get_validation_dataset(validation_images_file, validation_labels_file, 10000)\n\n# For TPU, we will need a function that returns the dataset\n\n# TPU REFACTORING: input_fn's must have a params argument though which TPUEstimator passes params['batch_size']\n# training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)\n# validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)\ntraining_input_fn = lambda params: get_training_dataset(training_images_file, training_labels_file, params['batch_size'])\nvalidation_input_fn = lambda params: get_validation_dataset(validation_images_file, validation_labels_file, params['batch_size'])", "Let's have a look at the data", "N = 24\n(training_digits, training_labels,\n validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)\ndisplay_digits(training_digits, training_labels, training_labels, \"training digits and their labels\", N)\ndisplay_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], \"validation digits and their labels\", N)\nfont_digits, font_labels = create_digits_from_local_fonts(N)", "Estimator model\nIf you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD", "# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs\n\n# TPU REFACTORING: model_fn must have a params argument. 
TPUEstimator passes batch_size and use_tpu into it\n#def model_fn(features, labels, mode):\ndef model_fn(features, labels, mode, params):\n\n is_training = (mode == tf.estimator.ModeKeys.TRAIN)\n\n x = features\n y = tf.reshape(x, [-1, 28, 28, 1])\n\n y = tf.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False)(y) # no bias necessary before batch norm\n y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training) # no batch norm scaling necessary before \"relu\"\n y = tf.nn.relu(y) # activation after batch norm\n\n y = tf.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2)(y)\n y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)\n y = tf.nn.relu(y)\n\n y = tf.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2)(y)\n y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)\n y = tf.nn.relu(y)\n\n y = tf.layers.Flatten()(y)\n y = tf.layers.Dense(200, use_bias=False)(y)\n y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)\n y = tf.nn.relu(y)\n y = tf.layers.Dropout(0.5)(y, training=is_training)\n \n logits = tf.layers.Dense(10)(y)\n predictions = tf.nn.softmax(logits)\n classes = tf.math.argmax(predictions, axis=-1)\n \n if (mode != tf.estimator.ModeKeys.PREDICT):\n loss = tf.losses.softmax_cross_entropy(labels, logits)\n\n step = tf.train.get_or_create_global_step()\n # TPU REFACTORING: step is now increased once per GLOBAL_BATCH_SIZE = 8*BATCH_SIZE. Must adjust learning rate schedule accordingly\n # lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000, 1/math.e)\n lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000//8, 1/math.e)\n \n # TPU REFACTORING: custom Tensorboard summaries do not work. 
Only default Estimator summaries will appear in Tensorboard.\n # tf.summary.scalar(\"learn_rate\", lr)\n \n optimizer = tf.train.AdamOptimizer(lr)\n # TPU REFACTORING: wrap the optimizer in a CrossShardOptimizer: this implements the multi-core training logic\n if params['use_tpu']:\n optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)\n \n # little wrinkle: batch norm uses running averages which need updating after each batch. create_train_op does it, optimizer.minimize does not.\n train_op = tf.contrib.training.create_train_op(loss, optimizer)\n #train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())\n \n # TPU REFACTORING: a metrics_fn is needed for TPU\n # metrics = {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}\n metric_fn = lambda classes, labels: {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}\n tpu_metrics = (metric_fn, [classes, labels]) # pair of metric_fn and its list of arguments, there can be multiple pairs in a list\n # metric_fn will run on CPU, not TPU: more operations are allowed\n else:\n loss = train_op = metrics = tpu_metrics = None # None of these can be computed in prediction mode because labels are not available\n \n # TPU REFACTORING: EstimatorSpec => TPUEstimatorSpec\n ## return tf.estimator.EstimatorSpec(\n return tf.contrib.tpu.TPUEstimatorSpec(\n mode=mode,\n predictions={\"predictions\": predictions, \"classes\": classes}, # name these fields as you like\n loss=loss,\n train_op=train_op,\n # TPU REFACTORING: a metrics_fn is needed for TPU, passed into the eval_metrics field instead of eval_metrics_ops\n # eval_metric_ops=metrics\n eval_metrics = tpu_metrics\n )\n\n# Called once when the model is saved. This function produces a Tensorflow\n# graph of operations that will be prepended to your model graph. 
When\n# your model is deployed as a REST API, the API receives data in JSON format,\n# parses it into Tensors, then sends the tensors to the input graph generated by\n# this function. The graph can transform the data so it can be sent into your\n# model input_fn. You can do anything you want here as long as you do it with\n# tf.* functions that produce a graph of operations.\ndef serving_input_fn():\n # placeholder for the data received by the API (already parsed, no JSON decoding necessary,\n # but the JSON must contain one or multiple 'image' key(s) with 28x28 greyscale images as content.)\n inputs = {\"serving_input\": tf.placeholder(tf.float32, [None, 28, 28])} # the shape of this dict should match the shape of your JSON\n features = inputs['serving_input'] # no transformation needed\n return tf.estimator.export.TensorServingInputReceiver(features, inputs) # features are the features needed by your model_fn\n # Return a ServingInputReceiver if your features are a dictionary of Tensors, TensorServingInputReceiver if they are a straight Tensor", "Train and validate the model, this time on TPU", "EPOCHS = 10\n\n# TPU_REFACTORING: to use all 8 cores, increase the batch size by 8\nGLOBAL_BATCH_SIZE = BATCH_SIZE * 8\n\n# TPU_REFACTORING: TPUEstimator increments the step once per GLOBAL_BATCH_SIZE: must adjust epoch length accordingly\n# steps_per_epoch = 60000 // BATCH_SIZE # 60,000 images in training dataset\nsteps_per_epoch = 60000 // GLOBAL_BATCH_SIZE # 60,000 images in training dataset\n\nMODEL_EXPORT_NAME = \"mnist\" # name for exporting saved model\n\n# TPU_REFACTORING: the TPU will run multiple steps of training before reporting back\nTPU_ITERATIONS_PER_LOOP = steps_per_epoch # report back after each epoch\n\ntf_logging.set_verbosity(tf_logging.INFO)\nnow = datetime.datetime.now()\nMODEL_DIR = BUCKET+\"/mnistjobs/job\" + \"-{}-{:02d}-{:02d}-{:02d}:{:02d}:{:02d}\".format(now.year, now.month, now.day, now.hour, now.minute, now.second)\n\n# TPU REFACTORING: the 
RunConfig has changed\n#training_config = tf.estimator.RunConfig(model_dir=MODEL_DIR, save_summary_steps=10, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=steps_per_epoch/4)\ntraining_config = tf.contrib.tpu.RunConfig(\n cluster=tpu,\n model_dir=MODEL_DIR,\n tpu_config=tf.contrib.tpu.TPUConfig(TPU_ITERATIONS_PER_LOOP))\n \n# TPU_REFACTORING: exporters do not work yet. Must call export_savedmodel manually after training\n#export_latest = tf.estimator.LatestExporter(MODEL_EXPORT_NAME, serving_input_receiver_fn=serving_input_fn)\n \n# TPU_REFACTORING: Estimator => TPUEstimator\n#estimator = tf.estimator.Estimator(model_fn=model_fn, config=training_config)\nestimator = tf.contrib.tpu.TPUEstimator(\n model_fn=model_fn,\n model_dir=MODEL_DIR,\n # TPU_REFACTORING: training and eval batch size must be the same for now\n train_batch_size=GLOBAL_BATCH_SIZE,\n eval_batch_size=10000, # 10000 digits in eval dataset\n predict_batch_size=10000, # prediction on the entire eval dataset in the demo below\n config=training_config,\n use_tpu=USE_TPU,\n # TPU REFACTORING: setting the kind of model export we want\n export_to_tpu=False) # we want an exported model for CPU/GPU inference because that is what is supported on ML Engine\n\n# TPU REFACTORING: train_and_evaluate does not work on TPU yet, TrainSpec not needed\n# train_spec = tf.estimator.TrainSpec(training_input_fn, max_steps=EPOCHS*steps_per_epoch)\n# TPU REFACTORING: train_and_evaluate does not work on TPU yet, EvalSpec not needed\n# eval_spec = tf.estimator.EvalSpec(validation_input_fn, steps=1, exporters=export_latest, throttle_secs=0) # no eval throttling: evaluates after each checkpoint\n\n# TPU REFACTORING: train_and_evaluate does not work on TPU yet, must train then eval manually\n# tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\nestimator.train(training_input_fn, steps=steps_per_epoch*EPOCHS)\nestimator.evaluate(input_fn=validation_input_fn, steps=1)\n \n# TPU REFACTORING: exporters do 
not work yet. Must call export_savedmodel manually after training\nestimator.export_savedmodel(os.path.join(MODEL_DIR, MODEL_EXPORT_NAME), serving_input_fn)\ntf_logging.set_verbosity(tf_logging.WARN)", "Visualize predictions", "# recognize digits from local fonts\n# TPU REFACTORING: TPUEstimator.predict requires a 'params' in its input_fn so that it can pass params['batch_size']\n#predictions = estimator.predict(lambda: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),\npredictions = estimator.predict(lambda params: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),\n yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call\npredicted_font_classes = next(predictions)['classes']\ndisplay_digits(font_digits, predicted_font_classes, font_labels, \"predictions from local fonts (bad predictions in red)\", N)\n\n# recognize validation digits\npredictions = estimator.predict(validation_input_fn,\n yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call\npredicted_labels = next(predictions)['classes']\ndisplay_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)", "Deploy the trained model to ML Engine\nPush your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.\nYou will need a GCS bucket and a GCP project for this.\nModels deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.\nGoogle Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these.
Once the model is deployed, the bucket is not useful anymore.\nConfiguration", "PROJECT = \"\" #@param {type:\"string\"}\nNEW_MODEL = True #@param {type:\"boolean\"}\nMODEL_NAME = \"estimator_mnist_tpu\" #@param {type:\"string\"}\nMODEL_VERSION = \"v0\" #@param {type:\"string\"}\n\nassert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'\n\n#TPU REFACTORING: TPUEstimator does not create the 'export' subfolder\n#export_path = os.path.join(MODEL_DIR, 'export', MODEL_EXPORT_NAME)\nexport_path = os.path.join(MODEL_DIR, MODEL_EXPORT_NAME)\nlast_export = sorted(tf.gfile.ListDirectory(export_path))[-1]\nexport_path = os.path.join(export_path, last_export)\nprint('Saved model directory found: ', export_path)", "Deploy the model\nThis uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models", "# Create the model\nif NEW_MODEL:\n !gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1\n\n# Create a version of this model (you can add --async at the end of the line to make this call non blocking)\n# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions\n# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter\n!echo \"Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}\"\n!gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10", "Test the deployed model\nYour model is now available as a REST API. Let us try to call it. 
The cells below use the \"gcloud ml-engine\"\ncommand line tool but any tool that can send a JSON payload to a REST endpoint will work.", "# prepare digits to send to online prediction endpoint\ndigits = np.concatenate((font_digits, validation_digits[:100-N]))\nlabels = np.concatenate((font_labels, validation_labels[:100-N]))\nwith open(\"digits.json\", \"w\") as f:\n for digit in digits:\n # the format for ML Engine online predictions is: one JSON object per line\n data = json.dumps({\"serving_input\": digit.tolist()}) # \"serving_input\" because that is what you defined in your serving_input_fn: {\"serving_input\": tf.placeholder(tf.float32, [None, 28, 28])}\n f.write(data+'\\n')\n\n# Request online predictions from deployed model (REST API) using the \"gcloud ml-engine\" command line.\npredictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}\n\npredictions = np.array([int(p.split('[')[0]) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest\ndisplay_top_unrecognized(digits, predictions, labels, N, 100//N)", "License\n\nauthor: Martin Gorner<br>\ntwitter: @martin_gorner\n\nCopyright 2018 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nThis is not an official Google product but sample code provided for an educational purpose" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kecnry/autofig
docs/tutorials/mesh.ipynb
gpl-3.0
[ "Autofig Mesh", "import autofig\nimport numpy as np\nimport phoebe # PHOEBE 2.1 used for this demonstration\n\n#autofig.inline()", "Let's generate a mesh in PHOEBE", "b = phoebe.default_binary()\nb.add_dataset('mesh', times=[0], columns=['teffs', 'vws'])\nb.run_compute()\n\nverts = b.get_value(qualifier='uvw_elements', component='primary', context='model')\nprint(verts.shape) # [polygon, vertex, dimension]\n\nteffs = b.get_value(qualifier='teffs', component='primary', context='model')\nprint(teffs.shape) # [polygon]\n\nvzs = b.get_value(qualifier='vws', component='primary', context='model')\nprint(vzs.shape) # [polygon]\n\nxs = verts[:, :, 0]\nys = verts[:, :, 1]\nzs = verts[:, :, 2]\nprint(xs.shape, ys.shape, zs.shape) # [polygon, vertex]", "Meshes can be drawn by calling the mesh (instead of plot) method of a figure. Most syntax and features are identical between the two, with the following exceptions:\n* NO 'c' or 's' dimensions\n* ADDITION of 'fc' (facecolor) and 'ec' (edgecolor) dimensions\n* linestyle applies to the edges\n* NO highlight\n* uncover DEFAULTS to True\n* trail DEFAULTS to 0\n* NO marker\n* NO linebreak\nIf 'z' is passed, the polygons will automatically be sorted in the order of positive z. It is therefore suggested to pass 'z' for any 3D meshes even if plotting in 2D.\nThe edgecolor will default to 'black' and the facecolor to 'none' if not provided:", "autofig.reset()\nautofig.mesh(x=xs, y=ys, z=zs, \n xlabel='x', xunit='solRad', \n ylabel='y', yunit='solRad')\nmplfig = autofig.draw()", "As was the case for dimensions in plot, 'fc' (facecolor) and 'ec' (edgecolor) accept the following suffixes:\n* label\n* unit\n* map\n* lim", "autofig.reset()\nautofig.mesh(x=xs, y=ys, z=zs, \n xlabel='x', xunit='solRad', \n ylabel='y', yunit='solRad',\n fc=teffs, fcmap='afmhot', fclabel='teff', fcunit='K')\nmplfig = autofig.draw()", "The edges can be turned off by passing ec='none'. 
Also see how fclim='symmetric' will force the white in the 'bwr' colormap to correspond to vz=0.", "autofig.reset()\nautofig.mesh(x=xs, y=ys, z=zs,\n xlabel='x', xunit='solRad', \n ylabel='y', yunit='solRad',\n fc=-vzs, fcmap='bwr', fclim='symmetric', fclabel='rv', fcunit='solRad/d', \n ec='none')\nmplfig = autofig.draw()", "The facecolor default to 'none' allows you to see \"through\" the mesh:", "autofig.reset()\nautofig.mesh(x=xs, y=ys, z=zs,\n xlabel='x', xunit='solRad', \n ylabel='y', yunit='solRad',\n ec=-vzs, ecmap='bwr', eclim='symmetric', eclabel='rv', ecunit='solRad/d')\nmplfig = autofig.draw()", "In order to not see through the mesh, set the facecolor to 'white':", "autofig.reset()\nautofig.mesh(x=xs, y=ys, z=zs,\n xlabel='x', xunit='solRad', \n ylabel='y', yunit='solRad',\n ec=-vzs, ecmap='bwr', eclim='symmetric', eclabel='rv', ecunit='solRad/d', \n fc='white')\nmplfig = autofig.draw()", "We can of course provide different arrays and colormaps for the edge and face:", "autofig.reset()\nautofig.mesh(x=xs, y=ys, z=zs, \n xlabel='x', xunit='solRad', \n ylabel='y', yunit='solRad',\n fc=teffs, fcmap='afmhot', fclabel='teff', fcunit='K',\n ec=-vzs, ecmap='bwr', eclim='symmetric', eclabel='rv', ecunit='solRad/d')\nmplfig = autofig.draw()", "Animate and Limits", "times = np.linspace(0,1,21)\nb = phoebe.default_binary()\nb.add_dataset('mesh', times=times, columns='vws')\nb.run_compute()", "Rather than add an extra dimension, we can make a separate call to mesh for each time and pass the time to the 'i' dimension as a float.", "autofig.reset()\nfor t in times:\n for c in ['primary', 'secondary']:\n verts = b.get_value(time=t, component=c, qualifier='uvw_elements', context='model')\n vzs = b.get_value(time=t, component=c, qualifier='vws', context='model')\n xs = verts[:, :, 0]\n ys = verts[:, :, 1]\n zs = verts[:, :, 2]\n autofig.mesh(x=xs, y=ys, z=zs, i=t,\n xlabel='x', xunit='solRad', \n ylabel='y', yunit='solRad',\n fc=-vzs, fcmap='bwr', fclim='symmetric', 
fclabel='rv', fcunit='solRad/d', \n ec='none',\n consider_for_limits=c=='primary')\n \nmplfig = autofig.draw()\n\nautofig.gcf().axes[0].pad_aspect=False # pad_aspect=True (default) causes issues with fixed limits... sigh\nanim = autofig.animate(i=times, save='mesh_1.gif', save_kwargs={'writer': 'imagemagick'})", "", "autofig.gcf().axes[0].x.lim = None\nanim = autofig.animate(i=times, save='mesh_2.gif', save_kwargs={'writer': 'imagemagick'})", "", "autofig.gcf().axes[0].x.lim = 4\nanim = autofig.animate(i=times, save='mesh_3.gif', save_kwargs={'writer': 'imagemagick'})", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jinzishuai/learn2deeplearn
deeplearning.ai/C2.ImproveDeepNN/week1-hw/Gradient Checking/Gradient+Checking+v1.ipynb
gpl-3.0
[ "Gradient Checking\nWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. \nYou are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. \nBut backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, \"Give me a proof that your backpropagation is actually working!\" To give this reassurance, you are going to use \"gradient checking\".\nLet's do it!", "# Packages\nimport numpy as np\nfrom testCases import *\nfrom gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector", "1) How does gradient checking work?\nBackpropagation computes the gradients $\\frac{\\partial J}{\\partial \\theta}$, where $\\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.\nBecause forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\\frac{\\partial J}{\\partial \\theta}$. 
\nLet's look back at the definition of a derivative (or gradient):\n$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$\nIf you're not familiar with the \"$\displaystyle \lim_{\varepsilon \to 0}$\" notation, it's just a way of saying \"when $\varepsilon$ is really really small.\"\nWe know the following:\n\n$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly. \nYou can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct. \n\nLet's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!\n2) 1-dimensional gradient checking\nConsider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.\nYou will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. \n<img src=\"images/1Dgrad_kiank.png\" style=\"width:600px;height:250px;\">\n<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>\nThe diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ (\"forward propagation\"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ (\"backward propagation\"). \nExercise: implement \"forward propagation\" and \"backward propagation\" for this simple function. 
I.e., compute both $J(.)$ (\"forward propagation\") and its derivative with respect to $\\theta$ (\"backward propagation\"), in two separate functions.", "# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(x, theta):\n \"\"\"\n Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n J -- the value of function J, computed using the formula J(theta) = theta * x\n \"\"\"\n \n ### START CODE HERE ### (approx. 1 line)\n J = x*theta\n ### END CODE HERE ###\n \n return J\n\nx, theta = 2, 4\nJ = forward_propagation(x, theta)\nprint (\"J = \" + str(J))", "Expected Output:\n<table style=>\n <tr>\n <td> ** J ** </td>\n <td> 8</td>\n </tr>\n</table>\n\nExercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\\theta) = \\theta x$ with respect to $\\theta$. To save you from doing the calculus, you should get $dtheta = \\frac { \\partial J }{ \\partial \\theta} = x$.", "# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(x, theta):\n \"\"\"\n Computes the derivative of J with respect to theta (see Figure 1).\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n dtheta -- the gradient of the cost with respect to theta\n \"\"\"\n \n ### START CODE HERE ### (approx. 
1 line)\n dtheta = x\n ### END CODE HERE ###\n \n return dtheta\n\nx, theta = 2, 4\ndtheta = backward_propagation(x, theta)\nprint (\"dtheta = \" + str(dtheta))", "Expected Output:\n<table>\n <tr>\n <td> ** dtheta ** </td>\n <td> 2 </td>\n </tr>\n</table>\n\nExercise: To show that the backward_propagation() function is correctly computing the gradient $\\frac{\\partial J}{\\partial \\theta}$, let's implement gradient checking.\nInstructions:\n- First compute \"gradapprox\" using the formula above (1) and a small value of $\\varepsilon$. Here are the Steps to follow:\n 1. $\\theta^{+} = \\theta + \\varepsilon$\n 2. $\\theta^{-} = \\theta - \\varepsilon$\n 3. $J^{+} = J(\\theta^{+})$\n 4. $J^{-} = J(\\theta^{-})$\n 5. $gradapprox = \\frac{J^{+} - J^{-}}{2 \\varepsilon}$\n- Then compute the gradient using backward propagation, and store the result in a variable \"grad\"\n- Finally, compute the relative difference between \"gradapprox\" and the \"grad\" using the following formula:\n$$ difference = \\frac {\\mid\\mid grad - gradapprox \\mid\\mid_2}{\\mid\\mid grad \\mid\\mid_2 + \\mid\\mid gradapprox \\mid\\mid_2} \\tag{2}$$\nYou will need 3 Steps to compute this formula:\n - 1'. compute the numerator using np.linalg.norm(...)\n - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.\n - 3'. divide them.\n- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. 
Otherwise, there may be a mistake in the gradient computation.", "# GRADED FUNCTION: gradient_check\n\ndef gradient_check(x, theta, epsilon = 1e-7):\n \"\"\"\n Implement the backward propagation presented in Figure 1.\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.\n ### START CODE HERE ### (approx. 5 lines)\n thetaplus = theta+epsilon # Step 1\n thetaminus = theta-epsilon # Step 2\n J_plus = forward_propagation(x, thetaplus) # Step 3\n J_minus = forward_propagation(x, thetaminus) # Step 4\n gradapprox = (J_plus-J_minus)/(2*epsilon) # Step 5\n ### END CODE HERE ###\n \n # Check if gradapprox is close enough to the output of backward_propagation()\n ### START CODE HERE ### (approx. 1 line)\n grad = backward_propagation(x, theta)\n ### END CODE HERE ###\n \n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(grad-gradapprox) # Step 1'\n denominator = np.linalg.norm(grad)+np.linalg.norm(gradapprox) # Step 2'\n difference = numerator/denominator # Step 3'\n ### END CODE HERE ###\n \n if difference < 1e-7:\n print (\"The gradient is correct!\")\n else:\n print (\"The gradient is wrong!\")\n \n return difference\n\nx, theta = 2, 4\ndifference = gradient_check(x, theta)\nprint(\"difference = \" + str(difference))", "Expected Output:\nThe gradient is correct!\n<table>\n <tr>\n <td> ** difference ** </td>\n <td> 2.9193358103083e-10 </td>\n </tr>\n</table>\n\nCongrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation(). 
\nNow, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!\n3) N-dimensional gradient checking\nThe following figure describes the forward and backward propagation of your fraud detection model.\n<img src=\"images/NDgrad_kiank.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>\nLet's look at your implementations for forward propagation and backward propagation.", "def forward_propagation_n(X, Y, parameters):\n \"\"\"\n Implements the forward propagation (and computes the cost) presented in Figure 3.\n \n Arguments:\n X -- training set for m examples\n Y -- labels for m examples \n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (5, 4)\n b1 -- bias vector of shape (5, 1)\n W2 -- weight matrix of shape (3, 5)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n \n Returns:\n cost -- the cost function (logistic cost for one example)\n \"\"\"\n \n # retrieve parameters\n m = X.shape[1]\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n W3 = parameters[\"W3\"]\n b3 = parameters[\"b3\"]\n\n # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n Z1 = np.dot(W1, X) + b1\n A1 = relu(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n\n # Cost\n logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)\n cost = 1./m * np.sum(logprobs)\n \n cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)\n \n return cost, cache", "Now, run backward 
propagation.", "def backward_propagation_n(X, Y, cache):\n \"\"\"\n Implement the backward propagation presented in figure 2.\n \n Arguments:\n X -- input datapoint, of shape (input size, 1)\n Y -- true \"label\"\n cache -- cache output from forward_propagation_n()\n \n Returns:\n gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n dW3 = 1./m * np.dot(dZ3, A2.T)\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1./m * np.dot(dZ2, A1.T) * 2\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1./m * np.dot(dZ1, X.T)\n db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\n \"dA2\": dA2, \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2,\n \"dA1\": dA1, \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients", "You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.\nHow does gradient checking work?.\nAs in 1) and 2), you want to compare \"gradapprox\" to the gradient computed by backpropagation. The formula is still:\n$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\nHowever, $\\theta$ is not a scalar anymore. It is a dictionary called \"parameters\". We implemented a function \"dictionary_to_vector()\" for you. 
It converts the \"parameters\" dictionary into a vector called \"values\", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.\nThe inverse function is \"vector_to_dictionary\" which outputs back the \"parameters\" dictionary.\n<img src=\"images/dictionary_to_vector.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>\nWe have also converted the \"gradients\" dictionary into a vector \"grad\" using gradients_to_vector(). You don't need to worry about that.\nExercise: Implement gradient_check_n().\nInstructions: Here is pseudo-code that will help you implement the gradient check.\nFor each i in num_parameters:\n- To compute J_plus[i]:\n 1. Set $\theta^{+}$ to np.copy(parameters_values)\n 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$\n 3. Calculate $J^{+}_i$ using forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )). \n- To compute J_minus[i]: do the same thing with $\theta^{-}$\n- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$\nThus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameters_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. 
Just like for the 1D case (Steps 1', 2', 3'), compute: \n$$ difference = \\frac {\\| grad - gradapprox \\|_2}{\\| grad \\|_2 + \\| gradapprox \\|_2 } \\tag{3}$$", "# GRADED FUNCTION: gradient_check_n\n\ndef gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):\n \"\"\"\n Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n\n \n Arguments:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. \n x -- input datapoint, of shape (input size, 1)\n y -- true \"label\"\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Set-up variables\n parameters_values, _ = dictionary_to_vector(parameters)\n grad = gradients_to_vector(gradients)\n num_parameters = parameters_values.shape[0]\n J_plus = np.zeros((num_parameters, 1))\n J_minus = np.zeros((num_parameters, 1))\n gradapprox = np.zeros((num_parameters, 1))\n \n # Compute gradapprox\n for i in range(num_parameters):\n \n # Compute J_plus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_plus[i]\".\n # \"_\" is used because the function you have to outputs two parameters but we only care about the first one\n ### START CODE HERE ### (approx. 3 lines)\n thetaplus = np.copy(parameters_values) # Step 1\n thetaplus[i][0] = thetaplus[i][0]+epsilon # Step 2\n J_plus[i], _ = forward_propagation_n(X, Y,vector_to_dictionary(thetaplus)) # Step 3\n ### END CODE HERE ###\n \n # Compute J_minus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_minus[i]\".\n ### START CODE HERE ### (approx. 
3 lines)\n thetaminus = np.copy(parameters_values) # Step 1\n thetaminus[i][0] = thetaminus[i][0]-epsilon # Step 2 \n J_minus[i], _ = forward_propagation_n(X, Y,vector_to_dictionary(thetaminus)) # Step 3\n ### END CODE HERE ###\n \n # Compute gradapprox[i]\n ### START CODE HERE ### (approx. 1 line)\n gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)\n ### END CODE HERE ###\n \n # Compare gradapprox to backward propagation gradients by computing difference.\n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(grad-gradapprox, ord=2) # Step 1'\n denominator = np.linalg.norm(grad, ord=2)+np.linalg.norm(gradapprox, ord=2) # Step 2'\n difference = numerator/denominator # Step 3'\n ### END CODE HERE ###\n\n if difference > 2e-7:\n print (\"\\033[93m\" + \"There is a mistake in the backward propagation! difference = \" + str(difference) + \"\\033[0m\")\n else:\n print (\"\\033[92m\" + \"Your backward propagation works perfectly fine! difference = \" + str(difference) + \"\\033[0m\")\n \n return difference\n\nX, Y, parameters = gradient_check_n_test_case()\n\ncost, cache = forward_propagation_n(X, Y, parameters)\ngradients = backward_propagation_n(X, Y, cache)\ndifference = gradient_check_n(parameters, gradients, X, Y)", "Expected output:\n<table>\n <tr>\n <td> ** There is a mistake in the backward propagation!** </td>\n <td> difference = 0.285093156781 </td>\n </tr>\n</table>\n\nIt seems that there were errors in the backward_propagation_n code we gave you! Good that you've implemented the gradient check. Go back to backward_propagation and try to find/correct the errors (Hint: check dW2 and db1). Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining backward_propagation_n() if you modify the code. \nCan you get gradient check to declare your derivative computation correct? 
Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented. \nNote \n- Gradient Checking is slow! Approximating the gradient with $\\frac{\\partial J}{\\partial \\theta} \\approx \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct. \n- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. \nCongrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :) \n<font color='blue'>\nWhat you should remember from this notebook:\n- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).\n- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
flutter/codelabs
tfrs-flutter/step5/backend/ranking/ranking.ipynb
bsd-3-clause
[ "Copyright 2022 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Recommending movies: ranking\nThis tutorial is a slightly adapted version of the basic ranking tutorial from TensorFlow Recommenders documentation.\nImports\nLet's first get our imports out of the way.", "!pip install -q tensorflow-recommenders\n!pip install -q --upgrade tensorflow-datasets\n\nimport os\nimport pprint\nimport tempfile\n\nfrom typing import Dict, Text\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\nimport tensorflow_recommenders as tfrs", "Preparing the dataset\nWe're continuing to use the MovieLens dataset. 
This time, we're also going to keep the ratings: these are the objectives we are trying to predict.", "ratings = tfds.load(\"movielens/100k-ratings\", split=\"train\")\n\nratings = ratings.map(lambda x: {\n \"movie_title\": x[\"movie_title\"],\n \"user_id\": x[\"user_id\"],\n \"user_rating\": x[\"user_rating\"]\n})", "We'll split the data by putting 80% of the ratings in the train set, and 20% in the test set.", "tf.random.set_seed(42)\nshuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)\n\ntrain = shuffled.take(80_000)\ntest = shuffled.skip(80_000).take(20_000)", "Next we figure out the unique user ids and movie titles present in the data so that we can create the user and movie embedding tables.", "movie_titles = ratings.batch(1_000_000).map(lambda x: x[\"movie_title\"])\nuser_ids = ratings.batch(1_000_000).map(lambda x: x[\"user_id\"])\n\nunique_movie_titles = np.unique(np.concatenate(list(movie_titles)))\nunique_user_ids = np.unique(np.concatenate(list(user_ids)))", "Implementing a model\nArchitecture\nRanking models do not face the same efficiency constraints as retrieval models do, and so we have a little bit more freedom in our choice of architectures. 
We can implement our ranking model as follows:", "class RankingModel(tf.keras.Model):\n\n def __init__(self):\n super().__init__()\n embedding_dimension = 32\n\n # Compute embeddings for users.\n self.user_embeddings = tf.keras.Sequential([\n tf.keras.layers.StringLookup(\n vocabulary=unique_user_ids, mask_token=None),\n tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)\n ])\n\n # Compute embeddings for movies.\n self.movie_embeddings = tf.keras.Sequential([\n tf.keras.layers.StringLookup(\n vocabulary=unique_movie_titles, mask_token=None),\n tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)\n ])\n\n # Compute predictions.\n self.ratings = tf.keras.Sequential([\n # Learn multiple dense layers.\n tf.keras.layers.Dense(256, activation=\"relu\"),\n tf.keras.layers.Dense(64, activation=\"relu\"),\n # Make rating predictions in the final layer.\n tf.keras.layers.Dense(1)\n ])\n \n def call(self, inputs):\n\n user_id, movie_title = inputs\n\n user_embedding = self.user_embeddings(user_id)\n movie_embedding = self.movie_embeddings(movie_title)\n\n return self.ratings(tf.concat([user_embedding, movie_embedding], axis=1))", "Loss and metrics\nWe'll make use of the Ranking task object: a convenience wrapper that bundles together the loss function and metric computation. 
\nWe'll use it together with the MeanSquaredError Keras loss in order to predict the ratings.", "task = tfrs.tasks.Ranking(\n loss = tf.keras.losses.MeanSquaredError(),\n metrics=[tf.keras.metrics.RootMeanSquaredError()]\n)", "The full model\nWe can now put it all together into a model.", "class MovielensModel(tfrs.models.Model):\n\n def __init__(self):\n super().__init__()\n self.ranking_model: tf.keras.Model = RankingModel()\n self.task: tf.keras.layers.Layer = tfrs.tasks.Ranking(\n loss = tf.keras.losses.MeanSquaredError(),\n metrics=[tf.keras.metrics.RootMeanSquaredError()]\n )\n\n def call(self, features: Dict[str, tf.Tensor]) -> tf.Tensor:\n return self.ranking_model(\n (features[\"user_id\"], features[\"movie_title\"]))\n\n def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:\n labels = features.pop(\"user_rating\")\n \n rating_predictions = self(features)\n\n # The task computes the loss and the metrics.\n return self.task(labels=labels, predictions=rating_predictions)", "Fitting and evaluating\nAfter defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.\nLet's first instantiate the model.", "model = MovielensModel()\nmodel.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))", "Then shuffle, batch, and cache the training and evaluation data.", "cached_train = train.shuffle(100_000).batch(8192).cache()\ncached_test = test.batch(4096).cache()", "Then train the model:", "model.fit(cached_train, epochs=3)", "As the model trains, the loss is falling and the RMSE metric is improving.\nFinally, we can evaluate our model on the test set:", "model.evaluate(cached_test, return_dict=True)", "The lower the RMSE metric, the more accurate our model is at predicting ratings.\nExporting for serving\nThe model can be easily exported for serving:", "tf.saved_model.save(model, \"exported-ranking/123\")", "We will deploy the model with TensorFlow Serving soon.", "# Zip the 
SavedModel folder for easier download\n!zip -r exported-ranking.zip exported-ranking/ " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hhain/sdap17
notebooks/pawel_ueb2/mustererkennung_in_funkmessdaten_PCA.ipynb
mit
[ "Pattern Recognition in Radio Measurement Data\nTask 1: Loading the database in Jupyter Notebook", "# imports\nimport re\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport pprint as pp", "We open the database and print the keys of the individual tables. \n", "hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')\nprint(hdf.keys)", "Task 2: Inspecting a single dataframe\nWe load the frame x1_t1_trx_1_4 and look at its dimensions.", "df_x1_t1_trx_1_4 = hdf.get('/x1/t1/trx_1_4')\nprint(\"Rows:\", df_x1_t1_trx_1_4.shape[0])\nprint(\"Columns:\", df_x1_t1_trx_1_4.shape[1])", "Next, we examine the attribute composition for two receiver-sender groups as examples.", "# first inspection of columns from df_x1_t1_trx_1_4\ndf_x1_t1_trx_1_4.head(5)", "For the analysis of the frames we define some helper functions.", "# Little function to retrieve sender-receiver tuples from df columns\ndef extract_snd_rcv(df):\n regex = r\"trx_[1-4]_[1-4]\"\n # creates a set containing the different pairs\n snd_rcv = {x[4:7] for x in df.columns if re.search(regex, x)}\n return [(x[0],x[-1]) for x in snd_rcv]\n\n# Sums the number of columns for each sender-receiver tuple\ndef get_column_counts(snd_rcv, df):\n col_counts = {}\n for snd,rcv in snd_rcv:\n col_counts['Columns for pair {} {}:'.format(snd, rcv)] = len([i for i, word in enumerate(list(df.columns)) if word.startswith('trx_{}_{}'.format(snd, rcv))])\n return col_counts\n\n# Analyze the column composition of a given measurement.\ndef analyse_columns(df):\n df_snd_rcv = extract_snd_rcv(df)\n cc = get_column_counts(df_snd_rcv, df)\n\n for x in cc:\n print(x, cc[x])\n print(\"Sum of pair related columns: %i\" % sum(cc.values()))\n print()\n print(\"Other columns are:\")\n for att in [col for col in df.columns if 'ifft' not in col and 'ts' not in col]:\n print(att)\n\n# Analyze the values of the target column.\ndef analyze_target(df):\n 
print(df['target'].unique())\n print(\"# Unique values in target: %i\" % len(df['target'].unique()))", "Now determine the column composition of df_x1_t1_trx_1_4.", "analyse_columns(df_x1_t1_trx_1_4)", "Look at the contents of the \"target\" column of df_x1_t1_trx_1_4.", "analyze_target(df_x1_t1_trx_1_4)", "Next we load the frame x3_t2_trx_3_1 and look at its dimensions.", "df_x3_t2_trx_3_1 = hdf.get('/x3/t2/trx_3_1')\nprint(\"Rows:\", df_x3_t2_trx_3_1.shape[0])\nprint(\"Columns:\", df_x3_t2_trx_3_1.shape[1])", "Followed by an analysis of its column composition and its \"target\" values.", "analyse_columns(df_x3_t2_trx_3_1)\n\nanalyze_target(df_x3_t2_trx_3_1)", "Question: What do you observe regarding the \"receiver-number_sender-number\" combinations? Are they the same? Which values do you find in the \"target\" column? \nAnswer: We see that while one pair is transmitting, the other two nodes listen and measure their links to the currently transmitting nodes (i.e. 6 pairs in each dataframe). If, for example, pair 3 1 transmits, node 1 measures link 1-3, node 3 measures link 3-1, and nodes 2 and 4 measure links 2-1 and 2-3 resp. 4-1 and 4-3. The 10 different values of the \"target\" column are shown above.\nTask 3: Visualizing the measurement series of the dataset\nWe visualize the raw data with several heatmaps in order to visually validate the integrity of the data and to develop ideas for possible features. 
As an example, we display the data of frame df_x1_t1_trx_1_4 here.", "vals = df_x1_t1_trx_1_4.loc[:,'trx_2_4_ifft_0':'trx_2_4_ifft_1999'].values\n\n# one big heatmap\nplt.figure(figsize=(14, 12))\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')\nplt.show()", "We look at how different color schemes highlight different characteristics of our raw data.", "# compare different heatmaps\nplt.figure(1, figsize=(12,10))\n\n# nipy_spectral_r scheme\nplt.subplot(221)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')\n\n# terrain scheme\nplt.subplot(222)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='terrain')\n\n# Vega10 scheme\nplt.subplot(223)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Vega10')\n\n# Wistia scheme\nplt.subplot(224)\nplt.title('trx_2_4_ifft')\nplt.xlabel(\"ifft of frequency\")\nplt.ylabel(\"measurement\")\nax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Wistia')\n\n# Adjust the subplot layout spacing\nplt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25,\n wspace=0.2)\n\n\nplt.show()", "Task 3: Adjusting the ground-truth labels", "# Iterating over hdf data and creating an interim data representation stored in data/interim/01_testmessungen.hdf\n# The interim data representation contains an additional binary class (binary_target - encoding 0=empty and 1=not empty)\n# and a multi class target 
(multi_target - encoding 0-9 for each possible class)\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\n\ninterim_path = '../../data/interim/01_testmessungen.hdf'\n\ndef binary_mapper(df):\n \n def map_binary(target):\n if target.startswith('Empty'):\n return 0\n else:\n return 1\n \n df['binary_target'] = pd.Series(map(map_binary, df['target']))\n \n \ndef multiclass_mapper(df):\n le.fit(df['target'])\n df['multi_target'] = le.transform(df['target'])\n \nfor key in hdf.keys():\n df = hdf.get(key)\n binary_mapper(df)\n multiclass_mapper(df)\n df.to_hdf(interim_path, key)\n\nhdf.close()", "Check the newly labeled dataframe \"/x1/t1/trx_3_1\". As results, we expect \"Empty\" (i.e. 0) for measurement 5 at the beginning of the experiment and \"Not Empty\" (i.e. 1) for measurement 120 in the middle of the experiment.", "hdf = pd.HDFStore('../../data/interim/01_testmessungen.hdf')\ndf_x1_t1_trx_3_1 = hdf.get('/x1/t1/trx_3_1')\nprint(\"binary_target for measurement 5:\", df_x1_t1_trx_3_1['binary_target'][5])\nprint(\"binary_target for measurement 120:\", df_x1_t1_trx_3_1['binary_target'][120])\nhdf.close()", "Task 4: A simple recognizer with hold-out validation\nWe follow the steps of task 4 and test a simple recognizer.", "from evaluation import *\nfrom filters import *\nfrom utility import *\nfrom features import *", "Opening the HDF store with pandas", "# raw data to achieve target values\nhdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')", "Example recognizer\nPreparing the datasets", "# generate datasets\ntst = ['1','2','3']\ntst_ds = []\n\nfor t in tst:\n\n df_tst = hdf.get('/x1/t'+t+'/trx_3_1')\n lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]\n \n #df_tst_cl,_ = distortion_filter(df_tst_cl)\n \n groups = get_trx_groups(df_tst)\n df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')\n df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)\n df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p 
feature\n \n df_all = pd.concat( [df_std, df_mean, df_p2p], axis=1 ) # added p2p feature\n \n df_all = cf_std_window(df_all, window=4, label='target')\n \n df_tst_sum = generate_class_label_presence(df_all, state_variable='target')\n \n # remove index column\n df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]\n print('Columns in Dataset:',t)\n print(df_tst_sum.columns)\n \n tst_ds.append(df_tst_sum.copy())\n\n# holdout validation\nprint(hold_out_val(tst_ds, target='target', include_self=False, cl='rf', verbose=False, random_state=1))", "Closing the HDF store", "hdf.close()", "Task 5: A custom recognizer\nTo construct our own recognizer, we repeat the corresponding preprocessing and mapping steps starting from the raw data and adapt them to our needs.", "# Load raw data\nhdf = pd.HDFStore(\"../../data/raw/TestMessungen_NEU.hdf\")\n\n# Check available keys in hdf store\nprint(hdf.keys())", "Preprocessing\nFirst we adjust the ground-truth labels, remove timestamps as well as row indices, and store the resulting frames.", "hdf_path = \"../../data/interim/02_testmessungen.hdf\"\n\n# Mapping groundtruth to 0-empty and 1-not empty and prepare for further preprocessing by\n# removing additional timestamp columns and index column\n# Storing cleaned dataframes (no index, removed _ts columns, mapped multi classes to 0-empty, 1-not empty)\n# to new hdfstore to `data/interim/02_testmessungen.hdf`\n\n\ndfs = []\nfor key in hdf.keys():\n df = hdf.get(key)\n #df['target'] = df['target'].map(lambda x: 0 if x.startswith(\"Empty\") else 1) \n # drop all time stamp columns who endswith _ts\n cols = [c for c in df.columns if not c.lower().endswith(\"ts\")]\n df = df[cols]\n df = df.drop('Timestamp', axis=1)\n df = df.drop('index', axis=1)\n df.to_hdf(hdf_path, key)\nhdf.close()", "We see that only the 6 x 2000 measurements for the respective pairs and the 'target' values remain in the resulting frames.", "hdf = pd.HDFStore(hdf_path)\ndf = hdf.get(\"/x1/t1/trx_1_2\")\ndf.head()\n\n# Step-1 repeating the previous task 4 to get a comparable base result with the now dropped _ts and index column to improve from\n# generate datasets\nfrom evaluation import *\nfrom filters import *\nfrom utility import *\nfrom features import *\n \ndef prepare_features(c, p):\n tst = ['1','2','3']\n tst_ds = []\n\n for t in tst:\n\n df_tst = hdf.get('/x'+c+'/t'+t+'/trx_'+p)\n lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]\n\n #df_tst_cl,_ = distortion_filter(df_tst_cl)\n\n df_tst,_ = distortion_filter(df_tst)\n\n groups = get_trx_groups(df_tst)\n df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')\n df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)\n\n df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature\n\n df_kurt = rf_grouped(df_tst, groups=groups, fn=rf_kurtosis_single)\n\n df_all = pd.concat( [df_std, df_mean, df_p2p, df_kurt], axis=1 ) # added p2p feature\n\n df_all = cf_std_window(df_all, window=4, label='target')\n\n df_all = cf_diff(df_all, label='target')\n\n df_tst_sum = generate_class_label_presence(df_all, state_variable='target')\n\n # remove index column\n df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]\n # print('Columns in Dataset:',t)\n # print(df_tst_sum.columns)\n\n tst_ds.append(df_tst_sum.copy())\n \n return tst_ds\n\ntst_ds = prepare_features(c='1', p='3_1')\n\n# Evaluating different supervised learning methods provided in eval.py\n# added a NN evaluator but there are some problems regarding usage and hidden layers\n# For the moment only kurtosis and cf_diff are added to the dataset as well as the distortion filter\n# Feature selection is needed right now!\nfor elem in ['rf', 'dt', 'nb' ,'nn','knn']:\n 
print(elem, \":\", hold_out_val(tst_ds, target='target', include_self=False, cl=elem, verbose=False, random_state=1))\n\n# extra column features generated and reduced with PCA\n\nfrom evaluation import *\nfrom filters import *\nfrom utility import *\nfrom features import *\nfrom new_features import *\n\ndef prepare_features_PCA_cf(c, p):\n tst = ['1','2','3']\n tst_ds = []\n\n for t in tst:\n\n df_tst = hdf.get('/x'+c+'/t'+t+'/trx_'+p)\n\n lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]\n\n df_tst,_ = distortion_filter(df_tst)\n\n groups = get_trx_groups(df_tst)\n\n df_cf_mean = reduce_dim_PCA(cf_mean_window(df_tst, window=3, column_key=\"ifft\", label=None ).fillna(0), n_comps=10)\n #df_cf_std = reduce_dim_PCA(cf_std_window(df_tst, window=3, column_key=\"ifft\", label=None ).fillna(0), n_comps=10)\n df_cf_ptp = reduce_dim_PCA(cf_ptp(df_tst, window=3, column_key=\"ifft\", label=None ).fillna(0), n_comps=10)\n #df_cf_kurt = reduce_dim_PCA(cf_kurt(df_tst, window=3, column_key=\"ifft\", label=None ).fillna(0), n_comps=10)\n\n\n #df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single)\n df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single, label='target')\n df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature\n df_kurt = rf_grouped(df_tst, groups=groups, fn=rf_kurtosis_single)\n df_skew = rf_grouped(df_tst, groups=groups, fn=rf_skew_single)\n\n\n df_all = pd.concat( [df_mean, df_p2p, df_kurt, df_skew], axis=1 ) \n\n df_all = cf_std_window(df_all, window=4, label='target')\n\n df_all = cf_diff(df_all, label='target')\n\n df_all = reduce_dim_PCA(df_all.fillna(0), n_comps=10, label='target')\n\n df_all = pd.concat( [df_all, df_cf_mean, df_cf_ptp], axis=1)\n\n df_tst_sum = generate_class_label_presence(df_all, state_variable='target')\n\n # remove index column\n df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]\n\n #print('Columns in Dataset:',t)\n 
#print(df_tst_sum.columns)\n\n\n tst_ds.append(df_tst_sum.copy())\n \n return tst_ds\n\ntst_ds_PCA = prepare_features_PCA_cf(c='1', p='3_1')\n\n# Evaluating different supervised learning methods provided in eval.py\n# We can see that the column features have increased F1 score of the classifiers\n# Best score for Naive Bayes\nfor elem in ['rf', 'dt', 'nb' ,'nn','knn']:\n print(elem, \":\", hold_out_val(tst_ds_PCA, target='target', include_self=False, cl=elem, verbose=False, random_state=1))\n\ndef evaluate_models(ds):\n res = {}\n for elem in ['rf', 'dt', 'nb' ,'nn','knn']: \n res[elem] = hold_out_val(ds, target='target', include_self=False, cl=elem, verbose=False, random_state=1)\n return res\n\n \ndef evaluate_performance(c, p):\n # include a prepare data function?\n ds = prepare_features(c, p)\n return evaluate_models(ds) \n\n\ndef evaluate_performance_PCA_cf(c, p):\n # include a prepare data function?\n ds = prepare_features_PCA_cf(c, p)\n return evaluate_models(ds) \n\nconfig = ['1','2','3','4']\npairing = ['1_2','1_4','2_3','3_1','3_4','4_2']\ntst_ds = []\n\nres_all = []\nfor c in config:\n print(\"Testing for configuration\", c)\n for p in pairing:\n print(\"Analyse performance for pairing\", p)\n res = evaluate_performance(c, p)\n res_all.append(res)\n # TODO draw graph\n for model in res:\n print(model, res[model])\n\nall_keys = set().union(*(d.keys() for d in res_all))\nprint(all_keys)\nprint(\"results for prepare_features() function\")\nfor key in all_keys:\n print(\"mean F1 for {}: {}\".format(key, sum(item[key][0] for item in res_all)/len(res_all)))\n\nconfig = ['1','2','3','4']\npairing = ['1_2','1_4','2_3','3_1','3_4','4_2']\ntst_ds = []\n\nres_all_PCA = []\nfor c in config:\n print(\"Testing for configuration\", c)\n for p in pairing:\n print(\"Analyse performance for pairing\", p)\n res = evaluate_performance_PCA_cf(c, p)\n res_all_PCA.append(res)\n # TODO draw graph\n for model in res:\n print(model, res[model])\n\nall_keys = 
set().union(*(d.keys() for d in res_all_PCA))\nprint(all_keys)\nprint(\"results for prepare_features_PCA_cf() function\")\nfor key in all_keys:\n print(\"mean F1 for {}: {}\".format(key, sum(item[key][0] for item in res_all_PCA)/len(res_all_PCA)))", "Task 6: Online recognizer\nSerializing the model for the online predictor\nThe previously chosen model is serialized and stored in 'models/solution_ueb02' so that it can be loaded when the REST API is started.", "from sklearn.externals import joblib\njoblib.dump(res['dt'], '../../models/solution_ueb02/model.plk')", "Starting the online server\nFor this, the dependencies Flask, flask_restful and flask_cors must be installed.\nThe following command starts a flask_restful server on localhost port:5444 which answers json post requests. The server is implemented in the file online.py within the ipynb folder and makes use of the final chosen model.\nRequests can be made as post requests to http://localhost:5444/predict with a json file of the following format:\n{ \"row\": \"features\" }\nBe careful that the sent file is valid json. The answer contains the predicted class:\n{ \"p_class\": \"predicted class\" }\nFor now the online predictor only predicts the class of single lines sent to it.", "# Navigate to notebooks/solution_ueb02 and start the server\n# with 'python -m online'\n\n# Now requests to the REST API are simulated line by line; every valid json request is\n# answered with a json prediction response" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
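The notebook above leans on a project-specific `reduce_dim_PCA` helper whose implementation is not shown. As a hedged aside, the core idea of such a PCA reduction step can be sketched with plain NumPy via the SVD of the centered data matrix (the function name and signature here are assumptions, not the project's actual API):

```python
# A hedged sketch of a PCA dimensionality-reduction step using only NumPy.
# The project's `reduce_dim_PCA` helper is custom; this only illustrates the
# underlying idea: center the data, then project onto the top singular directions.
import numpy as np

def reduce_dim_pca_sketch(X, n_comps):
    """Project X (n_samples x n_features) onto its top n_comps principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comps].T                    # scores in the top components

rng = np.random.RandomState(0)
X = rng.randn(100, 20)
Z = reduce_dim_pca_sketch(X, 5)
print(Z.shape)  # (100, 5)
```

Because the singular values are sorted in decreasing order, the variance captured by the first returned component is the largest, which is what makes truncating to `n_comps` a reasonable feature-compression step before classification.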
zipeiyang/liupengyuan.github.io
chapter3/python正则表达式基础快速教程.ipynb
mit
[ "A quick tutorial on the basics of Python regular expressions\nBy liupengyuan@pku.edu.cn\nThe term 'regular expression' simply denotes an expression that follows a pattern: a character sequence that reflects some regularity of characters and is used to process strings via string pattern matching. Many high-level languages support processing strings with regular expressions.\nThe documentation for Python regular expressions can be found at: https://docs.python.org/3/library/re.html", "import re", "First we import the Python regular expression library re\n\n1. Getting started", "s = 'Blow low, follow in of which low. lower, lmoww oow aow bow cow 23742937 dow kdiieur998.'\np = 'low'", "Suppose we want to find the word low in the string s. Since the pattern of this word is simply low, we can use low itself as a regular expression, which we name p.", "m = re.findall(p, s)\nm", "findall(pattern, string) is a function of the re module that extracts all substrings of string matching the regular expression pattern and returns them as a list. It scans from left to right, and the matches in the returned list are stored in left-to-right matching order.\nThe regular expression low matches every occurrence of the word low, but it also matches the low inside words that contain it, such as lower and Blow.", "p = r'\\blow\\b'\nm = re.findall(p, s)\nm", "\\b, i.e. boundary, is a special character in regular expressions denoting a word boundary. The regular expression r'\\blow\\b' matches low on its own, with word boundaries on both sides (boundaries such as spaces are required, but are not part of the match)", "p = r'[lmo]ow'\nm = re.findall(p, s)\nm", "[lmo] matches any one of the letters l, m or o", "p = r'[a-d]ow'\nm = re.findall(p, s)\nm", "[a-d] matches any one of the letters a, b, c or d", "p = r'\\d'\nm = re.findall(p, s)\nm", "\\d, i.e. digit, matches a digit", "p = r'\\d+'\nm = re.findall(p, s)\nm", "+ is a metacharacter denoting one or more repetitions of the pattern that precedes it.\nTherefore \\d+ matches any positive integer with at least one digit.\n\n2. 
Basic matches and examples\nCharacter pattern|What it matches|Equivalent to\n----|---|--\n[a-d]|One character of: a, b, c, d|[abcd]\n[^a-d]|One character except: a, b, c, d|[^abcd]\nabc丨def|abc or def|\n\\d|One digit|[0-9]\n\\D|One non-digit|[^0-9]\n\\s|One whitespace|[ \\t\\n\\r\\f\\v]\n\\S|One non-whitespace|[^ \\t\\n\\r\\f\\v]\n\\w|One word character|[a-zA-Z0-9_]\n\\W|One non-word character|[^a-zA-Z0-9_]\n.|Any character (except newline)|[^\\n]\nAnchor|What it matches\n----|---\n^|Start of the string\n$|End of the string\n\\b|Boundary between word and non-word characters\nQuantifier|What it matches\n----|---\n{5}|Match expression exactly 5 times\n{2,5}|Match expression 2 to 5 times\n{2,}|Match expression 2 or more times\n{,5}|Match expression 0 to 5 times\n*|Match expression 0 or more times\n{,}|Match expression 0 or more times\n?|Match expression 0 or 1 times\n{0,1}|Match expression 0 or 1 times\n+|Match expression 1 or more times\n{1,}|Match expression 1 or more times\nEscape|Matched character\n----|---\n\\.|. character\n\\\\|\\ character\n\\丨|丨 character\n\\+|+ character\n\\?|? character\n\\{|{ character\n\\)|) character\n\\[|[ character", "m = re.findall(r'\\d{3,4}-?\\d{8}', '010-66677788,02166697788, 0451-22882828')\nm", "Matching phone numbers: the area code has 3 or 4 digits, the number has 8 digits, and there may or may not be a - in between.", "m = re.findall(r'[\\u4e00-\\u9fa5]', '测试 汉 字,abc,测试xia,可以')\nm", "Matching Chinese characters\n\n\nA few examples\n\n\nRegular expression|What it matches\n----|---\n[A-Za-z0-9]|English letters and digits\n[\\u4E00-\\u9FA5A-Za-z0-9_]|Chinese characters, English letters, digits and underscore\n^[a-zA-Z][a-zA-Z0-9_]{4,15}$|A valid account name: 5 to 16 characters long, using only letters, digits and underscore, with a letter in the first position\n3. 
Advanced topics\n3.1 Several functions of the Python re module\nFunction|Purpose|Usage\n----|---|---\nre.search|Return a match object if pattern found in string|re.search(r'[pat]tern', 'string')\nre.finditer|Return an iterable of match objects (one for each match)|re.finditer(r'[pat]tern', 'string')\nre.findall|Return a list of all matched strings (different when capture groups)|re.findall(r'[pat]tern', 'string')\nre.split|Split string by regex delimiter & return string list|re.split(r'[ -]', 'st-ri ng')\nre.compile|Compile a regular expression pattern for later use|re.compile(r'[pat]tern')", "m = re.search(r'\\d{3,4}-?\\d{8}', '010-66677788,02166697788, 0451-22882828')\nm\n\nm.group()", "Use the group() function to extract the content of the match object", "ms = re.finditer(r'\\d{3,4}-?\\d{8}', '010-66677788,02166697788, 0451-22882828')\nfor m in ms:\n print(m.group())\n\nwords = re.split(r'[,-]', '010-66677788,02166697788,0451-22882828')\nwords\n\np = re.compile(r'[,-]')\np.split('010-66677788,02166697788,0451-22882828')", "Use the compile() function to compile a regular expression; if it is used many times later, this can speed up the program\n\n3.2 Groups and references\nGroup Type|Expression\n----|---\nCapturing|( ... )\nNon-capturing|(?: ... )\nCapturing group named Y|(?P&lt;Y&gt; ... )\nMatch the Y'th captured group|\\Y\nMatch the named group Y|(?P=Y)\n\n(...) 
groups the part inside the parentheses together and treats it as one unit, i.e. a group. The group is then used to match qualifying substrings.\nA group can be referenced later within the same regular expression, either by its position or by its name; this is called a backreference.", "p = re.compile('(ab)+')\np.search('ababababab').group()\n\np.search('ababababab').groups()", "When there are capture groups, use the groups() function to extract all matched groups", "p=re.compile('(\\d)-(\\d)-(\\d)')\np.search('1-2-3').group()\n\np.search('1-2-3').groups()\n\ns = '喜欢/v 你/x 的/u 眼睛/n 和/u 深情/n 。/w'\np = re.compile(r'(\\S+)/n')\nm = p.findall(s)\nm", "Capture the nouns (/n) in order of appearance.", "p=re.compile('(?P<first>\\d)-(\\d)-(\\d)')\np.search('1-2-3').group()", "Inside a group, the form ?P&lt;name&gt; can be used to name the group, where name is the name given to that group", "p.search('1-2-3').group('first')", "With group('name'), the matched group can be retrieved directly by its name", "s = 'age:13,name:Tom;age:18,name:John'\np = re.compile(r'age:(\\d+),name:(\\w+)')\nm = p.findall(s)\nm\n\np = re.compile(r'age:(?:\\d+),name:(\\w+)')\nm = p.findall(s)\nm", "(?:\\d+) matches the pattern but does not capture the group. Hence the digits of this group are not captured", "s = 'abcdebbcde'\np = re.compile(r'([ab])\\1')\nm = p.search(s)\nprint('The match is {},the capture group is {}'.format(m.group(), m.groups()))", "This is a backreference.\nOnce a or b inside the group ([ab]) has matched, \\1 is matched next; \\1 matches whatever the preceding group matched. Therefore this regular expression matches aa or bb.\nSimilarly, the regular expression r'([a-z])\\1{3}' matches 4 consecutive identical lowercase English letters.", "s = '12,56,89,123,56,98, 12'\np = re.compile(r'\\b(\\d+)\\b.*\\b\\1\\b')\nm = p.search(s)\nm.group(1)", "Use a backreference to check whether the string contains repeated numbers; the first repeated number can be extracted.\nHere \\1 references the match of the preceding group.", "s = '12,56,89,123,56,98, 12'\np = re.compile(r'\\b(?P<name>\\d+)\\b.*\\b(?P=name)\\b')\nm = p.search(s)\nm.group(1)", "Similar to the previous one, but using a backreference with a group name.\n\n3.3 Greedy vs. lazy matching\nQuantifier|What it matches\n----|---\n{2,5}?|Match 2 to 5 times (less preferred)\n{2,}?|Match 2 or more times (less preferred)\n{,5}?|Match 0 to 5 times (less preferred)\n*?|Match 0 or more times (less preferred)\n{,}?|Match 0 or more times (less preferred)\n??|Match 0 or 1 times (less preferred)\n{0,1}?|Match 0 or 1 times (less preferred)\n+?|Match 1 or more times (less preferred)\n{1,}?|Match 1 or more times (less preferred)\n\nWhen a regular expression contains quantifiers that allow repetition, the usual behavior is to match as many characters as possible (as long as the whole expression can still match).\nLazy matching, in contrast, matches as few characters as possible. This is done by adding a ? after the repetition quantifier.", "p = re.compile('(ab)+')\np.search('ababababab').group()\n\np = 
re.compile('(ab)+?')\np.search('ababababab').group()", "For further study, see the official documentation as well as *Mastering Regular Expressions, 3rd Edition*" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
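The tutorial's backreference example extracts only the first repeated number. As a small illustrative extension (the helper below is hypothetical, not part of the tutorial), `re.findall` combined with a counter collects every number that occurs more than once:

```python
# Collect every number that occurs more than once in a string, in order of
# first appearance. This complements the tutorial's backreference example,
# which only finds the first repeated number.
import re
from collections import Counter

def repeated_numbers(s):
    nums = re.findall(r'\b\d+\b', s)   # all whole numbers in the string
    counts = Counter(nums)
    seen = []
    for n in nums:
        if counts[n] > 1 and n not in seen:
            seen.append(n)
    return seen

print(repeated_numbers('12,56,89,123,56,98, 12'))  # ['12', '56']
```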
jvitria/DeepLearningBBVA2016
7.2 Word Embeddings.ipynb
mit
[ "Embeddings ... and word embeddings\nModeling text may not be as obvious as modeling images or audio. \nIn images, inputs are a collection of pixels. In audio, it can be the vector obtained from a spectrogram (i.e. Fourier transform through time). Both representations share a common feature: both are dense. \nWhen we consider the problem of natural language processing (NLP), things differ a little. \nLet us first look at the simplest representation of language, in particular document representation.\n1. Document representation\nIn text classification, for example, we are given a description $x \\in {\\bf R}^d$ of a document $\\delta$ and a fixed set of classes $y \\in \\{c_1, \\dots, c_K\\}$, for example the document topic. Given a new document, our goal is to predict the most probable class.\nA very simple description of a document is the bag-of-words description. This representation transforms a complete text into a vector over $d$ predefined words. The set of predefined words is selected by the practitioner. For example, the list can consist of the set of all words in a given language. \n<b>Example 1:</b>\nSuppose we are given four different documents belonging to the topics $y=\\{\\text{'economics'},\\text{'technology'}\\}$ and we select as our representation the following bag-of-words $x = \\{\\text{'market'}, \\text{'stock'}, \\text{'price'}, \\text{'application'}, \\text{'mobile'}, \\text{'google'}\\}$. We can count the number of times a certain term appears in each document and expect that this description is discriminative enough for identifying the document topic. 
Check the following example:\n<table border=\"1\">\n<tr>\n<td></td>\n<td>market</td>\n<td>stock</td>\n<td>price</td>\n<td>application</td>\n<td>mobile</td>\n<td>google</td>\n</tr>\n<tr>\n<td>document 1($\\text{'economics'}$)</td>\n<td>1</td>\n<td>2</td>\n<td>3</td>\n<td>0</td>\n<td>0</td>\n<td>0</td>\n</tr>\n<tr>\n<td>document 2($\\text{'economics'}$)</td>\n<td>0</td>\n<td>1</td>\n<td>2</td>\n<td>0</td>\n<td>0</td>\n<td>1</td>\n</tr>\n<tr>\n<td>document 3($\\text{'technology'}$)</td>\n<td>0</td>\n<td>0</td>\n<td>0</td>\n<td>2</td>\n<td>3</td>\n<td>1</td>\n</tr>\n<tr>\n<td>document 4($\\text{'technology'}$)</td>\n<td>1</td>\n<td>0</td>\n<td>1</td>\n<td>2</td>\n<td>3</td>\n<td>0</td>\n</tr>\n</table>\n\nIn this representation, document 2 is represented by the vector (0,1,2,0,0,1). We can alternatively use a binary value representing whether a term appears or not in the document. In this last case, the document would be represented by (0,1,1,0,0,1).\nObserve that this is a context-free representation, i.e. the order of the words is not considered. Consider the sentences \"Google reduces the prices of applications in App market\" and \"The number of applications in Google App market with cheap prices is reduced by 20%\". The representation for both sentences is the same, though the exact meaning of both sentences is completely different. However, this kind of representation may be enough for identifying that both refer to $\\text{'technology'}$.\n<div class = \"alert alert-info\" style = \"border-radius:10px\"> **NOTE: ** From a classification point of view, these representations convey very different meanings. In the case of word counting, we expect the classification method to consider the exact number of word appearances as relevant. In some sense, we could say we are looking for a model of how the text is generated. For example, if we use a naive Bayes approach we could look for the probability of generating the first word, then the second, etc. 
This is consistent with the fact that we are considering the multiplicity of each word; this can be regarded as a *multinomial representation*. On the other hand, if we use a binary representation, the meaning is very different: we are considering which words appear in the document, and the absence of a certain word is just as informative. In this case, document 3 in the former example is characterized by the appearance of `application`, `mobile`, and `google`, but also by the absence of `market`, `stock`, and `price`.</div>\n\n2. Word embeddings\nA different approach for working with NLP is considering the embedding of single words, i.e. looking for a manifold where semantically similar words are mapped to nearby points. These are called vector space models.\nThe term word embedding was introduced by Bengio et al. at the beginning of the 2000s. However, it was Mikolov et al. in 2013, with the creation of word2vec, who popularized word embeddings. Since then, different word embeddings using deep learning have appeared. It is worth mentioning GloVe by Pennington et al. (2014).\nThe term embedding naturally appears in any deep architecture where there is a bottleneck layer: the output of that bottleneck layer can be seen as a low-dimensional embedding. This idea lies at the core of deep learning, and we have seen it before in the unsupervised notebook.\nIn this sense we can easily distinguish between task-oriented embeddings and general-purpose embeddings:\n+ Task-oriented embeddings use the embedding layer as one of the layers of a network trained for a different task. That is, while solving the problem at hand, one also learns a suitable embedding for that task.\n\n+ General-purpose embeddings are designed to be used or transferred across different tasks.\n\n3. 
A dissection of the Embedding layer\nContrary to other embeddings, such as the ones we find in the image domain, the word embedding layer is usually regarded as a mapping from a discrete set of objects (words) to real-valued vectors, i.e. \n$$k \\in \\{1, \\dots, N\\} \\rightarrow \\mathbb{R}^d$$\nThus, we can represent the Embedding layer as an $N\\times d$ matrix, or simply as a table/dictionary.\n$$\n\\begin{matrix}\nword_1: \\\\\nword_2: \\\\\n\\vdots \\\\\nword_N: \\\\\n\\end{matrix}\n\\left[\n\\begin{matrix}\nx_{1,1} & x_{1,2} & \\dots & x_{1,d} \\\\\nx_{2,1} & x_{2,2} & \\dots & x_{2,d} \\\\\n\\vdots & & & \\\\\nx_{N,1} & x_{N,2} & \\dots & x_{N,d}\n\\end{matrix}\n\\right]\n$$\nIn this sense, the basic operation the embedding layer has to accomplish is, given a certain word, to return the vector assigned to it. The goal of learning is to learn the values in the matrix.\nIn the learning process the matrix is initialized at random and learned using standard procedures, such as backpropagation.\n4. Learning general purpose word embeddings\nIn this section, we are going to look at two of the most well-known strategies for learning general-purpose embeddings, namely CBOW and Skip-gram.\nThe idea behind both methods is simple: the context around a word is a hint about the underlying semantics of that word, i.e. if we find the same context around different target words in different sentences, most probably those words convey the same meaning.\nStrictly following this idea, we can define CBOW (Continuous Bag-of-Words): given the context of a word, i.e. the $k$ words around the target, we want to infer what the target word is.\nWe can however reverse this order, and this gives us the Skip-gram: given one word, we want to predict its context.\n5. 
Coding a skip-gram model\n<small>This code is based on the word2vec example from Udacity.</small>", "%matplotlib inline\nfrom __future__ import print_function\nimport collections\nimport math\nimport numpy as np\nimport os\nimport random\nimport tensorflow as tf\nimport zipfile\nfrom matplotlib import pylab\nfrom six.moves import range\nfrom six.moves.urllib.request import urlretrieve\nfrom sklearn.manifold import TSNE\n\nurl = 'http://mattmahoney.net/dc/'\n\ndef maybe_download(filename, expected_bytes):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n if not os.path.exists(filename):\n filename, _ = urlretrieve(url + filename, filename)\n statinfo = os.stat(filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified %s' % filename)\n else:\n print(statinfo.st_size)\n raise Exception(\n 'Failed to verify ' + filename + '. Can you get to it with a browser?')\n return filename\n\nfilename = maybe_download('text8.zip', 31344016)\n\ndef read_data(filename):\n \"\"\"Extract the first file enclosed in a zip file as a list of words\"\"\"\n with zipfile.ZipFile(filename) as f:\n data = tf.compat.as_str(f.read(f.namelist()[0])).split()\n return data\n \nwords = read_data(filename)\nprint('Data size %d' % len(words))\n\nvocabulary_size = 50000\n\ndef build_dataset(words):\n count = [['UNK', -1]]\n count.extend(collections.Counter(words).most_common(vocabulary_size - 1))\n dictionary = dict()\n for word, _ in count:\n dictionary[word] = len(dictionary)\n data = list()\n unk_count = 0\n for word in words:\n if word in dictionary:\n index = dictionary[word]\n else:\n index = 0 # dictionary['UNK']\n unk_count = unk_count + 1\n data.append(index)\n count[0][1] = unk_count\n reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) \n return data, count, dictionary, reverse_dictionary\n\ndata, count, dictionary, reverse_dictionary = build_dataset(words)\nprint('Most common words (+UNK)', count[:5])\nprint('Sample 
data', data[:10])\ndel words # Hint to reduce memory.\n\ndata_index = 0\n\ndef generate_batch(batch_size, num_skips, skip_window):\n global data_index\n assert batch_size % num_skips == 0\n assert num_skips <= 2 * skip_window\n batch = np.ndarray(shape=(batch_size), dtype=np.int32)\n labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)\n span = 2 * skip_window + 1 # [ skip_window target skip_window ]\n buffer = collections.deque(maxlen=span)\n for _ in range(span):\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n for i in range(batch_size // num_skips):\n target = skip_window # target label at the center of the buffer\n targets_to_avoid = [ skip_window ]\n for j in range(num_skips):\n while target in targets_to_avoid:\n target = random.randint(0, span - 1)\n targets_to_avoid.append(target)\n batch[i * num_skips + j] = buffer[skip_window]\n labels[i * num_skips + j, 0] = buffer[target]\n buffer.append(data[data_index])\n data_index = (data_index + 1) % len(data)\n return batch, labels\n\nprint('data:', [reverse_dictionary[di] for di in data[:8]])\n\nfor num_skips, skip_window in [(2, 1), (4, 2)]:\n data_index = 0\n batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)\n print('\\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))\n print(' batch:', [reverse_dictionary[bi] for bi in batch])\n print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])\n\nbatch_size = 128\nembedding_size = 128 # Dimension of the embedding vector.\nskip_window = 1 # How many words to consider left and right.\nnum_skips = 2 # How many times to reuse an input to generate a label.\n# We pick a random validation set to sample nearest neighbors. here we limit the\n# validation samples to the words that have a low numeric ID, which by\n# construction are also the most frequent. 
\nvalid_size = 16 # Random set of words to evaluate similarity on.\nvalid_window = 100 # Only pick dev samples in the head of the distribution.\nvalid_examples = np.array(random.sample(range(valid_window), valid_size))\nnum_sampled = 64 # Number of negative examples to sample.\n\ngraph = tf.Graph()\n\nwith graph.as_default(), tf.device('/cpu:0'):\n\n # Input data.\n train_dataset = tf.placeholder(tf.int32, shape=[batch_size])\n train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # Variables.\n embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))\n softmax_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_size],stddev=1.0 / math.sqrt(embedding_size)))\n softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))\n \n # Model.\n # Look up embeddings for inputs.\n embed = tf.nn.embedding_lookup(embeddings, train_dataset)\n # Compute the softmax loss, using a sample of the negative labels each time.\n loss = tf.reduce_mean(\n tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,\n train_labels, num_sampled, vocabulary_size))\n\n # Optimizer.\n # Note: The optimizer will optimize the softmax_weights AND the embeddings.\n # This is because the embeddings are defined as a variable quantity and the\n # optimizer's `minimize` method will by default modify all variable quantities \n # that contribute to the tensor it is passed.\n # See docs on `tf.train.Optimizer.minimize()` for more details.\n optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)\n \n # Compute the similarity between minibatch examples and all embeddings.\n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))\n normalized_embeddings = embeddings / norm\n valid_embeddings = tf.nn.embedding_lookup(\n normalized_embeddings, valid_dataset)\n similarity = tf.matmul(valid_embeddings, 
tf.transpose(normalized_embeddings))\n\nnum_steps = 1000001\n\nwith tf.Session(graph=graph) as session:\n tf.initialize_all_variables().run()\n print('Initialized')\n average_loss = 0\n for step in range(num_steps):\n batch_data, batch_labels = generate_batch(\n batch_size, num_skips, skip_window)\n feed_dict = {train_dataset : batch_data, train_labels : batch_labels}\n _, l = session.run([optimizer, loss], feed_dict=feed_dict)\n average_loss += l\n if step % 2000 == 0:\n if step > 0:\n average_loss = average_loss / 2000\n # The average loss is an estimate of the loss over the last 2000 batches.\n print('Average loss at step %d: %f' % (step, average_loss))\n average_loss = 0\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n if step % 10000 == 0:\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = reverse_dictionary[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = reverse_dictionary[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n final_embeddings = normalized_embeddings.eval()", "Let us save the embedding for later use.", "#### DUMP\n#import pickle\n#f = open('myembedding.pkl','wb')\n#pickle.dump([final_embeddings,dictionary,reverse_dictionary],f)\n#f.close()\n\nnum_points = 400\n\ntsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)\ntwo_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])\n\ndef plot(embeddings, labels):\n assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'\n pylab.figure(figsize=(15,15)) # in inches\n for i, label in enumerate(labels):\n x, y = embeddings[i,:]\n pylab.scatter(x, y)\n pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',\n ha='right', va='bottom')\n pylab.show()\n\nwords = [reverse_dictionary[i] for i in range(1, num_points+1)]\nplot(two_d_embeddings, words)", 
"6. Understanding and using the embedding.\nIn the previous code we have the dictionary that converts from the word to an index, and the reverse_dictionary that given an index returns the corresponding word.", "import pickle\n\nf = open('./dataset/myembedding.pkl','rb')\nfe,dic,rdic=pickle.load(f)\nf.close()\n\ndic['woman']\n\nrdic[42]", "The embedding tries to put together words with similar meaning. A good embedding allows us to operate on words semantically. Let us check some simple semantic operations:", "result = (fe[dic['two'],:]+ fe[dic['one'],:])\n\nfrom scipy.spatial import distance\ncandidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric=\"seuclidean\"),axis=0)\n\nfor i in xrange(5):\n    idx=candidates[i][0]\n    print(rdic[idx])", "We can also define word analogies: football is to ? as foot is to hand.", "result = (fe[dic['football'],:] - fe[dic['foot'],:] + fe[dic['hand'],:])\n\nfrom scipy.spatial import distance\ncandidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric=\"seuclidean\"),axis=0)\n\nfor i in xrange(5):\n    idx=candidates[i][0]\n    print(rdic[idx])\n\nresult = (fe[dic['madrid'],:] - fe[dic['spain'],:] + fe[dic['germany'],:])\n\nfrom scipy.spatial import distance\ncandidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric=\"seuclidean\"),axis=0)\n\nfor i in xrange(5):\n    idx=candidates[i][0]\n    print(rdic[idx])\n\nresult = (fe[dic['barcelona'],:] - fe[dic['spain'],:] + fe[dic['germany'],:])\n\nfrom scipy.spatial import distance\ncandidates=np.argsort(distance.cdist(fe,result[np.newaxis,:],metric=\"seuclidean\"),axis=0)\n\nfor i in xrange(5):\n    idx=candidates[i][0]\n    print(rdic[idx])", "Let us use a pretrained embedding. 
We will use a simple embedding detailed in Improving Word Representations via Global Context and Multiple Word Prototypes", "import pandas as pd\n\ndf = pd.read_table(\"./dataset/wordVectors.txt\",delimiter=\" \",header=None)\n\nembedding=df.values[:,:-1]\n\nf = open(\"./dataset/vocab.txt\",'r')\ndictionary=dict()\nfor word in f.readlines():\n dictionary[word] = len(dictionary)\n \nreverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) \n\nresult = embedding[dictionary['king\\n'],:]-embedding[dictionary['man\\n'],:]+embedding[dictionary['girl\\n'],:]\nimport numpy as np\nfrom scipy.spatial import distance\ncandidates=np.argsort(distance.cdist(embedding,result[np.newaxis,:],metric=\"seuclidean\"),axis=0)\n\nfor i in xrange(0,5):\n idx=candidates[i][0]\n print(reverse_dictionary[idx])", "<div class = \"alert alert-info\" style=\"border-radius:10px\">**EXERCISE: ** Toy with word embeddings by creating new analogies.</div>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
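The analogy cells in the notebook above boil down to vector arithmetic followed by a nearest-neighbour search over the embedding. A minimal self-contained sketch of that pattern — the four words and their 2-D vectors are invented purely for illustration (real embeddings are learned, as in the notebook):

```python
import numpy as np

# Toy embedding: word -> 2-D vector (values invented for illustration only).
emb = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.9, 0.1]),
    "woman": np.array([0.1, 0.1]),
    "queen": np.array([0.1, 0.8]),
}

def analogy(a, b, c, emb):
    """Return the word whose vector is closest (by cosine similarity)
    to vec(a) - vec(b) + vec(c), excluding the three input words."""
    target = emb[a] - emb[b] + emb[c]

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    scores = {w: cosine(v, target) for w, v in emb.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

# "king is to man as ? is to woman"
print(analogy("king", "man", "woman", emb))  # -> queen with these toy vectors
```

With these toy vectors the target lands exactly on the `queen` vector, so the lookup returns `queen`; the notebook's `cdist` call with the `seuclidean` metric plays the same role as the cosine similarity here.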
DataPilot/notebook-miner
summary_of_work/server_notebooks/bottom_up/Bottom Up Random Forest SplitCall.ipynb
apache-2.0
[ "Prediction using the bottom up method, taking into account function names\nThis notebook details the process of predicting which homework a notebook came from, after featurizing the notebook using the bottom up method. This is done by gathering all templates in each notebook after running the algorithm, then using CountVectorizer to featurize the notebooks, and finally using random forests to make the prediction.", "import sys\nhome_directory = '/dfs/scratch2/fcipollone'\nsys.path.append(home_directory)\nimport numpy as np\nfrom nbminer.notebook_miner import NotebookMiner\n\nhw_filenames = np.load('../homework_names_jplag_combined_per_student.npy')\nhw_notebooks = [[NotebookMiner(filename) for filename in temp[:59]] for temp in hw_filenames]\n\nfrom nbminer.pipeline.pipeline import Pipeline\nfrom nbminer.features.features import Features\nfrom nbminer.preprocess.get_ast_features import GetASTFeatures\nfrom nbminer.preprocess.get_imports import GetImports\nfrom nbminer.preprocess.resample_by_node import ResampleByNode\nfrom nbminer.encoders.ast_graph.ast_graph import ASTGraphReducer\nfrom nbminer.preprocess.feature_encoding import FeatureEncoding\nfrom nbminer.encoders.cluster.kmeans_encoder import KmeansEncoder\nfrom nbminer.results.similarity.jaccard_similarity import NotebookJaccardSimilarity\nfrom nbminer.results.prediction.corpus_identifier import CorpusIdentifier\na = Features(hw_notebooks[2], 'hw2')\na.add_notebooks(hw_notebooks[3], 'hw3')\na.add_notebooks(hw_notebooks[4], 'hw4')\na.add_notebooks(hw_notebooks[5], 'hw5')\ngastf = GetASTFeatures()\nrbn = ResampleByNode()\ngi = GetImports()\nagr = ASTGraphReducer(a, threshold=8, split_call=True)\nci = CorpusIdentifier()\npipe = Pipeline([gastf, rbn, gi, agr, ci])\na = pipe.transform(a)\n\nimport tqdm\nX, y = ci.get_data_set()\nsimilarities = np.zeros((len(X), len(X)))\nfor i in tqdm.tqdm(range(len(X))):\n    for j in range(len(X)):\n        if len(set.union(set(X[i]), set(X[j]))) == 0:\n            continue\n        similarities[i][j] = 
len(set.intersection(set(X[i]), set(X[j]))) / (len(set.union(set(X[i]), set(X[j]))))", "Inter and Intra Similarities\nThe first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below", "def get_avg_inter_intra_sims(X, y, val):\n inter_sims = []\n intra_sims = []\n for i in range(len(X)):\n for j in range(i+1, len(X)):\n if y[i] == y[j] and y[i] == val:\n intra_sims.append(similarities[i][j])\n else:\n inter_sims.append(similarities[i][j])\n return np.array(intra_sims), np.array(inter_sims)\n\nfor i in np.unique(y):\n intra_sims, inter_sims = get_avg_inter_intra_sims(X, y, i)\n print('Mean intra similarity for hw',i,'is',np.mean(intra_sims),'with std',np.std(intra_sims))\n print('Mean inter similarity for hw',i,'is',np.mean(inter_sims),'with std',np.std(inter_sims))\n print('----')\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = 5, 15\ndef get_all_sims(X, y, val):\n sims = []\n for i in range(len(X)):\n for j in range(i+1, len(X)):\n if y[i] == val or y[j] == val:\n sims.append(similarities[i][j])\n return sims\nfig, axes = plt.subplots(4)\nfor i in range(4):\n axes[i].hist(get_all_sims(X,y,i), bins=30)\n axes[i].set_xlabel(\"Similarity Value\")\n axes[i].set_ylabel(\"Number of pairs\")", "Sims color coded", "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = 5, 15\ndef get_all_sims(X, y, val):\n sims = []\n sims_outer = []\n for i in range(len(X)):\n for j in range(i+1, len(X)):\n if y[i] == val or y[j] == val:\n if y[i] == y[j]:\n sims.append(similarities[i][j])\n else:\n sims_outer.append(similarities[i][j])\n return sims,sims_outer\n\nfig, axes = plt.subplots(4)\nfor i in range(4):\n 
axes[i].hist(get_all_sims(X,y,i)[1], bins=30)\n    axes[i].hist(get_all_sims(X,y,i)[0], bins=30)\n    axes[i].set_xlabel(\"Similarity Value\")\n    axes[i].set_ylabel(\"Number of pairs\")", "Actual Prediction\nWhile the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows:\n\nSplit the data into train and test\nVectorize based on templates that exist\nBuild a random forest classifier that uses this feature representation, and measure the performance", "import sklearn\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.model_selection import cross_val_score\n\nX, y = ci.get_data_set()\ncountvec = sklearn.feature_extraction.text.CountVectorizer()\nX_list = [\" \".join(el) for el in X]\ncountvec.fit(X_list)\nX = countvec.transform(X_list)\n\np = np.random.permutation(len(X.todense()))\nX = X.todense()[p]\ny = np.array(y)[p]\n\nclf = sklearn.ensemble.RandomForestClassifier(n_estimators=400, max_depth=3)\nscores = cross_val_score(clf, X, y, cv=10)\nprint(scores)\nprint(np.mean(scores))\n\n\nX.shape", "Clustering\nLastly, we try unsupervised learning, clustering based on the features we've extracted, and measure using silhouette score.", "X, y = ci.get_data_set()\ncountvec = sklearn.feature_extraction.text.CountVectorizer()\nX_list = [\" \".join(el) for el in X]\ncountvec.fit(X_list)\nX = countvec.transform(X_list)\n\nclusterer = sklearn.cluster.KMeans(n_clusters = 4).fit(X)\ncluster_score = (sklearn.metrics.silhouette_score(X, clusterer.labels_))\ncheat_score = (sklearn.metrics.silhouette_score(X, y))\n\nprint('Silhouette score using the actual labels:', cheat_score)\nprint('Silhouette score using the cluster labels:', cluster_score)\n\nx_reduced = sklearn.decomposition.PCA(n_components=2).fit_transform(X.todense())\nplt.rcParams['figure.figsize'] = 5, 10\nfig, axes = plt.subplots(2)\naxes[0].scatter(x_reduced[:,0], x_reduced[:,1], c=y)\naxes[0].set_title('PCA 
Reduced notebooks with original labels')\naxes[0].set_xlim(left=-2, right=5)\naxes[1].scatter(x_reduced[:,0], x_reduced[:,1], c=clusterer.labels_)\naxes[1].set_title('PCA Reduced notebooks with kmean cluster labels')\naxes[1].set_xlim(left=-2, right=5)", "Trying to restrict features\nThe problem above is that there are too many unimportant features -- all this noise makes it hard to separate the different classes. To try to counteract this, I'll try ranking the features using tfidf and only take some of them.", "X, y = ci.get_data_set()\ntfidf = sklearn.feature_extraction.text.TfidfVectorizer()\nX_list = [\" \".join(el) for el in X]\ntfidf.fit(X_list)\nX = tfidf.transform(X_list)\n#X = X.todense()\n\nfeature_array = np.array(tfidf.get_feature_names())\ntfidf_sorting = np.argsort(X.toarray()).flatten()[::-1]\ntop_n = feature_array[tfidf_sorting][:50]\nprint(top_n)\n\ntop_n = [el[1] for el in sra[:15]]\nprint(top_n)\n\nX, y = ci.get_data_set()\ncountvec = sklearn.feature_extraction.text.CountVectorizer()\n\nX_list = [\" \".join([val for val in el if val in top_n]) for el in X]\ncountvec.fit(X_list)\nX = countvec.transform(X_list)\nX = X.todense()\n\nx_reduced = sklearn.decomposition.PCA(n_components=2).fit_transform(X)\nprint(x_reduced.shape)\nplt.rcParams['figure.figsize'] = 5, 5\nplt.scatter(x_reduced[:,0], x_reduced[:,1], c=y)\n\ncheat_score = (sklearn.metrics.silhouette_score(x_reduced, y))\nprint(cheat_score)", "T-SNE", "X, y = ci.get_data_set()\ntfidf = sklearn.feature_extraction.text.TfidfVectorizer()\nX_list = [\" \".join(el) for el in X]\ntfidf.fit(X_list)\nX = tfidf.transform(X_list)\n#X = X.todense()\n\n# This is a recommended step when using T-SNE\nx_reduced = sklearn.decomposition.PCA(n_components=50).fit_transform(X.todense())\n\nfrom sklearn.manifold import TSNE\ntsn = TSNE(n_components=2)\nx_red = tsn.fit_transform(x_reduced)\nplt.scatter(x_red[:,0], x_red[:,1], c=y)\n\ncheat_score = (sklearn.metrics.silhouette_score(x_red, 
y))\nprint(cheat_score)\n\nclusterer = sklearn.cluster.KMeans(n_clusters = 4).fit(x_red)\ncluster_score = (sklearn.metrics.silhouette_score(X, clusterer.labels_))\nprint(cluster_score)\n\nplt.scatter(x_red[:,0], x_red[:,1], c=clusterer.labels_)", "What's happening\nFiguring out what is going on is a bit difficult, but we can look at the top templates generated from the random forest, and see why they might have been chosen", "'''\nLooking at the output below, it's clear that the bottom up method is recognizing very specific\nstructures of ast graph, which makes sense because some structures are exactly repeated in\nhomeworks. For example:\n\ntreatment = pd.Series([0]*4 + [1]*2)\n\nis a line in all of the homework one notebooks, and the top feature of the random forest\n(at time of running) is \n\nvar = pd.Series([0] * 4 + [1] * 2)\n\nNote that the bottom up method does not even take the specific numbers into account, but only\nthe operations.\n'''\n\nclf.fit(X,y)\nfnames= countvec.get_feature_names()\nclfi = clf.feature_importances_\nsa = []\nfor i in range(len(clfi)):\n sa.append((clfi[i], fnames[i]))\nsra = [el for el in reversed(sorted(sa))]\nimport astor\nfor temp in sra:\n temp = temp[1]\n print(temp, agr.templates.get_examples(temp)[1])\n for i in range(5):\n print ('\\t',astor.to_source(agr.templates.get_examples(temp)[0][i]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
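The pairwise similarity computed in the notebook above is the Jaccard index over the sets of templates found in two notebooks, with an empty union mapped to zero (the notebook's `continue` branch). A minimal stand-alone version of that measure — the template strings below are invented examples:

```python
def jaccard(a, b):
    """Jaccard similarity |A & B| / |A | B| of two collections, treated as sets.
    An empty union maps to 0.0, matching the notebook's skip of empty pairs."""
    sa, sb = set(a), set(b)
    union = sa | sb
    if not union:
        return 0.0
    return len(sa & sb) / len(union)

nb1 = ["template_1", "template_2", "template_3"]
nb2 = ["template_2", "template_3", "template_4"]
print(jaccard(nb1, nb2))  # 2 shared templates out of 4 distinct -> 0.5
```

This is the quantity the notebook stores in `similarities[i][j]` for every pair of notebooks before comparing intra- and inter-homework averages.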
GoogleCloudPlatform/bigquery-oreilly-book
05_devel/google_api_client.ipynb
apache-2.0
[ "Example of using the Google API Client to access BigQuery\nNote that this is <b>not</b> the recommended approach. You should use the BigQuery client library because that is idiomatic Python. \nSee the bigquery_client notebook for examples.\nAuthenticate and build stubs", "PROJECT='cloud-training-demos' # CHANGE THIS\nfrom googleapiclient.discovery import build\nservice = build('bigquery', 'v2')", "Get info about a dataset", "# information about the ch04 dataset\ndsinfo = service.datasets().get(datasetId=\"ch04\", projectId=PROJECT).execute()\nfor info in dsinfo.items():\n print(info)", "List tables and creation times", "# list tables in dataset\ntables = service.tables().list(datasetId=\"ch04\", projectId=PROJECT).execute()\nfor t in tables['tables']:\n print(t['tableReference']['tableId'] + ' was created at ' + t['creationTime'])", "Query and get result", "# send a query request\nrequest={\n \"useLegacySql\": False, \n \"query\": \"SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5\" \n}\nprint(request)\nresponse = service.jobs().query(projectId=PROJECT, body=request).execute()\nprint('----' * 10)\nfor r in response['rows']:\n print(r['f'][0]['v'])", "Asynchronous query and paging through results", "# send a query request that will not terminate within the timeout specified and will require paging\nrequest={\n \"useLegacySql\": False,\n \"timeoutMs\": 0,\n \"useQueryCache\": False,\n \"query\": \"SELECT start_station_name , AVG(duration) as duration , COUNT(duration) as num_trips FROM `bigquery-public-data`.london_bicycles.cycle_hire GROUP BY start_station_name ORDER BY num_trips DESC LIMIT 5\" \n}\nresponse = service.jobs().query(projectId=PROJECT, body=request).execute()\nprint(response)\n\njobId = response['jobReference']['jobId']\nprint(jobId)\n\n# get query results\nwhile (not response['jobComplete']):\n 
response = service.jobs().getQueryResults(projectId=PROJECT, \n jobId=jobId, \n maxResults=2, \n timeoutMs=5).execute()\n\nwhile (True):\n # print responses\n for row in response['rows']:\n print(row['f'][0]['v']) # station name\n print('--' * 5)\n # page through responses\n if 'pageToken' in response:\n pageToken = response['pageToken']\n # get next page\n response = service.jobs().getQueryResults(projectId=PROJECT, \n jobId=jobId, \n maxResults=2,\n pageToken=pageToken,\n timeoutMs=5).execute()\n else:\n break\n ", "Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
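The `getQueryResults` loop in the notebook above is an instance of the generic token-based pagination pattern: fetch a page, consume its rows, and repeat while the response carries a `pageToken`. A self-contained sketch of that pattern — `fetch_page` is a stand-in for the real API call, and its integer-offset token is an invented encoding (real `pageToken` values are opaque strings):

```python
def fetch_page(page_token=None, page_size=2, data=tuple(range(5))):
    """Stand-in for an API call such as jobs().getQueryResults().execute():
    returns one page of rows, plus a pageToken while more rows remain."""
    start = 0 if page_token is None else int(page_token)
    response = {"rows": list(data[start:start + page_size])}
    if start + page_size < len(data):
        response["pageToken"] = str(start + page_size)
    return response

def all_rows():
    """Page through fetch_page() until no pageToken is returned."""
    rows = []
    response = fetch_page()
    while True:
        rows.extend(response["rows"])
        if "pageToken" not in response:
            break
        response = fetch_page(page_token=response["pageToken"])
    return rows

print(all_rows())  # [0, 1, 2, 3, 4]
```

The notebook's loop has the same shape: extend the results with `response['rows']`, then re-issue the request with `pageToken=response['pageToken']` until the key is absent.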
AleksanderLidtke/AnalyseCollisionFragments
ResultingToGeneratedFragmentsRatio.ipynb
mit
[ "Module imports and version check\nImport the key modules needed later on and check their versions to make sure this analysis can be reproduced.", "import matplotlib.pyplot\nassert matplotlib.__version__>='1.5.1'\n\nimport numpy\nassert numpy.__version__>='1.10.4'", "Setup the matplotlib environment to make the plots look pretty.", "# Show the plots inside the notebook.\n%matplotlib inline\n# Make the figures high-resolution.\n%config InlineBackend.figure_format='retina'\n# Various font sizes.\nticksFontSize=18\nlabelsFontSizeSmall=20\nlabelsFontSize=30\ntitleFontSize=34\nlegendFontSize=14\nmatplotlib.rc('xtick', labelsize=ticksFontSize) \nmatplotlib.rc('ytick', labelsize=ticksFontSize)\n# Colourmaps.\ncm=matplotlib.pyplot.cm.get_cmap('viridis')", "Introduction\nThis notebook analyses the data from a projection of an evolutionary space debris model DAMAGE. It investigates the amplification of the numbers of fragments that collisions generate themselves, which occurs due to follow-on collisions. The projected scenario is \"mitigation only\" with additional collision avoidance performed by active spacecraft.\nRead the data\nRead the data about the number of fragments generated in DAMAGE collisions as well as the corresponding number of fragments in the final (2213) population snapshot, which every collision gave rise to. This accounts for the follow-on collisions that occurred in certain cases, as well as decay of fragments. Store the data in arrays and distinguish between all collisions and the subset of catastrophic collisions, which exceeded the $40\ J/g$ energy threshold.\nThe data are stored on GitHub. We will read one file at a time and cap the amount of characters to be read at any given time, not to clog the network.", "import urllib2, numpy\nfrom __future__ import print_function\n\n# All collisions.\nlines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\\\nAnalyseCollisionFragments/master/AllColGenerated').read(856393*25) # no. 
lines * no. chars per line\nallColGen=numpy.array(lines.split('\\n')[1:-1],dtype=numpy.float64) # Skip the header and the last empty line\n\nlines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\\\nAnalyseCollisionFragments/master/AllColResulting').read(856393*25)\nallColRes=numpy.array(lines.split('\\n')[1:-1],dtype=numpy.float64)\n\nassert allColGen.shape==allColRes.shape\nprint(\"Read data for {} collisions.\".format(allColGen.size))\n\n# Catastrophic collisions (a subset of all collisions).\nlines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\\\nAnalyseCollisionFragments/master/CatColGenerated').read(500227*25) # Fewer lines for the subset of all collisions.\ncatColGen=numpy.array(lines.split('\\n')[1:-1],dtype=numpy.float64)\n\nlines=urllib2.urlopen('https://raw.githubusercontent.com/AleksanderLidtke/\\\nAnalyseCollisionFragments/master/CatColResulting').read(500227*25)\ncatColRes=numpy.array(lines.split('\\n')[1:-1],dtype=numpy.float64)\n\nassert catColGen.shape==catColRes.shape\nprint(\"Read data for {} catastrophic collisions.\".format(catColGen.size))", "Analyse the ratio\nDescription\nInvestigate the fact that sometimes follow-on collisions will result in certain collisions being responsible for more fragments at some census epoch than they generated themselves. In order to do this, investigate the ratio between the number of fragments in the population snapshot in 2213 that resulted from a given collision, $N_{res}$, and the number of fragments $\\geq 10$ cm generated in every collision, $N_{gen}$:\n$$ r=\\frac{N_{res}}{N_{gen}}. $$\n$N_{res}$ is the effective number of objects $\\geq 10$ cm passing through the low-Earth orbit (LEO) volume that every collision has given rise to. 
If a fragment from collision $C_1$ took part in another collision, $C_2$, all the fragments from these two collisions left on-orbit at the census epoch are said to have been caused by $C_1.$ This is because if $C_1$ hadn't taken place, $C_2$ wouldn't have taken place either.\nThe effective number of objects $N_{res}$ is computed as the number of fragments $\\geq 10$ cm, $N$, multiplied by the fraction of the orbital period that the fragments spend under the altitude of 2000 km, i.e. within the LEO regime:\n$$N_{res}=N\\times\\frac{\\tau_{LEO}}{\\tau},$$\nwhere $\\tau_{LEO}$ is the time that the object fragments spend under the altitude of 2000 km during every orbit, and $\\tau$ is the orbital period.\nResults\nThis is what the ratio $r$ looks like for all collisions and the subset of catastrophic ones.", "# Compute the ratios.\nallRatios=allColRes/allColGen\ncatRatios=catColRes/catColGen\n# Plot.\nfig=matplotlib.pyplot.figure(figsize=(12,8))\nax=fig.gca()\nmatplotlib.pyplot.grid(linewidth=1)\nax.set_xlabel(r\"$Time\\ (s)$\",fontsize=labelsFontSize)\nax.set_ylabel(r\"$Response\\ (-)$\",fontsize=labelsFontSize)\nax.set_xlim(0,7)\nax.set_ylim(-2,2)\nax.plot(allColGen,allRatios,alpha=1.0,label=r\"$All\\ collisions$\",marker='o',c='k',markersize=1,mew=0,lw=0)\nax.plot(catColGen,catRatios,alpha=1.0,label=r\"$Catastrophic$\",marker='x',c='r',markersize=1,mew=2,lw=0)\nax.set_xlabel(r\"$No.\\ generated\\ fragments\\ \\geq10\\ cm$\",fontsize=labelsFontSize)\nax.set_ylabel(r\"$Resulting-to-generated\\ ratio$\",fontsize=labelsFontSize)\nax.set_xlim(0,12000)\nax.set_ylim(0,10)\nax.ticklabel_format(axis='x', style='sci', scilimits=(-2,-1))\nax.tick_params(axis='both',reset=False,which='both',length=5,width=1.5)\nmatplotlib.pyplot.subplots_adjust(left=0.1,right=0.95,top=0.95,bottom=0.1)\nbox=ax.get_position()\nax.set_position([box.x0+box.width*0.0,box.y0+box.height*0.05,box.width*0.99,box.height*0.88])\nax.legend(bbox_to_anchor=(0.5,1.14),loc='upper 
center',prop={'size':legendFontSize},fancybox=True,\\\n          shadow=True,ncol=3)\nfig.show()", "Not very legible, right? But some things can be observed in the above figure anyway. First of all, there's a \"dip\" in the number of generated fragments around $6.5\times 10^4$. This was caused by the fact that the number of generated fragments, $N$, exceeding a certain length $L_c$ is given by a power law:\n$$ N(L_c)=0.1M^{0.75}L_c^{-1.71},$$\nwhere $M$ is the mass of both objects [1]. This shows that, in spite of 150000 Monte Carlo (MC) runs being used to project the analysed scenario over 200 years, there simply weren't many collisions that involved two objects with masses that resulted in around $6.5\times10^4$ fragments. This is because the distribution of the masses of objects in orbit isn't continuous and not every combination of collided masses is equally likely.\nNow, let us bin the ratios inside fixed-width bins and compute the mean and median inside each bin to see if the mean is close to the median, i.e. 
whether the distribution is close to normal-ish:", "bins=numpy.arange(0,allColGen.max(),500)\nmeans=numpy.zeros(bins.size-1)\nmedians=numpy.zeros(bins.size-1)\nmeansCat=numpy.zeros(bins.size-1)\nmediansCat=numpy.zeros(bins.size-1)\nfor i in range(bins.size-1):\n means[i]=numpy.mean(allRatios[(allColGen>=bins[i]) & (allColGen<bins[i+1])])\n medians[i]=numpy.median(allRatios[(allColGen>=bins[i]) & (allColGen<bins[i+1])])\n meansCat[i]=numpy.mean(catRatios[(catColGen>=bins[i]) & (catColGen<bins[i+1])])\n mediansCat[i]=numpy.median(catRatios[(catColGen>=bins[i]) & (catColGen<bins[i+1])])\n\n# Plot.\nfig=matplotlib.pyplot.figure(figsize=(14,8))\nax=fig.gca()\nmatplotlib.pyplot.grid(linewidth=2)\nax.plot(bins[:-1],means,alpha=1.0,label=r\"$Mean,\\ all$\",marker=None,c='k',lw=3,ls='--')\nax.plot(bins[:-1],medians,alpha=1.0,label=r\"$Median,\\ all$\",marker=None,c='k',lw=3,ls=':')\nax.plot(bins[:-1],meansCat,alpha=1.0,label=r\"$Mean,\\ catastrophic$\",marker=None,c='r',lw=3,ls='--')\nax.plot(bins[:-1],mediansCat,alpha=1.0,label=r\"$Median,\\ catastrophic$\",marker=None,c='r',lw=3,ls=':')\nax.set_xlabel(r\"$No.\\ generated\\ fragments\\ \\geq10\\ cm$\",fontsize=labelsFontSize)\nax.set_ylabel(r\"$Resulting-to-generated\\ ratio$\",fontsize=labelsFontSize)\nax.set_xlim(0,12000)\nax.set_ylim(0,1)\nax.ticklabel_format(axis='x', style='sci', scilimits=(-2,-1))\nax.tick_params(axis='both',reset=False,which='both',length=5,width=1.5)\nmatplotlib.pyplot.subplots_adjust(left=0.1,right=0.95,top=0.92,bottom=0.1)\nbox=ax.get_position()\nax.set_position([box.x0+box.width*0.0,box.y0+box.height*0.05,box.width*0.99,box.height*0.88])\nax.legend(bbox_to_anchor=(0.5,1.18),loc='upper center',prop={'size':legendFontSize},fancybox=True,\\\n shadow=True,ncol=2)\nfig.show()", "The more fragments were generated in a collision, the fewer fragments a given collision gave rise to in the final population. 
This seems counter-intuitive because large collisions are expected to contribute more to the long-term growth of the debris population than others, which generate fewer fragments [2]. However, these plots do not contradict this thesis about the reasons for the growth of the debris population because they do not show which collisions will drive the predicted growth of the number of objects in orbit. Rather, they show which collisions are likely to result in many follow-on collisions that will amplify the number of fragments that the collisions generate, thus fuelling the \"Kessler syndrome\"[3]. They do not necessarily say that the number of resulting fragments will be large in absolute terms.\nThe mean in every analysed bin was considerably different from the median, meaning that the distributions in every bin were far from normal. This means that in every bin relatively few collisions resulted in many follow-on collisions that increased the ratio $r$. This is what the distribution of the ratio $r$ looks like in every bin of the no. 
generated fragments:", "ratioBins=numpy.linspace(0,2,100)\n# Get colours for every bin of the number of generated fragments.\ncNorm=matplotlib.colors.Normalize(vmin=0, vmax=bins.size-1)\nscalarMap=matplotlib.cm.ScalarMappable(norm=cNorm,cmap=cm)\nhistColours=[]\nfor i in range(0,bins.size-1):\n histColours.append(scalarMap.to_rgba(i))\n# Plot the histograms.\nfig=matplotlib.pyplot.figure(figsize=(14,8))\nax=fig.gca()\nmatplotlib.pyplot.grid(linewidth=2)\nax.set_xlabel(r\"$Resulting-to-generated\\ ratio$\",fontsize=labelsFontSize)\nax.set_ylabel(r\"$Fraction\\ of\\ collisions$\",fontsize=labelsFontSize)\nfor i in range(bins.size-1):\n ax.hist(allRatios[(allColGen>=bins[i]) & (allColGen<bins[i+1])],\\\n ratioBins,normed=1,cumulative=1,histtype='step',ls='solid',\\\n color=histColours[i],label=r\"${}-{},\\ all$\".format(bins[i],bins[i+1]))\n ax.hist(catRatios[(catColGen>=bins[i]) & (catColGen<bins[i+1])],\\\n ratioBins,normed=1,cumulative=1,histtype='step',ls='dashed',\\\n color=histColours[i],label=r\"${}-{},\\ cat$\".format(bins[i],bins[i+1]))\nax.set_xlim(0,2)\nax.ticklabel_format(axis='y', style='sci', scilimits=(-2,-1))\nax.tick_params(axis='both',reset=False,which='both',length=5,width=1.5)\nmatplotlib.pyplot.subplots_adjust(left=0.1,right=0.95,top=0.92,bottom=0.1)\nbox=ax.get_position()\nax.set_position([box.x0+box.width*0.0,box.y0+box.height*0.05,box.width*0.99,box.height*0.6])\nax.legend(bbox_to_anchor=(0.5,1.8),loc='upper center',prop={'size':legendFontSize},fancybox=True,\\\n shadow=True,ncol=5)\nfig.show()", "Most collisions had a ratio of less than $2.0$. Only", "numpy.sum(allRatios>=2.0)/float(allRatios.size)*100", "percent of collisions generated twice as many or more fragments in the final population than they generated themselves." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
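The power law quoted in the notebook above, $N(L_c)=0.1M^{0.75}L_c^{-1.71}$, is easy to evaluate directly. A small sketch — the 1000 kg combined mass is an arbitrary illustrative value, not a figure from the study:

```python
def n_fragments(mass_kg, lc_m=0.1):
    """Number of fragments with characteristic length >= lc_m (metres)
    produced by a catastrophic collision, per the power law
    N(Lc) = 0.1 * M**0.75 * Lc**-1.71, with M the combined mass in kg."""
    return 0.1 * mass_kg ** 0.75 * lc_m ** (-1.71)

# Fragments >= 10 cm for an illustrative 1000 kg combined collision mass.
print(round(n_fragments(1000.0)))
```

The exponents make the count grow sub-linearly with the combined mass and steeply as the length cut-off shrinks, which is why the notebook counts only fragments ≥ 10 cm and why the mass distribution of the colliding objects shapes the "dip" discussed above.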
nicholsonjohnc/jupyter
deep_image_segmentation_with_convolutional_neural_networks.ipynb
mit
[ "Deep Image Segmentation with Convolutional Neural Networks (CNNs)\nImage segmentation\nHere, we focus on using Convolutional Neural Networks or CNNs for segmenting images. Specifically, we use Python and Keras (with TensorFlow as backend) to implement a CNN capable of segmenting lungs in CT scan images with ~94% accuracy.\nConvolutional neural networks\nCNNs are a special kind of neural network inspired by the brain’s visual cortex. So it should come as no surprise that they excel at visual tasks. CNNs have layers. Each layer learns higher-level features from the previous layer. This layered architecture is analogous to how, in the visual cortex, higher-level neurons react to higher-level patterns that are combinations of lower-level patterns generated by lower-level neurons. Also, unlike traditional Artificial Neural Networks or ANNs, which typically consist of fully connected layers, CNNs consist of partially connected layers. In fact, in CNNs, neurons in one layer typically only connect to a few neighboring neurons from the previous layer. This partially connected architecture is analogous to how so-called cortical neurons in the visual cortex only react to stimuli in their receptive fields, which overlap to cover the entire visual field. Partial connectivity has computational benefits as well, since, with fewer connections, fewer weights need to be learned during training. 
This allows CNNs to handle larger images than traditional ANNs.\nImport libraries and initialize Keras", "import os\nimport numpy as np\nnp.random.seed(123)\nimport pandas as pd\nfrom glob import glob\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport keras.backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, BatchNormalization, UpSampling2D\nfrom keras.utils import np_utils\nfrom skimage.io import imread\nfrom sklearn.model_selection import train_test_split\n\n# set channels first notation\nK.set_image_dim_ordering('th')", "Importing, downsampling, and visualizing data", "# Get paths to all images and masks.\nall_image_paths = glob('E:\\\\data\\\\lungs\\\\2d_images\\\\*.tif')\nall_mask_paths = glob('E:\\\\data\\\\lungs\\\\2d_masks\\\\*.tif')\nprint(len(all_image_paths), 'image paths found')\nprint(len(all_mask_paths), 'mask paths found')\n\n# Define function to read in and downsample an image.\ndef read_image(path, sampling=1): return np.expand_dims(imread(path)[::sampling, ::sampling],0)\n\n# Import and downsample all images and masks.\nall_images = np.stack([read_image(path, 4) for path in all_image_paths], 0)\nall_masks = np.stack([read_image(path, 4) for path in all_mask_paths], 0) / 255.0\nprint('Image resolution is', all_images[1].shape)\nprint('Mask resolution is', all_masks[1].shape)\n\n# Visualize an example CT image and manual segmentation. 
\nexample_no = 1\nfig, ax = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(10,5))\nax[0].imshow(all_images[example_no, 0], cmap='Blues')\nax[0].set_title('CT image', fontsize=18)\nax[0].tick_params(labelsize=16)\nax[1].imshow(all_masks[example_no, 0], cmap='Blues')\nax[1].set_title('Manual segmentation', fontsize=18)\nax[1].tick_params(labelsize=16)", "Split data into training and validation sets", "X_train, X_test, y_train, y_test = train_test_split(all_images, all_masks, test_size=0.1)\nprint('Training input is', X_train.shape)\nprint('Training output is {}, min is {}, max is {}'.format(y_train.shape, y_train.min(), y_train.max()))\nprint('Testing set is', X_test.shape)", "Create CNN model", "# Create a sequential model, i.e. a linear stack of layers.\nmodel = Sequential()\n\n# Add a 2D convolution layer.\nmodel.add(\n Conv2D(\n filters=32, \n kernel_size=(3, 3), \n activation='relu', \n input_shape=all_images.shape[1:],\n padding='same'\n )\n)\n\n# Add a 2D convolution layer.\nmodel.add(\n Conv2D(filters=64, \n kernel_size=(3, 3), \n activation='sigmoid', \n input_shape=all_images.shape[1:],\n padding='same'\n )\n)\n\n# Add a max pooling layer.\nmodel.add(\n MaxPooling2D(\n pool_size=(2, 2), \n padding='same'\n )\n)\n\n# Add a dense layer.\nmodel.add(\n Dense(\n 64, \n activation='relu'\n )\n)\n\n# Add a 2D convolution layer.\nmodel.add(\n Conv2D(\n filters=1, \n kernel_size=(3, 3), \n activation='sigmoid', \n input_shape=all_images.shape[1:],\n padding='same'\n )\n)\n\n# Add a 2D upsampling layer.\nmodel.add(\n UpSampling2D(\n size=(2,2)\n )\n)\n\nmodel.compile(\n loss='binary_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy','mse']\n)\n\nprint(model.summary())", "Train CNN model", "history = model.fit(X_train, y_train, validation_split=0.10, epochs=10, batch_size=10)\n\ntest_no = 7\nfig, ax = plt.subplots(nrows=1, ncols=3, sharex='col', sharey='row', figsize=(15,5))\nax[0].imshow(X_test[test_no,0], 
cmap='Blues')\nax[0].set_title('CT image', fontsize=18)\nax[0].tick_params(labelsize=16)\nax[1].imshow(y_test[test_no,0], cmap='Blues')\nax[1].set_title('Manual segmentation', fontsize=18)\nax[1].tick_params(labelsize=16)\nax[2].imshow(model.predict(X_test)[test_no,0], cmap='Blues')\nax[2].set_title('CNN segmentation', fontsize=18)\nax[2].tick_params(labelsize=16)" ]
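The `read_image` helper above downsamples by plain stride slicing and prepends a channel axis. A minimal sketch of that step with a synthetic array (no TIFF files or `skimage` assumed — `fake_ct` is a made-up stand-in for one CT slice):

```python
import numpy as np

def downsample(img, sampling=1):
    # Keep every `sampling`-th pixel along both axes and add a leading
    # channel axis, mirroring the notebook's read_image helper.
    return np.expand_dims(img[::sampling, ::sampling], 0)

# A synthetic 512x512 "CT slice" instead of a real .tif file.
fake_ct = np.arange(512 * 512, dtype=np.float32).reshape(512, 512)
small = downsample(fake_ct, 4)
print(small.shape)  # (1, 128, 128)
```

With `sampling=4`, a 512x512 slice becomes 128x128, which is why the notebook reports that shape for its downsampled images.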
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ITAM-DS/analisis-numerico-computo-cientifico
libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb
apache-2.0
[ "(COMPC)=\n5.3 Compiling to C\n```{admonition} Notes for the docker container:\nDocker command to run this note locally:\nnote: replace &lt;ruta a mi directorio&gt; with the directory path you want mapped to /datos inside the docker container, and &lt;versión imagen de docker&gt; with the most recent version shown in the documentation.\ndocker run --rm -v &lt;ruta a mi directorio&gt;:/datos --name jupyterlab_optimizacion_2 -p 8888:8888 -d palmoreck/jupyterlab_optimizacion_2:&lt;versión imagen de docker&gt;\npassword for jupyterlab: qwerty\nStop the docker container:\ndocker stop jupyterlab_optimizacion_2\nDocumentation for the docker image palmoreck/jupyterlab_optimizacion_2:&lt;versión imagen de docker&gt; at this link.\n```\n\n```{admonition} By the end of this note the reader:\n:class: tip\n\n\nWill understand the differences between programming languages that are interpreted and those that require/perform compilation steps.\n\n\nWill understand why declaring value types in interpreted languages leads to shorter execution times.\n\n\nWill learn what ahead-of-time (AOT) and just-in-time (JIT) compilation are. Examples of languages and packages that perform both kinds of compilation will be shown.\n\n\n```\nThe code and its executions are presented on an m4.16xlarge machine with an ubuntu 20.04 AMI - ami-042e8287309f5df03 on the AWS cloud. The script_profiling_and_BLAS.sh script was used in the User data section.\nThe m4.16xlarge machine has the following characteristics:", "%%bash\nlscpu\n\n%%bash\nsudo lshw -C memory\n\n%%bash\nuname -ar #r for kernel, a for all", "```{admonition} Observation\n:class: tip\nThe previous cell used the %%bash magic command. Some magic commands can also be used with import. 
See ipython-magics\n```\nCharacteristics of programming languages\nProgramming languages and their implementations have characteristics such as the following:\n\n\nParse the instructions and execute them almost immediately (interpreter). An example is the language Beginners' All-purpose Symbolic Instruction Code: BASIC.\n\n\nParse the instructions, translate them to an intermediate representation (IR) and execute them. The translation to an intermediate representation is a bytecode. An example is the Python language in its CPython implementation.\n\n\nCompile the instructions ahead of time (AOT) before execution. Examples are the languages C, C++ and Fortran.\n\n\nParse the instructions and compile them just in time (JIT) at runtime. Examples are the Julia language and Python in its PyPy implementation.\n\n\nHow fast instructions execute depends on the language, the implementation of it, and their features.\n```{admonition} Comments\n\n\nSeveral projects are under development to improve efficiency, among other goals. Some of them are:\n\n\nPyPy\n\n\nA better API for extending Python in C: hpyproject\n\n\n\n\nThe CPython implementation of Python is the standard one, but there are others such as PyPy. See python-vs-cpython for a brief explanation of Python implementations. See Alternative R implementations and R implementations for R implementations other than the standard one.\n\n\n```\nCpython\n<img src="https://dl.dropboxusercontent.com/s/6quwf6c2ci5ey0n/cpython.png?dl=0" height="900" width="900">\nAOT and JIT compilation\n```{margin}\nIt is common to use the word library instead of package in the context of compilation.\n```\nAn AOT compilation creates a library, specialized for our machines, that can be used instantly. 
An example of the above is Cython, a package that compiles Python modules. For instance, the NumPy, SciPy or Scikit-learn libraries installed via pip or conda use Cython to compile sections of those libraries adapted to our machines.\nA JIT compilation requires no "up-front work" on our side; compilation happens while the code is being used, at runtime. Informally, in a JIT compilation, code execution starts by identifying sections that can be compiled, and those sections therefore run more slowly than usual at first because compilation is happening at execution time. In subsequent runs of the same code, however, those sections become faster. In short, a warm-up is required; see for example how-fast-is-pypy.\nAOT compilation gives the best speedups but demands more work on our side. JIT compilation gives good speedups with little intervention from us, but it uses more memory and takes longer to start executing the code; see for example python_performance-slide-15 about PyPy issues. \nFor frequent execution of small scripts, AOT compilation is a better option than JIT compilation; see for example couldn't the jit dump and reload already compiled machine code. \nBelow, executions in different languages with their standard implementations are presented to approximate the area under the curve of $f(x) = e^{-x^2}$ on the interval $[0, 1]$ with the composite rectangle (midpoint) rule. 
Se mide el tiempo de ejecución utilizando $n = 10^7$ nodos.\nPython", "%%file Rcf_python.py\nimport math\nimport time\ndef Rcf(f,a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n sum_res = 0\n for i in range(n):\n x = a+(i+1/2)*h_hat\n sum_res += f(x)\n return h_hat*sum_res\n\nif __name__ == \"__main__\": \n n = 10**7\n f = lambda x: math.exp(-x**2)\n a = 0\n b = 1\n start_time = time.time()\n res = Rcf(f,a,b,n)\n end_time = time.time()\n secs = end_time-start_time\n print(\"Rcf tomó\", secs, \"segundos\" )\n\n%%bash\npython3 Rcf_python.py", "R", "%%file Rcf_R.R\nRcf<-function(f,a,b,n){\n '\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n \n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n '\n \n h_hat <- (b-a)/n\n sum_res <- 0\n for(i in 0:(n-1)){\n x <- a+(i+1/2)*h_hat\n sum_res <- sum_res + f(x)\n }\n approx <- h_hat*sum_res\n}\nn <- 10**7\nf <- function(x)exp(-x^2)\na <- 0\nb <- 1\nsystem.time(Rcf(f,a,b,n))\n\n%%bash\nRscript Rcf_R.R", "Julia\nVer: Julia: performance-tips", "%%file Rcf_julia.jl\n\"\"\"\nCompute numerical approximation using rectangle or mid-point\nmethod in an interval.\n\n# Arguments\n\n- `f::Float`: function expression of integrand.\n- 
`a::Float`: left point of interval.\n- `b::Float`: right point of interval.\n- `n::Integer`: number of subintervals.\n\"\"\"\nfunction Rcf(f, a, b, n)\n h_hat = (b-a)/n\n sum_res = 0\n for i in 0.0:n-1\n x = a+(i+1/2)*h_hat\n sum_res += f(x)\n end \n return h_hat*sum_res\nend\nfunction main()\n a = 0\n b = 1\n n =10^7\n f(x) = exp(-x^2)\n res(f, a, b, n) = @time Rcf(f, a, b, n)\n println(res(f, a, b, n))\n println(res(f, a, b, n))\nend\n\nmain()\n\n%%bash\n/usr/local/julia-1.7.1/bin/julia Rcf_julia.jl", "(RCFJULIATYPEDVALUES)=\nRcf_julia_typed_values.jl", "%%file Rcf_julia_typed_values.jl\n\"\"\"\nCompute numerical approximation using rectangle or mid-point\nmethod in an interval.\n\n# Arguments\n\n- `f::Float`: function expression of integrand.\n- `a::Float`: left point of interval.\n- `b::Float`: right point of interval.\n- `n::Integer`: number of subintervals.\n\"\"\"\nfunction Rcf(f, a, b, n)\n h_hat = (b-a)/n\n sum_res = 0.0\n for i in 0:n-1\n x = a+(i + 1/2)*h_hat\n sum_res += f(x)\n end \n return h_hat*sum_res\nend\nfunction main()\n a = 0.0\n b = 1.0\n n =10^7\n f(x) = exp(-x^2)\n res(f, a, b, n) = @time Rcf(f, a, b, n)\n println(res(f, a, b, n))\n println(res(f, a, b, n))\nend\n\nmain()\n\n%%bash\n/usr/local/julia-1.7.1/bin/julia Rcf_julia_typed_values.jl", "(RCFJULIANAIVE)=\nRcf_julia_naive.jl", "%%file Rcf_julia_naive.jl\n\"\"\"\nCompute numerical approximation using rectangle or mid-point\nmethod in an interval.\n\n# Arguments\n\n- `f::Float`: function expression of integrand.\n- `a::Float`: left point of interval.\n- `b::Float`: right point of interval.\n- `n::Integer`: number of subintervals.\n\"\"\"\nfunction Rcf(f, a, b, n)\n h_hat = (b-a)/n\n sum_res = 0\n for i in 0:n-1\n x = a+(i + 1/2)*h_hat\n sum_res += f(x)\n end \n return h_hat*sum_res\nend\nfunction main()\n a = 0\n b = 1\n n =10^7\n f(x) = exp(-x^2)\n res(f, a, b, n) = @time Rcf(f, a, b, n)\n println(res(f, a, b, n))\n println(res(f, a, b, 
n))\nend\n\nmain()\n\n%%bash\n/usr/local/julia-1.7.1/bin/julia Rcf_julia_naive.jl", "C\nPara la medición de tiempos se utilizaron las ligas: measuring-time-in-millisecond-precision y find-execution-time-c-program.\n(RCFC)=\nRcf_c.c", "%%file Rcf_c.c\n#include<stdio.h>\n#include<stdlib.h>\n#include<math.h>\n#include<time.h>\n#include <sys/time.h>\n\nvoid Rcf(double ext_izq, double ext_der, int n,\\\n double *sum_res_p);\ndouble f(double nodo);\n\nint main(int argc, char *argv[]){\n double sum_res = 0.0;\n double a = 0.0, b = 1.0;\n int n = 1e7;\n struct timeval start;\n struct timeval end;\n long seconds;\n long long mili;\n \n gettimeofday(&start, NULL);\n Rcf(a,b,n,&sum_res);\n gettimeofday(&end, NULL);\n seconds = (end.tv_sec - start.tv_sec);\n mili = 1000*(seconds) + (end.tv_usec - start.tv_usec)/1000; \n printf(\"Tiempo de ejecución: %lld milisegundos\", mili);\n \n return 0;\n}\nvoid Rcf(double a, double b, int n, double *sum){\n double h_hat = (b-a)/n;\n double x = 0.0;\n int i = 0;\n *sum = 0.0;\n for(i = 0; i <= n-1; i++){\n x = a+(i+1/2.0)*h_hat;\n *sum += f(x);\n }\n *sum = h_hat*(*sum);\n}\ndouble f(double nodo){\n double valor_f;\n valor_f = exp(-pow(nodo,2));\n return valor_f;\n}\n\n\n%%bash\ngcc -Wall Rcf_c.c -o Rcf_c.out -lm\n\n%%bash\n./Rcf_c.out", "¿Por qué dar información sobre el tipo de valores (u objetos) que se utilizan en un código ayuda a que su ejecución sea más rápida?\nPython es dynamically typed que se refiere a que un objeto de cualquier tipo y cualquier statement que haga referencia a un objeto, pueden cambiar su tipo. Esto hace difícil que la máquina virtual pueda optimizar la ejecución del código pues no se conoce qué tipo será utilizado para las operaciones futuras. Por ejemplo:", "v = -1.0\n\nprint(type(v), abs(v))\n\nv = 1 - 1j\n\nprint(type(v), abs(v))", "La función abs trabaja diferente dependiendo del tipo de objeto. 
For an integer or floating-point number it returns the absolute value (for $-1.0$ it returns $1.0$), and for a complex number it computes a Euclidean norm from the real and imaginary parts of $v$: $\text{abs}(v) = \sqrt{v.real^2 + v.imag^2}$.\nIn practice this implies executing more instructions and therefore longer execution times. Before calling abs on the variable, Python checks its type and decides which method to call (overhead).\n```{admonition} Comments\n\n\nMoreover, every number in Python is wrapped up in a high-level Python object. For example, for an integer there is the int object. That object has other functions, for example __str__ to print it.\n\n\nIt is very common that types do not change within a code, so AOT compilation is a good option for faster execution.\n\n\nFollowing the two previous comments, if we only want to compute mathematical operations (as in the square-root case above) we do not need the functionality of the high-level object.\n\n\n```\nCython\n\nIt is a compiler that translates annotated instructions, written in a hybrid of Python and C, into a compiled module. This module can be imported like a regular Python module using import. Typically the compiled module is similar in syntax to the C language.\n\n```{margin}\nThe phrase CPU-bound code means code whose execution involves a larger share of CPU use than of memory or I/O use.\n```\n\nIt has been around for a while (since roughly 2007), is widely used, and is among the preferred tools for CPU-bound code. It is a fork of Pyrex (2002) that expands its capabilities.\n\n```{admonition} Comment\nPyrex, in simple terms, is Python with C-style value typing. 
Pyrex translates code written in Python to C code (which avoids using the Python/C API) and allows declaring parameters or values with C value types.\n```\n\n\nIt requires knowledge of the C language, which must be taken into account in a software development team, and it is suggested to use it on small sections of the code.\n\n\nIt supports the OpenMP API to take advantage of a machine's multiple cores.\n\n\nIt can be used via a setup.py script that compiles a module to be used with import, and it can also be used in IPython via a magic command.\n\n\n<img src="https://dl.dropboxusercontent.com/s/162u0zcfpm8lewu/cython.png?dl=0" height="900" width="900">\n```{admonition} Comment\nIn the compile-to-machine-code step of the diagram above, details were omitted such as: creation of a .c file and compilation of that file with the gcc compiler into the compiled module (on Unix systems it has the .so extension).\nSee machine code\n```\n\nCython and the gcc compiler analyze the annotated code to determine which instructions can be optimized through an AOT compilation.\n\nIn which cases, and what kind of speed gains, can we expect when using Cython?\n\nOne case is code with many loops performing mathematical operations that are typically not vectorized or cannot be vectorized. That is, code whose instructions are basically plain Python without external packages. Moreover, if the variables in the loop do not change type (for example from int to float), then the code will gain speed when compiled to machine code.\n\n```{admonition} Observation\n:class: tip\nIf your Python code calls vectorized operations via NumPy, it may be that your code does not run faster after compiling it. 
Mainly because many intermediate objects will probably not be created, which is a feature of NumPy.\n```\n\n\nWe do not expect a speedup after compiling for calls to external libraries (for example packages that manage databases). It is also unlikely to obtain significant gains in programs with a high I/O load.\n\n\nIn general it is unlikely that your compiled code will run faster than "well written" C code, and it is also unlikely to run slower. It is quite possible for the C code generated from Python via Cython to reach the speeds of code written in C, unless the person who programmed in C has deep knowledge of ways to tune the C code to the architecture of the machine on which the code runs.\n\n\nExample using a setup.py file", "import math\nimport time\n\nfrom pytest import approx\nfrom scipy.integrate import quad\nfrom IPython.display import HTML, display", "For this case we need three files:\n1. The code to be compiled, in a file with the .pyx extension (written in Python). \n```{admonition} Observation\n:class: tip\nThe .pyx extension is used by the Pyrex language. 
\n```\n2.Un archivo setup.py que contiene las instrucciones para llamar a Cython y se encarga de crear el módulo compilado.\n3.El código escrito en Python que importará el módulo compilado.\nArchivo .pyx:", "%%file Rcf_cython.pyx\ndef Rcf(f,a,b,n): #Rcf: rectángulo compuesto para f\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n nodes = [a+(i+1/2)*h_hat for i in range(n)]\n sum_res = 0\n for node in nodes:\n sum_res = sum_res+f(node)\n return h_hat*sum_res", "Archivo setup.py que contiene las instrucciones para el build:", "%%file setup.py\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\n\nsetup(ext_modules = cythonize(\"Rcf_cython.pyx\", \n compiler_directives={'language_level' : 3})\n )", "Compilar desde la línea de comandos:", "%%bash\npython3 setup.py build_ext --inplace", "Importar módulo compilado y ejecutarlo:", "f=lambda x: math.exp(-x**2) #using math library\n\nn = 10**7\na = 0\nb = 1\n\nimport Rcf_cython\n\nstart_time = time.time()\nres = Rcf_cython.Rcf(f, a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf tomó\",secs,\"segundos\" )\n\nobj, err = quad(f, a, b)\n\nprint(res == approx(obj))", "Comando de magic %cython\n```{margin}\nVer extensions-bundled-with-ipython para extensiones que antes se incluían en Ipython.\n```\nAl instalar Cython se incluye tal comando. 
Al ejecutarse crea el archivo .pyx, lo compila con setup.py e importa en el notebook.", "%load_ext Cython\n\n%%cython\ndef Rcf(f,a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n nodes = [a+(i+1/2)*h_hat for i in range(n)]\n sum_res = 0\n for node in nodes:\n sum_res = sum_res+f(node)\n return h_hat*sum_res\n\nstart_time = time.time()\nres = Rcf(f, a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf tomó\",secs,\"segundos\" )\n\nobj, err = quad(f, a, b)\n\nprint(res == approx(obj))", "Anotaciones para analizar un bloque de código\nCython tiene la opción de annotation para generar un archivo con extensión .html en el que cada línea puede ser expandida haciendo un doble click que mostrará el código C generado. Líneas \"más amarillas\" refieren a más llamadas en la máquina virtual de Python, mientras que líneas más blancas significan \"más código en C y no Python\".\nEl objetivo es remover la mayor cantidad de líneas amarillas posibles pues son costosas en tiempo. Si tales líneas están dentro de loops serán todavía más costosas. Al final se busca tener códigos cuyas anotaciones sean lo más blancas posibles. \n```{admonition} Observación\n:class: tip\nConcentra tu atención en las líneas que son amarillas y están dentro de los loops, no inviertas tiempo en líneas amarillas que están fuera de loops y que no causan una ejecución lenta. 
Una ayuda para identificar lo anterior la da el perfilamiento.\n```\nEjemplo vía línea de comando", "%%bash\n$HOME/.local/bin/cython --force -3 --annotate Rcf_cython.pyx", "Ver archivo creado: Rcf_cython.html\n```{margin}\nLa liga correcta del archivo Rcf_cython.c es Rcf_cython.c\n```", "display(HTML(\"Rcf_cython.html\"))", "```{admonition} Comentarios\nPara el código anterior el statement en donde se crean los nodos involucra un loop y es \"muy amarilla\". Si se perfila el código se verá que es una línea en la que se gasta una buena parte del tiempo total de ejecución del código.\n```\nUna primera opción que tenemos es crear los nodos para el método de integración dentro del loop y separar el llamado a la list comprehension nodes=[a+(i+1/2)*h_hat for i in range(n)]:", "%%file Rcf_2_cython.pyx\ndef Rcf(f,a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n sum_res = 0\n for i in range(n):\n x = a+(i+1/2)*h_hat\n sum_res += f(x)\n return h_hat*sum_res\n\n%%bash\n$HOME/.local/bin/cython --force -3 --annotate Rcf_2_cython.pyx", "```{margin}\nLa liga correcta del archivo Rcf_2_cython.c es Rcf_2_cython.c\n```", "display(HTML(\"Rcf_2_cython.html\"))", "```{admonition} Comentario\nPara el código anterior los statements que están dentro del loop son \"muy amarillos\". En tales statements involucran tipos de valores que no cambiarán en la ejecución de cada loop. Una opción es declarar los tipos de objetos que están involucrados en el loop utilizando la sintaxis cdef. 
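The refactor introduced in Rcf_2_cython.pyx — generating each node inside the loop instead of materialising the intermediate `nodes` list — can be checked in plain Python before compiling; both variants should agree to rounding (a sketch, independent of Cython):

```python
import math

def rcf_list(f, a, b, n):
    # Original version: builds an intermediate list of nodes first.
    h_hat = (b - a) / n
    nodes = [a + (i + 1 / 2) * h_hat for i in range(n)]
    return h_hat * sum(f(x) for x in nodes)

def rcf_loop(f, a, b, n):
    # Refactored version: each node is generated inside the loop, no list.
    h_hat = (b - a) / n
    sum_res = 0.0
    for i in range(n):
        sum_res += f(a + (i + 1 / 2) * h_hat)
    return h_hat * sum_res

f = lambda x: math.exp(-x ** 2)
print(rcf_list(f, 0, 1, 10_000), rcf_loop(f, 0, 1, 10_000))
```

Both approximate $\int_0^1 e^{-x^2}dx = \frac{\sqrt{\pi}}{2}\operatorname{erf}(1)$; the loop version simply avoids allocating $n$ Python floats up front.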
Ver function_declarations, definition-of-def-cdef-and-cpdef-in-cython\n```", "%%file Rcf_3_cython.pyx\ndef Rcf(f, double a, double b, unsigned int n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n cdef unsigned int i\n cdef double x, sum_res, h_hat\n h_hat = (b-a)/n\n sum_res = 0\n for i in range(n):\n x = a+(i+1/2)*h_hat\n sum_res += f(x)\n return h_hat*sum_res\n\n%%bash\n$HOME/.local/bin/cython -3 --force --annotate Rcf_3_cython.pyx", "```{margin}\nLa liga correcta del archivo Rcf_3_cython.c es Rcf_3_cython.c\n```", "display(HTML(\"Rcf_3_cython.html\"))", "```{admonition} Comentario\nAl definir tipos, éstos sólo serán entendidos por Cython y no por Python. 
Cython utiliza estos tipos para convertir el código de Python a código de C.\n```\nUna opción con la que perdemos flexibilidad pero ganamos en disminuir tiempo de ejecución es directamente llamar a la función math.exp:", "%%file Rcf_4_cython.pyx\nimport math\ndef Rcf(double a, double b, unsigned int n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n cdef unsigned int i\n cdef double x, sum_res, h_hat\n h_hat = (b-a)/n\n sum_res = 0\n for i in range(n):\n x = a+(i+1/2)*h_hat\n sum_res += math.exp(-x**2)\n return h_hat*sum_res\n\n%%bash\n$HOME/.local/bin/cython -3 --force --annotate Rcf_4_cython.pyx", "```{margin}\nLa liga correcta del archivo Rcf_4_cython.c es Rcf_4_cython.c\n```", "display(HTML(\"Rcf_4_cython.html\"))", "Mejoramos el tiempo si directamente utilizamos la función exp de la librería math de Cython, ver calling C functions.\n(RCF5CYTHON)=\nRcf_5_cython.pyx", "%%file Rcf_5_cython.pyx\nfrom libc.math cimport exp as c_exp\n\ncdef double f(double x) nogil:\n return c_exp(-x**2)\n \ndef Rcf(double a, double b, unsigned int n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n cdef unsigned int i\n cdef double x, sum_res, h_hat\n h_hat = (b-a)/n\n sum_res = 0\n for i in range(n):\n x = a+(i+1/2)*h_hat\n sum_res += 
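Calling `math.exp` inside the loop pays an attribute lookup on every iteration. A related pure-Python micro-optimisation (a sketch only, not the Cython path taken below) is hoisting the lookup into a local name; both versions perform identical floating-point operations:

```python
import math

def rcf_attr(a, b, n):
    # Attribute lookup math.exp happens on every pass through the loop.
    h_hat = (b - a) / n
    s = 0.0
    for i in range(n):
        s += math.exp(-(a + (i + 1 / 2) * h_hat) ** 2)
    return h_hat * s

def rcf_local(a, b, n):
    exp = math.exp  # lookup hoisted out of the loop, done once
    h_hat = (b - a) / n
    s = 0.0
    for i in range(n):
        s += exp(-(a + (i + 1 / 2) * h_hat) ** 2)
    return h_hat * s

print(rcf_attr(0, 1, 10_000), rcf_local(0, 1, 10_000))
```

Cython's `libc.math cimport` goes further: it removes the Python call entirely and invokes the C `exp` directly.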
f(x)\n return h_hat*sum_res\n\n%%bash\n$HOME/.local/bin/cython -3 --force --annotate Rcf_5_cython.pyx", "```{margin}\nLa liga correcta del archivo Rcf_5_cython.c es Rcf_5_cython.c\n```", "display(HTML(\"Rcf_5_cython.html\"))", "```{admonition} Comentario\nUn tradeoff en la optimización de código se realiza entre flexibilidad, legibilidad y una ejecución rápida del código.\n```", "%%file setup_2.py\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\n\nsetup(ext_modules = cythonize(\"Rcf_2_cython.pyx\", \n compiler_directives={'language_level' : 3})\n )", "Compilar desde la línea de comandos:", "%%bash\npython3 setup_2.py build_ext --inplace\n\n%%file setup_3.py\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\n\nsetup(ext_modules = cythonize(\"Rcf_3_cython.pyx\", \n compiler_directives={'language_level' : 3})\n )", "Compilar desde la línea de comandos:", "%%bash\npython3 setup_3.py build_ext --inplace\n\n%%file setup_4.py\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\n\nsetup(ext_modules = cythonize(\"Rcf_4_cython.pyx\", \n compiler_directives={'language_level' : 3})\n )", "Compilar desde la línea de comandos:", "%%bash\npython3 setup_4.py build_ext --inplace\n\n%%file setup_5.py\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\n\nsetup(ext_modules = cythonize(\"Rcf_5_cython.pyx\", \n compiler_directives={'language_level' : 3})\n )", "Compilar desde la línea de comandos:", "%%bash\npython3 setup_5.py build_ext --inplace", "Importar módulos compilados:", "import Rcf_2_cython, Rcf_3_cython, Rcf_4_cython, Rcf_5_cython\n\nstart_time = time.time()\nres_2 = Rcf_2_cython.Rcf(f, a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_2 tomó\",secs,\"segundos\" )", "Verificamos que después de la optimización de código continuamos resolviendo correctamente el problema:", "print(res_2 == approx(obj))\n\nstart_time = time.time()\nres_3 = Rcf_3_cython.Rcf(f, a, b,n)\nend_time 
= time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_3 tomó\",secs,\"segundos\" )\n\nprint(res_3 == approx(obj))\n\nstart_time = time.time()\nres_4 = Rcf_4_cython.Rcf(a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_4 tomó\",secs,\"segundos\" )\n\nprint(res_4 == approx(obj))\n\nstart_time = time.time()\nres_5 = Rcf_5_cython.Rcf(a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_5 tomó\",secs,\"segundos\" )", "Verificamos que después de la optimización de código continuamos resolviendo correctamente el problema:", "print(res_5 == approx(obj))", "Ejemplo de implementación con NumPy\nComparamos con una implementación usando NumPy y vectorización:", "import numpy as np\n\nf_np = lambda x: np.exp(-x**2)", "(RCFNUMPY)=\nRcf_numpy", "def Rcf_numpy(f,a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n aux_vec = np.linspace(a, b, n+1)\n nodes = (aux_vec[:-1]+aux_vec[1:])/2\n return h_hat*np.sum(f(nodes))\n\nstart_time = time.time()\nres_numpy = Rcf_numpy(f_np, a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_numpy tomó\",secs,\"segundos\" )\n\nprint(res_numpy == approx(obj))", "```{admonition} Comentarios\n\n\nLa implementación con NumPy resulta ser la segunda más rápida principalmente por el uso de bloques contiguos de memoria para almacenar los valores y la vectorización. La implementación anterior, sin embargo, requiere un conocimiento de las funciones de tal paquete. 
Para este ejemplo utilizamos linspace y la funcionalidad de realizar operaciones de forma vectorizada para la creación de los nodos y evaluación de la función. Una situación que podría darse es que para un problema no podamos utilizar alguna función de NumPy o bien no tengamos el ingenio para pensar cómo realizar una operación de forma vectorizada. En este caso Cython puede ser una opción a utilizar.\n\n\nEn Cython se tienen las memoryviews para acceso de bajo nivel a la memoria similar a la que proveen los arrays de NumPy en el caso de requerirse arrays en una forma más general que no sólo sean de NumPy (por ejemplo de C o de Cython, ver Cython arrays).\n\n\n```\n```{admonition} Observación\n:class: tip\nCompárese la implementación vía NumPy con el uso de listas para los nodos. Recuérdese que las listas de Python alojan locaciones donde se pueden encontrar los valores y no los valores en sí. Los arrays de NumPy almacenan tipos de valores primitivos. Las listas tienen data fragmentation que causan memory fragmentation y por tanto un mayor impacto del Von Neumann bottleneck. Además el almacenamiento de tipo de objetos de alto nivel en las listas causa overhead en lugar de almacenamiento de tipo de valores primitivos en un array de NumPy.\n```\nCython y OpenMP\nOpenMP es una extensión al lenguaje C y es una API para cómputo en paralelo en un sistema de memoria compartida, aka, shared memory parallel programming con CPUs. Se revisará con mayor profundidad en la nota de cómputo en paralelo. \n```{margin}\nVer global interpreter lock (GIL), global interpreter lock\n```\nEn Cython, OpenMP se utiliza mediante prange (parallel range). Además debe deshabilitarse el GIL.\n```{admonition} Observación\n:class: tip\nAl deshabilitar el GIL en una sección de código se debe operar con tipos primitivos. 
En tal sección no se debe operar con objetos Python (por ejemplo listas).\n```\n(RCF5CYTHONOPENMP)=\nRcf_5_cython_openmp", "%%file Rcf_5_cython_openmp.pyx\nfrom cython.parallel import prange\nfrom libc.math cimport exp as c_exp\n\ncdef double f(double x) nogil:\n return c_exp(-x**2)\n\ndef Rcf(double a, double b, unsigned int n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n cdef int i\n cdef double x, sum_res, h_hat\n h_hat = (b-a)/n\n sum_res = 0\n for i in prange(n, schedule=\"guided\", nogil=True):\n x = a+(i+1/2)*h_hat\n sum_res += f(x)\n return h_hat*sum_res", "```{admonition} Comentario\nCon prange puede elegirse diferente scheduling. Si schedule recibe el valor static el trabajo a realizar se reparte equitativamente entre los cores y si algunos threads terminan antes permanecerán sin realizar trabajo, aka idle. 
Con dynamic y guided se reparte de manera dinámica at runtime que es útil si la cantidad de trabajo es variable y si threads terminan antes pueden recibir trabajo a realizar.\n```", "%%bash\n$HOME/.local/bin/cython -3 --force Rcf_5_cython_openmp.pyx", "En el archivo setup.py se coloca la directiva -fopenmp.\n```{margin}\nVer Rcf_5_cython_openmp.c para la implementación en C de la función Rcf_5_cython_openmp.Rcf.\n```", "%%file setup_5_openmp.py\nfrom setuptools import Extension, setup\nfrom Cython.Build import cythonize\n\next_modules = [Extension(\"Rcf_5_cython_openmp\",\n [\"Rcf_5_cython_openmp.pyx\"], \n extra_compile_args=[\"-fopenmp\"],\n extra_link_args=[\"-fopenmp\"],\n )\n ]\n\nsetup(ext_modules = cythonize(ext_modules))", "Compilar desde la línea de comandos:", "%%bash\npython3 setup_5_openmp.py build_ext --inplace\n\nimport Rcf_5_cython_openmp\n\nstart_time = time.time()\nres_5_openmp = Rcf_5_cython_openmp.Rcf(a, b, n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_5_openmp tomó\",secs,\"segundos\" )", "Verificamos que después de la optimización de código continuamos resolviendo correctamente el problema:", "print(res_5_openmp == approx(obj))", "```{admonition} Ejercicio\n:class: tip\nImplementar la regla de Simpson compuesta con NumPy, Cython y Cython + OpenMP en una máquina de AWS con las mismas características que la que se presenta en esta nota y medir tiempo de ejecución.\n```\nNumba\n\n\nUtiliza compilación JIT at runtime mediante el compilador llvmlite.\n\n\nPuede utilizarse para funciones built in de Python o de NumPy.\n\n\nTiene soporte para cómputo en paralelo en CPU/GPU.\n\n\nUtiliza CFFI y ctypes para llamar a funciones de C. \n\n\nVer numba architecture para una explicación detallada de su funcionamiento.\n\n\nSe utiliza un decorator para anotar cuál función se desea compilar.\nEjemplo de uso con Numba", "from numba import jit", "```{margin}\nEn glossary: nopython se da la definición de nopython mode en Numba. 
Ahí se indica que se genera código que no usa Python C API y requiere que los tipos de valores nativos de Python puedan ser inferidos. \nPuede usarse el decorator njit que es un alias para @jit(nopython=True).\n```\n(RCFNUMBA)=\nRcf_numba", "@jit(nopython=True)\ndef Rcf_numba(a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n sum_res = 0\n for i in range(n):\n x = a+(i+1/2)*h_hat\n sum_res += np.exp(-x**2)\n return h_hat*sum_res\n\n\nstart_time = time.time()\nres_numba = Rcf_numba(a,b,n)\nend_time = time.time()", "```{margin}\nSe mide dos veces el tiempo de ejecución para no incluir el tiempo de compilación. 
Ver 5minguide.\n```", "secs = end_time-start_time\nprint(\"Rcf_numba con compilación tomó\", secs, \"segundos\" )\n\nstart_time = time.time()\nres_numba = Rcf_numba(a,b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_numba tomó\", secs, \"segundos\" )", "Verificamos que después de la optimización de código continuamos resolviendo correctamente el problema:", "print(res_numba == approx(obj))", "Con la función inspect_types nos ayuda para revisar si pudo inferirse información de los tipos de valores a partir del código escrito.", "print(Rcf_numba.inspect_types())", "Ejemplo de uso de Numba con cómputo en paralelo\nVer numba: parallel, numba: threading layer", "from numba import prange", "(RCFNUMBAPARALLEL)=\nRcf_numba_parallel", "@jit(nopython=True, parallel=True)\ndef Rcf_numba_parallel(a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n sum_res = 0\n for i in prange(n):\n x = a+(i+1/2)*h_hat\n sum_res += np.exp(-x**2)\n return h_hat*sum_res\n\n\nstart_time = time.time()\nres_numba_parallel = Rcf_numba_parallel(a,b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_numba_parallel con compilación tomó\", secs, \"segundos\" )\n\nstart_time = time.time()\nres_numba_parallel = Rcf_numba_parallel(a,b,n)\nend_time = time.time()", "```{margin} \nVer parallel-diagnostics para información relacionada con la ejecución en paralelo. 
Por ejemplo ejecutar Rcf_numba_parallel.parallel_diagnostics(level=4).\n```", "secs = end_time-start_time\nprint(\"Rcf_numba_parallel tomó\", secs, \"segundos\" )", "Verificamos que después de la optimización de código continuamos resolviendo correctamente el problema:", "print(res_numba_parallel == approx(obj))", "Ejemplo Numpy y Numba\nEn el siguiente ejemplo se utiliza la función linspace para auxiliar en la creación de los nodos y obsérvese que Numba sin problema trabaja los ciclos for (en el caso por ejemplo que no hubiéramos podido vectorizar la operación de creación de nodos).", "@jit(nopython=True)\ndef Rcf_numpy_numba(a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n aux_vec = np.linspace(a, b, n+1)\n sum_res = 0\n for i in range(n-1):\n x = (aux_vec[i]+aux_vec[i+1])/2\n sum_res += np.exp(-x**2)\n return h_hat*sum_res\n\nstart_time = time.time()\nres_numpy_numba = Rcf_numpy_numba(a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_numpy_numba con compilación tomó\",secs,\"segundos\" )\n\nstart_time = time.time()\nres_numpy_numba = Rcf_numpy_numba(a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_numpy_numba tomó\",secs,\"segundos\" )\n\nprint(res_numpy_numba == approx(obj))\n\n@jit(nopython=True)\ndef Rcf_numpy_numba_2(a,b,n):\n \"\"\"\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n 
(int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n \"\"\"\n h_hat = (b-a)/n\n aux_vec = np.linspace(a, b, n+1)\n nodes = (aux_vec[:-1]+aux_vec[1:])/2\n return h_hat*np.sum(np.exp(-nodes**2))\n\nstart_time = time.time()\nres_numpy_numba_2 = Rcf_numpy_numba_2(a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_numpy_numba_2 con compilación tomó\",secs,\"segundos\" )\n\nstart_time = time.time()\nres_numpy_numba_2 = Rcf_numpy_numba_2(a, b,n)\nend_time = time.time()\n\nsecs = end_time-start_time\nprint(\"Rcf_numpy_numba_2 tomó\",secs,\"segundos\" )\n\nprint(res_numpy_numba_2 == approx(obj))", "```{admonition} Observación\n:class: tip\nObsérvese que no se mejora el tiempo de ejecución utilizando vectorización y linspace que usando un ciclo for en la implementación anterior Rcf_numpy_numba. De hecho en Rcf_numpy_numba_2 tiene un tiempo de ejecución igual que {ref}Rcf_numpy &lt;RCFNUMPY&gt;.\n```\n```{admonition} Ejercicio\n:class: tip\nImplementar la regla de Simpson compuesta con Numba, Numpy y Numba, Numba con cómputo en paralelo en una máquina de AWS con las mismas características que la que se presenta en esta nota y medir tiempo de ejecución.\n```\nRcpp\nRcpp permite integrar C++ y R de forma sencilla mediante su API.\n¿Por qué usar Rcpp?\nCon Rcpp nos da la posibilidad de obtener eficiencia en ejecución de un código con C++ conservando la flexibilidad de trabajar con R. C o C++ aunque requieren más líneas de código, son órdenes de magnitud más rápidos que R. Sacrificamos las ventajas que tiene R como facilidad de escribir códigos por velocidad en ejecución.\n¿Cuando podríamos usar Rcpp?\n\n\nEn loops que no pueden vectorizarse de forma sencilla. 
Si tenemos loops en los que una iteración depende de la anterior.\n\n\nSi hay que llamar una función millones de veces.\n\n\n¿Por qué no usamos C?\nSí es posible llamar funciones de C desde R pero resulta en más trabajo por parte de nosotros. Por ejemplo, de acuerdo a H. Wickham:\n\"...R’s C API. Unfortunately this API is not well documented. I’d recommend starting with my notes at R’s C interface. After that, read “The R API” in “Writing R Extensions”. A number of exported functions are not documented, so you’ll also need to read the R source code to figure out the details.\"\nY como primer acercamiento a la compilación de código desde R es preferible seguir las recomendaciones de H. Wickham en utilizar la API de Rcpp.\nEjemplo con Rcpp\nEn la siguiente implementación se utiliza vapply que es más rápida que sapply pues se especifica con anterioridad el tipo de valor que devuelve.", "Rcf <- function(f,a,b,n){\n '\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n \n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n '\n h_hat <- (b-a)/n\n sum_res <- 0\n x <- vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1))\n for(j in 1:n){\n sum_res <- sum_res+f(x[j])\n }\n h_hat*sum_res\n}\n\na <- 0\nb <- 1\nf <- function(x)exp(-x^2)\nn <- 10**7\n\nsystem.time(res <- Rcf(f,a,b,n))\n\nerr_relativo <- function(aprox,obj)abs(aprox-obj)/abs(obj)", "```{margin}\nEn la documentación de integrate se menciona que se utilice Vectorize.\n```", "obj <- integrate(Vectorize(f),0,1) \n\nprint(err_relativo(res,obj$value))\n\nRcf_2 <- function(f,a,b,n){\n '\n Compute numerical approximation using rectangle or mid-point\n method in an 
interval.\n \n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n '\n h_hat <- (b-a)/n\n x <- vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1))\n h_hat*sum(f(x))\n}\n\nsystem.time(res_2 <- Rcf_2(f,a,b,n))\n\nprint(err_relativo(res_2,obj$value))\n\nlibrary(Rcpp)", "En Rcpp se tiene la función cppFunction que recibe código escrito en C++ para definir una función que puede ser utilizada desde R. \nPrimero reescribamos la implementación en la que no utilicemos vapply.", "Rcf_3 <- function(f,a,b,n){\n '\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n \n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n '\n h_hat <- (b-a)/n\n sum_res <- 0\n for(i in 0:(n-1)){\n x <- a+(i+1/2)*h_hat\n sum_res <- sum_res+f(x)\n }\n h_hat*sum_res\n}\n\nsystem.time(res_3 <- Rcf_3(f,a,b,n))\n\nprint(err_relativo(res_3,obj$value))", "(RCFRCPP)=\nRcf_Rcpp\nEscribimos source code en C++ que será el primer parámetro que recibirá cppFunction.", "f_str <- 'double Rcf_Rcpp(double a, double b, int n){\n double h_hat;\n double sum_res=0;\n int i;\n double x;\n h_hat=(b-a)/n;\n for(i=0;i<=n-1;i++){\n x = a+(i+1/2.0)*h_hat;\n sum_res += exp(-pow(x,2));\n }\n return h_hat*sum_res;\n }'\n\ncppFunction(f_str)", "Si queremos obtener más información de la ejecución de la línea anterior podemos usar lo siguiente.\n```{margin}\nSe utiliza 
rebuild=TRUE para que se vuelva a compilar, ligar con la librería en C++ y más operaciones de cppFunction.\n```", "cppFunction(f_str, verbose=TRUE, rebuild=TRUE) ", "```{admonition} Comentarios\n\n\nAl ejecutar la línea de cppFunction, Rcpp compilará el código de C++ y construirá una función de R que se conecta con la función compilada de C++. \n\n\nSi se lee la salida de la ejecución con verbose=TRUE se utiliza un tipo de valor SEXP. De acuerdo a H. Wickham:\n\n\n...functions that talk to R must use the SEXP type for both inputs and outputs. SEXP, short for S expression, is the C struct used to represent every type of object in R. A C function typically starts by converting SEXPs to atomic C objects, and ends by converting C objects back to a SEXP. (The R API is designed so that these conversions often don’t require copying.)\n\nLa función Rcpp::wrap convierte objetos de C++ a objetos de R y Rcpp::as viceversa.\n\n```", "system.time(res_4 <- Rcf_Rcpp(a,b,n))\n\nprint(err_relativo(res_4,obj$value))", "Otras funcionalidades de Rcpp\nNumericVector\nEn Rcpp se definen clases para relacionar tipos de valores de R con tipos de valores de C++ para el manejo de vectores. Entre éstas se encuentran NumericVector, IntegerVector, CharacterVector y LogicalVector que se relacionan con vectores tipo numeric, integer, character y logical respectivamente.
\nPor ejemplo, para el caso de NumericVector se tiene el siguiente ejemplo.", "f_str <- 'NumericVector my_f(NumericVector x){\n return exp(log(x));\n }'\n\ncppFunction(f_str)\n\nprint(my_f(seq(0,1,by=.1)))", "Ejemplo con NumericVector\nPara mostrar otro ejemplo en el caso de la regla de integración del rectángulo considérese la siguiente implementación.", "Rcf_implementation_example <- function(f,a,b,n){\n '\n Compute numerical approximation using rectangle or mid-point\n method in an interval.\n \n Nodes are generated via formula: x_i = a+(i+1/2)h_hat for\n i=0,1,...,n-1 and h_hat=(b-a)/n\n Args:\n \n f (float): function expression of integrand.\n \n a (float): left point of interval.\n \n b (float): right point of interval.\n \n n (int): number of subintervals.\n \n Returns:\n \n sum_res (float): numerical approximation to integral\n of f in the interval a,b\n '\n h_hat <- (b-a)/n\n fx <- f(vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1)))\n h_hat*sum(fx)\n}\n\nres_numeric_vector <- Rcf_implementation_example(f,a,b,n)\n\nprint(err_relativo(res_numeric_vector,obj$value))", "Utilicemos Rcpp para definir una función que recibe un NumericVector para realizar la suma.\n```{margin}\nEl método .size() regresa un integer.\n```", "f_str<-'double Rcf_numeric_vector(NumericVector f_x,double h_hat){\n double sum_res=0;\n int i;\n int n = f_x.size();\n for(i=0;i<=n-1;i++){\n sum_res+=f_x[i];\n }\n return h_hat*sum_res;\n }'\n\nh_hat <- (b-a)/n\n\nfx <- f(vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1)))\n\nprint(tail(fx))\n\ncppFunction(f_str,rebuild=TRUE)\n\nres_numeric_vector <- Rcf_numeric_vector(fx,h_hat)\n\nprint(err_relativo(res_numeric_vector,obj$value))", "Otro ejemplo en el que se devuelve un vector tipo NumericVector para crear los nodos.", "f_str <- 'NumericVector Rcf_nodes(double a, double b, int n){\n double h_hat=(b-a)/n;\n int i;\n NumericVector x(n);\n for(i=0;i<n;i++)\n x[i]=a+(i+1/2.0)*h_hat;\n return x;\n 
}'\n\ncppFunction(f_str,rebuild=TRUE)\n\nprint(Rcf_nodes(0,1,2))", "Ejemplo de llamado a función definida en ambiente global con Rcpp\nTambién en Rcpp es posible llamar funciones definidas en el ambiente global, por ejemplo.\n```{margin}\nRObject es una clase de C++ para definir un objeto de R.\n```", "f_str <- 'RObject fun(double x){\n Environment env = Environment::global_env();\n Function f=env[\"f\"];\n return f(x);\n }'\n\ncppFunction(f_str,rebuild=TRUE)\n\nfun(1)\n\nf(1)\n\nprint(fun)", "```{admonition} Comentario\n.Call es una función base para llamar funciones de C desde R:\nThere are two ways to call C functions from R: .C() and .Call(). .C() is a quick and dirty way to call an C function that doesn’t know anything about R because .C() automatically converts between R vectors and the corresponding C types. .Call() is more flexible, but more work: your C function needs to use the R API to convert its inputs to standard C data types.\nH. Wickham.\n```", "print(f)", "```{admonition} Ejercicio\n:class: tip\nRevisar rcpp-sugar, Rcpp syntactic sugar y proponer programas que utilicen sugar.\n```\n```{admonition} Ejercicio\n:class: tip\nImplementar la regla de Simpson compuesta con Rcpp en una máquina de AWS con las mismas características que la que se presenta en esta nota y medir tiempo de ejecución.\n```\n```{admonition} Comentario\nTambién existe el paquete RcppParallel para cómputo en paralelo con la funcionalidad de Rcpp.\n```\nTabla resúmen con códigos que reportaron los mejores tiempos\nEn la siguiente tabla se presentan ligas hacia las implementaciones de la regla del rectángulo compuesta en diferentes lenguajes y paqueterías que reportaron los mejores tiempos. Cualquier código siguiente presenta muy buen desempeño en tal implementación. 
Las diferencias de tiempos entre los códigos son pequeñas y se sugiere volver a correr los códigos midiendo tiempos con paqueterías que permitan la repetición de las mediciones como timeit.\n|Lenguaje| Código |\n|:---:|:---:|\n|Python| {ref}Rcf_numba_parallel &lt;RCFNUMBAPARALLEL&gt;|\n|Python| {ref}Rcf_5_cython_openmp &lt;RCF5CYTHONOPENMP&gt;|\n|Julia| {ref}Rcf_julia_naive &lt;RCFJULIANAIVE&gt;|\n|R| {ref}Rcf_Rcpp &lt;RCFRCPP&gt;|\n|Python| {ref}Rcf_5_cython &lt;RCF5CYTHON&gt;|\n|Julia| {ref}Rcf_julia_typed_values &lt;RCFJULIATYPEDVALUES&gt;|\n|Python| {ref}Rcf_numba &lt;RCFNUMBA&gt;|\n|Python| {ref}Rcf_numpy &lt;RCFNUMPY&gt;|\n|C| {ref}Rcf_c &lt;RCFC&gt;|\n```{admonition} Ejercicio\n:class: tip\nPresentar una tabla de resúmen de tiempos para las implementaciones pedidas en los ejercicios anteriores.\n```\nReferencias de interés\n\n\nCython.\n\n\nintroduction to cython.\n\n\nBasic cython tutorial\n\n\nSource files and compilation\n\n\nCompiling with a jupyter notebook\n\n\nhpy-kick-off-sprint-report\n\n\nPyPy - FAQ\n\n\nGlossary: virtual machine\n\n\nGlossary: Cpython\n\n\nGlossary: bytecode.\n\n\nPython interpreter, what-is-interpreter-explain-how-python-interpreter-works\n\n\nwhat-is-runtime-in-context-of-python-what-does-it-consist-of\n\n\nnumba: performance tips, numba: generators\n\n\nDirk Eddelbuettel: rcpp .\n\n\nrcpp.org\n\n\nRcpp for everyone.\n\n\nIntroduction to rcpp:From Simple Examples to Machine Learning.\n\n\nRcpp note.\n\n\nRcpp quick reference guide.\n\n\nshould-i-prefer-rcppnumericvector-over-stdvector\n\n\nrcppparallel-getting-r-and-c-to-work-some-more-in-parallel\n\n\nLearncpp.\n\n\nCplusplus.\n\n\nwhat-are-the-downsides-of-jit-compilation\n\n\n```{admonition} Ejercicios\n:class: tip\n1.Resuelve los ejercicios y preguntas de la nota.\n```\nReferencias:\n\n\nM. Gorelick, I. Ozsvald, High Performance Python, O'Reilly Media, 2014.\n\n\nH. Wickham, Advanced R, 2014\n\n\nB. W. Kernighan, D. M. 
Ritchie, The C Programming Language, Prentice Hall Software Series, 1988\n\n\nC" ]
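La recomendación de la tabla anterior —repetir las mediciones con timeit en lugar de una sola llamada a time.time()— puede esbozarse así. Es un bosquejo mínimo: la función Rcf siguiente es una versión en Python puro de la regla del rectángulo usada en la nota, y los valores de a, b, n, number y repeat son ilustrativos, no los del experimento original:

```python
import timeit
import math

def Rcf(a, b, n):
    # composite rectangle (mid-point) rule, same scheme as in the note
    h_hat = (b - a) / n
    return h_hat * sum(math.exp(-(a + (i + 1 / 2) * h_hat) ** 2)
                       for i in range(n))

# repeat=5 independent runs of number=10 calls each; the minimum over the
# repeats is the least-noisy estimate of the per-call time
times = timeit.repeat("Rcf(0, 1, 10**4)", globals=globals(),
                      repeat=5, number=10)
print("mejor tiempo:", min(times) / 10, "segundos por llamada")
```

Tomar el mínimo de las repeticiones (en lugar del promedio) acota por abajo el ruido del sistema operativo y de otros procesos.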
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fluxcapacitor/source.ml
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/09_Deploy_Optimized_Model.ipynb
apache-2.0
[ "Deploy Fully Optimized Model to TensorFlow Serving\nFreeze Fully Optimized Graph", "from tensorflow.python.tools import freeze_graph\n\noptimize_me_parent_path = '/root/models/optimize_me/linear/cpu'\n\nfully_optimized_model_graph_path = '%s/fully_optimized_cpu.pb' % optimize_me_parent_path\nfully_optimized_frozen_model_graph_path = '%s/fully_optimized_frozen_cpu.pb' % optimize_me_parent_path\n\nmodel_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path\n\nfreeze_graph.freeze_graph(input_graph=fully_optimized_model_graph_path, \n input_saver=\"\",\n input_binary=True, \n input_checkpoint='/root/models/optimize_me/linear/cpu/model.ckpt',\n output_node_names=\"add\",\n restore_op_name=\"save/restore_all\", \n filename_tensor_name=\"save/Const:0\",\n output_graph=fully_optimized_frozen_model_graph_path, \n clear_devices=True, \n initializer_nodes=\"\")\nprint(fully_optimized_frozen_model_graph_path)", "File Size", "%%bash\n\nls -l /root/models/optimize_me/linear/cpu/", "Graph", "%%bash\n\nsummarize_graph --in_graph=/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport re\nfrom google.protobuf import text_format\nfrom tensorflow.core.framework import graph_pb2\n\ndef convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):\n graph = graph_pb2.GraphDef()\n with open(input_graph, \"rb\") as fh:\n if is_input_graph_binary:\n graph.ParseFromString(fh.read())\n else:\n text_format.Merge(fh.read(), graph)\n with open(output_dot, \"wt\") as fh:\n print(\"digraph graphname {\", file=fh)\n for node in graph.node:\n output_name = node.name\n print(\" \\\"\" + output_name + \"\\\" [label=\\\"\" + node.op + \"\\\"];\", file=fh)\n for input_full_name in node.input:\n parts = input_full_name.split(\":\")\n input_name = re.sub(r\"^\\^\", \"\", parts[0])\n print(\" \\\"\" + input_name + \"\\\" -> \\\"\" + output_name + \"\\\";\", 
file=fh)\n print(\"}\", file=fh)\n print(\"Created dot file '%s' for graph '%s'.\" % (output_dot, input_graph))\n \n\ninput_graph='/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb'\noutput_dot='/root/notebooks/fully_optimized_frozen_cpu.dot'\nconvert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)\n\n%%bash\n\ndot -T png /root/notebooks/fully_optimized_frozen_cpu.dot \\\n -o /root/notebooks/fully_optimized_frozen_cpu.png > /tmp/a.out\n\nfrom IPython.display import Image\n\nImage('/root/notebooks/fully_optimized_frozen_cpu.png')", "Run Standalone Benchmarks\nNote: These benchmarks are running against the standalone models on disk. We will benchmark the models running within TensorFlow Serving soon.", "%%bash\n\nbenchmark_model --graph=/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb \\\n --input_layer=weights,bias,x_observed \\\n --input_layer_type=float,float,float \\\n --input_layer_shape=:: \\\n --output_layer=add", "Save Model for Deployment and Inference\nReset Default Graph", "import tensorflow as tf\n\ntf.reset_default_graph()", "Create New Session", "sess = tf.Session()", "Generate Version Number", "from datetime import datetime \n\nversion = int(datetime.now().strftime(\"%s\"))", "Load Optimized, Frozen Graph", "%%bash\n\ninspect_checkpoint --file_name=/root/models/optimize_me/linear/cpu/model.ckpt\n\nsaver = tf.train.import_meta_graph('/root/models/optimize_me/linear/cpu/model.ckpt.meta')\nsaver.restore(sess, '/root/models/optimize_me/linear/cpu/model.ckpt')\n\noptimize_me_parent_path = '/root/models/optimize_me/linear/cpu'\nfully_optimized_frozen_model_graph_path = '%s/fully_optimized_frozen_cpu.pb' % optimize_me_parent_path\nprint(fully_optimized_frozen_model_graph_path)\n\nwith tf.gfile.GFile(fully_optimized_frozen_model_graph_path, 'rb') as f:\n graph_def = tf.GraphDef()\n graph_def.ParseFromString(f.read())\n\ntf.import_graph_def(\n graph_def, \n input_map=None, \n 
return_elements=None, \n name=\"\", \n op_dict=None, \n producer_op_list=None\n)\n\nprint(\"weights = \", sess.run(\"weights:0\"))\nprint(\"bias = \", sess.run(\"bias:0\"))", "Create SignatureDef Asset for TensorFlow Serving", "from tensorflow.python.saved_model import utils\nfrom tensorflow.python.saved_model import signature_constants\nfrom tensorflow.python.saved_model import signature_def_utils\n\ngraph = tf.get_default_graph()\n\nx_observed = graph.get_tensor_by_name('x_observed:0')\ny_pred = graph.get_tensor_by_name('add:0')\n\ntensor_info_x_observed = utils.build_tensor_info(x_observed)\nprint(tensor_info_x_observed)\n\ntensor_info_y_pred = utils.build_tensor_info(y_pred)\nprint(tensor_info_y_pred)\n\nprediction_signature = signature_def_utils.build_signature_def(inputs = \n {'x_observed': tensor_info_x_observed}, \n outputs = {'y_pred': tensor_info_y_pred}, \n method_name = signature_constants.PREDICT_METHOD_NAME)", "Save Model with Assets", "from tensorflow.python.saved_model import builder as saved_model_builder\nfrom tensorflow.python.saved_model import tag_constants\n\nfully_optimized_saved_model_path = '/root/models/linear_fully_optimized/cpu/%s' % version\nprint(fully_optimized_saved_model_path)\n\nbuilder = saved_model_builder.SavedModelBuilder(fully_optimized_saved_model_path)\nbuilder.add_meta_graph_and_variables(sess, \n [tag_constants.SERVING],\n signature_def_map={'predict':prediction_signature, \nsignature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:prediction_signature}, \n clear_devices=True,\n)\n\nbuilder.save(as_text=False)\n\nimport os\nprint(fully_optimized_saved_model_path)\nos.listdir(fully_optimized_saved_model_path)\nos.listdir('%s/variables' % fully_optimized_saved_model_path)\n\nsess.close()", "STOP All Kernels and Terminals\nThe GPU is wedged at this point. 
We need to set it free!!\n\nOpen a Terminal through Jupyter Notebook\n(Menu Bar -> Terminal -> New Terminal)\n\nStart Http-Grpc Proxy in Separate Terminal\nhttp_grpc_proxy 9004 9000\nThe params are as follows:\n* 1: proxy_port for this proxy\n* 2: tf_serving_port for TensorFlow Serving\nStart TensorFlow Serving in Separate Terminal\nPoint to the model_base_path of the fully optimized model.\ntensorflow_model_server \\\n --port=9000 \\\n --model_name=linear \\\n --model_base_path=/root/models/linear_fully_optimized/cpu/ \\\n --enable_batching=false\nThe params are as follows:\n* port (int)\n* model_name (anything)\n* model_base_path (/path/to/model/ above all versioned sub-directories)\n* enable_batching (true|false)\nRun the Following Command in the Terminal to Predict\nRun the following in a terminal\npredict 9004 1.5\nThe params are as follows:\n* 1: proxy_port\n* 2: x_observed feed input\nReturns:\n* y_pred prediction\nMonitor GPU in Separate Terminal\nRun the following in a terminal\nwatch -n 1 nvidia-smi\nStart Load Test in Separate Terminal\nloadtest high\nThe params are as follows:\n* 1: amount of load low|medium|high\nNotice the throughput and avg/min/max latencies:\nsummary ... = 400.2/s Avg: 249 Min: 230 Max: 286 Err: 0 (0.00%)" ]
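The load-test summary line shown above can also be parsed programmatically, e.g. to log throughput and latency over time. A minimal sketch — the field layout (`<rps>/s Avg: ... Min: ... Max: ...`) is assumed from the sample line above, not from the loadtest tool's documentation:

```python
import re

summary = "summary ... = 400.2/s Avg: 249 Min: 230 Max: 286 Err: 0 (0.00%)"

# throughput in requests/second plus latency stats in milliseconds
pattern = (r"=\s*(?P<rps>[\d.]+)/s\s+Avg:\s*(?P<avg>\d+)\s+"
           r"Min:\s*(?P<min>\d+)\s+Max:\s*(?P<max>\d+)")
m = re.search(pattern, summary)
stats = {k: float(v) for k, v in m.groupdict().items()}
print(stats)  # {'rps': 400.2, 'avg': 249.0, 'min': 230.0, 'max': 286.0}
```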
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dwhswenson/contact_map
examples/advanced_matplotlib.ipynb
lgpl-2.1
[ "Advanced matplotlib tricks\nThe most common tricks to get publication-ready figures for your contact map plots are described in Customizing contact map plots. Here we will illustrate a few more advanced techniques. In general, we recommend getting familiar with matplotlib through its own documentation, but some of the recipes below may be useful.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport mdtraj as md\ntraj = md.load(\"5550217/kras.xtc\", top=\"5550217/kras.pdb\")\n\nfrom contact_map import ContactFrequency\ntraj_contacts = ContactFrequency(traj)\nframe_contacts = ContactFrequency(traj[0])\ndiff = traj_contacts - frame_contacts", "Advanced color schemes\nIn some cases, matplotlib's built-in color maps may not be sufficient. You may want to create custom color schemes, either as a matter of personal style or to create schemes that place more emphasis on certain values. To do this, the diverging_cmap keyword in Contact Map Explorer's plotting function can be useful, as can some matplotlib techniques.\nCustomizing whether a color map is treated as diverging or sequential\nAs discussed in Changing the color map, Contact Map Explorer tries to be smart about how it treats diverging color maps: if the data includes negative values (as possible with a contact difference), then the color map spans the values from -1 to 1. On the other hand, if the data only includes positive values and if the color map is diverging, then only the upper half of the color map is used.\nThe diverging color maps that Contact Map Explorer recognizes are the ones listed as \"Diverging\" in the matplotlib documentation. The sequential color maps recognized by Contact Map Explorer are the ones listed as \"Perceptually Uniform Sequential\", \"Sequential\", or \"Sequential (2)\".
Other color maps are not recognized by Contact Map Explorer, and by default will be treated as sequential while raising a warning:", "traj_contacts.residue_contacts.plot(cmap=\"gnuplot\");", "If you want to either force a sequential color map to use only the upper half of the color space, or to force a diverging color map to use the full color space, use the diverging_cmap option in the color map. Setting diverging_cmap will also silence the warning for an unknown color map. This is particularly useful for user-defined custom color maps, which could be either diverging or sequential.", "fig, axs = plt.subplots(1, 2, figsize=(10, 4))\n# force a diverging color map to use the full color space, not just upper half\n# (as if it is not diverging)\ntraj_contacts.residue_contacts.plot_axes(ax=axs[0], cmap='PRGn', diverging_cmap=False);\n\n# force sequential color map to use only the upper half (as if it is diverging)\ntraj_contacts.residue_contacts.plot_axes(ax=axs[1], cmap='Blues', diverging_cmap=True);", "\"Clipping\" at high or low values\nYou might be interested in somehow marking which values are very high or very low. This can be done by creating a custom color map. Details on this can be found in the matplotlib documentation on colormap manipulation. The basic idea we'll implement here is to use the 'PRGn' colormap, but to make values below -0.9 show up as red, and values above 0.9 show up as blue. We do this by making a color map based on 200 colors; the first 10 (-1.0 to -0.9) are red, then we use 180 representing the PRGn map (-0.9 to 0.9), and finally the last 10 (0.9 to 1.0) are blue. Note that colors in this approach are actually discrete, so you need enough colors from the PRGn map to make it a reasonable model for continuous behavior. 
For truly continuous color maps, see matplotlib documentation on LinearSegmentedColormap.\nThis is very similar in principle to one of the matplotlib examples on \"Creating listed colormaps\".", "from matplotlib import cm\nfrom matplotlib.colors import ListedColormap\nimport numpy as np\n\nPRGn = cm.get_cmap('PRGn', 180)\nred = np.array([1.0, 0.0, 0.0, 1.0])\nblue = np.array([0.0, 0.0, 1.0, 1.0])\n\n# custom color map of 200 colors with bottom 10 red; top 10 blue; rest is normal PRGn\nnew_colors = np.array([red] * 10 + list(PRGn(np.linspace(0, 1, 180))) + [blue] * 10)\ncustom_cmap = ListedColormap(new_colors)\n\n# must give diverging_cmap here; custom map is unknown to Contact Map Explorer\ndiff.residue_contacts.plot(cmap=custom_cmap, diverging_cmap=True);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
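The "clipping" colormap recipe in the contact_map notebook above is, at heart, just a discrete lookup table, so the idea can be sketched without matplotlib at all. The purple/white/green blend below is a rough stand-in for the real PRGn colors (an assumption for illustration), and `lookup` plays the role of a `ListedColormap` evaluated over [-1, 1]:

```python
def blend(c0, c1, t):
    """Linear interpolation between two RGBA colors, t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

def make_clipped_table(n=200, n_clip=10):
    """Build a 200-entry color table: bottom 10 entries red, top 10 blue,
    middle 180 a purple -> white -> green blend (rough PRGn stand-in)."""
    red = (1.0, 0.0, 0.0, 1.0)
    blue = (0.0, 0.0, 1.0, 1.0)
    purple = (0.25, 0.0, 0.3, 1.0)
    white = (1.0, 1.0, 1.0, 1.0)
    green = (0.0, 0.27, 0.11, 1.0)
    mid = n - 2 * n_clip
    table = [red] * n_clip
    for i in range(mid):
        t = i / (mid - 1)                     # 0..1 across the middle band
        if t < 0.5:
            table.append(blend(purple, white, t * 2))
        else:
            table.append(blend(white, green, (t - 0.5) * 2))
    table += [blue] * n_clip
    return table

def lookup(table, value):
    """Map a value in [-1, 1] to a table entry, like a ListedColormap."""
    idx = int((value + 1) / 2 * (len(table) - 1))
    return table[max(0, min(len(table) - 1, idx))]
```

With this table, any value below about -0.9 lands in the red band and any value above about 0.9 in the blue band, which is exactly the clipping effect the notebook builds with `ListedColormap`.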
jdhp-docs/python-notebooks
python_matplotlib_geo_fr.ipynb
mit
[ "Mapping with Python and Matplotlib", "%matplotlib inline", "TODO:\n- https://matplotlib.org/basemap/users/intro.html\n- http://scitools.org.uk/cartopy/docs/latest/gallery.html\n- https://waterprogramming.wordpress.com/2016/12/19/plotting-geographic-data-from-geojson-files-using-python/\n- http://maxberggren.se/2015/08/04/basemap/\nMapping with basemap\nReferences\n\nPython for Data Analysis by Wes McKinney, ed. O'Reilly, 2013 (p.261)\n\nWarning\nBasemap development has been stopped and Cartopy is recommended instead.\n\"Starting in 2016, Basemap came under new management. The Cartopy project will replace Basemap, but it hasn’t yet implemented all of Basemap’s features. All new software development should try to use Cartopy whenever possible, and existing software should start the process of switching over to use Cartopy. All maintenance and development efforts should be focused on Cartopy.\" (http://matplotlib.org/basemap/users/intro.html)\n\nInstallation\nBasemap is not installed with Matplotlib by default.\nTo install it with conda:\nconda install basemap\n\nExample", "from mpl_toolkits.basemap import Basemap\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\n\nlllat = 41.0 # latitude of lower left hand corner of the desired map domain (degrees).\nurlat = 52.0 # latitude of upper right hand corner of the desired map domain (degrees).\n\nlllon = -5.0 # longitude of lower left hand corner of the desired map domain (degrees).\nurlon = 9.5 # longitude of upper right hand corner of the desired map domain (degrees).\n\nm = Basemap(ax=ax,\n projection='stere',\n lon_0=(urlon+lllon)/2.,\n lat_0=(urlat+lllat)/2.,\n llcrnrlat=lllat,\n urcrnrlat=urlat,\n llcrnrlon=lllon,\n urcrnrlon=urlon,\n resolution='l') # Can be ``c`` (crude), ``l`` (low), ``i`` (intermediate), ``h`` (high), ``f`` (full) or None.\n\nm.drawcoastlines()\nm.drawstates()\nm.drawcountries()\n#m.drawrivers()\n#m.drawcounties()\n\n# Eiffel tower's 
coordinates\npt_lat = 48.858223\npt_lon = 2.2921653\n\nx, y = m(pt_lon, pt_lat)\n\nprint(pt_lat, pt_lon)\nprint(x, y)\n\nm.plot(x, y, 'ro')\n\n#plt.savefig(\"map.png\")\nplt.show()", "Mapping with cartopy" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
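In the basemap example above, the call `m(pt_lon, pt_lat)` converts geographic coordinates to projected map coordinates. The core of the `projection='stere'` transform is the spherical stereographic projection, which can be sketched with the standard library alone. This centered version ignores Basemap's corner-based x/y offset and uses an assumed sphere radius, so its numbers will not match Basemap's output exactly:

```python
import math

def stereographic(lon, lat, lon0, lat0, R=6370997.0):
    """Project (lon, lat) in degrees onto a plane tangent at (lon0, lat0)
    using the spherical stereographic projection. R is an assumed Earth
    radius in meters; the projection center maps to (0, 0)."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi0 = math.radians(lon0), math.radians(lat0)
    k = 2 * R / (1 + math.sin(phi0) * math.sin(phi)
                 + math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
    x = k * math.cos(phi) * math.sin(lam - lam0)
    y = k * (math.cos(phi0) * math.sin(phi)
             - math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
    return x, y
```

For the map above the projection center is `lon_0=(9.5-5.0)/2 = 2.25`, `lat_0=(52+41)/2 = 46.5`, so the Eiffel tower (north and slightly east of the center) projects to positive x and y.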
CivicKnowledge/metatab-packages
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
mit
[ "Senior Income and Home Value Distributions For San Diego County\nThis package extracts the home value and household income for households in San Diego County with one or more household members aged 65 or older. The base data is from the 2015 5 year PUMS sample, from IPUMS<sup>1</sup>. The primary dataset variables used are: HHINCOME and VALUEH. \nThis extract is intended for analysis of senior issues in San Diego County, so the records used are further restricted with these filters: \n\nWHERE AGE >= 65\nHHINCOME < 9999999\nVALUEH < 9999999 \nSTATEFIP = 6 ( California ) \nCOUNTYFIPS = 73 ( San Diego County ) \n\nThe limits on the HHINCOME and VALUEH variables eliminate top coding. \nThis analysis used the IPUMS (ipums) data", "%matplotlib inline\n%load_ext metatab\n\n%load_ext autoreload\n%autoreload 2\n\n%mt_lib_dir lib\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np \nimport metatab as mt\nimport seaborn as sns; sns.set(color_codes=True)\nimport sqlite3\nfrom IPython.display import display_html, HTML, display\n\nimport statsmodels as sm\nfrom statsmodels.nonparametric.kde import KDEUnivariate\nfrom scipy import integrate, stats\n\nfrom incomedist import * \nfrom multikde import MultiKde \n\nplt.rcParams['figure.figsize']=(6,6)\n\n%mt_open_package\n\n\n!pwd\n", "Source Data\nThe PUMS data is a sample, so both household and person records have weights. We use those weights to replicate records. We are not adjusting the values for CPI, since we don't have a CPI for 2015, and because the medians for income come out pretty close to those from the 2015 5Y ACS. \nThe HHINCOME and VALUEH have the typical distributions for income and home values, both of which look like Poisson distributions.", "# Check the weights for the whole file to see if they sum to the number\n# of households and people in the county. 
They don't, but the sum of the weights for households is close, \n# 126,279,060 vs about 116M households\ncon = sqlite3.connect(\"ipums.sqlite\")\nwt = pd.read_sql_query(\"SELECT YEAR, DATANUM, SERIAL, HHWT, PERNUM, PERWT FROM ipums \"\n \"WHERE PERNUM = 1 AND YEAR = 2015\", con)\n\nwt.drop(0, inplace=True)\n\nnd_s = wt.drop_duplicates(['YEAR', 'DATANUM','SERIAL'])\ncountry_hhwt_sum = nd_s[nd_s.PERNUM == 1]['HHWT'].sum()\n\nlen(wt), len(nd_s), country_hhwt_sum\n\nimport sqlite3\n\n# PERNUM = 1 ensures only one record for each household \n\ncon = sqlite3.connect(\"ipums.sqlite\")\nsenior_hh = pd.read_sql_query(\n \"SELECT DISTINCT SERIAL, HHWT, PERWT, HHINCOME, VALUEH \"\n \"FROM ipums \"\n \"WHERE \"\n # \"AGE >= 65 AND \" \n \"HHINCOME < 9999999 AND VALUEH < 9999999 AND \"\n \"STATEFIP = 6 AND COUNTYFIPS=73 \", con)\n\n# Since we're doing a probabilistic simulation, the easiest way to deal with the weight is just to repeat rows. \n# However, adding the weights doesn't change the statistics much, so they are turned off now, for speed. 
\n\ndef generate_data():\n \n for index, row in senior_hh.drop_duplicates('SERIAL').iterrows():\n #for i in range(row.HHWT):\n yield (row.HHINCOME, row.VALUEH)\n \nincv = pd.DataFrame(list(generate_data()), columns=['HHINCOME', 'VALUEH'])\n\n\nsns.jointplot(x=\"HHINCOME\", y=\"VALUEH\", marker='.', scatter_kws={'alpha': 0.1}, data=incv, kind='reg');\n\nfrom matplotlib.ticker import FuncFormatter\n\nfig = plt.figure(figsize = (20,12))\nax = fig.add_subplot(111)\n \n\nfig.suptitle(\"Distribution Plot of Home Values in San Diego County\\n\"\n \"( Truncated at $2.2M )\", fontsize=18)\nsns.distplot(incv.VALUEH[incv.VALUEH <2200000], ax=ax);\nax.set_xlabel('Home Value ($)', fontsize=14)\nax.set_ylabel('Density', fontsize=14);\nax.get_xaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))\n\n\nfrom matplotlib.ticker import FuncFormatter\n\nfig = plt.figure(figsize = (20,12))\nax = fig.add_subplot(111)\n \n\nfig.suptitle(\"Distribution Plot of Home Values in San Diego County\\n\"\n \"( Truncated at $2.2M )\", fontsize=18)\nsns.kdeplot(incv.VALUEH[incv.VALUEH <2200000], ax=ax);\nsns.kdeplot(incv.VALUEH[incv.VALUEH <1900000]+300000, ax=ax);\nax.set_xlabel('Home Value ($)', fontsize=14)\nax.set_ylabel('Density', fontsize=14);\nax.get_xaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))", "Procedure\nAfter extracting the data for HHINCOME and VALUEH, we rank both values and then quantize the rankings into 10 groups, 0 through 9, hhincome_group and valueh_group. The HHINCOME variable correlates with VALUEH at .36, and the quantized rankings hhincome_group and valueh_group correlate at .38.\nInitial attempts were made to fit curves to the income and home value distributions, but it is very difficult to find well defined models that fit real income distributions. Bordley (bordley) analyzes the fit for 15 different distributions, reporting success with variations of the generalized beta distribution, gamma and Weibull. 
Majumder (majumder) proposes a four parameter model with variations for special cases. None of these models were considered well established enough to fit within the time constraints for the project, so this analysis will use empirical distributions that can be scaled to fit alternate parameters.", "\nincv['valueh_rank'] = incv.rank()['VALUEH']\nincv['valueh_group'] = pd.qcut(incv.valueh_rank, 10, labels=False )\nincv['hhincome_rank'] = incv.rank()['HHINCOME']\nincv['hhincome_group'] = pd.qcut(incv.hhincome_rank, 10, labels=False )\nincv[['HHINCOME', 'VALUEH', 'hhincome_group', 'valueh_group']] .corr()\n\nfrom metatab.pands import MetatabDataFrame\nodf = MetatabDataFrame(incv)\nodf.name = 'income_homeval'\nodf.title = 'Income and Home Value Records for San Diego County'\nodf.HHINCOME.description = 'Household income'\nodf.VALUEH.description = 'Home value'\nodf.valueh_rank.description = 'Rank of the VALUEH value'\nodf.valueh_group.description = 'The valueh_rank value quantized into 10 bins, from 0 to 9'\nodf.hhincome_rank.description = 'Rank of the HHINCOME value'\nodf.hhincome_group.description = 'The hhincome_rank value quantized into 10 bins, from 0 to 9'\n\n%mt_add_dataframe odf --materialize", "Then, we group the dataset by valueh_group and collect all of the income values for each group. These groups have different distributions, with the lower numbered group skewing to the left and the higher numbered group skewing to the right. 
When this is done many times, the original VALUEH correlates to the new distribution ( here, as t_income ) at .33, reasonably similar to the original correlations.", "import matplotlib.pyplot as plt\nimport numpy as np\n\nmk = MultiKde(odf, 'valueh_group', 'HHINCOME')\n\nfig,AX = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(15,15))\n\nincomes = [30000,\n 40000,\n 50000,\n 60000,\n 70000,\n 80000,\n 90000,\n 100000,\n 110000]\n\nfor mi, ax in zip(incomes, AX.flatten()):\n s, d, icdf, g = mk.make_kde(mi)\n syn_d = mk.syn_dist(mi, 10000)\n \n syn_d.plot.hist(ax=ax, bins=40, title='Median Income ${:0,.0f}'.format(mi), normed=True, label='Generated')\n\n ax.plot(s,d, lw=2, label='KDE')\n \nfig.suptitle('Income Distributions By Median Income\\nKDE and Generated Distribution')\nplt.legend(loc='upper left')\nplt.show()\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmk = MultiKde(odf, 'valueh_group', 'HHINCOME')\n\nfig = plt.figure(figsize = (20,12))\nax = fig.add_subplot(111)\n\nincomes = [30000,\n 40000,\n 50000,\n 60000,\n 70000,\n 80000,\n 90000,\n 100000,\n 110000]\n\nfor mi in incomes:\n s, d, icdf, g = mk.make_kde(mi)\n syn_d = mk.syn_dist(mi, 10000)\n\n #syn_d.plot.hist(ax=ax, bins=40, normed=True, label='Generated')\n\n ax.plot(s,d, lw=2, label=str(mi))\n \nfig.suptitle('Income Distributions By Median Income\\nKDE and Generated Distribution\\n< $250,000')\nplt.legend(loc='upper left')\nax.set_xlim([0,250000])\nplt.show()\n\ndf_kde = incv[ (incv.HHINCOME <200000) & (incv.VALUEH < 1000000) ]\nax = sns.kdeplot(df_kde.HHINCOME, df_kde.VALUEH, cbar=True)", "A scatter matrix show similar structure for VALUEH and t_income.", "t = incv.copy()\nt['t_income'] = mk.syn_dist(t.HHINCOME.median(), len(t))\nt[['HHINCOME','VALUEH','t_income']].corr()\n\nsns.pairplot(t[['VALUEH','HHINCOME','t_income']]);", "The simulated incomes also have similar statistics to the original incomes. However, the median income is high. 
In San Diego County, the median household income for householders 65 and older in the 2015 5 year ACS is about \\$51K, versus \\$56K here. For home values, the mean home value for homeowners aged 65+ is \\$468K in the 5 year ACS, vs \\$510K here.", "\ndisplay(HTML(\"<h3>Descriptive Stats</h3>\"))\nt[['VALUEH','HHINCOME','t_income']].describe()\n\ndisplay(HTML(\"<h3>Correlations</h3>\"))\nt[['VALUEH','HHINCOME','t_income']].corr()", "Bibliography", "%mt_bibliography\n\n# Tests", "Create a new KDE distribution, based on the home values, including only home values ( actually, the KDE support ) between $130,000 and $1.5M.", "s,d = make_prototype(incv.VALUEH.astype(float), 130_000, 1_500_000)\n\nplt.plot(s,d)", "Overlay the prior plot with the histogram of the original values. We're using np.histogram to make the histogram, so it appears as a line chart.", "v = incv.VALUEH.astype(float).sort_values()\n#v = v[ ( v > 60000 ) & ( v < 1500000 )]\n\nhist, bin_edges = np.histogram(v, bins=100, density=True)\n\nbin_middles = 0.5*(bin_edges[1:] + bin_edges[:-1])\n\nbin_width = bin_middles[1] - bin_middles[0]\n\nassert np.isclose(sum(hist*bin_width),1) # == 1 b/c density==True\n\nhist, bin_edges = np.histogram(v, bins=100) # Now, without 'density'\n\n# And, get back to the counts, but now on the KDE\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nax.plot(s,d * sum(hist*bin_width));\n\nax.plot(bin_middles, hist);\n", "Show a home value curve, interpolated to the same values as the distribution. 
The two curves should be co-incident.", "def plot_compare_curves(p25, p50, p75):\n fig = plt.figure(figsize = (20,12))\n ax = fig.add_subplot(111)\n\n sp, dp = interpolate_curve(s, d, p25, p50, p75)\n\n ax.plot(pd.Series(s), d, color='black');\n ax.plot(pd.Series(sp), dp, color='red');\n\n# Re-input the quantiles for the KDE\n# Curves should be co-incident\nplot_compare_curves(2.800000e+05,4.060000e+05,5.800000e+05)\n", "Now, interpolate to the values for the county, which shifts the curve right.", "# Values for SD County home values\nplot_compare_curves(349100.0,485900.0,703200.0)\n", "Here is an example of creating an interpolated distribution, then generating a synthetic distribution from it.", "sp, dp = interpolate_curve(s, d, 349100.0,485900.0,703200.0)\nv = syn_dist(sp, dp, 10000)\n\nplt.hist(v, bins=100); \npd.Series(v).describe()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
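The ranking-and-decile step in the IPUMS notebook above (a `rank()` call followed by `pd.qcut(..., 10, labels=False)`) can be approximated in plain Python. This is a sketch of the idea, not pandas' exact tie-breaking or bin-edge behavior:

```python
def rank(values):
    """Average-rank like pandas' rank(): 1-based, ties get the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1              # mean of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def qcut10(ranks):
    """Quantize ranks into 10 roughly equal groups labeled 0..9,
    approximating pd.qcut(ranks, 10, labels=False) on distinct ranks."""
    n = len(ranks)
    return [min(9, int((r - 1) * 10 / n)) for r in ranks]
```

On 100 distinct values this reproduces the deciles exactly: value number `i` (0-based) lands in group `i // 10`, which is what the `valueh_group` and `hhincome_group` columns encode.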
CalPolyPat/phys202-2015-work
assignments/assignment08/InterpolationEx02.ipynb
mit
[ "Interpolation Exercise 2", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nsns.set_style('white')\n\nfrom scipy.interpolate import griddata", "Sparse 2d interpolation\nIn this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:\n\nThe square domain covers the region $x\\in[-5,5]$ and $y\\in[-5,5]$.\nThe values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.\nThe value of $f$ is known at a single interior point: $f(0,0)=1.0$.\nThe function $f$ is not known at any other points.\n\nCreate arrays x, y, f:\n\nx should be a 1d array of the x coordinates on the boundary and the 1 interior point.\ny should be a 1d array of the y coordinates on the boundary and the 1 interior point.\nf should be a 1d array of the values of f at the corresponding x and y coordinates.\n\nYou might find that np.hstack is helpful.", "x = np.hstack((np.linspace(-4,4,9), np.full(11, -5), np.linspace(-4,4,9), np.full(11, 5), [0]))\ny = np.hstack((np.full(9,-5), np.linspace(-5, 5,11), np.full(9,5), np.linspace(-5,5,11), [0]))\nf = np.hstack((np.zeros(20), np.zeros(20),[1.0]))\nprint(f)", "The following plot should show the points on the boundary and the single point in the interior:", "plt.scatter(x, y);\n\nassert x.shape==(41,)\nassert y.shape==(41,)\nassert f.shape==(41,)\nassert np.count_nonzero(f)==1", "Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:\n\nxnew and ynew should be 1d arrays with 100 points between $[-5,5]$.\nXnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.\nFnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).\nUse cubic spline interpolation.", "xnew = np.linspace(-5, 5, 100)\nynew = np.linspace(-5, 5, 100)\nXnew, Ynew = np.meshgrid(xnew, ynew)\nFnew = griddata((x, y), f , (Xnew, Ynew), method='cubic')\nplt.imshow(Fnew, 
extent=(-5,5,-5,5))\n\nassert xnew.shape==(100,)\nassert ynew.shape==(100,)\nassert Xnew.shape==(100,100)\nassert Ynew.shape==(100,100)\nassert Fnew.shape==(100,100)", "Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.", "plt.contourf(Xnew, Ynew, Fnew, cmap='hot')\nplt.colorbar(label='Z')\nplt.box(False)\nplt.title(\"The interpolated 2d grid of our data.\")\nplt.xlabel('X')\nplt.ylabel('Y');\n\nassert True # leave this to grade the plot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
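The `np.hstack` construction in the interpolation exercise above can be cross-checked with a plain-Python enumeration of the same 41 sample points: the 40 integer-spaced points on the boundary of the [-5, 5] square (where f = 0) plus the single interior point (0, 0) where f = 1:

```python
def boundary_and_interior():
    """Enumerate (x, y, f) triples: f = 0 on the integer-spaced boundary
    of [-5, 5] x [-5, 5], and f = 1 at the single interior point (0, 0)."""
    pts = []
    for i in range(-5, 6):
        for j in range(-5, 6):
            if abs(i) == 5 or abs(j) == 5:      # point lies on the boundary
                pts.append((i, j, 0.0))
    pts.append((0, 0, 1.0))                     # the one known interior value
    return pts
```

The counts agree with the asserts in the notebook: an 11x11 integer grid has 121 points, of which 81 are interior, leaving 40 boundary points, so 41 samples in total with exactly one nonzero f.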
rmoehn/cartpole
notebooks/IntegrationExperiments.ipynb
mit
[ "import functools\nimport itertools\nimport math\n\nimport matplotlib\nfrom matplotlib import pyplot\nimport numpy as np\nimport scipy.integrate\n\nimport sys\nsys.path.append(\"..\")\nfrom hiora_cartpole import fourier_fa\nfrom hiora_cartpole import fourier_fa_int\nfrom hiora_cartpole import offswitch_hfa\nfrom hiora_cartpole import linfa\nfrom hiora_cartpole import driver\nfrom hiora_cartpole import interruptibility\n\nimport gym_ext.tools as gym_tools\n\nimport gym\n\ndef make_CartPole():\n return gym.make(\"CartPole-v0\")\n\nclipped_high = np.array([2.5, 3.6, 0.28, 3.7])\nclipped_low = -clipped_high\nstate_ranges = np.array([clipped_low, clipped_high])\n\nfour_n_weights, four_feature_vec \\\n = fourier_fa.make_feature_vec(state_ranges,\n n_acts=2,\n order=3)\n\ndef make_uninterruptable_experience(choose_action=linfa.choose_action_Sarsa):\n return linfa.init(lmbda=0.9,\n init_alpha=0.001,\n epsi=0.1,\n feature_vec=four_feature_vec,\n n_weights=four_n_weights,\n act_space=env.action_space,\n theta=None,\n is_use_alpha_bounds=True,\n map_obs=functools.partial(gym_tools.warning_clip_obs, ranges=state_ranges),\n choose_action=choose_action)\n\nenv = make_CartPole()\nfexperience = make_uninterruptable_experience()\nfexperience, steps_per_episode, alpha_per_episode \\\n = driver.train(env, linfa, fexperience, n_episodes=400, max_steps=500, is_render=False)\n# Credits: http://matplotlib.org/examples/api/two_scales.html\nfig, ax1 = pyplot.subplots()\nax1.plot(steps_per_episode, color='b')\nax2 = ax1.twinx()\nax2.plot(alpha_per_episode, color='r')\npyplot.show()\n\nsr = state_ranges\n\ndef Q_at_x(e, x, a):\n return scipy.integrate.tplquad(\n lambda x_dot, theta, theta_dot: \\\n e.feature_vec(np.array([x, x_dot, theta, theta_dot]), a)\\\n .dot(e.theta),\n sr[0][1],\n sr[1][1],\n lambda _: sr[0][2],\n lambda _: sr[1][2],\n lambda _, _1: sr[0][3],\n lambda _, _1: sr[1][3])\n\nfrom multiprocessing import Pool\np = Pool(4)\n\ndef Q_fun(x):\n return Q_at_x(fexperience, x, 
0)\n\nnum_Qs = np.array( map(Q_fun, np.arange(-2.38, 2.5, 0.5*1.19)) )\nnum_Qs\n\nsym_Q_s0 = fourier_fa_int.make_sym_Q_s0(state_ranges, 3)\n\nsym_Qs = np.array( [sym_Q_s0(fexperience.theta, 0, s0) \n for s0 in np.arange(-2.38, 2.5, 0.5*1.19)] )\nsym_Qs\n\nnum_Qs[:,0] / sym_Qs\n\nnum_Qs[:,0] - sym_Qs\n\nnp.prod(state_ranges[1,1:] - state_ranges[0,1:])", "Trying MountainCar", "mc_env = gym.make(\"MountainCar-v0\")\n\nmc_n_weights, mc_feature_vec = fourier_fa.make_feature_vec(\n np.array([mc_env.low, mc_env.high]),\n n_acts=3,\n order=2)\n\nmc_experience = linfa.init(lmbda=0.9,\n init_alpha=1.0,\n epsi=0.1,\n feature_vec=mc_feature_vec,\n n_weights=mc_n_weights,\n act_space=mc_env.action_space,\n theta=None,\n is_use_alpha_bounds=True)\n\nmc_experience, mc_spe, mc_ape = driver.train(mc_env, linfa, mc_experience,\n n_episodes=400,\n max_steps=200,\n is_render=False)\n\nfig, ax1 = pyplot.subplots()\nax1.plot(mc_spe, color='b')\nax2 = ax1.twinx()\nax2.plot(mc_ape, color='r')\npyplot.show()\n\ndef mc_Q_at_x(e, x, a):\n return scipy.integrate.quad(\n lambda x_dot: e.feature_vec(np.array([x, x_dot]), a).dot(e.theta),\n mc_env.low[1],\n mc_env.high[1])\n\ndef mc_Q_fun(x):\n return mc_Q_at_x(mc_experience, x, 0)\n\nsample_xs = np.arange(mc_env.low[0], mc_env.high[0], \n (mc_env.high[0] - mc_env.low[0]) / 8.0)\n\nmc_num_Qs = np.array( map(mc_Q_fun, sample_xs) )\nmc_num_Qs\n\nmc_sym_Q_s0 = fourier_fa_int.make_sym_Q_s0(\n np.array([mc_env.low, mc_env.high]),\n 2)\n\nmc_sym_Qs = np.array( [mc_sym_Q_s0(mc_experience.theta, 0, s0)\n for s0 in sample_xs] )\nmc_sym_Qs \n\nmc_sym_Qs - mc_num_Qs[:,0]", "Let's try some arbitrary thetas\nAnd see what the ratio depends on. 
I've seen above that it's probably not the order of the Fourier FA, but the number of dimensions.", "# Credits: http://stackoverflow.com/a/1409496/5091738\ndef make_integrand(feature_vec, theta, s0, n_dim):\n argstr = \", \".join([\"s{}\".format(i) for i in xrange(1, n_dim)])\n \n code = \"def integrand({argstr}):\\n\" \\\n \" return feature_vec(np.array([s0, {argstr}]), 0).dot(theta)\\n\" \\\n .format(argstr=argstr)\n \n #print code\n \n compiled = compile(code, \"fakesource\", \"exec\")\n fakeglobals = {'feature_vec': feature_vec, 'theta': theta, 's0': s0,\n 'np': np}\n fakelocals = {}\n eval(compiled, fakeglobals, fakelocals)\n \n return fakelocals['integrand']\n\nprint make_integrand(None, None, None, 4)\n\nfor order in xrange(1,3):\n for n_dim in xrange(2, 4):\n print \"\\norder {} dims {}\".format(order, n_dim)\n \n min_max = np.array([np.zeros(n_dim), 3 * np.ones(n_dim)])\n n_weights, feature_vec = fourier_fa.make_feature_vec(\n min_max,\n n_acts=1,\n order=order) \n \n theta = np.cos(np.arange(0, 2*np.pi, 2*np.pi/n_weights))\n \n sample_xs = np.arange(0, 3, 0.3)\n \n def num_Q_at_x(s0):\n integrand = make_integrand(feature_vec, theta, s0, n_dim)\n return scipy.integrate.nquad(integrand, min_max.T[1:])\n \n num_Qs = np.array( map(num_Q_at_x, sample_xs) )\n #print num_Qs\n \n sym_Q_at_x = fourier_fa_int.make_sym_Q_s0(min_max, order)\n \n sym_Qs = np.array( [sym_Q_at_x(theta, 0, s0) for s0 in sample_xs] )\n #print sym_Qs\n \n print sym_Qs / num_Qs[:,0]", "If the bounds of the states are [0, n], the ratio between symbolic and numeric results is $1/n^{n_{dim}-1}$. Or this is at least what I think I see.\nThis looks like there's a problem with normalization. 
(What also very strongly suggested it was that numeric and symbolic results were equal over [0, 1], but started to differ when I changed to [0, 2].)", "np.arange(0, 1, 10)\n\nimport sympy as sp\na, b, x, f = sp.symbols(\"a b x f\")\n\nb_int = sp.Integral(1, (x, a, b))\n\nsp.init_printing()\n\nu_int = sp.Integral((1-a)/(b-a), (x, 0, 1))\n\nu_int\n\n(b_int / u_int).simplify()\n\nb_int.subs([(a,0), (b,2)]).doit()\n\nu_int.subs([(a,0), (b,2)]).doit()\n\n(u_int.doit()*b).simplify()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
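The `compile`/`exec` trick in the cartpole notebook above exists because `scipy.integrate` passes the inner coordinates to the integrand as separate positional arguments, so the function's arity must match the number of dimensions. A closure with `*args` achieves the same variable-dimension integrand without generating source code; the `feature_dot` callable here stands in for `e.feature_vec(...).dot(e.theta)`:

```python
def make_integrand(feature_dot, s0):
    """Return an integrand over the inner state dimensions, with the first
    state component fixed at s0. scipy's quadrature routines call it with
    the inner coordinates as positional arguments, captured by *inner."""
    def integrand(*inner):
        return feature_dot((s0,) + inner)
    return integrand
```

Because `*inner` accepts any arity, the same factory works for `quad`, `tplquad`, or `nquad` over any number of remaining dimensions, which is exactly what the generated-source version was buying.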
maxalbert/paper-supplement-nanoparticle-sensing
notebooks/fig_9c_comparison_of_frequency_change_for_various_external_field_strengths.ipynb
mit
[ "Fig. 9(c): Comparison of Frequency Change for Various External Fields\nThis notebook reproduces Fig. 9(c) in the paper, which shows the frequency change $\\Delta f$ for the first eigenmode (N = 1) for three strengths of the out-of-plane field: 0 Tesla, 0.1 Tesla and 1 Tesla.", "import matplotlib.lines as mlines\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom matplotlib._png import read_png\nfrom matplotlib.offsetbox import OffsetImage, AnnotationBbox\n\n%matplotlib inline\nplt.style.use('style_sheets/fig9c.mplstyle')", "Read the data frame containing the eigenmode data and filter out the parameter values relevant for this plot.", "df = pd.read_csv('../data/eigenmode_info_data_frame.csv')\ndf = df.query('(has_particle == True) and (x == 0) and (y == 0) and '\n '(d_particle == 20) and (Ms_particle == 1e6) and (N == 1)')\ndf = df.sort_values('d')", "Define helper function to plot $\\Delta f$ vs. particle separation for a single value of the external field strength.", "def plot_freq_change_vs_particle_distance_for_field_strength(ax, Hz, H_ext_descr, color):\n \"\"\"\n Plot frequency change vs. particle distance for a single field strength `Hz`.\n \"\"\"\n df_filtered = df.query('Hz == {Hz} and N == 1'.format(Hz=Hz))\n d_vals = df_filtered['d']\n freq_diffs = df_filtered['freq_diff'] * 1e3 # frequency change in MHz\n ax.plot(d_vals, freq_diffs, color=color, label='H={}'.format(H_ext_descr))\n\ndef add_eigenmode_profile(filename):\n imagebox = OffsetImage(read_png(filename), zoom=0.75)\n ab = AnnotationBbox(imagebox,\n (40, 0.4), xybox=(60, 220), xycoords='data',\n boxcoords='data', frameon=False)\n ax.add_artist(ab)", "Produce the plot for Fig. 
9(c).", "fig, ax = plt.subplots(figsize=(6, 6))\n\nfor H_z, H_ext_descr, color in [(0, '0 T', '#4DAF4A'),\n (8e4, '0.1 T', '#377EB8'),\n (80e4, '1 T', '#E41A1C')\n ]:\n plot_freq_change_vs_particle_distance_for_field_strength(ax, H_z, H_ext_descr, color)\n\nadd_eigenmode_profile(\"../images/eigenmode_profile_with_particle_at_x_neg30_y_0_d_5.png\")\n\nax.set_xlim(0, 95)\nax.set_xticks(range(0, 100, 10))\nax.set_xlabel('Particle separation d (nm)')\nax.set_ylabel(r'Frequency change $\\Delta f$ (MHz)')\nax.legend(numpoints=1, loc='upper right')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bretthandrews/marvin
docs/sphinx/jupyter/saving_and_restoring.ipynb
bsd-3-clause
[ "Saving and Restoring Marvin objects\nWith all Marvin Tools, you can save the object you are working with locally to your filesystem, and restore it later on. This works using the Python pickle package. The objects are pickled (i.e. formatted and compressed) into a pickle file object. All Marvin Tools, Queries, and Results can be saved and restored.\nWe can save a map...", "# let's grab the H-alpha emission line flux map\nfrom marvin.tools.maps import Maps\nmapfile = '/Users/Brian/Work/Manga/analysis/v2_0_1/2.0.2/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz'\nmaps = Maps(filename=mapfile)\nhaflux = maps.getMap('emline_gflux', channel='ha_6564')\nprint(haflux)", "We can save any Marvin object with the save method. This method accepts a string filename+path as the name of the pickled file. If a full file path is not specified, it defaults to the current directory. save also accepts an overwrite boolean keyword in case you want to overwrite an existing file.", "haflux.save('my_haflux_map')", "Now we have a saved map. We can restore it anytime we want using the restore class method. A class method means you call it from the imported class itself, and not on the instance. restore accepts a string filename as input and returns the instantiated object.", "# import the individual Map class\nfrom marvin.tools.map import Map\n\n# restore the Halpha flux map into a new variable\nfilename = '/Users/Brian/Work/github_projects/Marvin/docs/sphinx/jupyter/my_haflux_map'\nnewflux = Map.restore(filename)\nprint(newflux)", "We can also save and restore Marvin Queries and Results. First let's create and run a simple query...", "from marvin.tools.query import Query, Results\nfrom marvin import config\n\nconfig.mode = 'remote'\nconfig.switchSasUrl('local')\n\n# let's make a query\nf = 'nsa.z < 0.1'\nq = Query(searchfilter=f)\nprint(q)\n\n# and run it\nr = q.run()\nprint(r)", "Let's save both the query and results for later use. 
Without specifying a filename, by default Marvin will name the query or results using your provided search filter.", "q.save()\nr.save()", "By default, if you don't specify a filename for the pickled file, Marvin will auto assign one for you with extension .mpf (MaNGA Pickle File).\nNow let's restore...", "newquery = Query.restore('/Users/Brian/marvin_query_nsa.z<0.1.mpf')\nprint('query', newquery)\nprint('filter', newquery.searchfilter)\n\nmyresults = Results.restore('/Users/Brian/marvin_results_nsa.z<0.1.mpf')\nprint(myresults.results)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
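The save/restore pattern the Marvin notebook above describes (an instance `save` method with an `overwrite` keyword, and a `restore` classmethod that returns the instantiated object) maps onto a small amount of stdlib `pickle` code. This sketch mimics the interface only, not Marvin's actual implementation or its default-filename logic:

```python
import os
import pickle
import tempfile

class Saveable:
    """Mixin: save() pickles the instance, restore() unpickles one."""

    def save(self, path, overwrite=False):
        if os.path.exists(path) and not overwrite:
            raise FileExistsError(path)
        with open(path, 'wb') as fh:
            pickle.dump(self, fh)

    @classmethod
    def restore(cls, path):
        with open(path, 'rb') as fh:
            obj = pickle.load(fh)
        if not isinstance(obj, cls):
            raise TypeError('not a pickled %s' % cls.__name__)
        return obj

class ToyResults(Saveable):
    """Hypothetical stand-in for a Marvin Results-like object."""
    def __init__(self, rows):
        self.rows = rows

# round-trip demo in a temporary directory
path = os.path.join(tempfile.mkdtemp(), 'results.mpf')
ToyResults([('8485-1901', 0.05)]).save(path)
restored = ToyResults.restore(path)
```

Making `restore` a classmethod, as Marvin does, means the caller never needs an existing instance, and the `isinstance` check guards against restoring a pickle of the wrong type.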
keras-team/autokeras
docs/ipynb/timeseries_forecaster.ipynb
apache-2.0
[ "!pip install autokeras\n\n\nimport pandas as pd\nimport tensorflow as tf\n\nimport autokeras as ak\n", "To make this tutorial easy to follow, we use the UCI Airquality dataset, and try to\nforecast the AH value at the different timesteps. Some basic preprocessing has also\nbeen performed on the dataset as it required cleanup.\nA Simple Example\nThe first step is to prepare your data. Here we use the UCI Airquality\ndataset as an example.", "dataset = tf.keras.utils.get_file(\n fname=\"AirQualityUCI.csv\",\n origin=\"https://archive.ics.uci.edu/ml/machine-learning-databases/00360/\"\n \"AirQualityUCI.zip\",\n extract=True,\n)\n\ndataset = pd.read_csv(dataset, sep=\";\")\ndataset = dataset[dataset.columns[:-2]]\ndataset = dataset.dropna()\ndataset = dataset.replace(\",\", \".\", regex=True)\n\nval_split = int(len(dataset) * 0.7)\ndata_train = dataset[:val_split]\nvalidation_data = dataset[val_split:]\n\ndata_x = data_train[\n [\n \"CO(GT)\",\n \"PT08.S1(CO)\",\n \"NMHC(GT)\",\n \"C6H6(GT)\",\n \"PT08.S2(NMHC)\",\n \"NOx(GT)\",\n \"PT08.S3(NOx)\",\n \"NO2(GT)\",\n \"PT08.S4(NO2)\",\n \"PT08.S5(O3)\",\n \"T\",\n \"RH\",\n ]\n].astype(\"float64\")\n\ndata_x_val = validation_data[\n [\n \"CO(GT)\",\n \"PT08.S1(CO)\",\n \"NMHC(GT)\",\n \"C6H6(GT)\",\n \"PT08.S2(NMHC)\",\n \"NOx(GT)\",\n \"PT08.S3(NOx)\",\n \"NO2(GT)\",\n \"PT08.S4(NO2)\",\n \"PT08.S5(O3)\",\n \"T\",\n \"RH\",\n ]\n].astype(\"float64\")\n\n# Data with train data and the unseen data from subsequent time steps.\ndata_x_test = dataset[\n [\n \"CO(GT)\",\n \"PT08.S1(CO)\",\n \"NMHC(GT)\",\n \"C6H6(GT)\",\n \"PT08.S2(NMHC)\",\n \"NOx(GT)\",\n \"PT08.S3(NOx)\",\n \"NO2(GT)\",\n \"PT08.S4(NO2)\",\n \"PT08.S5(O3)\",\n \"T\",\n \"RH\",\n ]\n].astype(\"float64\")\n\ndata_y = data_train[\"AH\"].astype(\"float64\")\n\ndata_y_val = validation_data[\"AH\"].astype(\"float64\")\n\nprint(data_x.shape) # (6549, 12)\nprint(data_y.shape) # (6549,)\n", "The second step is to run the TimeSeriesForecaster.\nAs a quick demo, 
we set epochs to 10.\nYou can also leave the epochs unspecified for an adaptive number of epochs.", "predict_from = 1\npredict_until = 10\nlookback = 3\nclf = ak.TimeseriesForecaster(\n lookback=lookback,\n predict_from=predict_from,\n predict_until=predict_until,\n max_trials=1,\n objective=\"val_loss\",\n)\n# Train the TimeSeriesForecaster with train data\nclf.fit(\n x=data_x,\n y=data_y,\n validation_data=(data_x_val, data_y_val),\n batch_size=32,\n epochs=10,\n)\n# Predict with the best model(includes original training data).\npredictions = clf.predict(data_x_test)\nprint(predictions.shape)\n# Evaluate the best model with testing data.\nprint(clf.evaluate(data_x_val, data_y_val))\n" ]
[ "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.17/_downloads/4c66a907fef8e4e049497d46de605e3a/plot_define_target_events.ipynb
bsd-3-clause
[ "%matplotlib inline", "============================================================\nDefine target events based on time lag, plot evoked response\n============================================================\nThis script shows how to define higher order events based on\ntime lag between reference and target events. For\nillustration, we will put face stimuli presented into two\nclasses, that is 1) followed by an early button press\n(within 590 milliseconds) and 2) followed by a late button\npress (later than 590 milliseconds). Finally, we will\nvisualize the evoked responses to both 'quickly-processed'\nand 'slowly-processed' face stimuli.", "# Authors: Denis Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io\nfrom mne.event import define_target_events\nfrom mne.datasets import sample\nimport matplotlib.pyplot as plt\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + STI 014 - bad channels (modify to your needs)\ninclude = [] # or stim channels ['STI 014']\nraw.info['bads'] += ['EEG 053'] # bads\n\n# pick MEG channels\npicks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=True,\n include=include, exclude='bads')", "Find stimulus event followed by quick button presses", "reference_id = 5 # presentation of a smiley face\ntarget_id = 32 # button press\nsfreq = raw.info['sfreq'] # sampling rate\ntmin = 0.1 # trials leading to very early responses will be rejected\ntmax = 0.59 # ignore face stimuli followed by button press later than 590 ms\nnew_id = 42 # the new event id for a hit. 
If None, reference_id is used.\nfill_na = 99 # the fill value for misses\n\nevents_, lag = define_target_events(events, reference_id, target_id,\n sfreq, tmin, tmax, new_id, fill_na)\n\nprint(events_) # The 99 indicates missing or too late button presses\n\n# besides the events also the lag between target and reference is returned\n# this could e.g. be used as parametric regressor in subsequent analyses.\n\nprint(lag[lag != fill_na]) # lag in milliseconds\n\n# #############################################################################\n# Construct epochs\n\ntmin_ = -0.2\ntmax_ = 0.4\nevent_id = dict(early=new_id, late=fill_na)\n\nepochs = mne.Epochs(raw, events_, event_id, tmin_,\n tmax_, picks=picks, baseline=(None, 0),\n reject=dict(mag=4e-12))\n\n# average epochs and get an Evoked dataset.\n\nearly, late = [epochs[k].average() for k in event_id]", "View evoked response", "times = 1e3 * epochs.times # time in milliseconds\ntitle = 'Evoked response followed by %s button press'\n\nfig, axes = plt.subplots(2, 1)\nearly.plot(axes=axes[0], time_unit='s')\naxes[0].set(title=title % 'late', ylabel='Evoked field (fT)')\nlate.plot(axes=axes[1], time_unit='s')\naxes[1].set(title=title % 'early', ylabel='Evoked field (fT)')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jserenson/Python_Bootcamp
Objects and Data Structures Assessment Test-Solution.ipynb
gpl-3.0
[ "Objects and Data Structures Assessment Test\nTest your knowledge.\n Answer the following questions \nWrite a brief description of all the following Object Types and Data Structures we've learned about: \nFor the full answers, review the Jupyter notebook introductions of each topic!\nNumbers\nStrings\nLists\nTuples\nDictionaries\nNumbers\nWrite an equation that uses multiplication, division, an exponent, addition, and subtraction that is equal to 100.25.\nHint: This is just to test your memory of the basic arithmetic commands, work backwards from 100.25", "# Your answer is probably different\n(20000 - (10 ** 2) / 12 * 34) - 19627.75", "Explain what the cell below will produce and why. Can you change it so the answer is correct?", "2/3", "Answer: Because Python 2 performs classic division for integers. Use floats to perform true division. For example:\n2.0/3\nAnswer these 3 questions without typing code. Then type code to check your answer.\nWhat is the value of the expression 4 * (6 + 5)\n\nWhat is the value of the expression 4 * 6 + 5\n\nWhat is the value of the expression 4 + 6 * 5", "4 * (6 + 5)\n\n4 * 6 + 5 \n\n4 + 6 * 5 ", "What is the type of the result of the expression 3 + 1.5 + 4?\nAnswer: Floating Point Number\nWhat would you use to find a number’s square root, as well as its square?", "100 ** 0.5\n\n10 ** 2", "Strings\nGiven the string 'hello' give an index command that returns 'e'. 
Use the code below:", "s = 'hello'\n# Print out 'e' using indexing\ns[1]", "Reverse the string 'hello' using indexing:", "s ='hello'\n\n# Reverse the string using indexing\n\ns[::-1]", "Given the string hello, give two methods of producing the letter 'o' using indexing.", "s ='hello'\n\n# Print out the letter 'o'\n\ns[-1]\n\ns[4]", "Lists\nBuild this list [0,0,0] two separate ways.", "#Method 1\n[0]*3\n\n#Method 2\nl = [0,0,0]\nl", "Reassign 'hello' to say 'goodbye' in this nested list:", "l = [1,2,[3,4,'hello']]\n\nl[2][2] = 'goodbye'\n\nl", "Sort the list below:", "l = [5,3,4,6,1]\n\n#Method 1\nsorted(l)\n\n#Method 2\nl.sort()\nl", "Dictionaries\nUsing keys and indexing, grab the 'hello' from the following dictionaries:", "d = {'simple_key':'hello'}\n# Grab 'hello'\n\nd['simple_key']\n\nd = {'k1':{'k2':'hello'}}\n# Grab 'hello'\n\nd['k1']['k2']\n\n# Getting a little trickier\nd = {'k1':[{'nest_key':['this is deep',['hello']]}]}\n\n# This was harder than I expected...\nd['k1'][0]['nest_key'][1][0]\n\n# This will be hard and annoying!\nd = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]}\n\n# Phew\nd['k1'][2]['k2'][1]['tough'][2][0]", "Can you sort a dictionary? Why or why not?\nAnswer: No! Because normal dictionaries are mappings, not sequences. 
\nTuples\nWhat is the major difference between tuples and lists?\nTuples are immutable!\nHow do you create a tuple?", "t = (1,2,3)", "Sets\nWhat is unique about a set?\nAnswer: They don't allow for duplicate items!\nUse a set to find the unique values of the list below:", "l = [1,2,2,33,4,4,11,22,3,3,2]\n\nset(l)", "Booleans\nFor the following quiz questions, we will get a preview of comparison operators:\n<table class=\"table table-bordered\">\n<tr>\n<th style=\"width:10%\">Operator</th><th style=\"width:45%\">Description</th><th>Example</th>\n</tr>\n<tr>\n<td>==</td>\n<td>If the values of two operands are equal, then the condition becomes true.</td>\n<td> (a == b) is not true.</td>\n</tr>\n<tr>\n<td>!=</td>\n<td>If values of two operands are not equal, then condition becomes true.</td>\n<td> (a != b) is true.</td>\n</tr>\n<tr>\n<td>&lt;&gt;</td>\n<td>If values of two operands are not equal, then condition becomes true.</td>\n<td> (a &lt;&gt; b) is true. This is similar to != operator.</td>\n</tr>\n<tr>\n<td>&gt;</td>\n<td>If the value of left operand is greater than the value of right operand, then condition becomes true.</td>\n<td> (a &gt; b) is not true.</td>\n</tr>\n<tr>\n<td>&lt;</td>\n<td>If the value of left operand is less than the value of right operand, then condition becomes true.</td>\n<td> (a &lt; b) is true.</td>\n</tr>\n<tr>\n<td>&gt;=</td>\n<td>If the value of left operand is greater than or equal to the value of right operand, then condition becomes true.</td>\n<td> (a &gt;= b) is not true. </td>\n</tr>\n<tr>\n<td>&lt;=</td>\n<td>If the value of left operand is less than or equal to the value of right operand, then condition becomes true.</td>\n<td> (a &lt;= b) is true. 
</td>\n</tr>\n</table>\n\nWhat will be the resulting Boolean of the following pieces of code (answer first, then check by typing it in!)", "# Answer before running cell\n2 > 3\n\n# Answer before running cell\n3 <= 2\n\n# Answer before running cell\n3 == 2.0\n\n# Answer before running cell\n3.0 == 3\n\n# Answer before running cell\n4**0.5 != 2", "Final Question: What is the Boolean output of the cell block below?", "# two nested lists\nl_one = [1,2,[3,4]]\nl_two = [1,2,{'k1':4}]\n\n#True or False?\nl_one[2][0] >= l_two[2]['k1']", "Great Job on your first assessment!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
grigorisg9gr/menpo-notebooks
menpowidgets/Custom Widgets/Widgets Tools.ipynb
bsd-3-clause
[ "Widgets Tools\nHerein, we present MenpoWidgets's basic widget tools that implement lower level widget functionalities, such as colour selection, zoom options, axes options, etc. These are the main ingredients in order to synthesize higher-level widget classes, such as the ones presented in Widgets Components.ipynb. All the widgets of this category live in menpowidgets.tools. \nBelow we present the functionalities of each one of them separately. Specifically we split this notebook in the following subsections:\n\nBasics\nMenpo Logo\nList Definition and Slicing\nColour Selection\nIndex Selection\nZoom\nImage Options\nLine Options\nMarker Options\nNumbering Options\nAxes Options\nLegend Options\nGrid Options\nHOG, DSIFT, Daisy, LBP, IGO Options\n\n<a name=\"sec:basics\"></a>1. Basics\nAs explained in the Introduction.ipynb notebook, all the widgets presented here are subclasses of menpo.abstract.MenpoWidget, thus they follow the same rules, which are:\n\nThey expect as input the initial options, as well as the rendering callback function.\nThey implement add_render_function(), remove_render_function(), replace_render_function() and call_render_function().\nThey implement set_widget_state(), which updates the widget state with a new set of options.\nThey implement style() which takes a set of options that change the style of the widget, such as font-related options, border-related options, etc.\n\nBefore presenting each widget separately, let's first import the things that are required.", "from menpowidgets.tools import (LogoWidget, ListWidget, SlicingCommandWidget, ColourSelectionWidget, \n IndexButtonsWidget, IndexSliderWidget, ZoomOneScaleWidget, ZoomTwoScalesWidget, \n ImageOptionsWidget, LineOptionsWidget, MarkerOptionsWidget, NumberingOptionsWidget, \n AxesLimitsWidget, AxesTicksWidget, AxesOptionsWidget, LegendOptionsWidget, \n GridOptionsWidget, HOGOptionsWidget, DSIFTOptionsWidget, DaisyOptionsWidget, \n LBPOptionsWidget, IGOOptionsWidget)\nfrom 
menpowidgets.style import map_styles_to_hex_colours", "Let us also define a generic print function that will be the callback trigger when the selected_values trait of all the widgets changes.\nThe function must have a single argument, which will be a dict with the following keys:\n* 'name': The name of the trait that is monitored and triggers the callback. In the case of a MenpoWidget subclass, this is always 'selected_values'.\n* 'type': The type of event that happens on the trait. In the case of a MenpoWidget subclass, this is always 'change'.\n* 'new': The currently selected value attached to selected_values.\n* 'old': The previous value of selected_values.\n* 'owner': Pointer to the widget object.\nConsequently, the selected values of a widget object (e.g. wid) can be retrieved in any of the following 3 equivalent ways:\n1. wid.selected_values\n2. change['new']\n3. change['owner'].selected_values\nFor this notebook, we choose the second way which is independent of the widget object.", "from menpo.visualize import print_dynamic\n\ndef render_function(change):\n print(change['new'])", "<a name=\"sec:logo\"></a>2. Menpo Logo\nThis is a simple widget that can be used for embedding an image into an ipywidgets widget are using the ipywidgets.Image class.", "from menpowidgets.tools import LogoWidget\nLogoWidget(style='danger')\n", "<a name=\"sec:list\"></a>3. List Definition and Slicing\nMenpoWidgets has a widget for defining a list of numbers. The widget is smart enough to accept any valid python command, such as \npython\n'range(10)', '[1, 2, 3]', '10'\nand complain about syntactic mistakes. It can be defined to expect either int or float numbers and has an optional example as guide.", "list_cmd = [0, 1, 2]\n \nwid = ListWidget(list_cmd, mode='int', description='List:', render_function=render_function, example_visible=True)\nwid", "Note that you need to press Enter in order to pass a new value into the textbox. 
Also, try typing a wrong command, such as \npython\n'10, 20,,', '10, a, None'\nto see the corresponding error messages.\nThe styling of the widget can be changed using the style() method.", "wid.style(box_style='danger', font_size=15)", "The state of the widget can be updated with the set_widget_state() method. Note that since allow_callback=False, nothing gets printed after running the command, even though selected_values is updated.", "wid.set_widget_state([20, 16], allow_callback=False)", "Similar to the list widget, MenpoWidgets has a widget for defining a command for slicing a list (or numpy.array). Commands can have any valid Python syntax, such as\npython\n':3:', '::2', '1:2:10', '-1::', '0, 3, 7', 'range(5)'\nThe widget gets as argument a dict with the initial slicing command as well as the length of the list.", "# Initial options\nslice_cmd = {'command': ':3', \n 'length': 10}\n\n# Create widget\nwid = SlicingCommandWidget(slice_cmd, description='Command:', render_function=render_function, \n example_visible=True, orientation='horizontal')\n\n# Display widget\nwid", "Note that by defining a single int number, an ipywidgets.IntSlider appears that allows selecting the index. Similarly, by inserting any slicing command with a constant step, an ipywidgets.IntRangeSlider appears. The sliders are disabled when inserting a slicing command with a non-constant step. The placement of the sliders with respect to the textbox is controlled by the orientation argument.\nAdditionally, similar to the ListWidget, the widget is smart enough to detect any syntactic errors and print a relevant message.\nThe styling of the widget can be changed as", "wid.style(border_visible=True, border_style='dashed', font_weight='bold')", "To update the widget's state, you need to pass in a new dict of options, as", "wid.set_widget_state({'command': ':40', 'length': 40}, allow_callback=True)", "<a name=\"sec:colour\"></a>4. 
Colour Selection\nMenpoWidgets is using the standard Java colour picker defined in ipywidgets.ColorPicker. However, ColourSelectionWidget has the additional functionality to select colours for a set of objects. Thus the widget constructor gets a list of colours (either the colour name str or the RGB values), as well as the labels list that has the names of the objects.", "wid = ColourSelectionWidget([[255, 38, 31], 'blue', 'green'], labels=['a', 'b', 'c'], \n render_function=render_function)\n\n# Set styling\nwid.style(box_style='warning', apply_to_all_style='info', label_colour='black', \n label_background_colour=map_styles_to_hex_colours('info', background=True), font_weight='bold')\n\n# Display widget\nwid", "The Apply to all button sets the currently selected colour to all the labels.\nThe colours can also be updated with the set_colours() function as", "wid.set_colours(['red', 'orange', 'pink'], allow_callback=True)", "In case there is only one label, defined either with a list of length 1 or by setting labels=None, then the drop-down menu to select object does not appear. For example, let's update the state of the widget:", "wid.set_widget_state(['red'], None)", "<a name=\"sec:index\"></a>5. Index Selection\nThe following two widgets give the ability to select a single integer number from a specified range. Thus, they can be seen as index selectors. The user must pass in a dict that defines the minimum, maximum and step of the allowed range, as well as the initially selected index. 
Then the selected_values trait always keeps track of the selected index, thus it has int type.\nAn index selection widget, where the selector is an ipywidgets.IntSlider, can be created as", "# Initial options\nindex = {'min': 0, \n 'max': 100, \n 'step': 1, \n 'index': 10}\n\n# Create widget\nwid = IndexSliderWidget(index, description='Index: ', render_function=render_function, continuous_update=False)\n\n# Set styling\nwid.style(box_style='danger', slider_handle_colour=map_styles_to_hex_colours('danger'), \n slider_bar_colour=map_styles_to_hex_colours('danger'))\n\n# Display widget\nwid", "As with all widgets, the state can be updated as:", "wid.set_widget_state({'min': 10, 'max': 500, 'step': 2, 'index': 50}, allow_callback=True)", "An index selection widget where the selection can be performed with -/+ (previous/next) buttons can be created as:", "index = {'min': 0, 'max': 100, 'step': 1, 'index': 10}\n\nwid = IndexButtonsWidget(index, render_function=render_function, loop_enabled=False, text_editable=True)\nwid", "Note that since text_editable is True, you can actually edit the index directly from the textbox. Additionally, setting loop_enabled=True means that pressing '+' when the textbox is at the last index takes you back to the minimum index.\nLet's update the styling of the widget:", "wid.style(box_style='danger', plus_style='success', minus_style='danger', text_colour='blue', \n text_background_colour=map_styles_to_hex_colours('info', background=True))", "Let's also update its state with a new set of options:", "wid.set_widget_state({'min': 20, 'max': 500, 'step': 2, 'index': 50}, loop_enabled=True, text_editable=True, \n allow_callback=True)", "<a name=\"sec:zoom\"></a>6. 
Both are using ipywidgets.FloatSLider and get as input a dict with the minimum and maximum values, the step of the slider(s) and the initial zoom value.\nThe first one defines a single zoom float, as", "# Initial options\nzoom_options = {'min': 0.1, \n 'max': 4., \n 'step': 0.05, \n 'zoom': 1.}\n\n# Create widget\nwid = ZoomOneScaleWidget(zoom_options, render_function=render_function)\n\n# Set styling\nwid.style(box_style='danger')\nwid.zoom_slider.background_color = map_styles_to_hex_colours('info')\nwid.zoom_slider.slider_color = map_styles_to_hex_colours('danger')\n\n# Display widget\nwid", "and its state can be updated as:", "wid.set_widget_state({'zoom': 0.5, 'min': 0., 'max': 4., 'step': 0.2}, allow_callback=True)", "The second one defines two zoom values that are intended to control the height and width of a figure.", "# Initial options\nzoom_options = {'min': 0.1, \n 'max': 4., \n 'step': 0.1, \n 'zoom': [1., 1.], \n 'lock_aspect_ratio': False}\n\n# Create widget\nwid = ZoomTwoScalesWidget(zoom_options, render_function=render_function, continuous_update=True)\n\n# Set styling\nwid.style(box_style='danger')\n\n# Display widget\nwid", "Note that the sliders can be linkedd in order to preserve the aspect ratio of the figure. The state can be updated as:", "zoom_options = {'min': 0.5, 'max': 10., 'step': 0.3, 'zoom': [2., 3.]}\nwid.set_widget_state(zoom_options, allow_callback=True)", "<a name=\"sec:image\"></a>7. Image Options\nThis is a widget for selecting options related to rendering an image. It defines the colourmap, the alpha value for transparency as well as the interpolation. 
Specifically:", "# Initial options\nimage_options = {'alpha': 1., \n 'interpolation': 'bilinear', \n 'cmap_name': None}\n\n# Create widget\nwid = ImageOptionsWidget(image_options, render_function=render_function)\n\n# Set styling\nwid.style(box_style='success', padding=10, border_visible=True, border_radius=45)\n\n# Display widget\nwid", "The widget can be updated with a new dict of options as:", "wid.set_widget_state({'alpha': 0.8, 'interpolation': 'none', 'cmap_name': 'gray'}, allow_callback=True)", "<a name=\"sec:line\"></a>8. Line Options\nThe following widget allows the selection of options for rendering line objects. The initial options are passed in as a dict and control the width, style and colour of the lines. Note that a different colour can be defined for different objects using the labels argument.", "# Initial options\nline_options = {'render_lines': True, \n 'line_width': 1, \n 'line_colour': ['blue', 'red'], \n 'line_style': '-'}\n\n# Create widget\nwid = LineOptionsWidget(line_options, render_function=render_function, \n labels=['menpo', 'widgets'])\n\n# Set styling\nwid.style(box_style='danger', padding=6)\n\n# Display widget\nwid", "The Render lines tick box also controls the visibility of the rest of the options. So by updating the state with render_lines=False, the options disappear.", "wid.set_widget_state({'render_lines': False, 'line_width': 5, 'line_colour': ['purple'], 'line_style': '--'}, \n allow_callback=True, labels=None)", "<a name=\"sec:marker\"></a>9. Marker Options\nSimilar to the LineOptionsWidget, this widget allows selecting options for rendering markers. 
The options define the edge width, face colour, edge colour, style and size of the markers.", "# Initial options\nmarker_options = {'render_markers': True, \n 'marker_size': 20, \n 'marker_face_colour': ['red', 'green'], \n 'marker_edge_colour': ['black', 'blue'], \n 'marker_style': 'o', \n 'marker_edge_width': 1}\n\n# Create widget\nwid = MarkerOptionsWidget(marker_options, render_function=render_function, \n labels=['a', 'b'])\n\n# Set styling\nwid.style(box_style='info', padding=6)\n\n# Display widget\nwid\n\nwid.set_widget_state({'render_markers': True, 'marker_size': 20, 'marker_face_colour': ['red'], \n 'marker_edge_colour': ['black'], 'marker_style': 'o', 'marker_edge_width': 1}, \n labels=None, allow_callback=True)", "<a name=\"sec:numbering\"></a>10. Numbering Options\nThe NumberingOptionsWidget is used in case you want to render some numbers next to the plotted points.", "# Initial options\nnumbers_options = {'render_numbering': True, \n 'numbers_font_name': 'serif', \n 'numbers_font_size': 10, \n 'numbers_font_style': 'normal', \n 'numbers_font_weight': 'normal', \n 'numbers_font_colour': ['black'], \n 'numbers_horizontal_align': 'center', \n 'numbers_vertical_align': 'bottom'}\n\n# Create widget\nwid = NumberingOptionsWidget(numbers_options, render_function=render_function)\n\n# Set styling\nwid.style(box_style='success', border_visible=True, border_colour='black', border_style='solid', border_width=1, \n border_radius=0, padding=10, margin=10)\n\n# Display widget\nwid", "Of course the state of the widget can be updated as:", "wid.set_widget_state({'render_numbering': True, 'numbers_font_name': 'serif', 'numbers_font_size': 10, \n 'numbers_font_style': 'normal', 'numbers_font_weight': 'normal', \n 'numbers_font_colour': ['green'], 'numbers_horizontal_align': 'center', \n 'numbers_vertical_align': 'bottom'}, allow_callback=True)", "<a name=\"sec:axes\"></a>11. 
Axes Options\nBefore presenting the AxesOptionsWidget, let's first see two widgets that are used as its basic components for selecting the axes limits as well as the axes ticks.\nAxesLimitsWidget has 3 basic functions per axis:\n* auto: Allows matplotlib to automatically set the limits.\n* percentage: It expects a float that defines the percentage of padding to allow around the rendered object's region.\n* range: It expects two numbers that define the minimum and maximum values of the limits.", "# Create widget\nwid = AxesLimitsWidget(axes_x_limits=[0, 10], axes_y_limits=0.1, render_function=render_function)\n\n# Set styling\nwid.style(box_style='danger')\n\n# Display widget\nwid", "Note that the percentage mode is accompanied by a ListWidget that expects a single float, whereas the range mode invokes a ListWidget that expects two float numbers. The state of the widget can be changed as:", "wid.set_widget_state([-200, 200], None, allow_callback=True)", "On the other hand, AxesTicksWidget has two functionalities per axis:\n* auto: Allows matplotlib to automatically set the ticks.\n* list: Enables a ListWidget to select the ticks.", "# Initial options\naxes_ticks = {'x': [], \n 'y': [10., 20., 30.]}\n\n# Create widget\nwid = AxesTicksWidget(axes_ticks, render_function=render_function)\n\n# Set styling\nwid.style(box_style='danger')\n\n# Display widget\nwid", "The state can be updated as:", "wid.set_widget_state({'x': list(range(5)), 'y': None}, allow_callback=True)", "The AxesOptionsWidget involves the AxesLimitsWidget and AxesTicksWidget widgets and also allows the selection of font-related options. 
As always, the initial options are provided in a dict:", "# Initial options\naxes_options = {'render_axes': True, \n 'axes_font_name': 'serif', \n 'axes_font_size': 10, \n 'axes_font_style': 'normal', \n 'axes_font_weight': 'normal', \n 'axes_x_limits': None, \n 'axes_y_limits': None, \n 'axes_x_ticks': [0, 100], \n 'axes_y_ticks': None}\n\n# Create widget\nwid = AxesOptionsWidget(axes_options, render_function=render_function)\n\n# Set styling\nwid.style(box_style='warning', padding=6, border_visible=True, border_colour=map_styles_to_hex_colours('warning'))\n\n# Display widget\nwid", "The state of the widget can be updated as:", "axes_options = {'render_axes': True, 'axes_font_name': 'serif', \n 'axes_font_size': 10, 'axes_font_style': 'normal', 'axes_font_weight': 'normal', \n 'axes_x_limits': [0., 0.05], 'axes_y_limits': 0.1, 'axes_x_ticks': [0, 100], 'axes_y_ticks': None}\nwid.set_widget_state(axes_options, allow_callback=True)", "<a name=\"sec:legend\"></a>12. Legend Options\nLegendOptionsWidget allows controlling the (many) options for rendering the legend of a figure.", "# Initial options\nlegend_options = {'render_legend': True,\n 'legend_title': '',\n 'legend_font_name': 'serif',\n 'legend_font_style': 'normal',\n 'legend_font_size': 10,\n 'legend_font_weight': 'normal',\n 'legend_marker_scale': 1.,\n 'legend_location': 2,\n 'legend_bbox_to_anchor': (1.05, 1.),\n 'legend_border_axes_pad': 1.,\n 'legend_n_columns': 1,\n 'legend_horizontal_spacing': 1.,\n 'legend_vertical_spacing': 1.,\n 'legend_border': True,\n 'legend_border_padding': 0.5,\n 'legend_shadow': False,\n 'legend_rounded_corners': True}\n\n# Create widget\nwid = LegendOptionsWidget(legend_options, render_function=render_function)\n\n# Set styling\nwid.style(border_visible=True, font_size=15)\n\n# Display widget\nwid\n\nlegend_options = {'render_legend': True, 'legend_title': 'asd', 'legend_font_name': 'sans-serif', \n 'legend_font_style': 'normal', 'legend_font_size': 60, 'legend_font_weight': 
'normal',\n 'legend_marker_scale': 2., 'legend_location': 7, 'legend_bbox_to_anchor': (1.05, 1.),\n 'legend_border_axes_pad': 1., 'legend_n_columns': 2, 'legend_horizontal_spacing': 3.,\n 'legend_vertical_spacing': 7., 'legend_border': False,\n 'legend_border_padding': 0.5, 'legend_shadow': True, 'legend_rounded_corners': True}\nwid.set_widget_state(legend_options, allow_callback=True)", "<a name=\"sec:grid\"></a>13. Grid Options\nThe following simple widget controls the rendering of the grid lines of a plot, their style and width.", "# Initial options\ngrid_options = {'render_grid': True, \n 'grid_line_width': 1, \n 'grid_line_style': '-'}\n\n# Create widget\nwid = GridOptionsWidget(grid_options, render_function=render_function)\n\n# Set styling\nwid.style(box_style='warning')\n\n# Display widget\nwid\n\nwid.set_widget_state({'render_grid': True, 'grid_line_width': 10, 'grid_line_style': ':'})", "<a name=\"sec:features\"></a>14. HOG, DSIFT, Daisy, LBP, IGO Options\nThe following widgets allow to select options regarding HOG, DSIFT, Daisy, LBP and IGO features.", "# Initial options\nhog_options = {'mode': 'dense',\n 'algorithm': 'dalaltriggs',\n 'num_bins': 9,\n 'cell_size': 8,\n 'block_size': 2,\n 'signed_gradient': True,\n 'l2_norm_clip': 0.2,\n 'window_height': 1,\n 'window_width': 1,\n 'window_unit': 'blocks',\n 'window_step_vertical': 1,\n 'window_step_horizontal': 1,\n 'window_step_unit': 'pixels',\n 'padding': True}\n\n# Create widget\nwid = HOGOptionsWidget(hog_options, render_function=render_function)\n\n# Set styling\nwid.style('info')\n\n# Display widget\nwid\n\n# Initial options\ndsift_options = {'window_step_horizontal': 1,\n 'window_step_vertical': 1,\n 'num_bins_horizontal': 2,\n 'num_bins_vertical': 2,\n 'num_or_bins': 9,\n 'cell_size_horizontal': 6,\n 'cell_size_vertical': 6,\n 'fast': True}\n\n# Create widget\nwid = DSIFTOptionsWidget(dsift_options, render_function=render_function)\n\n# Set styling\nwid.style('success')\n\n# Display 
widget\nwid\n\n# Initial options\ndaisy_options = {'step': 1,\n 'radius': 15,\n 'rings': 2,\n 'histograms': 2,\n 'orientations': 8,\n 'normalization': 'l1',\n 'sigmas': None,\n 'ring_radii': None}\n \n# Create widget\nwid = DaisyOptionsWidget(daisy_options, render_function=render_function)\n\n# Set styling\nwid.style('danger')\n\n# Display widget\nwid\n\n# Initial options\nlbp_options = {'radius': list(range(1, 5)),\n 'samples': [8] * 4,\n 'mapping_type': 'u2',\n 'window_step_vertical': 1,\n 'window_step_horizontal': 1,\n 'window_step_unit': 'pixels',\n 'padding': True}\n \n# Create widget\nwid = LBPOptionsWidget(lbp_options, render_function=render_function)\n\n# Set styling\nwid.style(box_style='warning')\n\n# Display widget\nwid\n\nwid = IGOOptionsWidget({'double_angles': True}, render_function=render_function)\nwid" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DhashS/Olin-Complexity-Final-Project
reports/01_exact_algorithms.ipynb
gpl-3.0
[ "Exact algorithms\n\nDescription\nSolving the travelling salesman problem to resolve the lowest weight tour exactly is computationally tractable only for graphs with a low node count. Often, getting the actual perfect solution is not required, but in the case it is, two algorithms exist.\nBrute-force\nThe brute-force search is the simplest algorithm. Consider all permutations of the nodes of the graph, which is the same as all possible tours of a graph (if it is complete). Compute the cost of all of them, and choose the one with the minimum cost. Clearly, since we're evaluating all permutations of a graph's nodes, its complexity is $O(n!)$.\n\nA Python implementation of a brute-force search is below", "# %load -s brute_force algs.py\ndef brute_force(p, perf=False):\n import itertools as it\n #Generate all possible tours (complete graph)\n tours = list(it.permutations(p.nodes())) #O(V!)\n costs = []\n \n if not perf:\n cost_data = pd.DataFrame(columns=[\"$N$\", \"cost\"])\n \n #Evaluate all tours\n for tour in tours:\n cost = 0\n for n1, n2 in zip(tour, tour[1:]): #O(V)\n cost += p[n1][n2]['weight']\n costs.append(cost)\n \n if not perf:\n cost_data = cost_data.append({\"$N$\" : len(p.nodes()),\n \"cost\" : min(costs),\n \"opt_tour\" : tours[np.argmin(costs)]},\n ignore_index = True)\n return (cost_data, pd.DataFrame()) \n \n #Choose tour with lowest cost\n return tours[np.argmin(costs)]\n", "If we visualize how the algorithm progresses, we can pre-emptively stop execution of the tour evaluation. Since the order of the permutations is deterministic, we can observe that the cost monotonically decreases.\nThis monotonic decrease is a result of the min function we call on costs. In actuality, since we're evaluating all tours, and only storing the smallest one (a reduce), we make no assumptions about the structure of the graph. 
One can see that all edge evaluations are separate from one another, so any given tour is as likely to be the lowest-weight one as the last tour evaluated.\nLet's set up our visualization, creating a random Euclidean 2D graph, and seeing how it performs as we vary $N$, the tour at which it stops evaluating. If we choose the size of the graph to be 8, solving it exactly is feasible. Any larger, and this notebook becomes computationally intractable.", "from algs import brute_force_N, brute_force\nfrom parsers import TSP\nfrom graphgen import EUC_2D\nfrom parstats import get_stats, dist_across_cost, scatter_vis\nfrom itertools import permutations\n\ntsp_prob = TSP('../data/a280.tsp')\ntsp_prob.graph = EUC_2D(6)\ntsp_prob.spec = dict(comment=\"Random euclidean graph\",\n dimension=11,\n edge_weight_type=\"EUC_2D\",\n name=\"Random cities\")\n\n%%bash\n./cluster.sh 8\n\n@get_stats(name=\"Brute force, monotonic reduction\",\n data=tsp_prob,\n plots=[scatter_vis])\ndef vis_brute(*args, **kwargs):\n return brute_force_N(*args, **kwargs)\n\nvis_brute(range(2, len(list(permutations(tsp_prob.graph.nodes())))));", "If we tweak the code slightly, we can see what it's doing without a reduce step:", "# %load -s brute_force_N_no_reduce algs.py\ndef brute_force_N_no_reduce(p, n, perf=False):\n import itertools as it\n #Generate all possible tours (complete graph)\n tours = list(it.permutations(p.nodes())) #O(V!)\n costs = []\n \n if not perf:\n cost_data = pd.DataFrame(columns=[\"$N$\", \"cost\", \"opt_cost\"])\n \n #Evaluate all tours\n for tour in tours[:n]:\n cost = 0\n for n1, n2 in zip(tour, tour[1:]): #O(V)\n cost += p[n1][n2]['weight']\n costs.append(cost)\n \n if not perf:\n cost_data = cost_data.append({\"$N$\" : n,\n \"cost\" : costs[-1],\n \"opt_cost\" : min(costs)},\n ignore_index = True)\n return (cost_data, pd.DataFrame())\n \n #Choose tour with lowest cost\n return tours[np.argmin(costs)]\n\n\n@get_stats(name=\"Brute force, no reduce\",\n data=tsp_prob,\n
plots=[scatter_vis, dist_across_cost])\ndef vis_brute_no_reduce(*args, **kwargs):\n return brute_force_N_no_reduce(*args, **kwargs)\n\ncost_stats, _ = vis_brute_no_reduce(range(2, len(list(permutations(tsp_prob.graph.nodes())))))", "Given this is a randomly distributed dataset, it makes sense that the distribution across costs looks like a Gaussian. Let's confirm by checking how correlated they are.", "from scipy.stats import pearsonr\n\npearsonr(cost_stats.cost, cost_stats.opt_cost)\n\npearsonr(cost_stats[\"$N$\"], cost_stats.cost)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
elmaso/tno-ai
aind2-cnn/cifar10-classification/cifar10_mlp.ipynb
gpl-3.0
[ "Artificial Intelligence Nanodegree\nConvolutional Neural Networks\n\nIn this notebook, we train an MLP to classify images from the CIFAR-10 database.\n1. Load CIFAR-10 Database", "import keras\nfrom keras.datasets import cifar10\n\n# load the pre-shuffled train and test data\n(x_train, y_train), (x_test, y_test) = cifar10.load_data()", "2. Visualize the First 24 Training Images", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfig = plt.figure(figsize=(20,5))\nfor i in range(36):\n ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(x_train[i]))", "3. Rescale the Images by Dividing Every Pixel in Every Image by 255", "# rescale [0,255] --> [0,1]\nx_train = x_train.astype('float32')/255\nx_test = x_test.astype('float32')/255 ", "4. Break Dataset into Training, Testing, and Validation Sets", "from keras.utils import np_utils\n\n# one-hot encode the labels\nnum_classes = len(np.unique(y_train))\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\n\n# break training set into training and validation sets\n(x_train, x_valid) = x_train[5000:], x_train[:5000]\n(y_train, y_valid) = y_train[5000:], y_train[:5000]\n\n# print shape of training set\nprint('x_train shape:', x_train.shape)\n\n# print number of training, validation, and test images\nprint(x_train.shape[0], 'train samples')\nprint(x_test.shape[0], 'test samples')\nprint(x_valid.shape[0], 'validation samples')", "5. Define the Model Architecture", "from keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten\n\n# define the model\nmodel = Sequential()\nmodel.add(Flatten(input_shape = x_train.shape[1:]))\nmodel.add(Dense(1000, activation='relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(num_classes, activation='softmax'))\n\nmodel.summary()", "6. 
Compile the Model", "# compile the model\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop', \n metrics=['accuracy'])", "7. Train the Model", "from keras.callbacks import ModelCheckpoint \n\n# train the model\ncheckpointer = ModelCheckpoint(filepath='MLP.weights.best.hdf5', verbose=1, \n save_best_only=True)\nhist = model.fit(x_train, y_train, batch_size=32, epochs=20,\n validation_data=(x_valid, y_valid), callbacks=[checkpointer], \n verbose=2, shuffle=True)", "8. Load the Model with the Best Classification Accuracy on the Validation Set", "# load the weights that yielded the best validation accuracy\nmodel.load_weights('MLP.weights.best.hdf5')", "9. Calculate Classification Accuracy on Test Set", "# evaluate and print test accuracy\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint('\\n', 'Test accuracy:', score[1])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ClementPhil/deep-learning
first-neural-network/Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. 
This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. 
Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. 
Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. 
\n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = None # signals into hidden layer\n hidden_outputs = None # signals from hidden layer\n\n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = None # signals into final output layer\n final_outputs = None # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = None # Output layer error is the difference between desired target and actual output.\n \n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = None\n \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n output_error_term = None\n hidden_error_term = None\n\n # Weight step (input to hidden)\n delta_weights_i_h += None\n # Weight step (hidden to output)\n delta_weights_h_o += None\n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += None # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += None # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = None # signals into hidden 
layer\n hidden_outputs = None # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = None # signals into final output layer\n final_outputs = None # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n
# Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1.
If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\niterations = 100\nlearning_rate = 0.1\nhidden_nodes = 2\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... 
Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JavascriptMick/deeplearning
transfer-learning/Transfer_Learning.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in 'tensorflow_vgg' directory, sdo you don't have to clone it.\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. 
Then download the parameter file using the next cell.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. 
We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n codes_batch = \n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, 
we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "labels_vecs = # Your one-hot encoded labels array here", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices.
Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "train_x, train_y = \nval_x, val_y = \ntest_x, test_y = \n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096D vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\n\nlogits = # output layer logits\ncost = # cross entropy loss\n\noptimizer = # training optimizer\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches.
Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!", "saver = tf.train.Saver()\nwith tf.Session() as sess:\n \n # TODO: Your training code here\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. 
Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
joommf/tutorial
workshops/Durham/.ipynb_checkpoints/tutorial0_first_notebook-checkpoint.ipynb
bsd-3-clause
[ "Tutorial 0 - First JOOMMF notebook\nThe goal of this tutorial is for all participants to familiarise themselves with running JOOMMF simulations in Jupyter notebook. The only thing you need to know for this tutorial is how to execute individual cells: this is done by pressing Shift + Return.\nSimple JOOMMF simulation", "import oommfc as oc\nimport discretisedfield as df\n%matplotlib inline\nprint(df.__file__)", "We create a system object and provide:\n\nHamiltonian,\ndynamics, and\nmagnetisation configuration.", "system = oc.System(name=\"first_notebook\")", "Our Hamiltonian should only contain exchange, demagnetisation, and Zeeman energy terms. We will apply the external magnetic field in the $x$ direction for the purpose of this demonstration:", "A = 1e-12 # exchange energy constant (J/m)\nH = (5e6, 0, 0) # external magnetic field in x-direction (A/m)\nsystem.hamiltonian = oc.Exchange(A=A) + oc.Demag() + oc.Zeeman(H=H)", "The dynamics of the system is governed by the LLG equation containing precession and damping terms:", "gamma = 2.211e5 # gamma parameter (m/As)\nalpha = 0.2 # Gilbert damping\nsystem.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)", "We initialise the system in positive $y$ direction, i.e. 
(0, 1, 0), which is different from the equilibrium state we expect for the external Zeeman field applied in the $x$ direction:", "L = 100e-9 # cubic sample edge length (m)\nd = 5e-9 # discretisation cell size (m)\nmesh = oc.Mesh(p1=(0, 0, 0), p2=(L, L, L), cell=(d, d, d))\n\nMs = 8e6 # saturation magnetisation (A/m)\nsystem.m = df.Field(mesh, value=(0, 1, 0), norm=Ms)", "We can check the characteristics of the system we defined by asking objects to represent themselves:", "mesh\n\nsystem.hamiltonian\n\nsystem.dynamics", "We can also visualise the current magnetisation field:", "system.m.plot_plane(\"z\");", "After the system object is created, we can minimise its energy (relax it) using the Minimisation Driver (MinDriver).", "md = oc.MinDriver()\nmd.drive(system)", "The system is now relaxed, and we can plot its slice and compute its average magnetisation.", "# centre of the system is assumed for plane to be plotted\nsystem.m.plot_plane(\"z\");\n\n# plane can be chosen manually as well\nsystem.m.plot_plane(z=10e-9);\n\nsystem.m.average", "We can see that the magnetisation is aligned along the $x$ direction, as expected, given that we applied the external magnetic field in that direction." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sassoftware/sas-viya-programming
communities/Your First CAS Connection from Python.ipynb
apache-2.0
[ "Your First CAS Connection from Python\nLet's start with a gentle introduction to the Python CAS client by doing some basic operations like creating a CAS connection and running a simple action. You'll need to have Python installed as well as the SWAT Python package from SAS, and you'll need a running CAS server.\nWe will be using Python 3 for our example. Specifically, we will be using the IPython interactive prompt (type 'ipython' rather than 'python' at your command prompt). The first thing we need to do is import SWAT and create a CAS session. We will use the name 'mycas1' for our CAS hostname and 12345 as our CAS port number. In this case, we will use username/password authentication, but other authentication mechanisms are also possible depending on your configuration.", "# Import the SWAT package which contains the CAS interface\nimport swat\n\n# Create a CAS session on mycas1 port 12345\nconn = swat.CAS('mycas1', 12345, 'username', 'password') ", "As you can see above, we have a session on the server. It has been assigned a unique session ID and a more user-friendly name. In this case, we are using the binary CAS protocol as opposed to the REST interface. We can now run CAS actions in the session. Let's begin with a simple one: listnodes.", "# Run the builtins.listnodes action\nnodes = conn.listnodes()\nnodes", "The listnodes action returns a CASResults object (which is just a subclass of Python's ordered dictionary). It contains one key ('nodelist') which holds a Pandas DataFrame. 
We can now grab that DataFrame to do further operations on it.", "# Grab the nodelist DataFrame\ndf = nodes['nodelist']\ndf", "Use DataFrame selection to subset the columns.", "roles = df[['name', 'role']]\nroles\n\n# Extract the worker nodes using a DataFrame mask\nroles[roles.role == 'worker']\n\n# Extract the controllers using a DataFrame mask\nroles[roles.role == 'controller']", "In the code above, we are doing some standard DataFrame operations using expressions to filter the DataFrame to include only worker nodes or controller nodes. Pandas DataFrames support lots of ways of slicing and dicing your data. If you aren't familiar with them, you'll want to get acquainted on the Pandas web site.\nWhen you are finished with a CAS session, it's always a good idea to clean up.", "conn.close()", "Those are the very basics of connecting to CAS, running an action, and manipulating the results on the client side. You should now be able to jump to other topics on the Python CAS client to do some more interesting work." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
solowPy/binder
notebooks/4 Solving the model.ipynb
mit
[ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport sympy as sym\n\nimport solowpy", "4. Solving the model\n4.1 Solow model as an initial value problem\nThe Solow model can be formulated as an initial value problem (IVP) as follows.\n$$ \\dot{k}(t) = sf(k(t)) - (g + n + \\delta)k(t),\\ t\\ge t_0,\\ k(t_0) = k_0 \\tag{4.1.0} $$\nThe solution to this IVP is a function $k(t)$ describing the time-path of capital stock (per unit effective labor). Our objective in this section will be to explore methods for approximating the function $k(t)$. The methods we will learn are completely general and can be used to solve any IVP. Those interested in learning more about these methods should start by reading Chapter 10 of Numerical Methods for Economists by Ken Judd before proceeding to John Butcher's excellent book entitled Numerical Methods for solving Ordinary Differential Equations.\nBefore discussing numerical methods we should stop and consider whether or not there are any special cases (i.e., combinations of parameters) for which the initial value problem defined in 4.1.0 might have an analytic solution. 
Analytic results can be very useful in building intuition about the economic mechanisms at play in a model and are invaluable for debugging code.\n4.2 Analytic methods\n4.2.1 Analytic solution for a model with Cobb-Douglas production\nThe Solow model with Cobb-Douglas production happens to have a completely general analytic solution:\n$$ k(t) = \\left[\\left(\\frac{s}{n+g+\\delta}\\right)\\left(1 - e^{-(n + g + \\delta) (1-\\alpha) t}\\right) + k_0^{1-\\alpha}e^{-(n + g + \\delta) (1-\\alpha) t}\\right]^{\\frac{1}{1-\\alpha}} \\tag{4.2.0}$$\nThis analytic result is available via the analytic_solution method of the solow.CobbDouglasModel class.", "solowpy.CobbDouglasModel.analytic_solution?", "Example: Computing the analytic trajectory\nWe can compute an analytic solution for our Solow model like so...", "# define model parameters\ncobb_douglas_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,\n 'delta': 0.05, 'alpha': 0.33}\n\n# create an instance of the solow.Model class\ncobb_douglas_model = solowpy.CobbDouglasModel(params=cobb_douglas_params)\n\n# specify some initial condition\nk0 = 0.5 * cobb_douglas_model.steady_state\n\n# grid of t values for which we want the value of k(t)\nti = np.linspace(0, 100, 10)\n\n# generate a trajectory!\ncobb_douglas_model.analytic_solution(ti, k0)", "...and we can make a plot of this solution like so...", "fig, ax = plt.subplots(1, 1, figsize=(8,6))\n\n# compute the solution\nti = np.linspace(0, 100, 1000)\nanalytic_traj = cobb_douglas_model.analytic_solution(ti, k0)\n\n# plot this trajectory\nax.plot(ti, analytic_traj[:,1], 'r-')\n\n# equilibrium value of capital stock (per unit effective labor)\nax.axhline(cobb_douglas_model.steady_state, linestyle='dashed',\n color='k', label='$k^*$')\n\n# axes, labels, title, etc\nax.set_xlabel('Time, $t$', fontsize=20, family='serif')\nax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')\nax.set_title('Analytic solution to a Solow model\\nwith 
Cobb-Douglas production',\n fontsize=25, family='serif')\nax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))\nax.grid('on')\n\nplt.show()", "4.2.2 Linearized solution to general model\nIn general there will not be closed-form solutions for the Solow model. The standard approach to obtaining general analytical results for the Solow model is to linearize the equation of motion for capital stock (per unit effective labor). Linearizing the equation of motion of capital (per unit effective labor) amounts to taking a first-order Taylor approximation of equation 4.1.0 around its long-run equilibrium $k^*$:\n$$ \\dot{k}(t) \\approx -\\lambda (k(t) - k^*),\\ t \\ge t_0,\\ k(t_0)=k_0 \\tag{4.2.1}$$\nwhere the speed of convergence, $\\lambda$, is defined as \n$$ \\lambda = -\\frac{\\partial \\dot{k}(k(t))}{\\partial k(t)}\\bigg|_{k(t)=k^*} \\tag{4.2.2} $$\nThe solution to the linear differential equation 4.2.1 is\n$$ k(t) = k^* + e^{-\\lambda t}(k_0 - k^*). \\tag{4.2.3} $$\nTo complete the solution it remains to find an expression for the speed of convergence $\\lambda$:\n\\begin{align}\n \\lambda \\equiv -\\frac{\\partial \\dot{k}(k(t))}{\\partial k(t)}\\bigg|_{k(t)=k^*} =& -[sf'(k^*) - (g + n + \\delta)] \\\\\n =& (g + n + \\delta) - sf'(k^*) \\\\\n =& (g + n + \\delta) - (g + n + \\delta)\\frac{k^*f'(k^*)}{f(k^*)} \\\\\n =& (1 - \\alpha_K(k^*))(g + n + \\delta) \\tag{4.2.4}\n\\end{align}\nwhere the elasticity of output with respect to capital, $\\alpha_K(k)$, is\n$$\\alpha_K(k) = \\frac{k^*f'(k^*)}{f(k^*)}. 
\\tag{4.2.5}$$\nExample: Computing the linearized trajectory\nOne can compute a linear approximation of the model solution using the linearized_solution method of the solow.Model class as follows.", "# specify some initial condition\nk0 = 0.5 * cobb_douglas_model.steady_state\n\n# grid of t values for which we want the value of k(t)\nti = np.linspace(0, 100, 10)\n\n# generate a trajectory!\ncobb_douglas_model.linearized_solution(ti, k0)", "4.2.3 Accuracy of the linear approximation", "# initial condition\nt0, k0 = 0.0, 0.5 * cobb_douglas_model.steady_state\n\n# grid of t values for which we want the value of k(t)\nti = np.linspace(t0, 100, 1000)\n\n# generate the trajectories\nanalytic = cobb_douglas_model.analytic_solution(ti, k0)\nlinearized = cobb_douglas_model.linearized_solution(ti, k0)\n\nfig, ax = plt.subplots(1, 1, figsize=(8,6))\n\nax.plot(ti, analytic[:,1], 'r-', label='Analytic')\nax.plot(ti, linearized[:,1], 'b-', label='Linearized')\n\n# demarcate k*\nax.axhline(cobb_douglas_model.steady_state, linestyle='dashed', \n color='k', label='$k^*$')\n\n# axes, labels, title, etc\nax.set_xlabel('Time, $t$', fontsize=20, family='serif')\nax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')\nax.set_title('Analytic vs. linearized solutions', fontsize=25, family='serif')\nax.legend(loc='best', frameon=False, prop={'family':'serif'},\n bbox_to_anchor=(1.0, 1.0))\nax.grid('on')\n\nfig.show()", "4.3 Finite-difference methods\nFour of the best, most widely used ODE integrators have been implemented in the scipy.integrate module (they are called dopri5, dop853, lsoda, and vode). Each of these integrators uses some type of adaptive step-size control: the integrator adaptively adjusts the step size $h$ in order to keep the approximation error below some user-specified threshold. The cells below contain code which compares the approximation error of the forward Euler method with those of lsoda and vode. 
Instead of simple linear interpolation (i.e., k=1), I set k=3 for cubic B-spline interpolation.\n...finally, we can plot trajectories for different initial conditions. Note that the analytic solutions converge to the long-run equilibrium no matter the initial condition of capital stock (per unit of effective labor), providing a nice graphical demonstration that the Solow model is globally stable.", "fig, ax = plt.subplots(1, 1, figsize=(8,6))\n\n# lower and upper bounds for initial conditions\nk_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model)\nk_l = 0.5 * k_star\nk_u = 2.0 * k_star\n\nfor k0 in np.linspace(k_l, k_u, 5):\n\n # compute the solution\n ti = np.linspace(0, 100, 1000)\n analytic_traj = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0)\n \n # plot this trajectory\n ax.plot(ti, analytic_traj[:,1])\n\n# equilibrium value of capital stock (per unit effective labor)\nax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')\n\n# axes, labels, title, etc\nax.set_xlabel('Time, $t$', fontsize=15, family='serif')\nax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')\nax.set_title('Analytic solution to a Solow model\\nwith Cobb-Douglas production',\n fontsize=20, family='serif')\nax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))\nax.grid('on')\n\nplt.show()\n\nk0 = 0.5 * ces_model.steady_state\nnumeric_trajectory = ces_model.ivp.solve(t0=0, y0=k0, h=0.5, T=100, integrator='dopri5')\n\nti = numeric_trajectory[:,0]\nlinearized_trajectory = ces_model.linearized_solution(ti, k0)\n", "4.3.2 Accuracy of finite-difference methods", "t0, k0 = 0.0, 0.5\nnumeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda')\n\nfig, ax = plt.subplots(1, 1, figsize=(8,6))\n\n# compute and plot the numeric approximation\nt0, k0 = 0.0, 0.5\nnumeric_soln = cobb_douglas_model.ivp.solve(t0, k0, T=100, integrator='lsoda')\nax.plot(numeric_soln[:,0], numeric_soln[:,1], 'bo', markersize=3.0)\n\n# 
compute and plot the analytic solution\nti = np.linspace(0, 100, 1000)\nanalytic_soln = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0)\nax.plot(ti, analytic_soln[:,1], 'r-')\n\n# equilibrium value of capital stock (per unit effective labor)\nk_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model)\nax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')\n\n# axes, labels, title, etc\nax.set_xlabel('Time, $t$', fontsize=15, family='serif')\nax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')\nax.set_title('Numerical approximation of the solution',\n fontsize=20, family='serif')\nax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))\nax.grid('on')\n\nplt.show()\n\nti = np.linspace(0, 100, 1000)\ninterpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3)\n\nfig, ax = plt.subplots(1, 1, figsize=(8,6))\n\n# compute and plot the numeric approximation\nti = np.linspace(0, 100, 1000)\ninterpolated_soln = cobb_douglas_model.ivp.interpolate(numeric_soln, ti, k=3)\nax.plot(ti, interpolated_soln[:,1], 'b-')\n\n# compute and plot the analytic solution\nanalytic_soln = solow.cobb_douglas.analytic_solution(cobb_douglas_model, ti, k0)\nax.plot(ti, analytic_soln[:,1], 'r-')\n\n# equilibrium value of capital stock (per unit effective labor)\nk_star = solow.cobb_douglas.analytic_steady_state(cobb_douglas_model)\nax.axhline(k_star, linestyle='dashed', color='k', label='$k^*$')\n\n# axes, labels, title, etc\nax.set_xlabel('Time, $t$', fontsize=15, family='serif')\nax.set_ylabel('$k(t)$', rotation='horizontal', fontsize=20, family='serif')\nax.set_title('Numerical approximation of the solution',\n fontsize=20, family='serif')\nax.legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))\nax.grid('on')\n\nplt.show()\n\nti = np.linspace(0, 100, 1000)\nresidual = cobb_douglas_model.ivp.compute_residual(numeric_soln, ti, k=3)\n\n# extract the raw residuals\ncapital_residual = residual[:, 1]\n\n# typically, 
normalize residual by the level of the variable\nnorm_capital_residual = np.abs(capital_residual) / interpolated_soln[:,1]\n\n# create the plot\nfig = plt.figure(figsize=(8, 6))\nplt.plot(ti, norm_capital_residual, 'b-', label='$k(t)$')\nplt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')\nplt.xlabel('Time', fontsize=15)\nplt.ylim(1e-16, 1)\nplt.ylabel('Residuals (normalized)', fontsize=15, family='serif')\nplt.yscale('log')\nplt.title('Residual', fontsize=20, family='serif')\nplt.grid()\nplt.legend(loc=0, frameon=False, bbox_to_anchor=(1.0,1.0))\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
shareactorIO/pipeline
oreilly.ml/high-performance-tensorflow/notebooks/03_Train_Model_CPU.ipynb
apache-2.0
[ "Train Model with CPU", "import tensorflow as tf\nfrom tensorflow.python.client import timeline\nimport pylab\nimport numpy as np\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\ntf.logging.set_verbosity(tf.logging.INFO)\n\ntf.reset_default_graph()\n\nnum_samples = 100000\n\nfrom datetime import datetime \n\nversion = int(datetime.now().strftime(\"%s\"))\nprint(version)\n\nx_train = np.random.rand(num_samples).astype(np.float32)\nprint(x_train)\n\nnoise = np.random.normal(scale=0.01, size=len(x_train))\n\ny_train = x_train * 0.1 + 0.3 + noise\nprint(y_train)\n\npylab.plot(x_train, y_train, '.')", "Create Model Test/Validation Data", "x_test = np.random.rand(len(x_train)).astype(np.float32)\nprint(x_test)\n\nnoise = np.random.normal(scale=0.01, size=len(x_train))\n\ny_test = x_test * 0.1 + 0.3 + noise\nprint(y_test)\n\npylab.plot(x_train, y_train, '.')\n\nwith tf.device(\"/cpu:0\"):\n W = tf.get_variable(shape=[], name='weights')\n print(W)\n\n b = tf.get_variable(shape=[], name='bias')\n print(b)\n\n x_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='x_observed')\n print(x_observed)\n\nwith tf.device(\"/cpu:0\"):\n y_pred = W * x_observed + b\n print(y_pred)\n\nwith tf.device(\"/cpu:0\"):\n\n y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')\n print(y_observed)\n\n loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))\n\n optimizer_op = tf.train.GradientDescentOptimizer(0.025) \n train_op = optimizer_op.minimize(loss_op) \n\n print(\"loss:\", loss_op)\n print(\"optimizer:\", optimizer_op)\n print(\"train:\", train_op)\n\nwith tf.device(\"/cpu:0\"):\n init_op = tf.global_variables_initializer()\n print(init_op)\n\ntrain_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/train' % version, graph=tf.get_default_graph())\n\ntest_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/test' % version, graph=tf.get_default_graph())\n\nconfig = tf.ConfigProto(\n 
log_device_placement=True,\n)\nprint(config)\n\nsess = tf.Session(config=config)\n\nsess.run(init_op)\nprint(sess.run(W))\nprint(sess.run(b))", "Look at the Model Graph In Tensorboard\nNavigate to the Graph tab at this URL:\nhttp://[ip-address]:6006\nAccuracy of Random Weights", "def test(x, y):\n return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})\n\ntest(x=x_test, y=y_test)\n\nloss_summary_scalar_op = tf.summary.scalar('loss', loss_op)\nloss_summary_merge_all_op = tf.summary.merge_all()", "Train Model", "%%time\n\nmax_steps = 400\n\nrun_metadata = tf.RunMetadata()\n\nfor step in range(max_steps):\n if (step < max_steps):\n test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})\n train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})\n else: \n test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})\n train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train}, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE), run_metadata=run_metadata)\n trace = timeline.Timeline(step_stats=run_metadata.step_stats) \n with open('cpu-timeline.json', 'w') as trace_file:\n trace_file.write(trace.generate_chrome_trace_format(show_memory=True))\n\n if step % 1 == 0:\n print(step, sess.run([W, b]))\n train_summary_writer.add_summary(train_summary_log, step)\n train_summary_writer.flush()\n test_summary_writer.add_summary(test_summary_log, step)\n test_summary_writer.flush()\n\npylab.plot(x_train, y_train, '.', label=\"target\")\npylab.plot(x_train, sess.run(y_pred, feed_dict={x_observed: x_train, y_observed: y_train}), \".\", label=\"predicted\")\npylab.legend()\npylab.ylim(0, 1.0)\n\ntest(x=x_test, y=y_test)", "Look at the Train and Test Loss Summary In Tensorboard\nNavigate to the Scalars tab at 
this URL:\nhttp://[ip-address]:6006", "from tensorflow.python.saved_model import utils\n\ntensor_info_x_observed = utils.build_tensor_info(x_observed)\nprint(tensor_info_x_observed)\n\ntensor_info_y_pred = utils.build_tensor_info(y_pred)\nprint(tensor_info_y_pred)\n\nexport_path = \"/root/models/linear/cpu/%s\" % version\nprint(export_path)\n\nfrom tensorflow.python.saved_model import builder as saved_model_builder\nfrom tensorflow.python.saved_model import signature_constants\nfrom tensorflow.python.saved_model import signature_def_utils\nfrom tensorflow.python.saved_model import tag_constants\n\nwith tf.device(\"/cpu:0\"):\n builder = saved_model_builder.SavedModelBuilder(export_path)\n\nprediction_signature = signature_def_utils.build_signature_def(\n inputs = {'x_observed': tensor_info_x_observed}, \n outputs = {'y_pred': tensor_info_y_pred}, \n method_name = signature_constants.PREDICT_METHOD_NAME) \n\nlegacy_init_op = tf.group(tf.initialize_all_tables(), name='legacy_init_op')\n\nbuilder.add_meta_graph_and_variables(sess, \n [tag_constants.SERVING],\n signature_def_map={'predict':prediction_signature,\n signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:prediction_signature}, \n legacy_init_op=legacy_init_op)\n\nbuilder.save()", "Look at the Model On Disk\nYou must replace [version] with the version number", "%%bash\n\nls -l /root/models/linear/cpu/[version]", "HACK: Save Model in Previous Model Format\nWe will use this later.", "from tensorflow.python.framework import graph_io\ngraph_io.write_graph(sess.graph, \"/root/models/optimize_me/\", \"unoptimized_cpu.pb\")\n\nsess.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ComputationalPhysics2015-IPM/Python-01
Python-02.ipynb
gpl-2.0
[ "Three-Way Decisions\nif, elif, else", "t=0\n\nif t > 60:\n print('its very hot')\nelif t > 50:\n print('its hot')\nelif t > 40:\n print('its warm')\nelse:\n print('its cool')\n\nt=55\n\nif t > 40:\n print('its very hot')\nelif t > 50:\n print('its hot')\nelif t > 60:\n print('its warm')\nelse:\n print('its cool')", "So be careful!\nwhile Loop", "i=0\nwhile i<10:\n print(i)\n i+=1", "Quiz\nOnce upon a time, there was a king who wanted lots of soldiers. So he commanded every couple in the country to have children until their first daughter was born. Then the family was banned from having any more children.\nWhat will be the ratio of boys/girls in this country?", "from random import randint\n\nchildren = 0\nboy = 0\n\nfor i in range(10000):\n gender = randint(0,1) # boy=1, girl=0\n children += 1\n while gender != 0:\n boy += gender\n gender = randint(0,1)\n children += 1\nprint(boy/children)", "Control Statements\nbreak, continue and pass", "for i in range(10):\n print(i)\n if i == 5:\n break\n\nfor i in range(10):\n print(i)\n if i > 5:\n continue\n print(\"Hey\")\n\ndef func():\n pass\n\nfunc()", "tuple", "t = (0,1,'test')\n\nprint(t)\nt[0]=1\n\n(1,)", "Dictionaries\nitems get keys pop update values", "d = {}\nd['name'] = 'Hamed'\nd['family name'] = 'Seyed-allaei'\nd[0]=12\nd['a']=''\nprint(d)\nprint(d['name'])\nprint(d[0])\n\nfor i,j in d.items():\n print(i,j)\n", "set\nin, not in, len(), ==, !=, <=, <, |, &, -, ^", "a = set(['c', 'a','b','b'])\nb = set(['c', 'd','e'])\nprint(a,b)\n\na | b\n\na & b\n\na - b\n\nb - a\n\na ^ b", "List comprehension", "l = []\nfor i in range(10):\n l.append(i*i)\nprint(l)\n\n[i*i for i in range(10)]\n\n{i:i**2 for i in range(10)}", "Generators\nnext()", "def myrange(n):\n i = 0\n while i < n:\n yield i\n yield i**2\n i+=1\n \nx = myrange(10)\ntype(x)\n\nnext(x)\n\n[i for i in myrange(10)]\n\nfor i in myrange(10):\n print(i)", "Fibonacci\nThis time as a generator." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dataventures/workshops
4/0-Time-Series-Analysis.ipynb
mit
[ "Time Series Analysis and Forecasting\nSometimes the data we're working with has a special dependence on time as its primary predictive feature, and we want to predict how a variable evolves with time. These situations occur all of the time, from predicting stock prices to tomorrow's weather. In these cases, the data is called a time series, and we can apply a special set of statistical methods for analyzing the data. \nTime Series Exploration With Pandas", "import pandas as pd\nimport numpy as np\nimport matplotlib.pylab as plt\n%matplotlib inline\nfrom matplotlib.pylab import rcParams\nrcParams['figure.figsize'] = 15, 6", "Pandas has great support for datetime objects and general time series analysis operations. We'll be working with an example of predicting the number of airline passengers (in thousands) by month adapted from this tutorial.\nFirst, download this dataset and load it into a Pandas Dataframe by specifying the 'Month' column as the datetime index.", "dateparse = lambda dates: pd.datetime.strptime(dates, '%Y-%m')\ndata = pd.read_csv('AirPassengers.csv', parse_dates=['Month'], index_col='Month',date_parser=dateparse)\nprint data.head()", "Note that Pandas is using the 'Month' column as the Dataframe index.", "ts = data[\"#Passengers\"]\nts.index", "We can index into the Dataframe in two ways - either by using a string representation for the index or by constructing a datetime object.", "ts['1949-01-01']\n\nfrom datetime import datetime\nts[datetime(1949,1,1)]", "We can also use the Pandas datetime index support to retrieve entire years.", "ts['1949']\n\nts['1949-01-01':'1949-05-01']", "Finally, let's plot the time series to get an initial visualization of how the series grows.", "plt.plot(ts)", "Stationarity\nMost of the important results for time series forecasting (including the ARIMA model, which we focus on today) assume that the series is stationary - that is, its statistical properties like mean and variance are constant. 
However, the graph above certainly isn't stationary, given the obvious growth. Thus, we want to manipulate the time series to make it stationary. This process of reducing a time series to a stationary series is a hallmark of time series analysis and forecasting, as most real-world time series aren't initially stationary.\nTo solve this nonstationarity issue, we can break a time series up into its trend and seasonality. These are the two factors that make a series nonstationary, so the main idea is to remove these factors, operate on the resulting stationary series, then add these factors back in.\nFirst, we will take the log of the series to reduce the positive trend. This gives a seemingly linear trend, making it easier to estimate.", "ts_log = np.log(ts)\nplt.plot(ts_log)", "A simple moving average is the most basic way to predict the trend of a series, taking advantage of the generally continuous nature of trends. For example, if I told you to predict the number of wins of a basketball team this season, without giving you any information about the team apart from its past record, you would take the average of the team's wins over the last few seasons as a reasonable predictor. \nThe simple moving average operates on this exact principle. Choosing an $n$ element window to average over, the prediction at each point is obtained by taking the average of the last $n$ values. Notice that the moving average is undefined for the first 12 values because we're looking at a 12-value window.", "moving_avg = pd.Series(ts_log).rolling(window=12).mean()\n\nplt.plot(ts_log)\nplt.plot(moving_avg, color='red')", "You might be unhappy with having to choose a window size. How do we know what window size we want if we don't know much about the data? One solution is to average over all past data, discounting earlier values because they have less predictive power than more recent values. 
This method is known as smoothing.", "expwighted_avg = pd.Series(ts_log).ewm(halflife=12).mean()\n\nplt.plot(ts_log)\nplt.plot(expwighted_avg, color='red')", "Now we can subtract the trend from the original data (eliminating the null values in the case of the simple moving average) to create a new series that is hopefully more stationary. The blue graph represents the smoothing difference, while the red graph represents the simple moving average difference", "ts_exp_moving_avg_diff = ts_log - expwighted_avg\nts_log_moving_avg_diff = ts_log - moving_avg\nts_log_moving_avg_diff.dropna(inplace=True)\nplt.plot(ts_exp_moving_avg_diff)\nplt.plot(ts_log_moving_avg_diff, color='red')", "Now there is no longer an upward trend, suggesting a stationarity. There does seem to be a strong seasonality effect, as the number of passengers is low at the beginning and middle of the year but spikes at the first and third quarters.\nDealing with Seasonality\nOne baseline way of dealing with both trend and seasonality at once is differencing, taking a single step lag (subtracting the last value of the series from the current value at each step) to represent how the time series grows. Of course, this method can be refined by factoring in more complex lags.", "ts_log_diff = ts_log - ts_log.shift()\nplt.plot(ts_log_diff)", "Another method of dealing with trend and seasonality is separating the two effects, then removing both from the time series to obtain the stationary series. 
We'll be using the statsmodels module, which you can get via pip by running the following command in the terminal.\npython -m pip install statsmodels\nWe will use the seasonal decompose tool to separate seasonality from trend.", "from statsmodels.tsa.seasonal import seasonal_decompose\ndecomposition = seasonal_decompose(ts_log)\n\ntrend = decomposition.trend\nseasonal = decomposition.seasonal\nresidual = decomposition.resid\n\nplt.subplot(411)\nplt.plot(ts_log, label='Original')\nplt.legend(loc='best')\nplt.subplot(412)\nplt.plot(trend, label='Trend')\nplt.legend(loc='best')\nplt.subplot(413)\nplt.plot(seasonal, label='Seasonality')\nplt.legend(loc='best')\nplt.subplot(414)\nplt.plot(residual, label='Residuals')\nplt.legend(loc='best')\nplt.tight_layout()", "Forecasting\nUsing the seasonal decomposition, we were able to separate the trend and seasonality effects, which is great for time series analysis. However, another goal of working with time series is forecasting the future - how do we do that given the tools that we've been using and the stationary series we've obtained?\nThe ARIMA (Autoregressive Integrated Moving Average) model, which operates on stationary series, is one of the most commonly used models for time series forecasting. ARIMA, with parameters $p$, $d$, and $q$, combines an autoregressive model with a moving average model. Let's take a look at what this means.\nAutoregressive model: the output variable depends linearly on its own previous values. The $p$ parameter determines the number of lag terms used in the regression. Formally, $X_t = c + \\sum_{i = 1}^p \\varphi_iX_{t - i} + \\epsilon_t$.\nMoving average model: generalizes the same concept of moving average we saw earlier - the $q$ parameter determines the order of the model. Formally, $X_t = \\mu + \\sum_{i = 1}^q \\theta_i\\epsilon_{t - i}$.\nIntegrated model: the $d$ parameter represents the number of times past values have been subtracted, extending the differencing method described earlier. 
This integrates the differencing step for stationarity into the ARIMA model, allowing it to be fit on non-stationary data.\nWe don't have time to cover the math behind these models in depth, but Wikipedia provides some more detailed descriptions of the AR, MA, ARMA, and ARIMA models. \nLet's compare our model's results (red) to the actual differenced data (blue).", "from statsmodels.tsa.arima_model import ARIMA\n\nmodel = ARIMA(ts_log, order=(2, 1, 2)) \nresults_ARIMA = model.fit(disp=-1) \nplt.plot(ts_log_diff)\nplt.plot(results_ARIMA.fittedvalues, color='red')\nplt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-ts_log_diff)**2))", "Now we have a model for the stationary series that we can use to predict future values, and we want to get back to the original series. Note that we won't have a value for the first element because we are working with a one-step lag. The following procedure takes care of that.", "predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)\npredictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()\npredictions_ARIMA_log = pd.Series(ts_log.iloc[0], index=ts_log.index)\npredictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum, fill_value=0)", "Now, we can plot the prediction (green) against the actual data. Note that the prediction model captures the seasonality and trend of the original series. It's not perfect, and additional steps can be taken to tune the model. The important takeaway from this workshop is the general time series procedure of separating the time series into the trend and seasonality effects, and understanding how to work with time series in Pandas.", "predictions_ARIMA = np.exp(predictions_ARIMA_log)\nplt.plot(ts)\nplt.plot(predictions_ARIMA)\nplt.title('RMSE: %.4f'% np.sqrt(sum((predictions_ARIMA-ts)**2)/len(ts)))", "Challenge: ARIMA Tuning\nThis is an open-ended challenge. 
There aren't any right or wrong answers; we'd just like to see how you would approach tuning the ARIMA model.\nAs you can see above, the ARIMA predictions could certainly use some tuning. Try manually tuning $p$, $d$, and $q$ and see how that changes the ARIMA predictions. How would you use the AR, MA, and ARMA models individually using the ARIMA model? Do these results match what you would expect from these individual models? Can you automate this process to find the parameters that minimize RMSE? Do you see any issues with tuning $p$, $d$, and $q$ this way?", "# TODO: adjust the p, d, and q parameters to model the AR, MA, and ARMA models. Then, adjust these parameters to optimally tune the ARIMA model.\ntest_model = ARIMA(ts_log, order=(2, 1, 2)) \ntest_results_ARIMA = test_model.fit(disp=-1) \n\ntest_predictions_ARIMA_diff = pd.Series(test_results_ARIMA.fittedvalues, copy=True)\ntest_predictions_ARIMA_diff_cumsum = test_predictions_ARIMA_diff.cumsum()\ntest_predictions_ARIMA_log = pd.Series(ts_log.iloc[0], index=ts_log.index)\ntest_predictions_ARIMA_log = test_predictions_ARIMA_log.add(test_predictions_ARIMA_diff_cumsum, fill_value=0)\n\ntest_predictions_ARIMA = np.exp(test_predictions_ARIMA_log)\nplt.plot(ts)\nplt.plot(test_predictions_ARIMA)\nplt.title('RMSE: %.4f'% np.sqrt(sum((test_predictions_ARIMA-ts)**2)/len(ts)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pyro-ppl/numpyro
notebooks/source/truncated_distributions.ipynb
apache-2.0
[ "Truncated and folded distributions\nThis tutorial will cover how to work with truncated and folded\ndistributions in NumPyro.\nIt is assumed that you're already familiar with the basics of NumPyro.\nTo get the most out of this tutorial you'll need some background in probability.\nTable of contents\n\n0. Setup\n1. What is a truncated distribution?\n2. What is a folded distribution?\n3. Sampling from truncated and folded distributions\n4. Ready-to-use truncated and folded distributions\n5. Building your own truncated distributions\n5.1 Recap of NumPyro distributions\n5.2 Right-truncated normal\n5.3 Left-truncated Poisson\n\n\n6. References and related material\n\nSetup <a class=\"anchor\" id=\"0\"></a>\nTo run this notebook, we are going to need the following imports", "!pip install -q git+https://github.com/pyro-ppl/numpyro.git\n\nimport jax\nimport jax.numpy as jnp\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport numpyro\nimport numpyro.distributions as dist\nfrom jax import lax, random\nfrom jax.scipy.special import ndtr, ndtri\nfrom jax.scipy.stats import poisson, norm\nfrom numpyro.distributions import (\n    constraints,\n    Distribution,\n    FoldedDistribution,\n    SoftLaplace,\n    StudentT,\n    TruncatedDistribution,\n    TruncatedNormal,\n)\nfrom numpyro.distributions.util import promote_shapes\nfrom numpyro.infer import DiscreteHMCGibbs, MCMC, NUTS, Predictive\nfrom scipy.stats import poisson as sp_poisson\n\nnumpyro.enable_x64()\nRNG = random.PRNGKey(0)\nPRIOR_RNG, MCMC_RNG, PRED_RNG = random.split(RNG, 3)\nMCMC_KWARGS = dict(\n    num_warmup=2000,\n    num_samples=2000,\n    num_chains=4,\n    chain_method=\"sequential\",\n)", "1. What is a truncated distribution?\n<a class=\"anchor\" id=\"1\"></a>\nThe support of a probability distribution is the set of values\nin the domain with non-zero probability. 
For example, the\nsupport of the normal distribution is the whole real line (even if\nthe density gets very small as we move away from the mean, technically\nspeaking, it is never quite zero). The support of the uniform distribution,\nas coded in jax.random.uniform with the default arguments, is the interval $[0, 1)$, because any\nvalue outside of that interval has zero probability. The support of the Poisson distribution is the set of non-negative integers, etc.\nTruncating a distribution makes its support smaller\nso that any value outside our desired domain has zero probability. In practice, this can be useful\nfor modelling situations in which certain biases are introduced during data collection.\nFor example, some physical detectors only get triggered when the signal is above some\nminimum threshold, or sometimes the detectors fail if the signal exceeds a certain value.\nAs a result, the observed values are constrained to be within a limited range of values,\neven though the true signal does not have the same constraints.\nSee, for example, section 3.1 of Information Theory, Inference, and Learning Algorithms by David MacKay.\nNaively, if $S$ is the support of the original density $p_Y(y)$, then by truncating to a new support\n$T\\subset S$ we are effectively defining a new random variable $Z$ for which the density is\n$$\n\\begin{align}\n    p_Z(z) \\propto\n    \\begin{cases}\n        p_Y(z) & \\text{if $z$ is in $T$}\\\n        0 & \\text{if $z$ is outside $T$}\\\n    \\end{cases}\n\\end{align}\n$$\nThe reason for writing a $\\propto$ (proportional to) sign instead of a strict equation is that,\ndefined in the above way, the resulting function does not integrate to $1$ and so it cannot be strictly considered a probability density. To make it into a probability density we need to re-distribute the truncated mass\namong the part of the distribution that remains. 
To do this, we simply re-weight every point by the same constant:\n$$\n\\begin{align}\n p_Z(z) =\n \\begin{cases}\n \\frac{1}{M}p_Y(z) & \\text{if $z$ is in $T$}\\\n 0 & \\text{if $z$ is outside $T$}\\\n \\end{cases}\n\\end{align}\n$$\nwhere $M = \\int_T p_Y(y)\\mathrm{d}y$.\nIn practice, the truncation is often one-sided. This means that if, for example, the support before truncation is the interval $(a, b)$, then the support after truncation is of the form $(a, c)$ or $(c, b)$, with $a < c < b$. The figure below illustrates a left-sided truncation at zero of a normal distribution $N(1, 1)$.\n<figure>\n <img src=\"https://i.ibb.co/6vHyFfq/truncated-normal.png\" alt=\"truncated\" width=\"900\"/>\n</figure>\n\nThe original distribution (left side) is truncated at the vertical dotted line. The truncated mass (orange region) is redistributed in the new support (right side image) so that the total area under the curve remains equal to 1 even after truncation. This method of re-weighting ensures that the density ratio between any two points, $p(a)/p(b)$ remains the same before and after the reweighting is done (as long as the points are inside the new support, of course).\nNote: Truncated data is different from censored data. Censoring also hides values that are outside some desired support but, contrary to truncated data, we know when a value has been censored. The typical example is the household scale which does not report values above 300 pounds. Censored data will not be covered in this tutorial.\n2. What is a folded distribution? <a class=\"anchor\" id=\"2\"></a>\nFolding is achieved by taking the absolute value of a random variable, $Z = \\lvert Y \\rvert$. 
This obviously modifies the support of the original distribution since negative values now have zero\nprobability:\n$$\n\\begin{align}\n    p_Z(z) =\n    \\begin{cases}\n        p_Y(z) + p_Y(-z) & \\text{if $z\\ge 0$}\\\n        0 & \\text{if $z\\lt 0$}\\\n    \\end{cases}\n\\end{align}\n$$\nThe figure below illustrates a folded normal distribution $N(1, 1)$.\n<figure>\n <img src=\"https://i.ibb.co/3d2xJbc/folded-normal.png\" alt=\"folded\" width=\"900\"/>\n</figure>\n\nAs you can see, the resulting distribution is different from the truncated case. In particular, the density ratio between points, $p(a)/p(b)$, is in general not the same after folding. For some examples in which folding is relevant see references 3 and 4.\nIf the original distribution is symmetric around zero, then folding and truncating at zero have the same effect.\n3. Sampling from truncated and folded distributions <a class=\"anchor\" id=\"3\"></a>\nTruncated distributions\nUsually, we already have a sampler for the pre-truncated distribution (e.g. np.random.normal).\nSo, a seemingly simple way of generating samples from the truncated distribution would be to\nsample from the original distribution, and then discard the samples that are outside the \ndesired support. For example, if we wanted samples from a normal distribution truncated to the\nsupport $(-\\infty, 1)$, we'd simply do:\n```python\nupper = 1\nsamples = np.random.normal(size=1000)\ntruncated_samples = samples[samples < upper]\n```\nThis is called rejection sampling, but it is not very efficient.\nIf the region we truncated away had a sufficiently high probability mass, then we'd be discarding a lot of samples and it might be a while before we accumulate sufficient samples for the truncated distribution. For example, the above snippet would only result in approximately 840 truncated samples even though we initially drew 1000. 
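That acceptance rate is easy to verify numerically. The following standalone sketch (plain NumPy and SciPy with a fixed seed; not part of the tutorial's NumPyro code) measures the fraction of standard-normal draws that survive truncation at 1:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
upper = 1
n = 100_000
samples = rng.normal(size=n)
truncated_samples = samples[samples < upper]

# The fraction of samples kept is governed by the normal CDF at the
# truncation point: Phi(1) is roughly 0.841, so about 16% of the
# draws are thrown away.
acceptance = truncated_samples.size / n
print(f"kept {truncated_samples.size} of {n} draws ({acceptance:.1%})")
print(f"theoretical acceptance: {norm.cdf(upper):.1%}")
```

With a lower truncation point, say `upper = -2`, the same experiment keeps only about 2% of the draws, which is why rejection sampling becomes impractical.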
This can easily get a lot worse for other combinations of parameters.\nA more efficient approach is to use a method known as inverse transform sampling.\nIn this method, we first sample from a uniform distribution in (0, 1) and then transform those samples with the inverse cumulative distribution of our truncated distribution.\nThis method ensures that no samples are wasted in the process, though it does have the slight complication that\nwe need to calculate the inverse CDF (ICDF) of our truncated distribution. This might sound too complicated at first but, with a bit of algebra, we can often calculate the truncated ICDF in terms of the untruncated ICDF. The untruncated ICDF for many distributions is already available.\nFolded distributions\nThis case is a lot simpler. Since we already have a sampler for the pre-folded distribution, all we need to do is to take the absolute value of those samples:\npython\nsamples = np.random.normal(size=1000)\nfolded_samples = np.abs(samples)\n4. Ready to use truncated and folded distributions <a class=\"anchor\" id=\"4\"></a>\nThe later sections in this tutorial will show you how to construct your own truncated and folded distributions, but you don't have to reinvent the wheel. NumPyro has a bunch of truncated distributions already implemented.\nSuppose, for example, that you want a normal distribution truncated on the right.\nFor that purpose, we use the TruncatedNormal distribution. The parameters of this distribution are loc and scale, corresponding to the loc and scale of the untruncated normal, and low and/or high corresponding to the truncation points. 
Importantly, low and high are keyword-only arguments; only loc and scale are valid as positional arguments.\nThis is how you can use this class in a model:", "def truncated_normal_model(num_observations, high, x=None):\n    loc = numpyro.sample(\"loc\", dist.Normal())\n    scale = numpyro.sample(\"scale\", dist.LogNormal())\n    with numpyro.plate(\"observations\", num_observations):\n        numpyro.sample(\"x\", TruncatedNormal(loc, scale, high=high), obs=x)", "Let's now check that we can use this model in a typical MCMC workflow.\nPrior simulation", "high = 1.2\nnum_observations = 250\nnum_prior_samples = 100\n\nprior = Predictive(truncated_normal_model, num_samples=num_prior_samples)\nprior_samples = prior(PRIOR_RNG, num_observations, high)", "Inference\nTo test our model, we run MCMC against some synthetic data.\nThe synthetic data can be any arbitrary sample from the prior simulation.", "# -- select an arbitrary prior sample as true data\ntrue_idx = 0\ntrue_loc = prior_samples[\"loc\"][true_idx]\ntrue_scale = prior_samples[\"scale\"][true_idx]\ntrue_x = prior_samples[\"x\"][true_idx]\n\nplt.hist(true_x.copy(), bins=20)\nplt.axvline(high, linestyle=\":\", color=\"k\")\nplt.xlabel(\"x\")\nplt.show()\n\n# --- Run MCMC and check estimates and diagnostics\nmcmc = MCMC(NUTS(truncated_normal_model), **MCMC_KWARGS)\nmcmc.run(MCMC_RNG, num_observations, high, true_x)\nmcmc.print_summary()\n\n# --- Compare to ground truth\nprint(f\"True loc  : {true_loc:3.2}\")\nprint(f\"True scale: {true_scale:3.2}\")", "Removing the truncation\nOnce we have inferred the parameters of our model, a common task is to understand what the data would look like without the truncation. 
In this example, this is easily done by simply \"pushing\" the value of high to infinity.", "pred = Predictive(truncated_normal_model, posterior_samples=mcmc.get_samples())\npred_samples = pred(PRED_RNG, num_observations, high=float(\"inf\"))", "Let's finally plot these samples and compare them to the original, observed data.", "# thin the samples to not saturate matplotlib\nsamples_thinned = pred_samples[\"x\"].ravel()[::1000]\n\nf, axes = plt.subplots(1, 2, figsize=(15, 5), sharex=True)\n\naxes[0].hist(\n    samples_thinned.copy(), label=\"Untruncated posterior\", bins=20, density=True\n)\naxes[0].set_title(\"Untruncated posterior\")\n\nvals, bins, _ = axes[1].hist(\n    samples_thinned[samples_thinned < high].copy(),\n    label=\"Tail of untruncated posterior\",\n    bins=10,\n    density=True,\n)\naxes[1].hist(\n    true_x.copy(), bins=bins, label=\"Observed, truncated data\", density=True, alpha=0.5\n)\naxes[1].set_title(\"Comparison to observed data\")\n\nfor ax in axes:\n    ax.axvline(high, linestyle=\":\", color=\"k\", label=\"Truncation point\")\n    ax.legend()\n\nplt.show()", "The plot on the left shows data simulated from the posterior distribution with the truncation removed, so we can see what the data would look like if it were not truncated. To sense-check this, we discard the simulated samples that are above the truncation point, make a histogram of them, and compare it to a histogram of the true data (right plot).\nThe TruncatedDistribution class\nThe source code for the TruncatedNormal in NumPyro uses a class called TruncatedDistribution which abstracts away the logic for sample and log_prob that\nwe will discuss in the next sections. At the moment, though, this logic only works for continuous, symmetric distributions with real support.\nWe can use this class to quickly construct other truncated distributions. 
For example, if we need a truncated SoftLaplace we can use the following pattern:", "def TruncatedSoftLaplace(\n loc=0.0, scale=1.0, *, low=None, high=None, validate_args=None\n):\n return TruncatedDistribution(\n base_dist=SoftLaplace(loc, scale),\n low=low,\n high=high,\n validate_args=validate_args,\n )\n\ndef truncated_soft_laplace_model(num_observations, high, x=None):\n loc = numpyro.sample(\"loc\", dist.Normal())\n scale = numpyro.sample(\"scale\", dist.LogNormal())\n with numpyro.plate(\"obs\", num_observations):\n numpyro.sample(\"x\", TruncatedSoftLaplace(loc, scale, high=high), obs=x)", "And, as before, we check that we can use this model in the steps of a typical workflow:", "high = 2.3\nnum_observations = 200\nnum_prior_samples = 100\n\nprior = Predictive(truncated_soft_laplace_model, num_samples=num_prior_samples)\nprior_samples = prior(PRIOR_RNG, num_observations, high)\n\ntrue_idx = 0\ntrue_x = prior_samples[\"x\"][true_idx]\ntrue_loc = prior_samples[\"loc\"][true_idx]\ntrue_scale = prior_samples[\"scale\"][true_idx]\n\nmcmc = MCMC(\n NUTS(truncated_soft_laplace_model),\n **MCMC_KWARGS,\n)\n\nmcmc.run(\n MCMC_RNG,\n num_observations,\n high,\n true_x,\n)\n\nmcmc.print_summary()\n\nprint(f\"True loc : {true_loc:3.2}\")\nprint(f\"True scale: {true_scale:3.2}\")", "Important\nThe sample method of the TruncatedDistribution class relies on inverse-transform sampling.\nThis has the implicit requirement that the base distribution should have an icdf method already available.\nIf this is not the case, we will not be able to call the sample method on any instances of our distribution, nor use it with the Predictive class.\nHowever, the log_prob method only depends on the cdf method (which is more frequently available than the icdf). 
If the log_prob method is available, then we can use our distribution as a prior/likelihood in a model.\nThe FoldedDistribution class\nSimilar to truncated distributions, NumPyro has the FoldedDistribution class to help you quickly construct folded distributions. Popular examples of folded distributions are the so-called \"half-normal\", \"half-student\" or \"half-cauchy\". As the name suggests, these distributions keep only (the positive) half of the distribution. Implicit in the name of these \"half\" distributions is that they are centered at zero before folding. But, of course, you can fold a distribution even if it's not centered at zero. For instance, this is how you would define a folded student-t distribution.", "def FoldedStudentT(df, loc=0.0, scale=1.0):\n    return FoldedDistribution(StudentT(df, loc=loc, scale=scale))\n\ndef folded_student_model(num_observations, x=None):\n    df = numpyro.sample(\"df\", dist.Gamma(6, 2))\n    loc = numpyro.sample(\"loc\", dist.Normal())\n    scale = numpyro.sample(\"scale\", dist.LogNormal())\n    with numpyro.plate(\"obs\", num_observations):\n        numpyro.sample(\"x\", FoldedStudentT(df, loc, scale), obs=x)", "And we check that we can use our distribution in a typical workflow:", "# --- prior sampling\nnum_observations = 500\nnum_prior_samples = 100\nprior = Predictive(folded_student_model, num_samples=num_prior_samples)\nprior_samples = prior(PRIOR_RNG, num_observations)\n\n\n# --- choose any prior sample as the ground truth\ntrue_idx = 0\ntrue_df = prior_samples[\"df\"][true_idx]\ntrue_loc = prior_samples[\"loc\"][true_idx]\ntrue_scale = prior_samples[\"scale\"][true_idx]\ntrue_x = prior_samples[\"x\"][true_idx]\n\n# --- do inference with MCMC\nmcmc = MCMC(\n    NUTS(folded_student_model),\n    **MCMC_KWARGS,\n)\nmcmc.run(MCMC_RNG, num_observations, true_x)\n\n# --- Check diagnostics\nmcmc.print_summary()\n\n# --- Compare to ground truth:\nprint(f\"True df   : {true_df:3.2f}\")\nprint(f\"True loc  : {true_loc:3.2f}\")\nprint(f\"True scale: 
{true_scale:3.2f}\")", "5. Building your own truncated distribution <a class=\"anchor\" id=\"5\"></a>\nIf the\nTruncatedDistribution and\nFoldedDistribution\nclasses are not sufficient to solve your problem,\nyou might want to look into writing your own truncated distribution from the ground up.\nThis can be a tedious process, so this section will give you some guidance and examples to help you with it.\n5.1 Recap of NumPyro distributions <a class=\"anchor\" id=\"5.1\"></a>\nA NumPyro distribution should subclass Distribution and implement a few basic ingredients:\nClass attributes\nThe class attributes serve a few different purposes. Here we will mainly care about two:\n1. arg_constraints: Impose some requirements on the parameters of the distribution. Errors are raised at instantiation time if the parameters passed do not satisfy the constraints.\n2. support: It is used in some inference algorithms like MCMC and SVI with auto-guides, where we need to perform the algorithm in the unconstrained space. Knowing the support, we can automatically reparametrize things under the hood.\nWe'll explain other class attributes as we go.\nThe __init__ method\nThis is where we define the parameters of the distribution.\nWe also use jax and lax to promote the parameters to shapes that are valid for broadcasting.\nThe __init__ method of the parent class is also required because that's where the validation of our parameters is done.\nThe log_prob method\nImplementing the log_prob method ensures that we can do inference. As the name suggests, this method returns the logarithm of the density evaluated at the argument.\nThe sample method\nThis method is used for drawing independent samples from our distribution. It is particularly useful for doing prior and posterior predictive checks. 
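To make the sampling mechanics concrete before we apply them to the truncated normal below, here is a minimal standalone sketch of inverse-transform sampling for an exponential distribution (plain NumPy; the function and parameters are illustrative, not NumPyro API):

```python
import numpy as np

def sample_exponential(rate, size, rng):
    # Inverse-transform sampling: draw u ~ Uniform(0, 1), then map it
    # through the inverse CDF of Exp(rate): F^{-1}(u) = -log(1 - u) / rate.
    u = rng.uniform(size=size)
    return -np.log1p(-u) / rate

rng = np.random.default_rng(0)
draws = sample_exponential(rate=2.0, size=50_000, rng=rng)
# The mean of Exp(rate) is 1 / rate, so this should land near 0.5.
print(draws.mean())
```

The same pattern — uniform draws pushed through an inverse CDF — is exactly what the sample method of the truncated normal in section 5.2 will do, with the extra algebraic step of deriving the truncated inverse CDF.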
Note, in particular, that this method is not needed if you only need to use your distribution as a prior in a model - the log_prob method will suffice.\nThe placeholder code for any of our implementations can be written as\n```python\nclass MyDistribution(Distribution):\n    # class attributes\n    arg_constraints = {}\n    support = None\n\n    def __init__(self):\n        pass\n\n    def log_prob(self, value):\n        pass\n\n    def sample(self, key, sample_shape=()):\n        pass\n```\n5.2 Example: Right-truncated normal <a class=\"anchor\" id=\"5.2\"></a>\nWe are going to modify a normal distribution so that its new support is\nof the form (-inf, high), with high a real number. This could be done with the TruncatedNormal distribution but, for the sake of illustration, we are not going to rely on it.\nWe'll call our distribution RightTruncatedNormal. Let's write the skeleton code and then proceed to fill in the blanks.\n```python\nclass RightTruncatedNormal(Distribution):\n    # <class attributes>\n    def __init__(self):\n        pass\n\n    def log_prob(self, value):\n        pass\n\n    def sample(self, key, sample_shape=()):\n        pass\n```\nClass attributes\nRemember that a non-truncated normal distribution is specified in NumPyro by two parameters, loc and scale,\nwhich correspond to the mean and standard deviation.\nLooking at the source code for the Normal distribution we see the following lines:\n```python\narg_constraints = {\"loc\": constraints.real, \"scale\": constraints.positive}\nsupport = constraints.real\nreparametrized_params = [\"loc\", \"scale\"]\n```\nThe reparametrized_params attribute is used by variational inference algorithms when constructing gradient estimators. The parameters of many common distributions with continuous support (e.g. the Normal distribution) are reparameterizable, while the parameters of discrete distributions are not. Note that reparametrized_params is irrelevant for MCMC algorithms like HMC. 
See SVI Part III for more details.\nWe must adapt these attributes to our case by including the \"high\" parameter, but there are two issues we need to deal with:\n\nconstraints.real is a bit too restrictive. We'd like jnp.inf to be a valid value for high (equivalent to no truncation), but at the moment infinity is not a valid real number. We deal with this situation by defining our own constraint. The source code for constraints.real is easy to imitate:\n\n```python\nclass _RightExtendedReal(constraints.Constraint):\n    \"\"\"\n    Any number in the interval (-inf, inf].\n    \"\"\"\n\n    def __call__(self, x):\n        return (x == x) & (x != float(\"-inf\"))\n\n    def feasible_like(self, prototype):\n        return jnp.zeros_like(prototype)\n\nright_extended_real = _RightExtendedReal()\n```\n\nsupport can no longer be a class attribute as it will depend on the value of high. So instead we implement it as a dependent property.\n\nOur distribution then looks as follows:\n```python\nclass RightTruncatedNormal(Distribution):\n    arg_constraints = {\n        \"loc\": constraints.real,\n        \"scale\": constraints.positive,\n        \"high\": right_extended_real,\n    }\n    reparametrized_params = [\"loc\", \"scale\", \"high\"]\n\n    # ...\n\n    @constraints.dependent_property\n    def support(self):\n        return constraints.less_than(self.high)\n```\nThe __init__ method\nOnce again we take inspiration from the source code for the normal distribution. The key point is the use of lax and jax to check the shapes of the arguments passed and make sure that such shapes are consistent for broadcasting. We follow the same pattern for our use case -- all we need to do is include the high parameter.\nIn the source implementation of Normal, both parameters loc and scale are given defaults so that one recovers a standard normal distribution if no arguments are specified. 
In the same spirit, we choose float(\"inf\") as a default for high, which would be equivalent to no truncation.\n```python\n...\ndef __init__(self, loc=0.0, scale=1.0, high=float(\"inf\"), validate_args=None):\n    batch_shape = lax.broadcast_shapes(\n        jnp.shape(loc),\n        jnp.shape(scale),\n        jnp.shape(high),\n    )\n    self.loc, self.scale, self.high = promote_shapes(loc, scale, high)\n    super().__init__(batch_shape, validate_args=validate_args)\n\n...\n```\nThe log_prob method\nFor a truncated distribution, the log density is given by\n$$\n\\begin{align}\n    \\log p_Z(z) =\n    \\begin{cases}\n        \\log p_Y(z) - \\log M & \\text{if $z$ is in $T$}\\\n        -\\infty & \\text{if $z$ is outside $T$}\\\n    \\end{cases}\n\\end{align}\n$$\nwhere, again, $p_Z$ is the density of the truncated distribution, $p_Y$ is the density before truncation, and $M = \\int_T p_Y(y)\\mathrm{d}y$. For the specific case of truncating the normal distribution to the interval (-inf, high), the constant $M$ is equal to the cumulative density evaluated at the truncation point. We can easily implement this log-density method because jax.scipy.stats already has a norm module that we can use.\n```python\n...\ndef log_prob(self, value):\n    log_m = norm.logcdf(self.high, self.loc, self.scale)\n    log_p = norm.logpdf(value, self.loc, self.scale)\n    return jnp.where(value < self.high, log_p - log_m, -jnp.inf)\n\n...\n```\nThe sample method\nTo implement the sample method using inverse-transform sampling, we need to also implement the inverse cumulative distribution function. For this, we can use the ndtri function that lives inside jax.scipy.special. This function returns the inverse cdf for the standard normal distribution. We can do a bit of algebra to obtain the inverse cdf of the truncated, non-standard normal. First recall that if $X\\sim Normal(0, 1)$ and $Y = \\mu + \\sigma X$, then $Y\\sim Normal(\\mu, \\sigma)$. 
Then if $Z$ is the truncated $Y$, its cumulative density is given by:\n$$\n\\begin{align}\nF_Z(y) &= \\int_{-\\infty}^{y}p_Z(r)dr\\newline\n &= \\frac{1}{M}\\int_{-\\infty}^{y}p_Y(s)ds \\quad\\text{if $y < high$} \\newline\n &= \\frac{1}{M}F_Y(y)\n\\end{align}\n$$\nAnd so its inverse is\n$$\n\\begin{align}\nF_Z^{-1}(u) = \\left(\\frac{1}{M}F_Y\\right)^{-1}(u)\n = F_Y^{-1}(M u)\n = F_{\\mu + \\sigma X}^{-1}(Mu)\n = \\mu + \\sigma F_X^{-1}(Mu)\n\\end{align}\n$$\nThe translation of the above math into code is\n```python\n...\ndef sample(self, key, sample_shape=()):\n shape = sample_shape + self.batch_shape\n minval = jnp.finfo(jnp.result_type(float)).tiny\n u = random.uniform(key, shape, minval=minval)\n return self.icdf(u)\n\n\ndef icdf(self, u):\n m = norm.cdf(self.high, self.loc, self.scale)\n return self.loc + self.scale * ndtri(m * u)\n\n```\nWith everything in place, the final implementation is as below.", "class _RightExtendedReal(constraints.Constraint):\n \"\"\"\n Any number in the interval (-inf, inf].\n \"\"\"\n\n def __call__(self, x):\n return (x == x) & (x != float(\"-inf\"))\n\n def feasible_like(self, prototype):\n return jnp.zeros_like(prototype)\n\n\nright_extended_real = _RightExtendedReal()\n\n\nclass RightTruncatedNormal(Distribution):\n \"\"\"\n A truncated Normal distribution.\n :param numpy.ndarray loc: location parameter of the untruncated normal\n :param numpy.ndarray scale: scale parameter of the untruncated normal\n :param numpy.ndarray high: point at which the truncation happens\n \"\"\"\n\n arg_constraints = {\n \"loc\": constraints.real,\n \"scale\": constraints.positive,\n \"high\": right_extended_real,\n }\n reparametrized_params = [\"loc\", \"scale\", \"high\"]\n\n def __init__(self, loc=0.0, scale=1.0, high=float(\"inf\"), validate_args=True):\n batch_shape = lax.broadcast_shapes(\n jnp.shape(loc),\n jnp.shape(scale),\n jnp.shape(high),\n )\n self.loc, self.scale, self.high = promote_shapes(loc, scale, high)\n 
super().__init__(batch_shape, validate_args=validate_args)\n\n def log_prob(self, value):\n log_m = norm.logcdf(self.high, self.loc, self.scale)\n log_p = norm.logpdf(value, self.loc, self.scale)\n return jnp.where(value < self.high, log_p - log_m, -jnp.inf)\n\n def sample(self, key, sample_shape=()):\n shape = sample_shape + self.batch_shape\n minval = jnp.finfo(jnp.result_type(float)).tiny\n u = random.uniform(key, shape, minval=minval)\n return self.icdf(u)\n\n def icdf(self, u):\n m = norm.cdf(self.high, self.loc, self.scale)\n return self.loc + self.scale * ndtri(m * u)\n\n @constraints.dependent_property\n def support(self):\n return constraints.less_than(self.high)", "Let's try it out!", "def truncated_normal_model(num_observations, x=None):\n loc = numpyro.sample(\"loc\", dist.Normal())\n scale = numpyro.sample(\"scale\", dist.LogNormal())\n high = numpyro.sample(\"high\", dist.Normal())\n with numpyro.plate(\"observations\", num_observations):\n numpyro.sample(\"x\", RightTruncatedNormal(loc, scale, high), obs=x)\n\nnum_observations = 1000\nnum_prior_samples = 100\nprior = Predictive(truncated_normal_model, num_samples=num_prior_samples)\nprior_samples = prior(PRIOR_RNG, num_observations)", "As before, we run mcmc against some synthetic data.\nWe select any random sample from the prior as the ground truth:", "true_idx = 0\ntrue_loc = prior_samples[\"loc\"][true_idx]\ntrue_scale = prior_samples[\"scale\"][true_idx]\ntrue_high = prior_samples[\"high\"][true_idx]\ntrue_x = prior_samples[\"x\"][true_idx]\n\nplt.hist(true_x.copy())\nplt.axvline(true_high, linestyle=\":\", color=\"k\")\nplt.xlabel(\"x\")\nplt.show()", "Run MCMC and check the estimates:", "mcmc = MCMC(NUTS(truncated_normal_model), **MCMC_KWARGS)\nmcmc.run(MCMC_RNG, num_observations, true_x)\nmcmc.print_summary()", "Compare estimates against the ground truth:", "print(f\"True high : {true_high:3.2f}\")\nprint(f\"True loc : {true_loc:3.2f}\")\nprint(f\"True scale: {true_scale:3.2f}\")", "Note that, 
even though we can recover good estimates for the true values,\nwe had a very high number of divergences. These divergences happen because\nthe data can be outside of the support that we are allowing with our priors.\nTo fix this, we can change the prior on high so that it depends on the observations:", "def truncated_normal_model_2(num_observations, x=None):\n loc = numpyro.sample(\"loc\", dist.Normal())\n scale = numpyro.sample(\"scale\", dist.LogNormal())\n if x is None:\n high = numpyro.sample(\"high\", dist.Normal())\n else:\n # high is greater than or equal to the max value in x:\n delta = numpyro.sample(\"delta\", dist.HalfNormal())\n high = numpyro.deterministic(\"high\", delta + x.max())\n\n with numpyro.plate(\"observations\", num_observations):\n numpyro.sample(\"x\", RightTruncatedNormal(loc, scale, high), obs=x)\n\nmcmc = MCMC(NUTS(truncated_normal_model_2), **MCMC_KWARGS)\nmcmc.run(MCMC_RNG, num_observations, true_x)\nmcmc.print_summary(exclude_deterministic=False)", "And the divergences are gone.\nIn practice, we usually want to understand what the data\nwould look like without the truncation. 
To do that in NumPyro,\nthere is no need to write a separate model; we can simply\nrely on the condition handler to push the truncation point to infinity:", "model_without_truncation = numpyro.handlers.condition(\n truncated_normal_model,\n {\"high\": float(\"inf\")},\n)\nestimates = mcmc.get_samples().copy()\nestimates.pop(\"high\") # Drop to make sure these are not used\npred = Predictive(\n model_without_truncation,\n posterior_samples=estimates,\n)\npred_samples = pred(PRED_RNG, num_observations=1000)\n\n# thin the samples for a faster histogram\nsamples_thinned = pred_samples[\"x\"].ravel()[::1000]\n\nf, axes = plt.subplots(1, 2, figsize=(15, 5))\n\naxes[0].hist(\n samples_thinned.copy(), label=\"Untruncated posterior\", bins=20, density=True\n)\naxes[0].axvline(true_high, linestyle=\":\", color=\"k\", label=\"Truncation point\")\naxes[0].set_title(\"Untruncated posterior\")\naxes[0].legend()\n\naxes[1].hist(\n samples_thinned[samples_thinned < true_high].copy(),\n label=\"Tail of untruncated posterior\",\n bins=20,\n density=True,\n)\naxes[1].hist(true_x.copy(), label=\"Observed, truncated data\", density=True, alpha=0.5)\naxes[1].axvline(true_high, linestyle=\":\", color=\"k\", label=\"Truncation point\")\naxes[1].set_title(\"Comparison to observed data\")\naxes[1].legend()\nplt.show()", "5.3 Example: Left-truncated Poisson <a class=\"anchor\" id=\"5.3\"></a>\nAs a final example, we now implement a left-truncated Poisson distribution.\nNote that a right-truncated Poisson could be reformulated as a particular\ncase of a categorical distribution, so we focus on the less trivial case.\nClass attributes\nFor a truncated Poisson we need two parameters, the rate of the original Poisson\ndistribution and a low parameter to indicate the truncation point.\nAs this is a discrete distribution, we need to clarify whether or not the truncation point is included\nin the support. 
In this tutorial, we'll take the convention that the truncation point low\nis part of the support.\nThe low parameter has to be given a 'non-negative integer' constraint. As it is a discrete parameter, it will not be possible to do inference for this parameter using NUTS. This is likely not a problem since the truncation point is often known in advance. However, if we really must infer the low parameter, it is possible to do so with DiscreteHMCGibbs though one is limited to using priors with enumerate support.\nLike in the case of the truncated normal, the support of this distribution will be defined as a property and not as a class attribute because it depends on the specific value of the low parameter.\n```python\nclass LeftTruncatedPoisson:\n arg_constraints = {\n \"low\": constraints.nonnegative_integer,\n \"rate\": constraints.positive,\n }\n# ... \n@constraints.dependent_property(is_discrete=True)\ndef support(self):\n return constraints.integer_greater_than(self.low - 1)\n\n```\nThe is_discrete argument passed in the dependent_property decorator is used to tell the inference algorithms which variables are discrete latent variables.\nThe __init__ method\nHere we just follow the same pattern as in the previous example.\npython\n # ...\n def __init__(self, rate=1.0, low=0, validate_args=None):\n batch_shape = lax.broadcast_shapes(\n jnp.shape(low), jnp.shape(rate)\n )\n self.low, self.rate = promote_shapes(low, rate)\n super().__init__(batch_shape, validate_args=validate_args)\n # ...\nThe log_prob method\nThe logic is very similar to the truncated normal case. 
But this time we are truncating on the left, so the correct normalization is the complementary cumulative density:\n$$\n\\begin{align}\nM = \\sum_{n=L}^{\\infty} p_Y(n) = 1 - \\sum_{n=0}^{L - 1} p_Y(n) = 1 - F_Y(L - 1)\n\\end{align}\n$$\nFor the code, we can rely on the poisson module that lives inside jax.scipy.stats.\npython\n # ...\n def log_prob(self, value):\n m = 1 - poisson.cdf(self.low - 1, self.rate)\n log_p = poisson.logpmf(value, self.rate)\n return jnp.where(value >= self.low, log_p - jnp.log(m), -jnp.inf)\n # ...\nThe sample method\nInverse-transform sampling also works for discrete distributions. The \"inverse\" cdf of a discrete distribution is defined as:\n$$\n\\begin{align}\nF^{-1}(u) = \\min\\left\\{n\\in \\mathbb{N} \\mid F(n) \\geq u\\right\\}\n\\end{align}\n$$\nOr, in plain English, $F^{-1}(u)$ is the smallest number for which the cumulative density is at least $u$.\nHowever, there's currently no implementation of $F^{-1}$ for the Poisson distribution in Jax (at least, at the moment of writing this tutorial). We have to rely on our own implementation. Fortunately, we can take advantage of the discrete nature of the distribution and easily implement a \"brute-force\" version that will work for most cases. The brute force approach consists of simply scanning all non-negative integers in order, one by one, until the value of the cumulative density reaches the argument $u$. 
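As an aside, the brute-force scan can be checked in plain Python before worrying about Jax's `while_loop` machinery. Below is a minimal sketch for an untruncated Poisson; the helper names `poisson_cdf` and `poisson_icdf` are ours for illustration and are not part of NumPyro or Jax:

```python
import math

def poisson_cdf(n, rate):
    # F(n) = sum_{k=0}^{n} e^{-rate} * rate^k / k!
    return sum(math.exp(-rate) * rate**k / math.factorial(k) for k in range(n + 1))

def poisson_icdf(u, rate):
    # Scan n = 0, 1, 2, ... until the cumulative density reaches u.
    n = 0
    while poisson_cdf(n, rate) < u:
        n += 1
    return n

# For rate = 3: F(2) ~ 0.423 and F(3) ~ 0.647, so u = 0.5 maps to n = 3.
print(poisson_icdf(0.5, 3.0))   # 3
print(poisson_icdf(0.95, 3.0))  # 6
```

The Jax version below implements exactly this loop, just vectorized over `u` and expressed with `lax.while_loop` so it can be traced and jitted.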
The implicit requirement is that we need a way to evaluate the cumulative density for the truncated distribution, but we can calculate that:\n$$\n\\begin{align}\nF_Z(z) &= \\sum_{n=0}^z p_Z(n)\\newline\n &= \\frac{1}{M}\\sum_{n=L}^z p_Y(n)\\quad \\text{assuming $z \\geq L$}\\newline\n &= \\frac{1}{M}\\left(\\sum_{n=0}^z p_Y(n) - \\sum_{n=0}^{L-1}p_Y(n)\\right)\\newline\n &= \\frac{1}{M}\\left(F_Y(z) - F_Y(L-1)\\right)\n\\end{align}\n$$\nAnd, of course, the value of $F_Z(z)$ is equal to zero if $z < L$.\n(As in the previous example, we are using $Y$ to denote the original, un-truncated variable, and we are using $Z$ to denote the truncated variable)\n```python\n # ...\n def sample(self, key, sample_shape=()):\n shape = sample_shape + self.batch_shape\n minval = jnp.finfo(jnp.result_type(float)).tiny\n u = random.uniform(key, shape, minval=minval)\n return self.icdf(u)\ndef icdf(self, u):\n def cond_fn(val):\n n, cdf = val\n return jnp.any(cdf < u)\n\n def body_fn(val):\n n, cdf = val\n n_new = jnp.where(cdf < u, n + 1, n)\n return n_new, self.cdf(n_new)\n\n low = self.low * jnp.ones_like(u)\n cdf = self.cdf(low)\n n, _ = lax.while_loop(cond_fn, body_fn, (low, cdf))\n return n.astype(jnp.result_type(int))\n\ndef cdf(self, value):\n m = 1 - poisson.cdf(self.low - 1, self.rate)\n f = poisson.cdf(value, self.rate) - poisson.cdf(self.low - 1, self.rate)\n return jnp.where(value >= self.low, f / m, 0)\n\n```\nA few comments with respect to the above implementation:\n* Even with double precision, if rate is much less than low, the above code will not work. Due to numerical limitations, one obtains that poisson.cdf(low - 1, rate) is equal (or very close) to 1.0. This makes it impossible to re-weight the distribution accurately because the normalization constant would be 0.0.\n* The brute-force icdf is of course very slow, particularly when rate is high. If you need faster sampling, one option would be to rely on a faster search algorithm. 
For example:\npython\ndef icdf_faster(self, u):\n num_bins = 200 # Choose a reasonably large value\n bins = jnp.arange(num_bins)\n cdf = self.cdf(bins)\n indices = jnp.searchsorted(cdf, u)\n return bins[indices]\nThe obvious limitation here is that the number of bins has to be fixed a priori (jax does not allow for dynamically sized arrays). Another option would be to rely on an approximate implementation, as proposed in this article.\n\nYet another alternative for the icdf is to rely on scipy's implementation and make use of Jax's host_callback module. This feature allows you to use Python functions without having to code them in Jax. This means that we can simply make use of scipy's implementation of the Poisson ICDF! From the last equation, we can write the truncated icdf as:\n\n$$\n\\begin{align}\nF_Z^{-1}(u) = F_Y^{-1}(Mu + F_Y(L-1))\n\\end{align}\n$$\nAnd in python:\npython\n def scipy_truncated_poisson_icdf(args): # Note: all arguments are passed inside a tuple\n rate, low, u = args\n rate = np.asarray(rate)\n low = np.asarray(low)\n u = np.asarray(u)\n density = sp_poisson(rate)\n low_cdf = density.cdf(low - 1)\n normalizer = 1.0 - low_cdf\n x = normalizer * u + low_cdf\n return density.ppf(x)\nIn principle, it wouldn't be possible to use the above function in our NumPyro distribution because it is not coded in Jax. The jax.experimental.host_callback.call function solves precisely that problem. The code below shows you how to use it, but keep in mind that this is currently an experimental feature so you should expect changes to the module. 
See the host_callback docs for more details.\npython\n # ...\n def icdf_scipy(self, u):\n result_shape = jax.ShapeDtypeStruct(\n u.shape,\n jnp.result_type(float) # int type not currently supported\n )\n result = jax.experimental.host_callback.call(\n scipy_truncated_poisson_icdf,\n (self.rate, self.low, u),\n result_shape=result_shape\n )\n return result.astype(jnp.result_type(int))\n # ...\nPutting it all together, the implementation is as below:", "def scipy_truncated_poisson_icdf(args): # Note: all arguments are passed inside a tuple\n rate, low, u = args\n rate = np.asarray(rate)\n low = np.asarray(low)\n u = np.asarray(u)\n density = sp_poisson(rate)\n low_cdf = density.cdf(low - 1)\n normalizer = 1.0 - low_cdf\n x = normalizer * u + low_cdf\n return density.ppf(x)\n\n\nclass LeftTruncatedPoisson(Distribution):\n \"\"\"\n A truncated Poisson distribution.\n :param numpy.ndarray low: lower bound at which truncation happens\n :param numpy.ndarray rate: rate of the Poisson distribution.\n \"\"\"\n\n arg_constraints = {\n \"low\": constraints.nonnegative_integer,\n \"rate\": constraints.positive,\n }\n\n def __init__(self, rate=1.0, low=0, validate_args=None):\n batch_shape = lax.broadcast_shapes(jnp.shape(low), jnp.shape(rate))\n self.low, self.rate = promote_shapes(low, rate)\n super().__init__(batch_shape, validate_args=validate_args)\n\n def log_prob(self, value):\n m = 1 - poisson.cdf(self.low - 1, self.rate)\n log_p = poisson.logpmf(value, self.rate)\n return jnp.where(value >= self.low, log_p - jnp.log(m), -jnp.inf)\n\n def sample(self, key, sample_shape=()):\n shape = sample_shape + self.batch_shape\n float_type = jnp.result_type(float)\n minval = jnp.finfo(float_type).tiny\n u = random.uniform(key, shape, minval=minval)\n # return self.icdf(u) # Brute force\n # return self.icdf_faster(u) # For faster sampling.\n return self.icdf_scipy(u) # Using `host_callback`\n\n def icdf(self, u):\n def cond_fn(val):\n n, cdf = val\n return jnp.any(cdf < u)\n\n def 
body_fn(val):\n n, cdf = val\n n_new = jnp.where(cdf < u, n + 1, n)\n return n_new, self.cdf(n_new)\n\n low = self.low * jnp.ones_like(u)\n cdf = self.cdf(low)\n n, _ = lax.while_loop(cond_fn, body_fn, (low, cdf))\n return n.astype(jnp.result_type(int))\n\n def icdf_faster(self, u):\n num_bins = 200 # Choose a reasonably large value\n bins = jnp.arange(num_bins)\n cdf = self.cdf(bins)\n indices = jnp.searchsorted(cdf, u)\n return bins[indices]\n\n def icdf_scipy(self, u):\n result_shape = jax.ShapeDtypeStruct(u.shape, jnp.result_type(float))\n result = jax.experimental.host_callback.call(\n scipy_truncated_poisson_icdf,\n (self.rate, self.low, u),\n result_shape=result_shape,\n )\n return result.astype(jnp.result_type(int))\n\n def cdf(self, value):\n m = 1 - poisson.cdf(self.low - 1, self.rate)\n f = poisson.cdf(value, self.rate) - poisson.cdf(self.low - 1, self.rate)\n return jnp.where(value >= self.low, f / m, 0)\n\n @constraints.dependent_property(is_discrete=True)\n def support(self):\n return constraints.integer_greater_than(self.low - 1)", "Let's try it out!", "def discrete_distplot(samples, ax=None, **kwargs):\n \"\"\"\n Utility function for plotting the samples as a barplot.\n \"\"\"\n x, y = np.unique(samples, return_counts=True)\n y = y / sum(y)\n if ax is None:\n ax = plt.gca()\n\n ax.bar(x, y, **kwargs)\n return ax\n\ndef truncated_poisson_model(num_observations, x=None):\n low = numpyro.sample(\"low\", dist.Categorical(0.2 * jnp.ones((5,))))\n rate = numpyro.sample(\"rate\", dist.LogNormal(1, 1))\n with numpyro.plate(\"observations\", num_observations):\n numpyro.sample(\"x\", LeftTruncatedPoisson(rate, low), obs=x)", "Prior samples", "# -- prior samples\nnum_observations = 1000\nnum_prior_samples = 100\nprior = Predictive(truncated_poisson_model, num_samples=num_prior_samples)\nprior_samples = prior(PRIOR_RNG, num_observations)", "Inference\nAs in the case for the truncated normal, here it is better to replace\nthe prior on the low parameter so that 
it is consistent with the observed data.\nWe'd like to have a categorical prior on low (so that we can use DiscreteHMCGibbs)\nwhose highest category is equal to the minimum value of x (so that prior and data are consistent).\nHowever, we have to be careful in the way we write such a model because Jax does not allow for dynamically sized arrays. A simple way of coding this model is to specify the number of categories as an argument:", "def truncated_poisson_model(num_observations, x=None, k=5):\n zeros = jnp.zeros((k,))\n low = numpyro.sample(\"low\", dist.Categorical(logits=zeros))\n rate = numpyro.sample(\"rate\", dist.LogNormal(1, 1))\n with numpyro.plate(\"observations\", num_observations):\n numpyro.sample(\"x\", LeftTruncatedPoisson(rate, low), obs=x)\n\n# Take any prior sample as the true process.\ntrue_idx = 6\ntrue_low = prior_samples[\"low\"][true_idx]\ntrue_rate = prior_samples[\"rate\"][true_idx]\ntrue_x = prior_samples[\"x\"][true_idx]\ndiscrete_distplot(true_x.copy());", "To do inference, we set k = x.min() + 1. 
Note also the use of DiscreteHMCGibbs:", "mcmc = MCMC(DiscreteHMCGibbs(NUTS(truncated_poisson_model)), **MCMC_KWARGS)\nmcmc.run(MCMC_RNG, num_observations, true_x, k=true_x.min() + 1)\nmcmc.print_summary()\n\ntrue_rate", "As before, one needs to be extra careful when estimating the truncation point.\nIf the truncation point is known, it is best to provide it.", "model_with_known_low = numpyro.handlers.condition(\n truncated_poisson_model, {\"low\": true_low}\n)", "And note we can use NUTS directly because there's no need to infer any discrete parameters.", "mcmc = MCMC(\n NUTS(model_with_known_low),\n **MCMC_KWARGS,\n)\n\nmcmc.run(MCMC_RNG, num_observations, true_x)\nmcmc.print_summary()", "Removing the truncation", "model_without_truncation = numpyro.handlers.condition(\n truncated_poisson_model,\n {\"low\": 0},\n)\npred = Predictive(model_without_truncation, posterior_samples=mcmc.get_samples())\npred_samples = pred(PRED_RNG, num_observations)\nthinned_samples = pred_samples[\"x\"][::500]\n\ndiscrete_distplot(thinned_samples.copy());", "References and related material <a class=\"anchor\" id=\"references\"></a>\n\nWikipedia page on inverse transform sampling\nDavid MacKay's book on information theory\n<a class=\"anchor\" id=\"ref3\"></a>Composite models with underlying folded distributions\n<a class=\"anchor\" id=\"ref4\"></a>Application of the generalized folded-normal distribution to the process capability measures\nPyro SVI tutorial part 3\nApproximation of the inverse Poisson cumulative distribution function" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.13.2/examples/notebooks/generated/exponential_smoothing.ipynb
bsd-3-clause
[ "Exponential smoothing\nLet us consider chapter 7 of the excellent treatise on the subject of Exponential Smoothing By Hyndman and Athanasopoulos [1].\nWe will work through all the examples in the chapter as they unfold.\n[1] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice. OTexts, 2014.\nLoading data\nFirst we load some data. We have included the R data in the notebook for expedience.", "import os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt\n\n%matplotlib inline\n\ndata = [\n 446.6565,\n 454.4733,\n 455.663,\n 423.6322,\n 456.2713,\n 440.5881,\n 425.3325,\n 485.1494,\n 506.0482,\n 526.792,\n 514.2689,\n 494.211,\n]\nindex = pd.date_range(start=\"1996\", end=\"2008\", freq=\"A\")\noildata = pd.Series(data, index)\n\ndata = [\n 17.5534,\n 21.86,\n 23.8866,\n 26.9293,\n 26.8885,\n 28.8314,\n 30.0751,\n 30.9535,\n 30.1857,\n 31.5797,\n 32.5776,\n 33.4774,\n 39.0216,\n 41.3864,\n 41.5966,\n]\nindex = pd.date_range(start=\"1990\", end=\"2005\", freq=\"A\")\nair = pd.Series(data, index)\n\ndata = [\n 263.9177,\n 268.3072,\n 260.6626,\n 266.6394,\n 277.5158,\n 283.834,\n 290.309,\n 292.4742,\n 300.8307,\n 309.2867,\n 318.3311,\n 329.3724,\n 338.884,\n 339.2441,\n 328.6006,\n 314.2554,\n 314.4597,\n 321.4138,\n 329.7893,\n 346.3852,\n 352.2979,\n 348.3705,\n 417.5629,\n 417.1236,\n 417.7495,\n 412.2339,\n 411.9468,\n 394.6971,\n 401.4993,\n 408.2705,\n 414.2428,\n]\nindex = pd.date_range(start=\"1970\", end=\"2001\", freq=\"A\")\nlivestock2 = pd.Series(data, index)\n\ndata = [407.9979, 403.4608, 413.8249, 428.105, 445.3387, 452.9942, 455.7402]\nindex = pd.date_range(start=\"2001\", end=\"2008\", freq=\"A\")\nlivestock3 = pd.Series(data, index)\n\ndata = [\n 41.7275,\n 24.0418,\n 32.3281,\n 37.3287,\n 46.2132,\n 29.3463,\n 36.4829,\n 42.9777,\n 48.9015,\n 31.1802,\n 37.7179,\n 40.4202,\n 51.2069,\n 31.8872,\n 40.9783,\n 43.7725,\n 
55.5586,\n 33.8509,\n 42.0764,\n 45.6423,\n 59.7668,\n 35.1919,\n 44.3197,\n 47.9137,\n]\nindex = pd.date_range(start=\"2005\", end=\"2010-Q4\", freq=\"QS-OCT\")\naust = pd.Series(data, index)", "Simple Exponential Smoothing\nLet's use Simple Exponential Smoothing to forecast the oil data below.", "ax = oildata.plot()\nax.set_xlabel(\"Year\")\nax.set_ylabel(\"Oil (millions of tonnes)\")\nprint(\"Figure 7.1: Oil production in Saudi Arabia from 1996 to 2007.\")", "Here we run three variants of simple exponential smoothing:\n1. In fit1 we do not use the auto optimization but instead choose to explicitly provide the model with the $\\alpha=0.2$ parameter\n2. In fit2 as above we choose an $\\alpha=0.6$\n3. In fit3 we allow statsmodels to automatically find an optimized $\\alpha$ value for us. This is the recommended approach.", "fit1 = SimpleExpSmoothing(oildata, initialization_method=\"heuristic\").fit(\n smoothing_level=0.2, optimized=False\n)\nfcast1 = fit1.forecast(3).rename(r\"$\\alpha=0.2$\")\nfit2 = SimpleExpSmoothing(oildata, initialization_method=\"heuristic\").fit(\n smoothing_level=0.6, optimized=False\n)\nfcast2 = fit2.forecast(3).rename(r\"$\\alpha=0.6$\")\nfit3 = SimpleExpSmoothing(oildata, initialization_method=\"estimated\").fit()\nfcast3 = fit3.forecast(3).rename(r\"$\\alpha=%s$\" % fit3.model.params[\"smoothing_level\"])\n\nplt.figure(figsize=(12, 8))\nplt.plot(oildata, marker=\"o\", color=\"black\")\nplt.plot(fit1.fittedvalues, marker=\"o\", color=\"blue\")\n(line1,) = plt.plot(fcast1, marker=\"o\", color=\"blue\")\nplt.plot(fit2.fittedvalues, marker=\"o\", color=\"red\")\n(line2,) = plt.plot(fcast2, marker=\"o\", color=\"red\")\nplt.plot(fit3.fittedvalues, marker=\"o\", color=\"green\")\n(line3,) = plt.plot(fcast3, marker=\"o\", color=\"green\")\nplt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name])", "Holt's Method\nLet's take a look at another example.\nThis time we use air pollution data and Holt's method.\nWe will fit three 
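The fitted values above come from a one-line recursion: the forecast for each step is the current level, and the level is then nudged towards the new observation by a factor $\alpha$. The following hand-rolled sketch shows just that recursion; it is not the statsmodels implementation (which also estimates or heuristically chooses the initial level), and the numbers are made up:

```python
def simple_exp_smoothing(y, alpha, l0):
    """Return the one-step-ahead fitted values produced by the SES recursion."""
    level = l0
    fitted = []
    for obs in y:
        fitted.append(level)  # the forecast for this step is the current level
        level = alpha * obs + (1 - alpha) * level  # update after seeing the observation
    return fitted

# With alpha=0.2 the level moves only slowly towards new observations:
fitted = simple_exp_smoothing([10.0, 20.0, 30.0], alpha=0.2, l0=10.0)
print(fitted)  # [10.0, 10.0, 12.0]
```

A small $\alpha$ therefore smooths heavily (slow reaction), while $\alpha$ close to 1 makes the fit track the data almost one-for-one.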
examples again.\n1. In fit1 we again choose not to use the optimizer and provide explicit values for $\\alpha=0.8$ and $\\beta=0.2$\n2. In fit2 we do the same as in fit1 but choose to use an exponential model rather than a Holt's additive model.\n3. In fit3 we use a damped version of Holt's additive model but allow the damping parameter $\\phi$ to be optimized while fixing the values for $\\alpha=0.8$ and $\\beta=0.2$", "fit1 = Holt(air, initialization_method=\"estimated\").fit(\n smoothing_level=0.8, smoothing_trend=0.2, optimized=False\n)\nfcast1 = fit1.forecast(5).rename(\"Holt's linear trend\")\nfit2 = Holt(air, exponential=True, initialization_method=\"estimated\").fit(\n smoothing_level=0.8, smoothing_trend=0.2, optimized=False\n)\nfcast2 = fit2.forecast(5).rename(\"Exponential trend\")\nfit3 = Holt(air, damped_trend=True, initialization_method=\"estimated\").fit(\n smoothing_level=0.8, smoothing_trend=0.2\n)\nfcast3 = fit3.forecast(5).rename(\"Additive damped trend\")\n\nplt.figure(figsize=(12, 8))\nplt.plot(air, marker=\"o\", color=\"black\")\nplt.plot(fit1.fittedvalues, color=\"blue\")\n(line1,) = plt.plot(fcast1, marker=\"o\", color=\"blue\")\nplt.plot(fit2.fittedvalues, color=\"red\")\n(line2,) = plt.plot(fcast2, marker=\"o\", color=\"red\")\nplt.plot(fit3.fittedvalues, color=\"green\")\n(line3,) = plt.plot(fcast3, marker=\"o\", color=\"green\")\nplt.legend([line1, line2, line3], [fcast1.name, fcast2.name, fcast3.name])", "Seasonally adjusted data\nLet's look at some seasonally adjusted livestock data. 
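The damping in fit3 changes only the forecast function: instead of extrapolating the trend linearly as $l_t + h b_t$, the damped method forecasts $l_t + (\phi + \phi^2 + \dots + \phi^h)b_t$, which levels off when $\phi < 1$. A small sketch of that forecast formula (the level and trend values here are made up for illustration, not taken from the fits above):

```python
def holt_forecast(level, trend, h, phi=1.0):
    """h-step-ahead forecast for Holt's linear method; phi < 1 damps the trend."""
    damped_sum = sum(phi**i for i in range(1, h + 1))
    return level + damped_sum * trend

# With phi=1 the trend is extrapolated linearly; with phi<1 it flattens out:
print(holt_forecast(100.0, 2.0, h=5))            # 110.0
print(holt_forecast(100.0, 2.0, h=5, phi=0.8))   # ~105.4
```

As $h \to \infty$ the damped forecast approaches the finite limit $l_t + \frac{\phi}{1-\phi} b_t$, which is why damped trends are often preferred for long horizons.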
We fit five models.\nThe table below allows us to compare results when we use exponential versus additive and damped versus non-damped.\nNote: fit4 does not allow the parameter $\\phi$ to be optimized, instead providing a fixed value of $\\phi=0.98$", "fit1 = SimpleExpSmoothing(livestock2, initialization_method=\"estimated\").fit()\nfit2 = Holt(livestock2, initialization_method=\"estimated\").fit()\nfit3 = Holt(livestock2, exponential=True, initialization_method=\"estimated\").fit()\nfit4 = Holt(livestock2, damped_trend=True, initialization_method=\"estimated\").fit(\n damping_trend=0.98\n)\nfit5 = Holt(\n livestock2, exponential=True, damped_trend=True, initialization_method=\"estimated\"\n).fit()\nparams = [\n \"smoothing_level\",\n \"smoothing_trend\",\n \"damping_trend\",\n \"initial_level\",\n \"initial_trend\",\n]\nresults = pd.DataFrame(\n index=[r\"$\\alpha$\", r\"$\\beta$\", r\"$\\phi$\", r\"$l_0$\", \"$b_0$\", \"SSE\"],\n columns=[\"SES\", \"Holt's\", \"Exponential\", \"Additive\", \"Multiplicative\"],\n)\nresults[\"SES\"] = [fit1.params[p] for p in params] + [fit1.sse]\nresults[\"Holt's\"] = [fit2.params[p] for p in params] + [fit2.sse]\nresults[\"Exponential\"] = [fit3.params[p] for p in params] + [fit3.sse]\nresults[\"Additive\"] = [fit4.params[p] for p in params] + [fit4.sse]\nresults[\"Multiplicative\"] = [fit5.params[p] for p in params] + [fit5.sse]\nresults", "Plots of Seasonally Adjusted Data\nThe following plots allow us to evaluate the level and slope/trend components of the above table's fits.", "for fit in [fit2, fit4]:\n pd.DataFrame(np.c_[fit.level, fit.trend]).rename(\n columns={0: \"level\", 1: \"slope\"}\n ).plot(subplots=True)\nplt.show()\nprint(\n \"Figure 7.4: Level and slope components for Holt’s linear trend method and the additive damped trend method.\"\n)", "Comparison\nHere we plot a comparison of Simple Exponential Smoothing and Holt's methods for various additive, exponential and damped combinations. 
All of the model parameters will be optimized by statsmodels.", "fit1 = SimpleExpSmoothing(livestock2, initialization_method=\"estimated\").fit()\nfcast1 = fit1.forecast(9).rename(\"SES\")\nfit2 = Holt(livestock2, initialization_method=\"estimated\").fit()\nfcast2 = fit2.forecast(9).rename(\"Holt's\")\nfit3 = Holt(livestock2, exponential=True, initialization_method=\"estimated\").fit()\nfcast3 = fit3.forecast(9).rename(\"Exponential\")\nfit4 = Holt(livestock2, damped_trend=True, initialization_method=\"estimated\").fit(\n damping_trend=0.98\n)\nfcast4 = fit4.forecast(9).rename(\"Additive Damped\")\nfit5 = Holt(\n livestock2, exponential=True, damped_trend=True, initialization_method=\"estimated\"\n).fit()\nfcast5 = fit5.forecast(9).rename(\"Multiplicative Damped\")\n\nax = livestock2.plot(color=\"black\", marker=\"o\", figsize=(12, 8))\nlivestock3.plot(ax=ax, color=\"black\", marker=\"o\", legend=False)\nfcast1.plot(ax=ax, color=\"red\", legend=True)\nfcast2.plot(ax=ax, color=\"green\", legend=True)\nfcast3.plot(ax=ax, color=\"blue\", legend=True)\nfcast4.plot(ax=ax, color=\"cyan\", legend=True)\nfcast5.plot(ax=ax, color=\"magenta\", legend=True)\nax.set_ylabel(\"Livestock, sheep in Asia (millions)\")\nplt.show()\nprint(\n \"Figure 7.5: Forecasting livestock, sheep in Asia: comparing forecasting performance of non-seasonal methods.\"\n)", "Holt-Winters Seasonal\nFinally we are able to run full Holt-Winters seasonal exponential smoothing, including a trend component and a seasonal component.\nstatsmodels allows for all the combinations, as shown in the examples below:\n1. fit1 additive trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.\n1. fit2 additive trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.\n1. fit3 additive damped trend, additive seasonal of period season_length=4 and the use of a Box-Cox transformation.\n1. 
fit4 additive damped trend, multiplicative seasonal of period season_length=4 and the use of a Box-Cox transformation.\nThe plot shows the results and forecast for fit1 and fit2.\nThe table allows us to compare the results and parameterizations.", "fit1 = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"add\",\n use_boxcox=True,\n initialization_method=\"estimated\",\n).fit()\nfit2 = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"mul\",\n use_boxcox=True,\n initialization_method=\"estimated\",\n).fit()\nfit3 = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"add\",\n damped_trend=True,\n use_boxcox=True,\n initialization_method=\"estimated\",\n).fit()\nfit4 = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"mul\",\n damped_trend=True,\n use_boxcox=True,\n initialization_method=\"estimated\",\n).fit()\nresults = pd.DataFrame(\n index=[r\"$\\alpha$\", r\"$\\beta$\", r\"$\\phi$\", r\"$\\gamma$\", r\"$l_0$\", \"$b_0$\", \"SSE\"]\n)\nparams = [\n \"smoothing_level\",\n \"smoothing_trend\",\n \"damping_trend\",\n \"smoothing_seasonal\",\n \"initial_level\",\n \"initial_trend\",\n]\nresults[\"Additive\"] = [fit1.params[p] for p in params] + [fit1.sse]\nresults[\"Multiplicative\"] = [fit2.params[p] for p in params] + [fit2.sse]\nresults[\"Additive Dam\"] = [fit3.params[p] for p in params] + [fit3.sse]\nresults[\"Multiplica Dam\"] = [fit4.params[p] for p in params] + [fit4.sse]\n\nax = aust.plot(\n figsize=(10, 6),\n marker=\"o\",\n color=\"black\",\n title=\"Forecasts from Holt-Winters' multiplicative method\",\n)\nax.set_ylabel(\"International visitor night in Australia (millions)\")\nax.set_xlabel(\"Year\")\nfit1.fittedvalues.plot(ax=ax, style=\"--\", color=\"red\")\nfit2.fittedvalues.plot(ax=ax, style=\"--\", color=\"green\")\n\nfit1.forecast(8).rename(\"Holt-Winters (add-add-seasonal)\").plot(\n ax=ax, style=\"--\", marker=\"o\", 
color=\"red\", legend=True\n)\nfit2.forecast(8).rename(\"Holt-Winters (add-mul-seasonal)\").plot(\n ax=ax, style=\"--\", marker=\"o\", color=\"green\", legend=True\n)\n\nplt.show()\nprint(\n \"Figure 7.6: Forecasting international visitor nights in Australia using Holt-Winters method with both additive and multiplicative seasonality.\"\n)\n\nresults", "The Internals\nIt is possible to get at the internals of the Exponential Smoothing models. \nHere we show some tables that allow you to view side by side the original values $y_t$, the level $l_t$, the trend $b_t$, the season $s_t$ and the fitted values $\\hat{y}_t$. Note that these values only have meaningful values in the space of your original data if the fit is performed without a Box-Cox transformation.", "fit1 = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"add\",\n initialization_method=\"estimated\",\n).fit()\nfit2 = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"mul\",\n initialization_method=\"estimated\",\n).fit()\n\ndf = pd.DataFrame(\n np.c_[aust, fit1.level, fit1.trend, fit1.season, fit1.fittedvalues],\n columns=[r\"$y_t$\", r\"$l_t$\", r\"$b_t$\", r\"$s_t$\", r\"$\\hat{y}_t$\"],\n index=aust.index,\n)\ndf.append(fit1.forecast(8).rename(r\"$\\hat{y}_t$\").to_frame(), sort=True)\n\ndf = pd.DataFrame(\n np.c_[aust, fit2.level, fit2.trend, fit2.season, fit2.fittedvalues],\n columns=[r\"$y_t$\", r\"$l_t$\", r\"$b_t$\", r\"$s_t$\", r\"$\\hat{y}_t$\"],\n index=aust.index,\n)\ndf.append(fit2.forecast(8).rename(r\"$\\hat{y}_t$\").to_frame(), sort=True)", "Finally lets look at the levels, slopes/trends and seasonal components of the models.", "states1 = pd.DataFrame(\n np.c_[fit1.level, fit1.trend, fit1.season],\n columns=[\"level\", \"slope\", \"seasonal\"],\n index=aust.index,\n)\nstates2 = pd.DataFrame(\n np.c_[fit2.level, fit2.trend, fit2.season],\n columns=[\"level\", \"slope\", \"seasonal\"],\n index=aust.index,\n)\nfig, [[ax1, ax4], 
[ax2, ax5], [ax3, ax6]] = plt.subplots(3, 2, figsize=(12, 8))\nstates1[[\"level\"]].plot(ax=ax1)\nstates1[[\"slope\"]].plot(ax=ax2)\nstates1[[\"seasonal\"]].plot(ax=ax3)\nstates2[[\"level\"]].plot(ax=ax4)\nstates2[[\"slope\"]].plot(ax=ax5)\nstates2[[\"seasonal\"]].plot(ax=ax6)\nplt.show()", "Simulations and Confidence Intervals\nBy using a state space formulation, we can perform simulations of future values. The mathematical details are described in Hyndman and Athanasopoulos [2] and in the documentation of HoltWintersResults.simulate.\nSimilar to the example in [2], we use the model with additive trend, multiplicative seasonality, and multiplicative error. We simulate up to 8 steps into the future, and perform 100 simulations. As can be seen in the figure below, the simulations match the forecast values quite well.\n[2] Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice, 2nd edition. OTexts, 2018.", "fit = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"mul\",\n initialization_method=\"estimated\",\n).fit()\nsimulations = fit.simulate(8, repetitions=100, error=\"mul\")\n\nax = aust.plot(\n figsize=(10, 6),\n marker=\"o\",\n color=\"black\",\n title=\"Forecasts and simulations from Holt-Winters' multiplicative method\",\n)\nax.set_ylabel(\"International visitor night in Australia (millions)\")\nax.set_xlabel(\"Year\")\nfit.fittedvalues.plot(ax=ax, style=\"--\", color=\"green\")\nsimulations.plot(ax=ax, style=\"-\", alpha=0.05, color=\"grey\", legend=False)\nfit.forecast(8).rename(\"Holt-Winters (add-mul-seasonal)\").plot(\n ax=ax, style=\"--\", marker=\"o\", color=\"green\", legend=True\n)\nplt.show()", "Simulations can also be started at different points in time, and there are multiple options for choosing the random noise.", "fit = ExponentialSmoothing(\n aust,\n seasonal_periods=4,\n trend=\"add\",\n seasonal=\"mul\",\n initialization_method=\"estimated\",\n).fit()\nsimulations = fit.simulate(\n 
16, anchor=\"2009-01-01\", repetitions=100, error=\"mul\", random_errors=\"bootstrap\"\n)\n\nax = aust.plot(\n figsize=(10, 6),\n marker=\"o\",\n color=\"black\",\n title=\"Forecasts and simulations from Holt-Winters' multiplicative method\",\n)\nax.set_ylabel(\"International visitor night in Australia (millions)\")\nax.set_xlabel(\"Year\")\nfit.fittedvalues.plot(ax=ax, style=\"--\", color=\"green\")\nsimulations.plot(ax=ax, style=\"-\", alpha=0.05, color=\"grey\", legend=False)\nfit.forecast(8).rename(\"Holt-Winters (add-mul-seasonal)\").plot(\n ax=ax, style=\"--\", marker=\"o\", color=\"green\", legend=True\n)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
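The simulations above can also be turned into empirical prediction intervals by taking percentiles across repetitions at each forecast step. This is a library-free sketch, not statsmodels' API — the `paths` array is a stand-in for the `simulations` DataFrame, and its `(n_steps, n_repetitions)` shape is an assumption:

```python
import numpy as np

def prediction_intervals(paths, lower=2.5, upper=97.5):
    """Empirical interval per forecast step from simulated paths.

    paths: array of shape (n_steps, n_repetitions), one column per simulation.
    Returns (lo, hi) arrays of length n_steps.
    """
    lo = np.percentile(paths, lower, axis=1)
    hi = np.percentile(paths, upper, axis=1)
    return lo, hi

rng = np.random.default_rng(0)
# Fake 8-step-ahead simulations, 100 repetitions, drifting upward.
paths = 50 + np.cumsum(rng.normal(1.0, 2.0, size=(8, 100)), axis=0)
lo, hi = prediction_intervals(paths)
assert lo.shape == (8,) and np.all(lo <= hi)
```

With the real `simulations` object, the same percentile call over its columns gives bands that can be plotted alongside the point forecast.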
IS-ENES-Data/submission_forms
dkrz_forms/Templates/Retrieve_Form.ipynb
apache-2.0
[ "Retrieve your DKRZ data form\nVia this form you can retrieve previously generated data forms and make them accessible via the Web again for completion.\nAdditionally, you can get information on the data ingest process status related to your form-based request.\n\nPlease provide your last name\nplease set your last name in the cell below \n(e.g. MY_LAST_NAME = \"mueller\")\nand evaluate the cell (Press \"Shift\"-Return in the cell)\nyou will then be asked for the password associated with your form \n(the password was provided to you as part of your previous form generation step)", "# please provide your last name - replacing ... below e.g. MY_LAST_NAME = \"schulz\"\nMY_LAST_NAME = \"......\" \n\n#----------------------------------------------------------\nfrom dkrz_forms import form_handler, form_widgets\nform_info = form_widgets.check_and_retrieve(MY_LAST_NAME)", "Get status information related to your form-based request", "# To be completed", "Contact the DKRZ data managers for form-related issues", "# To be completed" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
ioam/scipy-2017-holoviews-tutorial
notebooks/01-introduction-to-elements.ipynb
bsd-3-clause
[ "<a href='http://www.holoviews.org'><img src=\"assets/hv+bk.png\" alt=\"HV+BK logos\" width=\"40%;\" align=\"left\"/></a>\n<div style=\"float:right;\"><h2>01. Introduction to Elements</h2></div>\n\nPreliminaries\nIf the hvtutorial environment has been correctly created and activated using the instructions listed on the welcome page, the following imports should run and hv.extension('bokeh') should present a small HoloViews logo:", "import numpy as np\nimport pandas as pd\nimport holoviews as hv\nhv.extension('bokeh')", "Here we import the NumPy and pandas data libraries with their standard abbreviations, plus HoloViews with its standard abbreviation hv. The line reading hv.extension('bokeh') loads and activates the bokeh plotting backend, so all visualizations will be generated using Bokeh. We will see how to use matplotlib instead of bokeh later in the tutorial Customizing Visual Appearance.\nWhat are elements?\nIn short, elements are HoloViews' most basic, core primitives. All the various types of hv.Element accept semantic metadata that allows their input data to be given an automatic, visual representation. Most importantly, element objects always preserve the raw data they are supplied.\nIn this notebook we will explore a number of different element types and examine some of the ways that elements can supplement the supplied data with useful semantic data. To choose your own types to use in the exercises, you can browse them all in the reference gallery.\nCreating elements\nAll basic elements accept their data as a single, mandatory positional argument which may be supplied in a number of different formats, some of which we will now examine. 
A handful of annotation elements are exceptions to this rule, namely Arrow, Text, Bounds, Box and Ellipse, as they require additional positional arguments.\nA simple curve\nTo start with a simple example, we will sample a quadratic function $y=100-x^2$ at 21 different values of $x$ and wrap that data in a HoloViews element:", "xs = [i for i in range(-10,11)]\nys = [100-(x**2) for x in xs]\nsimple_curve = hv.Curve((xs,ys))\nsimple_curve", "Here we supplied two lists of values as a tuple to [hv.Curve](http://build.holoviews.org/reference/elements/bokeh/Curve.html), assigned the result to the variable simple_curve, and let Jupyter display the object using its default visual representation. As you can see, that default visual representation is a Bokeh plot, which is automatically generated by HoloViews when Jupyter requests it. But simple_curve itself is just a wrapper around your data, not a plot, and you can choose other representations that are not plots. For instance, printing the object will give you a purely textual representation instead:", "print(simple_curve)", "The textual representation indicates that this object is a continuous mapping from x to y, which is how HoloViews knew to render it as a continuous curve. 
You can also access the full original data if you wish:", "#simple_curve.data", "If you uncomment that line, you should see the original data values, though in some cases like this one the data has been converted to a better format (a Pandas dataframe instead of Python lists).\nThere are a number of similar elements to Curve such as Area and Scatter, which you can try out for yourself in the exercises.", "# Exercise: Try switching hv.Curve with hv.Area and hv.Scatter\n\n\n# Optional: \n# Look at the .data attribute of the elements you created to see the raw data (as a pandas DataFrame)\n", "Annotating the curve\nWrapping your data (xs and ys) here as a HoloViews element is sufficient to make it visualizable, but there are many other aspects of the data that we can capture to convey more about its meaning to HoloViews. For instance, we might want to specify what the x-axis and y-axis actually correspond to, in the real world. Perhaps this parabola is the trajectory of a ball thrown into the air, in which case we could declare the object as:", "trajectory = hv.Curve((xs,ys), kdims=['distance'], vdims=['height'])\ntrajectory", "Here we have added semantic information about our data to the Curve element. Specifically, we told HoloViews that the kdim or key dimension of our data corresponds to the real-world independent variable ('distance'), and the vdim or value dimension 'height' is the real-world dependent variable. Even though the additional information we provided is about the data, not directly about the plot, HoloViews is designed to reveal the properties of your data accurately, and so the axes now update to show what these dimensions represent.", "# Exercise: Take a look at trajectory.vdims\n", "Casting between elements\nThe type of an element is a declaration of important facts about your data, which gives HoloViews the appropriate hint required to generate a suitable visual representation from it. 
For instance, calling it a Curve is a declaration from the user that the data consists of samples from an underlying continuous function, which is why HoloViews plots it as a connected object. If we convert to an hv.Scatter object instead, the same set of data will show up as separated points, because \"Scatter\" does not make an assumption that the data is meant to be continuous:", "hv.Scatter(simple_curve)", "Casting the same data between different Element types in this way is often useful as a way to see your data differently, particularly if you are not certain of a single best way to interpret the data. Casting preserves your declared metadata as much as possible, propagating your declarations from the original object to the new one.", "# How do you predict the representation for hv.Scatter(trajectory) will differ from\n# hv.Scatter(simple_curve) above? Try it!\n\n\n# Also try casting the trajectory to an area then back to a curve.\n", "Turning arrays into elements\nThe curve above was constructed from a list of x-values and a list of y-values. Next we will create an element using an entirely different datatype, namely a NumPy array:", "x = np.linspace(0, 10, 500)\ny = np.linspace(0, 10, 500)\nxx, yy = np.meshgrid(x, y)\n\narr = np.sin(xx)*np.cos(yy)\nimage = hv.Image(arr)", "As above, we know that this data was sampled from a continuous function, but this time the data is mapping from two key dimensions, so we declare it as an [hv.Image](http://build.holoviews.org/reference/elements/bokeh/Image.html) object. 
As you might expect, an Image object is visualized as an image by default:", "image\n\n# Exercise: Try visualizing different two-dimensional arrays.\n# You can try a new function entirely or simple modifications of the existing one\n# E.g., explore the effect of squaring and cubing the sine and cosine terms\n\n\n# Optional: Try supplying appropriate labels for the x- and y- axes\n# Hint: The x,y positions are how you *index* (or key) the array *values* (so x and y are both kdims)\n", "Selecting columns from tables to make elements\nIn addition to basic Python datatypes and xarray and NumPy array types, HoloViews elements can be passed tabular data in the form of pandas DataFrames:", "economic_data = pd.read_csv('../data/macro.csv')\neconomic_data.tail()", "Let's build an element that helps us understand how the percentage growth in US GDP varies over time. As our dataframe contains GDP growth data for lots of countries, let us select the United States from the table and create a Curve element from it:", "US_data = economic_data[economic_data['country'] == 'United States'] # Select data for the US only\nUS_data.tail()\n\ngrowth_curve = hv.Curve(US_data, kdims=['year'], vdims=['growth'])\ngrowth_curve", "In this case, declaring the kdims and vdims does not simply declare the axis labels, it allows HoloViews to discover which columns of the data should be used from the dataframe for each of the axes.", "# Exercise: Plot the unemployment (unem) over year\n", "Dimension labels\nIn this example, the simplistic axis labels are starting to get rather limiting. Changing the kdims and vdims is no longer trivial either, as they need to match the column names in the dataframe. Is the only solution to rename the columns in our dataframe to something more descriptive but more awkward to type?\nLuckily, no. 
The recommendation is that you continue to use short, programmer and pandas-friendly, tab-completeable column names as these are also the most convenient dimension names to use with HoloViews.\nWhat you should do instead is set the dimension labels, using the fact that dimensions are full, rich objects behind the scenes:", "gdp_growth = growth_curve.redim.label(growth='GDP growth')\ngdp_growth", "With the redim method, we have associated a dimension label with the growth dimension, resulting in a new element called gdp_growth (you can check for yourself that growth_curve is unchanged). Let's look at what the new dimension contains:", "gdp_growth.vdims\n\n# Exercise: Use redim.label to give the year dimension a better label\n", "The redim utility lets you easily change other dimension parameters, and as an example let's give our GDP growth dimension the appropriate unit:", "gdp_growth.redim.unit(growth='%')\n\n# Exercise: Use redim.unit to give the year dimension a better unit \n# For instance, relabel to 'Time' then give the unit as 'year'\n", "Composing elements together\nViewing a single element at a time often conveys very little information for the space used. In this section, we introduce the two composition operators + and * to build Layout and Overlay objects.\nLayouts\nEarlier on we were casting a parabola to different element types. Viewing the different types was awkward, wasting lots of vertical space in the notebook. What we will often want to do is view these elements side by side:", "layout = trajectory + hv.Scatter(trajectory) + hv.Area(trajectory) + hv.Spikes(trajectory)\nlayout.cols(2)", "What we have created with the + operator is an hv.Layout object (with a hint that a two-column layout is desired):", "print(layout)", "Now let us build a new layout by selecting elements from layout:", "layout.Curve.I + layout.Spikes.I", "We see that a Layout lets us pick component elements via two levels of tab-completable attribute access. 
Note that by default the type of the element defines the first level of access and the second level of access automatically uses Roman numerals (because Python identifiers cannot start with numbers).\nThese two levels correspond to another type of semantic declaration that applies to the elements directly (rather than their dimensions), called group and label. Specifically, group allows you to declare what kind of thing this object is, while label allows you to label which specific object it is. What you put in those declarations, if anything, will form the title of the plot:", "cannonball = trajectory.relabel('Cannonball', group='Trajectory')\nintegral = hv.Area(trajectory).relabel('Filled', group='Trajectory')\nlabelled_layout = cannonball + integral\nlabelled_layout \n\n# Exercise: Try out the tab-completion of labelled_layout to build a new layout swapping the position of these elements\n\n\n# Optional: Try using two levels of dictionary-style access to grab the cannonball trajectory\n", "Overlays\nLayout places objects side by side, allowing it to collect (almost!) any HoloViews objects that you want to indicate are related. Another operator * allows you to overlay elements into a single plot, if they live in the same space (with matching dimensions and similar ranges over those dimensions). 
The result of * is an Overlay:", "trajectory * hv.Spikes(trajectory)", "The indexing system of Overlay is identical to that of Layout.", "# Exercise: Make an overlay of the Spikes object from layout on top of the filled trajectory area of labelled_layout\n", "One thing that is specific to Overlays is the use of color cycles to automatically differentiate between elements of the same type and group:", "tennis_ball = cannonball.clone((xs, 0.5*np.array(ys)), label='Tennis Ball')\ncannonball + tennis_ball + (cannonball * tennis_ball)", "Here we use the clone method to make a shallower tennis-ball trajectory: the clone method creates a new object that preserves semantic metadata while allowing overrides (in this case we override the input data and the label).\nAs you can see, HoloViews can determine that the two overlaid curves will be distinguished by color, and so it also provides a legend so that the mapping from color to data is clear.", "# Optional Exercise: \n# 1. Create a thrown_ball curve with half the height of tennis_ball by cloning it and assigning the label 'Thrown ball'\n# 2. Add thrown_ball to the overlay\n", "Slicing and selecting\nHoloViews elements can be easily sliced using array-style syntax or using the .select method. The following example shows how we can slice the cannonball trajectory into its ascending and descending components:", "full_trajectory = cannonball.redim.label(distance='Horizontal distance', height='Vertical height')\nascending = full_trajectory[-10:1].relabel('ascending')\ndescending = cannonball.select(distance=(0,11.)).relabel('descending')\nascending * descending", "Note that the slicing in HoloViews is done in the continuous space of the dimension and not in the integer space of individual data samples. 
In this instance, the slice is over the distance dimension and we can see that the slicing semantics follow the usual Python convention of an inclusive lower bound and an exclusive upper bound.\nThis example also illustrates why we keep simple identifiers for dimension names and reserve longer descriptions for the dimension labels: certain methods such as the select method shown above accept dimension names as keywords.\nOnwards\nLater in the tutorial, we will see how elements and the principles of composition extend to containers (such as ) which make data exploration quick, easy and interactive. Before we examine the container types, we will look at how to customize the appearance of elements, change the plotting extension and specify output formats.\nFor a quick demonstration related to what we will be covering, hit the kernel restart button (⟳) in the toolbar for this notebook, change hv.extension('bokeh') to hv.extension('matplotlib') in the first cell and rerun the notebook!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
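The continuous-space slicing described in the tutorial above (inclusive lower bound, exclusive upper bound, applied to dimension values rather than sample positions) can be mimicked without HoloViews. This is a hypothetical helper, not part of the HoloViews API, operating on a pair of coordinate lists like `curve[-10:1]` does on a Curve:

```python
def slice_by_key(xs, ys, lo=None, hi=None):
    """Keep samples whose x falls in [lo, hi) — data-space, not positional."""
    pairs = [(x, y) for x, y in zip(xs, ys)
             if (lo is None or x >= lo) and (hi is None or x < hi)]
    return [p[0] for p in pairs], [p[1] for p in pairs]

xs = list(range(-10, 11))
ys = [100 - x**2 for x in xs]                 # the tutorial's parabola
asc_x, asc_y = slice_by_key(xs, ys, -10, 1)   # ascending part only
assert asc_x == list(range(-10, 1))           # upper bound 1 is excluded
assert asc_y[-1] == 100                       # peak at x = 0
```

The point of the sketch is that the bounds select by the key dimension's values, so the same slice works regardless of how densely the curve is sampled.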
olivertomic/hoggorm
examples/PCA/PCA_on_spectroscopy_data.ipynb
bsd-2-clause
[ "Principal component analysis (PCA) on spectroscopy data\nThis notebook illustrates how to use the hoggorm package to carry out principal component analysis (PCA) on spectroscopy data. Furthermore, we will learn how to visualise the results of the PCA using the hoggormPlot package.\n\nImport packages and prepare data\nFirst import hoggorm for analysis of the data and hoggormPlot for plotting of the analysis results. We'll also import pandas such that we can read the data into a data frame. numpy is needed for checking dimensions of the data.", "import hoggorm as ho\nimport hoggormplot as hop\nimport pandas as pd\nimport numpy as np", "Next, load the spectroscopy data that we are going to analyse using hoggorm. After the data has been loaded into the pandas data frame, we'll display it in the notebook.", "# Load data\n\n# Insert code for reading data from other folder in repository instead of directly from same repository.\ndata_df = pd.read_csv('gasoline_NIR.txt', header=None, sep='\\s+')", "Let's have a look at the dimensions of the data frame.", "np.shape(data_df)", "The nipalsPCA class in hoggorm accepts only numpy arrays with numerical values and not pandas data frames. Therefore, the pandas data frame holding the imported data needs to be \"taken apart\" into three parts: \n* a numpy array holding the numeric values\n* a Python list holding variable (column) names\n* a Python list holding object (row) names. \nThe array with values will be used as input for the nipalsPCA class for analysis. The Python lists holding the variable and row names will be used later in the plotting function from the hoggormPlot package when visualising the results of the analysis. Below is the code needed to access both data, variable names and object names.", "# Get the values from the data frame\ndata = data_df.values", "Apply PCA to our data\nNow, let's run PCA on the data using the nipalsPCA class. The documentation provides a description of the input parameters. 
Using input parameter arrX we define which numpy array we would like to analyse. By setting input parameter Xstand=False we make sure that the variables are only mean centered, not scaled to unit variance. This is the default setting and actually doesn't need to be expressed explicitly. Setting parameter cvType=[\"loo\"] we make sure that we compute the PCA model using full cross validation. \"loo\" means \"Leave One Out\". By setting parameter numComp=5 we ask for five principal components (PCs) to be computed.", "model = ho.nipalsPCA(arrX=data, Xstand=False, cvType=[\"loo\"], numComp=5)", "That's it, the PCA model has been computed. Now we would like to inspect the results by visualising them. We can do this using the tailor-made plotting function for PCA from the separate hoggormPlot package. If we wish to plot the results for component 1 and component 2, we can do this by setting the input argument comp=[1, 2]. The input argument plots=[1, 6] lets the user define which plots are to be plotted. If this list for example contains value 1, the function will generate the scores plot for the model. If the list contains value 6, the function will generate an explained variance plot. The hoggormPlot documentation provides a description of input parameters.", "hop.plot(model, comp=[1, 2], \n plots=[1, 6])", "It is also possible to generate the same plots one by one with specific plot functions as shown below.", "hop.loadings(model, line=True)", "Accessing numerical results\nNow that we have visualised the PCA results, we may also want to access the numerical results. Below are some examples. 
For a complete list of accessible results, please see this part of the documentation.", "# Get scores and store in numpy array\nscores = model.X_scores()\n\n# Get scores and store in pandas dataframe with row and column names\nscores_df = pd.DataFrame(model.X_scores())\n#scores_df.index = data_objNames\nscores_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_scores().shape[1])]\nscores_df\n\nhelp(ho.nipalsPCA.X_scores)\n\n# Dimension of the scores\nnp.shape(model.X_scores())", "We see that the numpy array holds the scores for four components as required when computing the PCA model.", "# Get loadings and store in numpy array\nloadings = model.X_loadings()\n\n# Get loadings and store in pandas dataframe with row and column names\nloadings_df = pd.DataFrame(model.X_loadings()) \n#loadings_df.index = data_varNames\nloadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]\nloadings_df\n\nhelp(ho.nipalsPCA.X_loadings)\n\nnp.shape(model.X_loadings())\n\n# Get loadings and store in numpy array\nloadings = model.X_corrLoadings()\n\n# Get loadings and store in pandas dataframe with row and column names\nloadings_df = pd.DataFrame(model.X_corrLoadings()) \n#loadings_df.index = data_varNames\nloadings_df.columns = ['PC{0}'.format(x+1) for x in range(model.X_corrLoadings().shape[1])]\nloadings_df\n\nhelp(ho.nipalsPCA.X_corrLoadings)\n\n# Get calibrated explained variance of each component\ncalExplVar = model.X_calExplVar()\n\n# Get calibrated explained variance and store in pandas dataframe with row and column names\ncalExplVar_df = pd.DataFrame(model.X_calExplVar())\ncalExplVar_df.columns = ['calibrated explained variance']\ncalExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]\ncalExplVar_df\n\nhelp(ho.nipalsPCA.X_calExplVar)\n\n# Get cumulative calibrated explained variance\ncumCalExplVar = model.X_cumCalExplVar()\n\n# Get cumulative calibrated explained variance and store in pandas dataframe with row 
and column names\ncumCalExplVar_df = pd.DataFrame(model.X_cumCalExplVar())\ncumCalExplVar_df.columns = ['cumulative calibrated explained variance']\ncumCalExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]\ncumCalExplVar_df\n\nhelp(ho.nipalsPCA.X_cumCalExplVar)\n\n# Get cumulative calibrated explained variance for each variable\ncumCalExplVar_ind = model.X_cumCalExplVar_indVar()\n\n# Get cumulative calibrated explained variance for each variable and store in pandas dataframe with row and column names\ncumCalExplVar_ind_df = pd.DataFrame(model.X_cumCalExplVar_indVar()) \n#cumCalExplVar_ind_df.columns = data_varNames\ncumCalExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]\ncumCalExplVar_ind_df\n\nhelp(ho.nipalsPCA.X_cumCalExplVar_indVar)\n\n# Get calibrated predicted X for a given number of components\n\n# Predicted X from calibration using 1 component\nX_from_1_component = model.X_predCal()[1]\n\n# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names\nX_from_1_component_df = pd.DataFrame(model.X_predCal()[1])\n#X_from_1_component_df.index = data_objNames\n#X_from_1_component_df.columns = data_varNames\nX_from_1_component_df\n\n# Get predicted X for a given number of components\n\n# Predicted X from calibration using 4 components\nX_from_4_component = model.X_predCal()[4]\n\n# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names\nX_from_4_component_df = pd.DataFrame(model.X_predCal()[4])\n#X_from_4_component_df.index = data_objNames\n#X_from_4_component_df.columns = data_varNames\nX_from_4_component_df\n\nhelp(ho.nipalsPCA.X_predCal)\n\n# Get validated explained variance of each component\nvalExplVar = model.X_valExplVar()\n\n# Get calibrated explained variance and store in pandas dataframe with row and column names\nvalExplVar_df = pd.DataFrame(model.X_valExplVar())\nvalExplVar_df.columns = 
['validated explained variance']\nvalExplVar_df.index = ['PC{0}'.format(x+1) for x in range(model.X_loadings().shape[1])]\nvalExplVar_df\n\nhelp(ho.nipalsPCA.X_valExplVar)\n\n# Get cumulative validated explained variance\ncumValExplVar = model.X_cumValExplVar()\n\n# Get cumulative validated explained variance and store in pandas dataframe with row and column names\ncumValExplVar_df = pd.DataFrame(model.X_cumValExplVar())\ncumValExplVar_df.columns = ['cumulative validated explained variance']\ncumValExplVar_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]\ncumValExplVar_df\n\nhelp(ho.nipalsPCA.X_cumValExplVar)\n\n# Get cumulative validated explained variance for each variable\ncumCalExplVar_ind = model.X_cumCalExplVar_indVar()\n\n# Get cumulative validated explained variance for each variable and store in pandas dataframe with row and column names\ncumValExplVar_ind_df = pd.DataFrame(model.X_cumValExplVar_indVar())\n#cumValExplVar_ind_df.columns = data_varNames\ncumValExplVar_ind_df.index = ['PC{0}'.format(x) for x in range(model.X_loadings().shape[1] + 1)]\ncumValExplVar_ind_df\n\nhelp(ho.nipalsPCA.X_cumValExplVar_indVar)\n\n# Get validated predicted X for a given number of components\n\n# Predicted X from validation using 1 component\nX_from_1_component_val = model.X_predVal()[1]\n\n# Predicted X from calibration using 1 component stored in pandas data frame with row and columns names\nX_from_1_component_val_df = pd.DataFrame(model.X_predVal()[1])\n#X_from_1_component_val_df.index = data_objNames\n#X_from_1_component_val_df.columns = data_varNames\nX_from_1_component_val_df\n\n# Get validated predicted X for a given number of components\n\n# Predicted X from validation using 3 components\nX_from_3_component_val = model.X_predVal()[3]\n\n# Predicted X from calibration using 3 components stored in pandas data frame with row and columns names\nX_from_3_component_val_df = 
pd.DataFrame(model.X_predVal()[3])\n#X_from_3_component_val_df.index = data_objNames\n#X_from_3_component_val_df.columns = data_varNames\nX_from_3_component_val_df\n\nhelp(ho.nipalsPCA.X_predVal)\n\n# Get predicted scores for new measurements (objects) of X\n\n# First pretend that we acquired new X data by using part of the existing data and overlaying some noise\nimport numpy.random as npr\nnew_data = data[0:4, :] + npr.rand(4, np.shape(data)[1])\nnp.shape(new_data)\n\n# Now insert the new data into the existing model and compute scores for two components (numComp=2)\npred_scores = model.X_scores_predict(new_data, numComp=2)\n\n# Same as above, but results stored in a pandas dataframe with row names and column names\npred_scores_df = pd.DataFrame(model.X_scores_predict(new_data, numComp=2))\npred_scores_df.columns = ['PC{0}'.format(x) for x in range(2)]\npred_scores_df.index = ['new object {0}'.format(x) for x in range(np.shape(new_data)[0])]\npred_scores_df\n\nhelp(ho.nipalsPCA.X_scores_predict)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
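hoggorm fits the model above with the NIPALS algorithm; for a small dense matrix, the same scores, loadings and calibrated explained variance can be cross-checked against a plain SVD on mean-centred data (the `Xstand=False` behaviour). This is a verification sketch, not hoggorm's API:

```python
import numpy as np

def pca_svd(X, n_comp):
    """Scores, loadings and calibrated explained variance (%) via SVD.

    X is mean-centred column-wise before decomposition, mirroring
    mean-centring without unit-variance scaling.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]          # T = U * diag(s)
    loadings = Vt[:n_comp].T                      # P, one column per PC
    expl_var = 100 * s[:n_comp] ** 2 / np.sum(s ** 2)
    return scores, loadings, expl_var

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 5))
scores, loadings, expl = pca_svd(X, 2)
assert scores.shape == (60, 2) and loadings.shape == (5, 2)
assert expl[0] >= expl[1] > 0                    # PCs ordered by variance
# Reconstruction from 2 components approximates the centred data.
Xc = X - X.mean(axis=0)
assert np.linalg.norm(Xc - scores @ loadings.T) <= np.linalg.norm(Xc)
```

Up to component signs, these quantities should match `model.X_scores()`, `model.X_loadings()` and `model.X_calExplVar()` for a converged NIPALS fit.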
squishbug/DataScienceProgramming
DataScienceProgramming/09-Machine-Learning-II/HW6_orig.ipynb
cc0-1.0
[ "# %load nbinit.py\nfrom IPython.display import HTML\nHTML(\"\"\"\n<style>\n.container { width: 100% !important; padding-left: 1em; padding-right: 2em; }\ndiv.output_stderr { background: #FFA; }\n</style>\n\"\"\")", "<div style=\"float: right; color: red;\">Please, rename this file to <code style=\"color:red\">HW6.ipynb</code> and save it in <code style=\"color:red\">MSA8010F16/HW6</code>\n</div>\n\nHomework 6: Preprocessing Data\nWe use a data set from the UCI Machine Learning Repository\nhttps://archive.ics.uci.edu/ml/datasets/Bank+Marketing \nto experiment with a Decision Tree classifier http://www.saedsayad.com/decision_tree.htm\nScikit-Learn: http://scikit-learn.org/stable/modules/tree.html#tree\nBook slides:\n- http://131.96.197.204/~pmolnar/mlbook/BookSlides_4A_Information-based_Learning.pdf\n- http://131.96.197.204/~pmolnar/mlbook/BookSlides_4B_Information-based_Learning.pdf\nBank Marketing Data Set\nThe data is related with direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict if the client will subscribe a term deposit (variable y).\nData Set Information:\nThe data is related with direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed. \nThere are four datasets: \n1) bank-additional-full.csv with all examples (41188) and 20 inputs, ordered by date (from May 2008 to November 2010), very close to the data analyzed in [Moro et al., 2014]\n2) bank-additional.csv with 10% of the examples (4119), randomly selected from 1), and 20 inputs.\n3) bank-full.csv with all examples and 17 inputs, ordered by date (older version of this dataset with less inputs). \n4) bank.csv with 10% of the examples and 17 inputs, randomly selected from 3 (older version of this dataset with less inputs). 
\nThe smallest datasets are provided to test more computationally demanding machine learning algorithms (e.g., SVM). \nThe classification goal is to predict if the client will subscribe (yes/no) a term deposit (variable y).\nAttribute Information:\nInput variables:\n- bank client data:\n 1 age (numeric)\n 2 job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')\n 3 marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)\n 4 education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')\n 5 default: has credit in default? (categorical: 'no','yes','unknown')\n 6 housing: has housing loan? (categorical: 'no','yes','unknown')\n 7 loan: has personal loan? (categorical: 'no','yes','unknown')\n- related with the last contact of the current campaign:\n 8 contact: contact communication type (categorical: 'cellular','telephone') \n 9 month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')\n 10 day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')\n 11 duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. 
Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.\n- other attributes:\n 12 campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)\n 13 pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)\n 14 previous: number of contacts performed before this campaign and for this client (numeric)\n 15 poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')\n- social and economic context attributes\n 16 emp.var.rate: employment variation rate - quarterly indicator (numeric)\n 17 cons.price.idx: consumer price index - monthly indicator (numeric) \n 18 cons.conf.idx: consumer confidence index - monthly indicator (numeric) \n 19 euribor3m: euribor 3 month rate - daily indicator (numeric)\n 20 nr.employed: number of employees - quarterly indicator (numeric)\nOutput variable (desired target):\n 21 y - has the client subscribed a term deposit? (binary: 'yes','no')", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nDATAFILE = '/home/data/archive.ics.uci.edu/BankMarketing/bank.csv'\n###DATAFILE = 'data/bank.csv' ### using locally\n\ndf = pd.read_csv(DATAFILE, sep=';')\nlist(df.columns)", "Step 1: Investigate Data Set\n\nWe have a number of categorical data: What's their cardinality? How are the levels distributed?\nWhat's the distribution on numeric values? Do we see any correlations?\n\nLet's first look at columns (i.e. variables) with continuous values. We can get a sense of the distribution from aggregate functions like mean, standard variation, quantiles, as well as, minimum and maximum values.\nThe Pandas method describe creates a table view of those metrics. 
(The method can also be used to identify numeric features in the data frame.)", "### use sets and '-' difference operation 'A-B'. Also there is a symmetric difference '^'\nall_features = set(df.columns)-set(['y'])\nnum_features = set(df.describe().columns)\ncat_features = all_features-num_features\n\nprint(\"All features: \", \", \".join(all_features), \"\\nNumerical features: \", \", \".join(num_features), \"\\nCategorical features: \", \", \".join(cat_features))\n\nset(df.columns)-set(df.describe().columns)-set('y')\n\n### Describe Columns\nhelp(pd.DataFrame.describe)\n\n### Let's get the description of the numeric data for each of the target values separately.\n### We need to rename the columns before we can properly join the tables. The column names may look strange...\ndesc_yes = df[df.y=='yes'].describe().rename(columns=lambda c: \"%s|A\"%c)\ndesc_no = df[df.y=='no'].describe().rename(columns=lambda c: \"%s|B\"%c)\n\n### ...but this way we can get them in the desired order...\ndesc = desc_yes.join(desc_no).reindex(sorted(desc_yes.columns), axis=1)\n### ...because we're changing them anyway:\n\n#desc.set_axis(1, [sorted(list(num_features)*2), ['yes', 'no']*len(num_features)])\n#desc", "Let's look at the distribution of numerical features...", "%matplotlib inline\nfig = plt.figure(figsize=(32, 8))\nfor i in range(len(num_features)):\n f = list(num_features)[i]\n plt.subplot(2, 4, i+1)\n hst = plt.hist(df[f], alpha=0.5)\n plt.title(f)\nplt.suptitle('Distribution of Numeric Values', fontsize=20)\nNone", "Now, let's look at the categorical variables and their distribution...", "for f in cat_features:\n tab = df[f].value_counts()\n print('%s:\\t%s' % (f, ', '.join([ (\"%s(%d)\" %(tab.index[i], tab.values[i])) for i in range(len(tab))]) ))", "Results in a data frame:", "mat = pd.DataFrame(\n [ df[f].value_counts() for f in list(cat_features) ],\n index=list(cat_features)\n ).stack()\n\npd.DataFrame(mat.values, index=mat.index)", "Step 
2: Prepare for ML algorithm\nThe ML algorithms in Scikit-Learn use matrices (with numeric values). We need to convert our data frame into a feature matrix X and a target vector y.\nMany algorithms also require the features to be in the same range. Decision trees are unaffected because they don't perform any operations across features.\nUse the pd.DataFrame.to_numpy method to convert a DataFrame into a matrix (older pandas versions used the now-removed as_matrix method).", "help(pd.DataFrame.to_numpy)\n\n## We copy our original dataframe into a new one, and then perform replacements on categorical levels.\n## We also keep track of our replacements\nlevel_substitution = {}\n\ndef levels2index(levels):\n dct = {}\n for i in range(len(levels)):\n dct[levels[i]] = i\n return dct\n\ndf_num = df.copy()\n\nfor c in cat_features:\n level_substitution[c] = levels2index(df[c].unique())\n df_num[c].replace(level_substitution[c], inplace=True)\n\n## same for target\ndf_num.y.replace({'no':0, 'yes':1}, inplace=True)\n\ndf_num\n\nlevel_substitution", "Step 3: Training\nNow that we have our DataFrame prepared, we can create the feature matrix X and target vector y:\n1. split data into training and test sets\n2. fit the model", "X = df_num[list(all_features)].to_numpy()\ny = df_num.y.to_numpy()\nX, y\n\n### Scikit-learn provides us with a nice function to split\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4, random_state=42)\n\nfrom sklearn.tree import DecisionTreeClassifier\nclf = DecisionTreeClassifier(max_depth=5)\n\nclf.fit(X_train, y_train)\nscore_train = clf.score(X_train, y_train)\nscore_test = clf.score(X_test, y_test)\nprint('Ratio of correctly classified samples for:\\n\\tTraining-set:\\t%f\\n\\tTest-set:\\t%f'%(score_train, score_test))", "score returns the mean accuracy on the given test data and labels. 
In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted. For binary classification it means the percentage of correctly classified samples.\nThe score should be close to 1. However, one single number does not tell the whole story...\nStep 4: Evaluate Model\n\npredict $\hat y$ for your model on the test set\ncalculate the confusion matrix and derive measures\nvisualize if suitable\n\nLet's see what we got. We can actually print the entire decision tree and trace each sample ... though you may need to use the viz-wall for that.", "import sklearn.tree\nimport pydot_ng as pdot\ndot_data = sklearn.tree.export_graphviz(clf, out_file=None, feature_names = list(all_features), class_names=['no', 'yes'])\ngraph = pdot.graph_from_dot_data(dot_data)\n#--- we can save the graph into a file ... preferably vector graphics\n#graph.write_svg('mydt.svg')\ngraph.write_pdf('/home/pmolnar/public_html/mydt.pdf')\n\n#--- or display right here \nfrom IPython.display import HTML\nHTML(str(graph.create_svg().decode('utf-8')))", "Now, we use our classifier and predict on the test set. (In order to get the ŷ character, type 'y\hat' followed by the TAB key.)", "ŷ = clf.predict(X_test)\n\n## a function that produces the confusion matrix: 1. parameter y=actual target, 2. parameter ŷ=predicted\ndef binary_confusion_matrix(y,ŷ):\n TP = ((y+ŷ)== 2).sum()\n TN = ((y+ŷ)== 0).sum()\n FP = ((y-ŷ)== -1).sum()\n FN = ((y-ŷ)== 1).sum()\n return pd.DataFrame( [[TP, FP], [FN, TN]], index=[['Prediction', 'Prediction'],['Yes', 'No']], columns=[['Actual', 'Actual'],['Yes', 'No']])\n\ncm = binary_confusion_matrix(y_test, ŷ)\ncm\n\n### Scikit-Learn can do that too ... 
not quite as nice though\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, ŷ)\ncm\n\n### Here are some metrics \nfrom sklearn.metrics import classification_report\nprint(classification_report(y_test, ŷ))\n\n### http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py\nimport itertools\nnp.set_printoptions(precision=2)\ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n # normalize before plotting so that the image, the threshold, and the\n # text annotations are all based on the same matrix\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, cm[i, j],\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n\n%matplotlib inline\n\nfig = plt.figure()\nplot_confusion_matrix(cm, classes=['No', 'Yes'], normalize=True, title='Normalized confusion matrix')\nplt.show()", "Step 5: Figure out how to improve and go back to Step 2 or 3\nThis is an experiment. What can we change to improve the performance of the model?\n- Include or exclude certain features\n- Scale or transform values of feature vectors\n- Identify outliers (noise) and remove them\n- Adjust parameters of the ML algorithm" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
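The notebook above counts TP/TN/FP/FN with sum-and-difference tricks on 0/1 arrays (`(y+ŷ)==2`, `(y-ŷ)==-1`, and so on). The same counts can be computed with explicit boolean masks, which is easier to audit; this is an illustrative sketch with made-up toy labels, not output from the bank dataset:

```python
import numpy as np

def binary_confusion_counts(y, y_hat):
    """Return (TP, TN, FP, FN) for two binary 0/1 label arrays."""
    y = np.asarray(y)
    y_hat = np.asarray(y_hat)
    tp = int(np.sum((y == 1) & (y_hat == 1)))  # predicted 1, actually 1
    tn = int(np.sum((y == 0) & (y_hat == 0)))  # predicted 0, actually 0
    fp = int(np.sum((y == 0) & (y_hat == 1)))  # predicted 1, actually 0
    fn = int(np.sum((y == 1) & (y_hat == 0)))  # predicted 0, actually 1
    return tp, tn, fp, fn

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(binary_confusion_counts(y_true, y_pred))  # (2, 2, 1, 1)
```

The four counts always sum to the number of samples, which makes a handy sanity check when comparing against `sklearn.metrics.confusion_matrix`.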
survey-methods/samplics
docs/source/tutorial/psu_selection.ipynb
mit
[ "Selection of primary sampling units (PSUs) <a name=\"section1\"></a>\nIn the sections below, we draw primary sampling units (PSUs) using probability proportional to size (PPS) sampling techniques implemented in the SampleSelection class. The SampleSelection class has two main methods, namely inclusion_probs and select. The method inclusion_probs() computes the probability of selection and select() draws the random samples. \nThe following will illustrate the use of samplics for sample selection. For the illustration,\n- We consider a stratified cluster design.\n- We will a priori decide how many PSUs to sample from each stratum\n- For the cluster selection, we demonstrate PPS methods\nThis example is not meant to be exhaustive. There are many use cases that are not covered in this tutorial. For example, some PSUs may be segmented due to their size and segments selected in a subsequent step. Segment selection can be done with samplics in a similar way as the PSU selection, with PPS or SRS, after the segments have been created by the user.\nFirst, let us import the Python packages necessary to run the tutorial.", "import numpy as np\nimport pandas as pd\n\nimport samplics\nfrom samplics.datasets import PSUFrame\nfrom samplics.sampling import SampleSelection", "Sample Dataset <a name=\"section10\"></a>\nThe file sample_frame.csv - shown below - contains synthetic data of 100 clusters classified by region (East, North, South and West). Each cluster represents a group of households. In the file, each cluster has an associated number of households (number_households) and a status variable indicating whether the cluster is in scope or not. 
\nThis synthetic data represents a simplified version of the enumeration area (EA) frames found in many countries and used by major household survey programs such as the Demographic and Health Surveys (DHS), the Population-based HIV Impact Assessment (PHIA) surveys and the Multiple Indicator Cluster Surveys (MICS).", "psu_frame_cls = PSUFrame()\npsu_frame_cls.load_data()\n\npsu_frame = psu_frame_cls.data\npsu_frame.head(25)", "Often, sampling frames are not available for the sampling units of interest. For example, most countries do not have a list of all households or people living in the country. Even if such frames exist, it may not be operationally and financially feasible to directly select sampling units without any form of clustering. \nHence, multistage sampling is a common strategy used by large national household surveys for selecting samples of households and people. At the first stage, geographic or administrative clusters of households are selected. At the second stage, a frame of households is created from the selected clusters and a sample of households is selected. At the third stage (if applicable), a sample of people is selected from the households in the sample. This is a high-level description of the process; usually implementations are much less straightforward and may require many adjustments to address complexities. \nPSU Probability of Selection <a name=\"section11\"></a>\nAt the first stage, we use the probability proportional to size (PPS) method to select a random sample of clusters. The measure of size is the number of households (number_households) as provided in the PSU sampling frame. The sample is stratified by region. 
The probability of selection under stratified PPS is obtained as follows: \\begin{equation} p_{hi} = \\frac{n_h M_{hi}}{\\sum_{i=1}^{N_h}{M_{hi}}} \\end{equation} where $p_{hi}$ is the probability of selection for unit $i$ from stratum $h$, $M_{hi}$ is the measure of size (mos), and $n_h$ and $N_h$ are the sample size and the total number of clusters in stratum $h$, respectively.\nImportant. The PPS method is used in many surveys, not just multistage household surveys. For example, in business surveys, establishments can greatly vary in size; hence PPS methods are often used to select samples. Similarly, facility-based surveys can benefit from PPS methods when frames with measures of size are available. \nPSU Sample size\nFor a stratified sampling design, the sample size is provided using a Python dictionary. Python dictionaries allow us to pair the strata with the sample sizes. Let's say that we want to select 3 clusters from stratum East, 2 from West, 2 from North and 3 from South. The snippet of code below demonstrates how to create the Python dictionary. Note that it is important to correctly spell out the keys of the dictionary, which correspond to the values of the stratum variable (in our case it's region).", "psu_sample_size = {\"East\":3, \"West\": 2, \"North\": 2, \"South\": 3}\n\nprint(f\"\\nThe sample size per domain is: {psu_sample_size}\\n\")", "The function array_to_dict() converts an array to a dictionary by pairing the values of the array with their frequencies. We can use this function to calculate the number of clusters per stratum and store the result in a Python dictionary. Then, we modify the values of the dictionary to create the sample size dictionary.\nIf some of the clusters are certainties then an exception will be raised. Hence, the user will have to manually handle the certainties. 
Better handling of certainties is planned for future versions of the samplics library.", "from samplics import array_to_dict\n\nframe_size = array_to_dict(psu_frame[\"region\"])\nprint(f\"\\nThe number of clusters per stratum is: {frame_size}\")\n\npsu_sample_size = frame_size.copy()\npsu_sample_size[\"East\"] = 3\npsu_sample_size[\"North\"] = 2\npsu_sample_size[\"South\"] = 3\npsu_sample_size[\"West\"] = 2\nprint(f\"\\nThe sample size per stratum is: {psu_sample_size}\\n\")\n\nstage1_design = SampleSelection(method=\"pps-sys\", stratification=True, with_replacement=False)\n\npsu_frame[\"psu_prob\"] = stage1_design.inclusion_probs(\n psu_frame[\"cluster\"], \n psu_sample_size, \n psu_frame[\"region\"],\n psu_frame[\"number_households_census\"],\n )\n\nnb_obs = 15\nprint(f\"\\nFirst {nb_obs} observations of the PSU frame \\n\")\npsu_frame.head(nb_obs)", "PSU Selection <a name=\"section12\"></a>\nIn this section, we select a sample of PSUs using PPS methods. In the section above, we calculated the probabilities of selection. That separate step is not necessary when using samplics: we can use the method select() to calculate the probability of selection and select the sample in one run. As shown below, the select() method returns a tuple of three arrays. \n* The first array indicates the selected units (i.e. psu_sample = 1 if selected, and 0 if not selected). \n* The second array provides the number of hits, useful when the sample is selected with replacement. \n* The third array is the probability of selection. 
\nNB: np.random.seed() fixes the random seed to allow us to reproduce the random selection.", "np.random.seed(23)\n\npsu_frame[\"psu_sample\"], psu_frame[\"psu_hits\"], psu_frame[\"psu_probs\"] = stage1_design.select(\n psu_frame[\"cluster\"], \n psu_sample_size, \n psu_frame[\"region\"], \n psu_frame[\"number_households_census\"]\n )\n\nnb_obs = 15\nprint(f\"\\nFirst {nb_obs} observations of the PSU frame with the sampling information \\n\")\npsu_frame.head(nb_obs)", "The default setting sample_only=False returns the entire frame. We can easily reduce the output data to the sample by filtering, i.e. psu_sample == 1. However, if we are only interested in the sample, we could use sample_only=True when calling select(). This will reduce the output data to the sampled units, and to_dataframe=True will convert the data to a pandas dataframe (pd.DataFrame). Note that the columns in the dataframe will be reduced to the minimum.", "np.random.seed(23)\n\npsu_sample = stage1_design.select(\n psu_frame[\"cluster\"], \n psu_sample_size, \n psu_frame[\"region\"], \n psu_frame[\"number_households_census\"],\n to_dataframe = True,\n sample_only = True\n )\n\nprint(\"\\nPSU sample without the non-sampled units\\n\")\npsu_sample", "The systematic selection method can be implemented with or without replacement. The other samplics algorithms for selecting samples with unequal probabilities of selection are the Brewer, Hanurav-Vijayan (hv), Murphy, and Rao-Sampford (rs) methods. As shown below, all these sampling techniques can be specified when instantiating a SampleSelection class; then call select() to draw samples. 
\npython \nSampleSelection(method=\"pps-sys\", with_replacement=True)\nSampleSelection(method=\"pps-sys\", with_replacement=False)\nSampleSelection(method=\"pps-brewer\", with_replacement=False)\nSampleSelection(method=\"pps-hv\", with_replacement=False) # Hanurav-Vijayan method\nSampleSelection(method=\"pps-murphy\", with_replacement=False)\nSampleSelection(method=\"pps-rs\", with_replacement=False) # Rao-Sampford method\nFor example, if we wanted to select the sample using the Rao-Sampford method, we could use the following snippet of code.", "np.random.seed(23)\n\nstage1_sampford = SampleSelection(method=\"pps-rs\", stratification=True, with_replacement=False)\n\npsu_sample_sampford = stage1_sampford.select(\n psu_frame[\"cluster\"], \n psu_sample_size, \n psu_frame[\"region\"], \n psu_frame[\"number_households_census\"],\n to_dataframe=True,\n sample_only=False\n )\n\npsu_sample_sampford" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
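The stratified-PPS formula in the tutorial above, $p_{hi} = n_h M_{hi} / \sum_{i=1}^{N_h} M_{hi}$, can be sketched for a single stratum in plain NumPy. `pps_inclusion_probs` is a hypothetical helper written only for illustration (samplics computes these probabilities internally via `inclusion_probs`), and the household counts are made up:

```python
import numpy as np

def pps_inclusion_probs(mos, n_sample):
    """First-order inclusion probabilities for PPS without replacement
    within one stratum: p_i = n * M_i / sum(M). Assumes no certainties."""
    mos = np.asarray(mos, dtype=float)
    probs = n_sample * mos / mos.sum()
    if np.any(probs > 1):
        # a unit whose p_i exceeds 1 is a certainty and must be handled separately
        raise ValueError("some units are certainties; handle them separately")
    return probs

# e.g. four clusters with household counts 100..400, selecting 2 by PPS
probs = pps_inclusion_probs([100, 200, 300, 400], 2)
print(probs)  # → [0.2 0.4 0.6 0.8]
```

A useful check: within a stratum the inclusion probabilities sum to the stratum sample size $n_h$, here 2.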
wuafeing/Python3-Tutorial
01 data structures and algorithms/01.13 sort list of dicts by key.ipynb
gpl-3.0
[ "Previous\n1.13 Sort a List of Dictionaries by a Common Key\nProblem\nYou have a list of dictionaries and you would like to sort the list according to one or more of the dictionary fields.\nSolution\nSorting this type of structure is easy using the operator module's itemgetter function. Suppose you have retrieved a list of website member records from a database, returned in the following data structure:", "rows = [\n {\"fname\": \"Brian\", \"lname\": \"Jones\", \"uid\": 1003},\n {\"fname\": \"David\", \"lname\": \"Beazley\", \"uid\": 1002},\n {\"fname\": \"John\", \"lname\": \"Cleese\", \"uid\": 1001},\n {\"fname\": \"Big\", \"lname\": \"Jones\", \"uid\": 1004}\n]", "It is easy to sort the input rows by any of the dictionary fields. For example:", "from operator import itemgetter\nrows_by_fname = sorted(rows, key = itemgetter(\"fname\"))\nprint(rows_by_fname)\n\nrows_by_uid = sorted(rows, key = itemgetter(\"uid\"))\nprint(rows_by_uid)", "The output of the code is shown above.\nThe itemgetter() function also supports multiple keys. For example:", "rows_by_lfname = sorted(rows, key = itemgetter(\"lname\", \"fname\"))\nprint(rows_by_lfname)", "Discussion\nIn the example above, rows is passed to the built-in sorted() function, which accepts a keyword argument key. This argument is expected to be a callable that accepts a single item from rows and returns a value that will be used as the basis for sorting. The itemgetter() function is what creates this callable.\nThe operator.itemgetter() function takes as arguments the lookup indices used to extract values from the records in rows. An index can be a dictionary key name, an integer, or any value that can be passed to an object's __getitem__() method. If you pass multiple indices to itemgetter(), the callable it generates returns a tuple containing all of the corresponding values, and sorted() orders the results according to the order of the elements in that tuple. This is useful when you want to sort on several fields at once (such as last and first name, as in the example).\nitemgetter() can sometimes be replaced by a lambda expression. For example:", "rows_by_fname = sorted(rows, key = lambda r: r[\"fname\"])\nrows_by_lfname = sorted(rows, key = lambda r: (r[\"lname\"], r[\"fname\"]))", "This solution also works fine. However, the itemgetter() version typically runs a bit faster, so prefer it if performance is a concern.\nFinally, don't forget that the technique shown in this recipe also applies to functions such as min() and max(). For example:", "min(rows, key = itemgetter(\"uid\"))\n\nmax(rows, key = itemgetter(\"uid\"))", "Next" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
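To make the tuple-returning behavior described in the recipe's discussion concrete: a multi-key `itemgetter()` produces a callable that returns a tuple of the looked-up values, while a single-key one returns the bare value.

```python
from operator import itemgetter

row = {"fname": "David", "lname": "Beazley", "uid": 1002}

# a multi-key itemgetter returns a tuple, which sorted() compares element by element
get_name = itemgetter("lname", "fname")
print(get_name(row))  # → ('Beazley', 'David')

# a single-key itemgetter returns the bare value, not a 1-tuple
get_uid = itemgetter("uid")
print(get_uid(row))  # → 1002
```

Tuples compare lexicographically in Python, which is exactly why a `("lname", "fname")` key sorts by last name first and breaks ties with the first name.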
nwjs/chromium.src
third_party/tensorflow-text/src/docs/tutorials/classify_text_with_bert.ipynb
bsd-3-clause
[ "Copyright 2020 The TensorFlow Hub Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/text/tutorials/classify_text_with_bert\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/classify_text_with_bert.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/collections/bert/1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\nClassify text with BERT\nThis tutorial contains complete code to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews.\nIn addition to training a model, you will learn how to preprocess text into an appropriate 
format.\nIn this notebook, you will:\n\nLoad the IMDB dataset\nLoad a BERT model from TensorFlow Hub\nBuild your own model by combining BERT with a classifier\nTrain your own model, fine-tuning BERT as part of that\nSave your model and use it to classify sentences\n\nIf you're new to working with the IMDB dataset, please see Basic text classification for more details.\nAbout BERT\nBERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers. \nBERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks.\nSetup", "# A dependency of the preprocessing for BERT inputs\n!pip install -q -U tensorflow-text", "You will use the AdamW optimizer from tensorflow/models.", "!pip install -q tf-models-official\n\nimport os\nimport shutil\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\nimport tensorflow_text as text\nfrom official.nlp import optimization # to create AdamW optimizer\n\nimport matplotlib.pyplot as plt\n\ntf.get_logger().setLevel('ERROR')", "Sentiment analysis\nThis notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review.\nYou'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database.\nDownload the IMDB dataset\nLet's download and extract the dataset, then explore the directory structure.", "url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'\n\ndataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url,\n untar=True, cache_dir='.',\n 
cache_subdir='')\n\ndataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')\n\ntrain_dir = os.path.join(dataset_dir, 'train')\n\n# remove unused folders to make it easier to load the data\nremove_dir = os.path.join(train_dir, 'unsup')\nshutil.rmtree(remove_dir)", "Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset.\nThe IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.\nNote: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap.", "AUTOTUNE = tf.data.AUTOTUNE\nbatch_size = 32\nseed = 42\n\nraw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/train',\n batch_size=batch_size,\n validation_split=0.2,\n subset='training',\n seed=seed)\n\nclass_names = raw_train_ds.class_names\ntrain_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)\n\nval_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/train',\n batch_size=batch_size,\n validation_split=0.2,\n subset='validation',\n seed=seed)\n\nval_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)\n\ntest_ds = tf.keras.preprocessing.text_dataset_from_directory(\n 'aclImdb/test',\n batch_size=batch_size)\n\ntest_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)", "Let's take a look at a few reviews.", "for text_batch, label_batch in train_ds.take(1):\n for i in range(3):\n print(f'Review: {text_batch.numpy()[i]}')\n label = label_batch.numpy()[i]\n print(f'Label : {label} ({class_names[label]})')", "Loading models from TensorFlow Hub\nHere you can choose which BERT model you will load from TensorFlow Hub and fine-tune. 
There are multiple BERT models available.\n\nBERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.\nSmall BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.\nALBERT: four different sizes of \"A Lite BERT\" that reduces model size (but not computation time) by sharing parameters between layers.\nBERT Experts: eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task.\nElectra has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN).\nBERT with Talking-Heads Attention and Gated GELU [base, large] has two improvements to the core of the Transformer architecture.\n\nThe model documentation on TensorFlow Hub has more details and references to the\nresearch literature. Follow the links above, or click on the tfhub.dev URL\nprinted after the next cell execution.\nThe suggestion is to start with a Small BERT (with fewer parameters) since they are faster to fine-tune. If you like a small model but with higher accuracy, ALBERT might be your next option. If you want even better accuracy, choose\none of the classic BERT sizes or their recent refinements like Electra, Talking Heads, or a BERT Expert.\nAside from the models available below, there are multiple versions of the models that are larger and can yield even better accuracy, but they are too big to be fine-tuned on a single GPU. 
You will be able to do that on the Solve GLUE tasks using BERT on a TPU colab.\nYou'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub.", "#@title Choose a BERT model to fine-tune\n\nbert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8' #@param [\"bert_en_uncased_L-12_H-768_A-12\", \"bert_en_cased_L-12_H-768_A-12\", \"bert_multi_cased_L-12_H-768_A-12\", \"small_bert/bert_en_uncased_L-2_H-128_A-2\", \"small_bert/bert_en_uncased_L-2_H-256_A-4\", \"small_bert/bert_en_uncased_L-2_H-512_A-8\", \"small_bert/bert_en_uncased_L-2_H-768_A-12\", \"small_bert/bert_en_uncased_L-4_H-128_A-2\", \"small_bert/bert_en_uncased_L-4_H-256_A-4\", \"small_bert/bert_en_uncased_L-4_H-512_A-8\", \"small_bert/bert_en_uncased_L-4_H-768_A-12\", \"small_bert/bert_en_uncased_L-6_H-128_A-2\", \"small_bert/bert_en_uncased_L-6_H-256_A-4\", \"small_bert/bert_en_uncased_L-6_H-512_A-8\", \"small_bert/bert_en_uncased_L-6_H-768_A-12\", \"small_bert/bert_en_uncased_L-8_H-128_A-2\", \"small_bert/bert_en_uncased_L-8_H-256_A-4\", \"small_bert/bert_en_uncased_L-8_H-512_A-8\", \"small_bert/bert_en_uncased_L-8_H-768_A-12\", \"small_bert/bert_en_uncased_L-10_H-128_A-2\", \"small_bert/bert_en_uncased_L-10_H-256_A-4\", \"small_bert/bert_en_uncased_L-10_H-512_A-8\", \"small_bert/bert_en_uncased_L-10_H-768_A-12\", \"small_bert/bert_en_uncased_L-12_H-128_A-2\", \"small_bert/bert_en_uncased_L-12_H-256_A-4\", \"small_bert/bert_en_uncased_L-12_H-512_A-8\", \"small_bert/bert_en_uncased_L-12_H-768_A-12\", \"albert_en_base\", \"electra_small\", \"electra_base\", \"experts_pubmed\", \"experts_wiki_books\", \"talking-heads_base\"]\n\nmap_name_to_handle = {\n 'bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',\n 'bert_en_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',\n 
'bert_multi_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',\n 'small_bert/bert_en_uncased_L-2_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-2_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-2_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-2_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-4_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-4_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-4_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-4_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-6_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-6_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-6_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-6_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-8_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-8_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-8_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',\n 
'small_bert/bert_en_uncased_L-8_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-10_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-10_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-10_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-10_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',\n 'small_bert/bert_en_uncased_L-12_H-128_A-2':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',\n 'small_bert/bert_en_uncased_L-12_H-256_A-4':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',\n 'small_bert/bert_en_uncased_L-12_H-512_A-8':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',\n 'small_bert/bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',\n 'albert_en_base':\n 'https://tfhub.dev/tensorflow/albert_en_base/2',\n 'electra_small':\n 'https://tfhub.dev/google/electra_small/2',\n 'electra_base':\n 'https://tfhub.dev/google/electra_base/2',\n 'experts_pubmed':\n 'https://tfhub.dev/google/experts/bert/pubmed/2',\n 'experts_wiki_books':\n 'https://tfhub.dev/google/experts/bert/wiki_books/2',\n 'talking-heads_base':\n 'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',\n}\n\nmap_model_to_preprocess = {\n 'bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'bert_en_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-256_A-4':\n 
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-2_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-4_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-6_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-8_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-10_H-768_A-12':\n 
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-128_A-2':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-256_A-4':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-512_A-8':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'small_bert/bert_en_uncased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'bert_multi_cased_L-12_H-768_A-12':\n 'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',\n 'albert_en_base':\n 'https://tfhub.dev/tensorflow/albert_en_preprocess/3',\n 'electra_small':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'electra_base':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'experts_pubmed':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'experts_wiki_books':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n 'talking-heads_base':\n 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',\n}\n\ntfhub_handle_encoder = map_name_to_handle[bert_model_name]\ntfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]\n\nprint(f'BERT model selected : {tfhub_handle_encoder}')\nprint(f'Preprocess model auto-selected: {tfhub_handle_preprocess}')", "The preprocessing model\nText inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text.\nThe preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. 
For BERT models from the drop-down above, the preprocessing model is selected automatically.\nNote: You will load the preprocessing model into a hub.KerasLayer to compose your fine-tuned model. This is the preferred API to load a TF2-style SavedModel from TF Hub into a Keras model.", "bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)", "Let's try the preprocessing model on some text and see the output:", "text_test = ['this is such an amazing movie!']\ntext_preprocessed = bert_preprocess_model(text_test)\n\nprint(f'Keys : {list(text_preprocessed.keys())}')\nprint(f'Shape : {text_preprocessed[\"input_word_ids\"].shape}')\nprint(f'Word Ids : {text_preprocessed[\"input_word_ids\"][0, :12]}')\nprint(f'Input Mask : {text_preprocessed[\"input_mask\"][0, :12]}')\nprint(f'Type Ids : {text_preprocessed[\"input_type_ids\"][0, :12]}')", "As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_word_ids, input_mask and input_type_ids).\nSome other important points:\n- The input is truncated to 128 tokens. The number of tokens can be customized, and you can see more details on the Solve GLUE tasks using BERT on a TPU colab.\n- The input_type_ids only have one value (0) because this is a single sentence input. For a multiple sentence input, it would have one number for each input.\nSince this text preprocessor is a TensorFlow model, it can be included in your model directly.\nUsing the BERT model\nBefore putting BERT into your own model, let's take a look at its outputs. 
You will load it from TF Hub and see the returned values.", "bert_model = hub.KerasLayer(tfhub_handle_encoder)\n\nbert_results = bert_model(text_preprocessed)\n\nprint(f'Loaded BERT: {tfhub_handle_encoder}')\nprint(f'Pooled Outputs Shape:{bert_results[\"pooled_output\"].shape}')\nprint(f'Pooled Outputs Values:{bert_results[\"pooled_output\"][0, :12]}')\nprint(f'Sequence Outputs Shape:{bert_results[\"sequence_output\"].shape}')\nprint(f'Sequence Outputs Values:{bert_results[\"sequence_output\"][0, :12]}')", "The BERT models return a map with 3 important keys: pooled_output, sequence_output, encoder_outputs:\n\npooled_output represents each input sequence as a whole. The shape is [batch_size, H]. You can think of this as an embedding for the entire movie review.\nsequence_output represents each input token in the context. The shape is [batch_size, seq_length, H]. You can think of this as a contextual embedding for every token in the movie review.\nencoder_outputs are the intermediate activations of the L Transformer blocks. outputs[\"encoder_outputs\"][i] is a Tensor of shape [batch_size, seq_length, 1024] with the outputs of the i-th Transformer block, for 0 &lt;= i &lt; L. The last value of the list is equal to sequence_output.\n\nFor the fine-tuning you are going to use the pooled_output array.\nDefine your model\nYou will create a very simple fine-tuned model, with the preprocessing model, the selected BERT model, one Dense and a Dropout layer.\nNote: for more information about the base model's input and output you can follow the model's URL for documentation. 
Here specifically, you don't need to worry about it because the preprocessing model will take care of that for you.", "def build_classifier_model():\n text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')\n preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing')\n encoder_inputs = preprocessing_layer(text_input)\n encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder')\n outputs = encoder(encoder_inputs)\n net = outputs['pooled_output']\n net = tf.keras.layers.Dropout(0.1)(net)\n net = tf.keras.layers.Dense(1, activation=None, name='classifier')(net)\n return tf.keras.Model(text_input, net)", "Let's check that the model runs with the output of the preprocessing model.", "classifier_model = build_classifier_model()\nbert_raw_result = classifier_model(tf.constant(text_test))\nprint(tf.sigmoid(bert_raw_result))", "The output is meaningless, of course, because the model has not been trained yet.\nLet's take a look at the model's structure.", "tf.keras.utils.plot_model(classifier_model)", "Model training\nYou now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier.\nLoss function\nSince this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use losses.BinaryCrossentropy loss function.", "loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)\nmetrics = tf.metrics.BinaryAccuracy()", "Optimizer\nFor fine-tuning, let's use the same optimizer that BERT was originally trained with: the \"Adaptive Moments\" (Adam). This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as AdamW.\nFor the learning rate (init_lr), you will use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). 
In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5).", "epochs = 5\nsteps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()\nnum_train_steps = steps_per_epoch * epochs\nnum_warmup_steps = int(0.1*num_train_steps)\n\ninit_lr = 3e-5\noptimizer = optimization.create_optimizer(init_lr=init_lr,\n num_train_steps=num_train_steps,\n num_warmup_steps=num_warmup_steps,\n optimizer_type='adamw')", "Loading the BERT model and training\nUsing the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer.", "classifier_model.compile(optimizer=optimizer,\n loss=loss,\n metrics=metrics)", "Note: training time will vary depending on the complexity of the BERT model you have selected.", "print(f'Training model with {tfhub_handle_encoder}')\nhistory = classifier_model.fit(x=train_ds,\n validation_data=val_ds,\n epochs=epochs)", "Evaluate the model\nLet's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy.", "loss, accuracy = classifier_model.evaluate(test_ds)\n\nprint(f'Loss: {loss}')\nprint(f'Accuracy: {accuracy}')", "Plot the accuracy and loss over time\nBased on the History object returned by model.fit(). 
You can plot the training and validation loss for comparison, as well as the training and validation accuracy:", "history_dict = history.history\nprint(history_dict.keys())\n\nacc = history_dict['binary_accuracy']\nval_acc = history_dict['val_binary_accuracy']\nloss = history_dict['loss']\nval_loss = history_dict['val_loss']\n\nepochs = range(1, len(acc) + 1)\nfig = plt.figure(figsize=(10, 6))\nfig.tight_layout()\n\nplt.subplot(2, 1, 1)\n# r is for \"solid red line\"\nplt.plot(epochs, loss, 'r', label='Training loss')\n# b is for \"solid blue line\"\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\n# plt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\n\nplt.subplot(2, 1, 2)\nplt.plot(epochs, acc, 'r', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend(loc='lower right')", "In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy.\nExport for inference\nNow you just save your fine-tuned model for later use.", "dataset_name = 'imdb'\nsaved_model_path = './{}_bert'.format(dataset_name.replace('/', '_'))\n\nclassifier_model.save(saved_model_path, include_optimizer=False)", "Let's reload the model, so you can try it side by side with the model that is still in memory.", "reloaded_model = tf.saved_model.load(saved_model_path)", "Here you can test your model on any sentence you want, just add to the examples variable below.", "def print_my_examples(inputs, results):\n result_for_printing = \\\n [f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}'\n for i in range(len(inputs))]\n print(*result_for_printing, sep='\\n')\n print()\n\n\nexamples = [\n 'this is such an amazing movie!', # this is the same sentence tried earlier\n 'The movie was great!',\n 'The movie was meh.',\n 'The movie was okish.',\n 'The movie was 
terrible...'\n]\n\nreloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples)))\noriginal_results = tf.sigmoid(classifier_model(tf.constant(examples)))\n\nprint('Results from the saved model:')\nprint_my_examples(examples, reloaded_results)\nprint('Results from the model in memory:')\nprint_my_examples(examples, original_results)", "If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows:", "serving_results = reloaded_model \\\n .signatures['serving_default'](tf.constant(examples))\n\nserving_results = tf.sigmoid(serving_results['classifier'])\n\nprint_my_examples(examples, serving_results)", "Next steps\nAs a next step, you can try Solve GLUE tasks using BERT on a TPU tutorial, which runs on a TPU and shows you how to work with multiple inputs." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pligor/predicting-future-product-prices
04_time_series_prediction/17_price_history_seq2seq-overfitting.ipynb
agpl-3.0
[ "# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2", "https://www.youtube.com/watch?v=ElmBrKyMXxs\nhttps://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb\nhttps://github.com/ematvey/tensorflow-seq2seq-tutorials", "from __future__ import division\nimport tensorflow as tf\nfrom os import path, remove\nimport numpy as np\nimport pandas as pd\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom time import time\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \\\n renderStatsListWithLabels, renderStatsCollectionOfCrossValids\nfrom tensorflow.contrib import rnn\nfrom tensorflow.contrib import learn\nimport shutil\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom mylibs.tf_helper import getDefaultGPUconfig\nfrom sklearn.metrics import r2_score\nfrom mylibs.py_helper import factors\nfrom fastdtw import fastdtw\nfrom collections import OrderedDict\nfrom scipy.spatial.distance import euclidean\nfrom statsmodels.tsa.stattools import coint\nfrom common import get_or_run_nn\nfrom data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider\nfrom data_providers.price_history_dataset_generator import PriceHistoryDatasetGenerator\nfrom skopt.space.space import Integer, Real\nfrom skopt import gp_minimize\nfrom skopt.plots import plot_convergence\nimport pickle\nimport inspect\nimport dill\nimport sys\nfrom models.price_history_seq2seq_raw_dummy import PriceHistorySeq2SeqRawDummy\n\ndtype = tf.float32\nseed = 16011984\nrandom_state = np.random.RandomState(seed=seed)\nconfig = getDefaultGPUconfig()\nn_jobs = 1\n%matplotlib inline\n\nbb = tf.constant(0., dtype=tf.float32)\nbb.get_shape()\n\naa = tf.zeros((40, 2))\naa.get_shape().concatenate(tf.TensorShape([1]))", "Step 0 - hyperparams\nvocab_size is all the potential words you could have 
(classification for translation case)\nand max sequence length are the SAME thing\ndecoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now", "epochs = 15\n\nnum_features = 1\nnum_units = 400 #state size\n\ninput_len = 60\ntarget_len = 30\n\nbatch_size = 50 #47\n#trunc_backprop_len = ??\nrnn_cell = PriceHistorySeq2SeqRawDummy.RNN_CELLS.GRU\n\nwith_EOS = False\n\ntotal_train_size = 57994\ntrain_size = 6400 \ntest_size = 1282", "Once generate data", "data_path = '../data/price_history'\n\n#npz_full_train = data_path + '/price_history_03_dp_60to30_train.npz'\n#npz_full_train = data_path + '/price_history_60to30_targets_normed_train.npz'\n\n#npz_train = data_path + '/price_history_03_dp_60to30_57980_train.npz'\n#npz_train = data_path + '/price_history_03_dp_60to30_6400_train.npz'\nnpz_train = data_path + '/price_history_60to30_6400_targets_normed_train.npz'\n\n#npz_test = data_path + '/price_history_03_dp_60to30_test.npz'\nnpz_test = data_path + '/price_history_60to30_targets_normed_test.npz'\n\n# PriceHistoryDatasetGenerator.create_subsampled(inpath=npz_full_train, target_size=6400, outpath=npz_train,\n# random_state=random_state)\n\n# %%time\n# csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'\n\n# train_sku_ids, train_XX, train_YY, train_sequence_lens, train_seq_mask, test_pack = \\\n# PriceHistoryDatasetGenerator(random_state=random_state).\\\n# createAndSaveDataset(\n# csv_in=csv_in,\n# input_seq_len=input_len,\n# target_seq_len=target_len,\n# allowSmallerSequencesThanWindow=False,\n# #min_date = '2016-11-01',\n# split_fraction = 0.40,\n# #keep_training_fraction = 0.22, #57994 * 0.22 = 12758.68\n# normalize_targets = True,\n# #disable saving for now since we have already created them\n# save_files_dic = {\"train\": npz_full_train, \"test\": npz_test,},\n# )\n\n# print 
train_sku_ids.shape, train_XX.shape, train_YY.shape, train_sequence_lens.shape, train_seq_mask.shape\n# aa,bb,cc,dd,ee = test_pack.get_data()\n# aa.shape,bb.shape,cc.shape,dd.shape,ee.shape", "Step 1 - collect data", "dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)\ndp.inputs.shape, dp.targets.shape\n\naa, bb = dp.next()\naa.shape, bb.shape", "Step 2 - Build model", "model = PriceHistorySeq2SeqRawDummy(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)\n\ngraph = model.getGraph(batch_size=batch_size,\n num_units=num_units,\n input_len=input_len,\n target_len=target_len,\n rnn_cell=rnn_cell)\n\n#show_graph(graph)", "Step 3 training the network\nRECALL: baseline is around 4 for huber loss for current problem, anything above 4 should be considered as major errors", "#rnn_cell = PriceHistorySeq2SeqCV.RNN_CELLS.GRU\n#cross_val_n_splits = 5\nepochs, num_units, batch_size\n\n#set(factors(train_size)).intersection(factors(train_size/5))\n\nbest_learning_rate = 1e-3 #0.0026945952539362472\n\ndef experiment():\n return model.run(npz_path=npz_train,\n epochs=10,\n batch_size = 50,\n num_units = 400,\n input_len=input_len,\n target_len=target_len,\n learning_rate = best_learning_rate,\n preds_gather_enabled=True,\n #eos_token = float(1e3),\n rnn_cell=rnn_cell)\n\ndyn_stats, preds_dict = experiment()", "Recall that without batch normalization within 10 epochs with num units 400 and batch_size 64 we reached at 4.940\nand with having the decoder inputs NOT filled from the outputs", "%%time\ndyn_stats, preds_dict = get_or_run_nn(experiment,\n filename='017_seq2seq_60to30_epochs{}_learning_rate_{:.4f}'.format(\n epochs, best_learning_rate\n ))\n\ndyn_stats.plotStats()\nplt.show()\n\nr2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])\n for ind in range(len(dp.targets))]\n\nind = np.argmin(r2_scores)\nind\n\nreals = dp.targets[ind]\npreds = preds_dict[ind]\n\nr2_score(y_true=reals, 
y_pred=preds)\n\nsns.tsplot(data=dp.inputs[ind].flatten())\n\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\n%%time\ndtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]\n for ind in range(len(dp.targets))]\n\nnp.mean(dtw_scores)\n\ncoint(preds, reals)\n\ncur_ind = np.random.randint(len(dp.targets))\nreals = dp.targets[cur_ind]\npreds = preds_dict[cur_ind]\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()", "Conclusion\n???" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
luizgh/sigver_wiwd
interactive_example.ipynb
bsd-2-clause
[ "Using a feature representation learned for signature images\nThis notebook contains code to pre-process signature images and to obtain feature-vectors using the learned feature representation on the GPDS dataset", "import numpy as np\n\n# Functions to load and pre-process the images:\nfrom scipy.misc import imread, imsave\nfrom preprocess.normalize import normalize_image, resize_image, crop_center, preprocess_signature\n\n# Functions to load the CNN model\nimport signet\nfrom cnn_model import CNNModel\n\n# Functions for plotting:\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['image.cmap'] = 'Greys'", "Pre-processing a single image", "original = imread('data/some_signature.png')\n\n\n# Manually normalizing the image following the steps provided in the paper.\n# These steps are also implemented in preprocess.normalize.preprocess_signature\n\nnormalized = 255 - normalize_image(original, size=(952, 1360))\nresized = resize_image(normalized, (170, 242))\ncropped = crop_center(resized, (150,220))\n\n\n# Visualizing the intermediate steps\n\nf, ax = plt.subplots(4,1, figsize=(6,15))\nax[0].imshow(original, cmap='Greys_r')\nax[1].imshow(normalized)\nax[2].imshow(resized)\nax[3].imshow(cropped)\n\nax[0].set_title('Original')\nax[1].set_title('Background removed/centered')\nax[2].set_title('Resized')\nax[3].set_title('Cropped center of the image')", "Processing multiple images and obtaining feature vectors", "user1_sigs = [imread('data/a%d.png' % i) for i in [1,2]]\nuser2_sigs = [imread('data/b%d.png' % i) for i in [1,2]]\n\ncanvas_size = (952, 1360)\n\nprocessed_user1_sigs = np.array([preprocess_signature(sig, canvas_size) for sig in user1_sigs])\nprocessed_user2_sigs = np.array([preprocess_signature(sig, canvas_size) for sig in user2_sigs])\n\n# Shows pre-processed samples of the two users\n\nf, ax = plt.subplots(2,2, 
figsize=(10,6))\nax[0,0].imshow(processed_user1_sigs[0])\nax[0,1].imshow(processed_user1_sigs[1])\n\nax[1,0].imshow(processed_user2_sigs[0])\nax[1,1].imshow(processed_user2_sigs[1])", "Using the CNN to obtain the feature representations", "# Path to the learned weights\nmodel_weight_path = 'models/signet.pkl'\n\n# Instantiate the model\nmodel = CNNModel(signet, model_weight_path)\n\n# Obtain the features. Note that you can process multiple images at the same time\n\nuser1_features = model.get_feature_vector_multiple(processed_user1_sigs, layer='fc2')\nuser2_features = model.get_feature_vector_multiple(processed_user2_sigs, layer='fc2')", "Inspecting the learned features\nThe feature vectors have size 2048:", "user1_features.shape\n\nprint('Euclidean distance between signatures from the same user')\nprint(np.linalg.norm(user1_features[0] - user1_features[1]))\nprint(np.linalg.norm(user2_features[0] - user2_features[1]))\n\nprint('Euclidean distance between signatures from different users')\n\ndists = [np.linalg.norm(u1 - u2) for u1 in user1_features for u2 in user2_features]\nprint(dists)\n\n# Other models:\n# model_weight_path = 'models/signetf_lambda0.95.pkl'\n# model_weight_path = 'models/signetf_lambda0.999.pkl'", "Using SPP models (signatures from different sizes)\nFor the SPP models, we can use images of any size as input, to obtain a feature vector of a fixed size. Note that in the paper we obtained better results by padding small images to a fixed canvas size, and processed larger images in their original size. 
More information can be found in the paper: https://arxiv.org/abs/1804.00448", "from preprocess.normalize import remove_background\n\n# To illustrate that images from any size can be used, let's process the signatures just \n# by removing the background and inverting the image\n\nnormalized_spp = 255 - remove_background(original)\n\nplt.imshow(normalized_spp)\n\n# Note that now we need to use lists instead of numpy arrays, since the images will have different sizes. \n# We will also process each image individually\n\nprocessed_user1_sigs_spp = [255-remove_background(sig) for sig in user1_sigs]\nprocessed_user2_sigs_spp = [255-remove_background(sig) for sig in user2_sigs]\n\n# Shows pre-processed samples of the two users\n\nf, ax = plt.subplots(2,2, figsize=(10,6))\nax[0,0].imshow(processed_user1_sigs_spp[0])\nax[0,1].imshow(processed_user1_sigs_spp[1])\n\nax[1,0].imshow(processed_user2_sigs_spp[0])\nax[1,1].imshow(processed_user2_sigs_spp[1])\n\nimport signet_spp_300dpi\n# Instantiate the model\nmodel = CNNModel(signet_spp_300dpi, 'models/signet_spp_300dpi.pkl')\n\n# Obtain the features. Note that we need to process them individually here since they have different sizes\n\nuser1_features_spp = [model.get_feature_vector(sig, layer='fc2') for sig in processed_user1_sigs_spp]\nuser2_features_spp = [model.get_feature_vector(sig, layer='fc2') for sig in processed_user2_sigs_spp]\n\nprint('Euclidean distance between signatures from the same user')\nprint(np.linalg.norm(user1_features_spp[0] - user1_features_spp[1]))\nprint(np.linalg.norm(user2_features_spp[0] - user2_features_spp[1]))\n\nprint('Euclidean distance between signatures from different users')\n\ndists = [np.linalg.norm(u1 - u2) for u1 in user1_features_spp for u2 in user2_features_spp]\nprint(dists)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wasit7/PythonDay
notebook/Somkiat's Basic Python.ipynb
bsd-3-clause
[ "Environment setup: Python and Jupyter\nVariables: Numbers, String, Tuple, List, Dictionary\nBasic operators: Arithmetic and Boolean operators\nControl flow: if/else, for, while, pass, break, continue\nList: access, update, del, len(), + , in, for, slicing, append(), insert(), pop(), remove()\nDictionary: access, update, del, in\nFunction: function definition, pass by reference, keyword argument, default argument, lambda\nmap reduce filter\nModule: from, import, reload(), package has __init__.py, __init__ and __str__\nI/O: raw_input(), input(), open(), close(), write(), read(), rename(), remove(), mkdir(), chdir(), rmdir()\nPass by value, Pass by reference\nDate/time: local time and time zone, pytz module\nVariables: Numbers, String, Tuple, List, Dictionary", "x=1\nprint x\n\ntype(x)\n\nx.conjugate()\n\ntype(1+2j)\n\nz=1+2j\nprint z\n\n(1,2)\n\nt=(1,2,\"text\")\n\nt\n\n\nt\n\ndef foo():\n    return (1,2)\nx,y=foo()\n\nprint x\nprint y\n\ndef swap(x,y):\n    return (y,x)\n\nx=1;y=2\nprint \"{0:d} {1:d}\".format(x,y)\nx,y=swap(x,y)\nprint \"{:f} {:f}\".format(x,y)\n\ndir(1)\n\nx=[]\n\nx.append(\"text\")\n\nx\n\nx.append(1)\n\nx.pop()\n\nx.append([1,2,3])\n\nx\n\nx.append(2)\n\nx\n\nprint x[0]\nprint x[-2]\n\nx.pop(-2)\n\nx\n\n%%timeit -n10\nx=[]\nfor i in range(100000):\n    x.append(2*i+1)\n\n%%timeit -n10\nx=[]\nfor i in xrange(100000):\n    x.append(2*i+1)\n\nrange(10)\n\ny=[2*i+1 for i in xrange(10)]\nprint y\n\ntype({})\n\nx={\"key\":\"value\",\"foo\":\"bar\"}\nprint x\n\nkey=\"key1\"\nif key in x:\n    print x[key]\n\ny={ i:i*i for i in xrange(10)}\n\ny\n\nz=[v for (k,v) in y.iteritems()]\nprint z", "if/else, for, while, pass, break, continue", "p=[]\nfor i in xrange(2,100):\n    isprime=1\n    for j in p:\n        if(i%j==0):\n            isprime=0\n            break\n    if isprime:\n        p.append(i)\n        \nprint p\n\nfor i in xrange(10):\n    pass\n\ni=10\nwhile i>0:\n    i=i-1\n    print i\n\nx=['text',\"str\",''' Hello World\\\\n ''']\nprint x", "List: access, update, del, len(), + , in, for, slicing, append(), insert(), pop(), 
remove()", "x=['a','b','c']\n#access\nprint x[0]\n#update\nx[0]='d'\nprint x\nprint \"size of x is %d\"%len(x)", "y=['x','y','z']\nz=x+y\ngamma=y+x\nprint z\nprint gamma\n\nprint 'a' in x\n\nprint y\ny.remove('y')# remove by value\nprint y\n\nprint y\ny.pop(0)# remove by index\nprint y\n\ny.insert(0,'x')\ny.insert(1,'y')\nprint y\n\nx=[i*i for i in xrange(10)]\nprint x\n\nx[:3]\n\nx[-3:]\n\nx[-1:]\n\nx[3:-3]\n\nx[1:6]\n\nx[::2]\n\nprint x\nx.reverse()\nprint x\n\nprint x\nprint x[::-1]\nprint x", "Dictionary: access, update, del, in", "x={}\n\nx={'key':'value'}\n\nx['foo']='bar'\n\nx\n\nx['foo']='Hello'\n\nx\n\nx['m']=123\n\nx['foo','key']\n\nkeys=['foo','key']\n[x[k] for k in keys]\n\nprint x\ndel x\nprint x", "Function: function definition, pass by reference, keyword argument, default argument, lambda\na built-in immutable type: str, int, long, bool, float, tuple", "def foo(x):\n    x=x+1\n    y=2*x\n    return y\nprint foo(3)\n\nx=3\nprint foo(x)\nprint x\n\ndef bar(x=[]):\n    x.append(7)\n    print \"in loop: {}\".format(x)\n\nx=[1,2,3]\nprint x\nbar(x)\nprint x\n\ndef func(x=0,y=0,z=0):#default input argument\n    return x*100+y*10+z\n\nfunc(1,2)\n\nfunc(y=2,z=3,x=1)#keyword input argument\n\nf=func\n\n\nf(y=2)\n\ndistance=[13,500,1370]#meter\ndef meter2Kilometer(d):\n    return d/1000.0;\n\nmeter2Kilometer(distance)\n\n[meter2Kilometer(d) for d in distance]\n\nd2 = map(meter2Kilometer,distance)\nprint d2\n\nd3 = map(lambda x: x/1000.0,distance)\nprint d3\n\ndistance=[13,500,1370]#meter\ntime=[1,10,100]\nd3 = map(lambda s,t: s/float(t)*3.6, distance,time )\nprint d3\n\nd4=filter(lambda s: s<1000, distance)\nprint d4\n\ntotal_distance=reduce(lambda i,j : i+j, distance)\ntotal_distance\n\nimport numpy as np\nx=np.arange(101)\nprint x\n\nnp.histogram(x,bins=[0,50,60,70,80,100])\n\nprint np.sort(x)", "Module: class, from, import, reload(), package and __init__", "class Obj:\n    def __init__(self, _x, _y):\n        self.x = _x\n        self.y = _y\n    \n    def update(self, _x, _y):\n        self.x += _x\n        self.y += 
_y \n \n def __str__(self):\n return \"x:%d, y:%d\"%(self.x,self.y)\n\na=Obj(5,7)#call __init__\nprint a#call __str__\na.update(1,2)#call update\nprint a\n\nimport sys\nimport os\npath=os.getcwd()\npath=os.path.join(path,'lib')\nprint path\nsys.path.insert(0, path)\nfrom Obj import Obj as ob\n\nb=ob(7,9)\nprint b\nb.update(3,7)\nprint b\n\nos.getcwd()\n\nfrom mylib import mymodule as mm\n\nmm=reload(mm)\n\nprint mm.Obj2(8,9)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Kaggle/learntools
notebooks/intro_to_programming/raw/tut1.ipynb
apache-2.0
[ "Welcome to the Intro to Programming course! This course is for you if you have never written a single line of code, and you are interested in learning data science and machine learning. (If you do have programming experience and are just new to the Python language, the Python course is a better fit to get started.)\nIn this course, you will learn how to use code to get a computer to perform certain tasks for you. Python is one of the most popular programming languages for data science, and it's the language you'll learn in this course. Once you complete this course, you'll be fully prepared to progress to the Python course, followed by the Intro to Machine Learning course.\nIn this tutorial, you'll see several examples of Python code. You'll get a chance to write your own code in the exercise. (If you'd like to preview the exercise, feel free to take a look now. We also provide a link to the exercise at the end of this tutorial.)\nPrinting\nOne of the simplest (and most important!) tasks you can ask a computer to do is to print a message.\nIn Python, we ask a computer to print a message for us by writing print() and putting the message inside the parentheses and enclosed in quotation marks. Below, we ask the computer to print the message Hello, world!.", "print(\"Hello, world!\")", "The code is inside the box (known as a code cell), and the computer's response (called the output of the code) is shown below the box. As you can see, the computer printed the message that we wanted.\nArithmetic\nWe can also print the value of some arithmetic operation (such as addition, subtraction, multiplication, or division).\nFor instance, in the next code cell, the computer adds 2 to 1 and then prints the result, which is 3. Note that unlike when we were simply printing text, we don't use any quotation marks.", "print(1 + 2)", "We can also do subtraction in python. 
The next code cell subtracts 5 from 9 and prints the result, which is 4.", "print(9 - 5)", "You can actually do a lot of calculations with Python! See the table below for some examples.\n<table style=\"width: 100%;\">\n<tbody>\n<tr><th><b>Operation</b></th><th><b>Symbol</b></th><th><b>Example</b></th></tr>\n<tr>\n<td>Addition</td>\n<td>+</td>\n<td>1 + 2 = 3</td>\n</tr>\n<tr>\n<td>Subtraction</td>\n<td>-</td>\n<td>5 - 4 = 1</td>\n</tr>\n<tr>\n<td>Multiplication</td>\n<td>*</td>\n<td>2 * 4 = 8</td>\n</tr>\n<tr>\n<td>Division</td>\n<td>/</td>\n<td>6 / 3 = 2</td>\n</tr>\n<tr>\n<td>Exponent</td>\n<td>**</td>\n<td>3 ** 2 = 9</td>\n</tr>\n</tbody>\n</table>\n\nYou can control the order of operations in long calculations with parentheses.", "print(((1 + 3) * (9 - 2) / 2) ** 2)", "In general, Python follows the PEMDAS rule when deciding the order of operations.\nComments\nWe use comments to annotate what code is doing. They help other people to understand your code, and they can also be helpful if you haven't looked at your own code in a while. So far, the code that we have written is very short, but annotations become more important when you have written a lot of code. \nFor instance, in the next code cell, we multiply 3 by 2. We also add a comment (# Multiply 3 by 2) above the code to describe what the code is doing.", "# Multiply 3 by 2\nprint(3 * 2)", "To indicate to Python that a line is a comment (and not Python code), you need to write a pound sign (#) as the very first character. \nOnce Python sees the pound sign and recognizes that the line is a comment, the computer ignores the line completely. This is important, because just like English or Hindi (or any other language!), Python is a language with very strict rules that need to be followed. Python is stricter than a human listener, though, and will just error if it can't understand the code.\nWe can see an example of this in the code cell below. 
Python errors if we remove the pound sign, because the text in the comment is not valid Python code, so it can't be interpreted properly.", "Multiply 3 by 2", "Variables\nSo far, you have used code to make a calculation and print the result, and the result isn't saved anywhere. However, you can imagine that you might want to save the result to work with it later. For this, you'll need to use variables.\nCreating variables\nThe next code cell creates a variable named test_var and assigns it the value that we get when we add 5 to 4.\nWe then print the value that is assigned to the variable, which is 9.", "# Create a variable called test_var and give it a value of 4+5\ntest_var = 4 + 5\n\n# Print the value of test_var\nprint(test_var)", "In general, to work with a variable, you need to begin by selecting the name you want to use. Variable names are ideally short and descriptive. They also need to satisfy several requirements:\n- They can't have spaces (e.g., test var is not allowed)\n- They can only include letters, numbers, and underscores (e.g., test_var! is not allowed)\n- They have to start with a letter or underscore (e.g., 1_var is not allowed)\nThen, to create the variable, you need to use = to assign the value that you want it to have. \nYou can always take a look at the value assigned to the variable by using print() and putting the name of the variable in parentheses.\nOver time, you'll learn how to select good names for Python variables. 
It's completely fine for it to feel uncomfortable now, and the best way to learn is just by viewing a lot of Python code!\nManipulating variables\nYou can always change the value assigned to a variable by overriding the previous value.\nIn the code cell below, we change the value of my_var from 3 to 100.", "# Set the value of a new variable to 3\nmy_var = 3\n\n# Print the value assigned to my_var\nprint(my_var)\n\n# Change the value of the variable to 100\nmy_var = 100\n\n# Print the new value assigned to my_var\nprint(my_var)", "Note that in general, whenever you define a variable in a code cell, all of the code cells that follow also have access to the variable. For instance, we use the next code cell to access the values of my_var (from the code cell above) and test_var (from earlier in this tutorial).", "print(my_var)\nprint(test_var)", "The next code cell tells Python to increase the current value of my_var by 3.\nTo do this, we still need to use my_var = like before. And also just like before, the new value we want to assign to the variable is to the right of the = sign.", "# Increase the value by 3\nmy_var = my_var + 3\n\n# Print the value assigned to my_var\nprint(my_var)", "Using multiple variables\nIt's common for code to use multiple variables. This is especially useful when we have to do a long calculation with multiple inputs.\nIn the next code cell, we calculate the number of seconds in four years. This calculation uses five inputs.", "# Create variables\nnum_years = 4\ndays_per_year = 365 \nhours_per_day = 24\nmins_per_hour = 60\nsecs_per_min = 60\n\n# Calculate number of seconds in four years\ntotal_secs = secs_per_min * mins_per_hour * hours_per_day * days_per_year * num_years\nprint(total_secs)", "As calculated above, there are 126144000 seconds in four years. 
\nNote it is possible to do this calculation without variables as just 60 * 60 * 24 * 365 * 4, but it is much harder to check that the calculation without variables does not have some error, because it is not as readable. When we use variables (such as num_years, days_per_year, etc), we can better keep track of each part of the calculation and more easily check for and correct any mistakes.\nNote that it is particularly useful to use variables when the values of the inputs can change. For instance, say we want to slightly improve our estimate by updating the value of the number of days in a year from 365 to 365.25, to account for leap years. Then we can change the value assigned to days_per_year without changing any of the other variables and redo the calculation.", "# Update to include leap years\ndays_per_year = 365.25\n\n# Calculate number of seconds in four years\ntotal_secs = secs_per_min * mins_per_hour * hours_per_day * days_per_year * num_years\nprint(total_secs)", "Note: You might have noticed the .0 added at the end of the number, which might look unnecessary. This is caused by the fact that in the second calculation, we used a number with a fractional part (365.25), whereas the first calculation multiplied just numbers with no fractional part. You'll learn more about this in Lesson 3, when we cover data types.\nDebugging\nOne common error when working with variables is to accidentally introduce typos. For instance, if we spell hours_per_day as hours_per_dy, Python will error with the message NameError: name 'hours_per_dy' is not defined.", "print(hours_per_dy)", "When you see NameError like this, it's an indication that you should check how you have spelled the variable that it references as \"not defined\". Then, to fix the error, you need only correct the spelling.", "print(hours_per_day)", "What's next?\nNow it's your turn to practice manipulating variables with arithmetic." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nberliner/delveData
notebooks/Load climate data.ipynb
gpl-2.0
[ "Loading the climate data\n\nThe Origin.\nWorldwide climate data was obtained from the US National Oceanic and Atmospheric Administration (NOAA) (see here).\nThe Format.\nDaily measurements are made available for weather stations with latitude and longitude information of each station. To make use of the data in this project, each station is mapped to its country and an aggregate statistic is calculated for each country and each year.\nThe Indicator.\nIn order to make the climate data compatible with the already implemented data (World Bank, UNHCR, etc.) yearly aggregate indicators need to be generated from the daily measurements of temperature, precipitation, etc. I believe that one promising way of looking at the data could be to focus on extreme events that occur during one year. But what is an extreme event? The average weather observed during one month should be relatively stable over the years, and I decided to declare a month as extreme if its average value lies outside of a certain standard deviation window of the average over all past years.\nPractically speaking, I calculate for each year and each month the average temperature observed during that month and compare it to the past. If the average deviates from the experience it will be called extreme.\n\nIn the following I will look at the distribution of how much monthly average readings deviate from the expectation based on the past history. This will aid in determining a good metric for when to call a weather event extreme.\nLet's load the climate data and have a brief look. 
We will restrict the weather data to the years from 1900 to 2014 and we will set the optimiseFactor flag to obtain the average and standard deviation calculations for each weather station.", "%pylab inline\nimport sys\nsys.path.insert(0,\"../lib/\")\nfrom scipy.stats import norm\n\nfrom climateData import WeatherData\n\nweatherData = WeatherData(years=[1900,2014], optimiseFactor=True)", "Now we have the weatherData object with its data variable containing four extra columns, i.e. Month, _LastYearsAvg, _LastYearsStd, and _ThisYear. The columns _LastYearsAvg and _LastYearsStd contain the average and standard deviation of all previous years of month Month respectively. Column _ThisYear contains the average of the current month of the year.", "weatherData.data.head()", "First let's remove the early years in which we are not interested. Note also that there might be months containing -9999. This can happen if they constitute the first months in which the measurement was available (and no previous years are available). In addition create a new column containing the difference of the measured averages, thus making them better comparable.", "data = weatherData.data[ weatherData.data[\"Year\"] >= 1980 ]\ndata = data[ np.isclose(data[\"_LastYearsStd\"], -9999.0) == False ]\ndata[\"avg_diff\"] = data[\"_LastYearsAvg\"] - data[\"_ThisYear\"]", "In order to get a feel for a good classification of extreme climate events we can look at the distribution of the difference between the average values and the distribution of the standard deviations. 
I will take the maximum temperature reading as an example.", "data_tmax = data[ data[\"Element\"] == \"TMAX\" ]\nhist_avg, bin_edges_avg = np.histogram( np.abs(np.asarray( data_tmax[\"avg_diff\"] )), bins=100 )\nhist_std, bin_edges_std = np.histogram( np.abs(np.asarray( data_tmax[\"_LastYearsStd\"] )), bins=100 )\n\nfig = plt.figure(figsize=(10, 8), dpi=200)\nax = fig.add_subplot(111)\nax.tick_params(axis='both', which='major', labelsize=12)\n\nlabel_avg = \"Distribution of differences of monthly average\\ntemperatures compared to past years\"\nlabel_std = \"Distribution of standard deviations of monthly\\naverage temperatures\"\n\nax.bar(bin_edges_avg[:-1], hist_avg, width = 1.1, facecolor=\"red\", alpha=0.9, label=label_avg);\nax.bar(bin_edges_std[:-1], hist_std, width = 1.1, facecolor=\"blue\", alpha=0.6, label=label_std, zorder=5);\n\nplt.legend();", "This already gives us an indication that there are events that are extreme in the sense that they deviate by more than 1-2 standard deviations from the average of past years. However, this still does not give us a robust and good indicator of when to call a climate event extreme.\nA relative measure will be much more helpful in determining a good cutoff value. So instead of looking at the two distributions we can look at how much the average of the current year deviates from the past years in units of the standard deviation. 
Values that deviate by more than one standard deviation will then have values above one (and vice versa).", "data[\"avg_diff_fold\"] = np.abs(data[\"avg_diff\"]) / data[\"_LastYearsStd\"]\ndata_tmax = data[ data[\"Element\"] == \"TMAX\" ]", "For plotting we will remove the few events that deviate extremely and would render the plotting impossible.", "tmpData = np.abs(np.asarray( data_tmax[\"avg_diff_fold\"] ))\ntmpData = tmpData[ tmpData < np.percentile(tmpData, 99.9) ]\nhist_avg_fold, bin_edges_avg_fold = np.histogram(tmpData, bins=100, density=True)", "Here I will use prior knowledge (I already looked at the plot and went back one step) and assume that the distribution will look like a normal distribution. To visually emphasize this point we can fit the distribution and add the fit to the plot.", "mu, std = norm.fit(np.concatenate((-tmpData,tmpData), axis=0))\nx = np.linspace(0, 5, 100)\np = norm.pdf(x, mu, std)\nprint(\"Fitted a normal distribution at %.1f with standard deviation %.2f\" %(mu, std))\n\nfig = plt.figure(figsize=(10, 8), dpi=200)\nax = fig.add_subplot(111)\nax.tick_params(axis='both', which='major', labelsize=12)\n\nlabel_avg_fold = \"Distribution of fold differences of monthly average\\ntemperatures compared to past years\"\n\nax.bar(bin_edges_avg_fold[:-1], hist_avg_fold, width = 0.04, facecolor=\"green\", edgecolor=\"green\", alpha=0.9, label=label_avg_fold);\nax.plot(x, 2*p, 'k', linewidth=2)\n\nplt.legend();", "From the plot we can see that there is no obvious cutoff point that we could choose, so we will have to use common sense. I would argue that a good measure would be to declare the 15% highest values as extreme. This will give us a cutoff point of:", "cutoff = np.percentile(tmpData, 85)\nprint(\"The cutoff point is set to %.2f\" %cutoff)", "What the plot above is not telling us is how the individual bins of the histogram are populated in time. 
By that I mean that each event in the histogram is linked to the year in which the measurement was taken. We can now ask whether events that deviate far from the all-time averages are more likely to have occurred in the recent past or whether they are equally distributed in time.\nTo answer that question let us look at the average of the years for each bin.", "bin_years = list()\nfor i in range(1,len(bin_edges_avg_fold)):\n start, end = bin_edges_avg_fold[i-1], bin_edges_avg_fold[i]\n tmp = data_tmax[ data_tmax[\"avg_diff_fold\"] > start ]\n tmp = tmp[ tmp[\"avg_diff_fold\"] < end ]\n bin_years.append(tmp[\"Year\"])\n\navg_time = [ np.average(item) for item in bin_years ]\navg_time_X = [ i*0.05 for i in range(1,len(avg_time)+1) ] # make the plot go from 0 to 5 and not from 0 to 100\n\nfig = plt.figure(figsize=(8, 6), dpi=200)\nax = fig.add_subplot(111)\nax.tick_params(axis='both', which='major', labelsize=12)\n\nax.plot(avg_time_X, avg_time, label=\"Average year of the histogram bin\");\nax.axhline(np.average(data_tmax[\"Year\"]), 0, 100, color=\"red\", label=\"Total average of years\");\nplt.legend(loc=2, fontsize=16);", "This is a very interesting plot! For each bin of the histogram shown above we calculated the average year of all events falling into this bin. If we assumed that the weather was stable over the years, we would expect each bin to have the same average; in that case the blue line would fluctuate around the red line. What we see, however, looks like an effect of climate change. What this plot tells us is that more extreme events became more likely in recent years! To most of us this will not come as a surprise, but it is always reassuring to find known effects in the data after doing a lot of processing." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eds-uga/csci1360-fa16
assignments/A7/A7_Q1.ipynb
mit
[ "Q1\nIn this question, you'll go over some of the core terms and concepts in statistics.\nA\nWrite a function, mean, which computes the mean of a list of numbers.\nThe function takes one argument: a list or 1D NumPy array of numbers. It returns one floating-point number: the average value.\nYou can use numpy.array but no other NumPy functions or built-in Python functions.", "try:\n mean\nexcept:\n assert False\nelse:\n assert True\n\nimport numpy as np\n\nnp.random.seed(2342348)\nx = np.random.random(100)\nnp.testing.assert_allclose(0.51465810266723755, mean(x))\n\nnp.random.seed(5825)\ny = np.random.random(1000)\nnp.testing.assert_allclose(0.50133630983357202, mean(y))", "B\nWrite a function, variance, which computes the variance of a list of numbers.\nThe function takes one argument: a list or 1D NumPy array of numbers. It returns one floating-point number: the variance of all the numbers.\nRecall the formula for variance:\n$$\nvariance = \\frac{1}{N - 1} \\sum_{i = 1}^{N} (x_i - \\mu_x)^2\n$$\nwhere $N$ is the number of numbers in your list, $x_i$ is the number at index $i$ in the list, and $\\mu_x$ is the average value of all the $x$ values.\nYou can use numpy.array and your mean function from Part A, but no other NumPy functions or built-in Python functions.", "try:\n variance\nexcept:\n assert False\nelse:\n assert True\n\nimport numpy as np\n\nnp.random.seed(5987968)\nx = np.random.random(8491)\nv = x.var(ddof = 1)\nnp.testing.assert_allclose(v, variance(x))\n\nnp.random.seed(4159)\ny = np.random.random(25)\nw = y.var(ddof = 1)\nnp.testing.assert_allclose(w, variance(y))", "C\nThe lecture on statistics mentions latent variables, specifically how you cannot know what the underlying process is that's generating your data; all you have is the data, on which you have to impose certain assumptions in order to derive hypotheses about what generated the data in the first place.\nTo illustrate this, the code provided below generates sample data from distributions 
with mean and variance that are normally not known to you. You'll use the functions you wrote in parts A and B to compute the statistics on the sample data itself and observe how these statistics change.\nIn the space provided, compute and print the mean and variance of each of the three samples:\n - sample1\n - sample2\n - sample3\nUse the functions you wrote in Parts A and B.", "import numpy as np\nnp.random.seed(5735636)\n\nsample1 = np.random.normal(loc = 10, scale = 5, size = 10)\nsample2 = np.random.normal(loc = 10, scale = 5, size = 1000)\nsample3 = np.random.normal(loc = 10, scale = 5, size = 1000000)\n\n#########################\n# DON'T MODIFY ANYTHING #\n# ABOVE THIS BLOCK #\n#########################\n\n### BEGIN SOLUTION\n\n### END SOLUTION", "D\nSince you don't usually know the true mean and variance of the process that presumably generated your data, the mean and variance you compute yourself are estimates of the true mean and variance. Explain what you saw in the estimates you computed above as they related to the number of samples. What implications does this have for computing statistics as part of real-world analyses?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
daniestevez/jupyter_notebooks
CE5/CE-5 frame analysis ATA 2021-01-23.ipynb
gpl-3.0
[ "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom construct import *\nimport scipy.signal\n\nimport sys\nsys.path.append('../Tianwen/') # to import ccsds\nimport ccsds\n\nimport struct\nimport collections\nimport pathlib\n\nCE5_AOSInsertZone = Struct(\n 'unknown1' / Hex(Int8ub),\n 'unknown2' / Int8ub,\n 'unknown3' / Hex(Int8ub),\n 'unknown4' / Hex(Int8ub),\n 'timestamp' / Int32ul, # in units of 1s, epoch 2012-08-01 UTC\n)\n\nCE5_AOSFrame = Struct(\n 'primary_header' / ccsds.AOSPrimaryHeader,\n 'insert_zone' / CE5_AOSInsertZone,\n 'm_pdu_header' / ccsds.M_PDU_Header,\n 'm_pdu_packet_zone' / GreedyBytes\n)\n\ndef get_packet(p):\n return p[0] if type(p) is tuple else p\n\ndef packets_asarray(packets):\n return np.array([np.frombuffer(get_packet(p)[ccsds.SpacePacketPrimaryHeader.sizeof():], 'uint8')\n for p in packets])\n\ndef plot_apids(apids, sc, vc):\n for apid in sorted(apids.keys()):\n plt.figure(figsize = (16,16), facecolor = 'w')\n ps = packets_asarray(apids[apid])\n plt.imshow(ps, aspect = ps.shape[1]/ps.shape[0])\n plt.title(f\"Chang'e 5 Spacecraft ID {sc} APID {apid} Virtual channel {vc}\")\n\ndef get_packet_timestamps(packets):\n return np.datetime64('2012-08-01') + np.timedelta64(1,'s')*np.array([p[1] for p in packets])", "Here we look at some Chang'e 5 low data rate telemetry received with the Allen Telescope Array on 2021-01-23, during its transfer to the Sun-Earth L1 point. The recorded data corresponds to the frequency 8471.2 MHz, which is used by the lander.\nThe frames are CCSDS concatenated frames with a frame size of 220 bytes.", "def load_frames(path):\n frame_size = 220\n frames = np.fromfile(path, dtype = 'uint8')\n frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))\n return frames\n\nframes = load_frames('ATA_2021-01-23/ce5_frames_1.u8')", "AOS frames\nAOS frames come from spacecraft 91 and virtual channels 1 and 2. 
Other combinations most likely correspond to corrupted frames, despite the fact that the Reed-Solomon decoder was successful.", "aos = [CE5_AOSFrame.parse(f) for f in frames]\n\ncollections.Counter([a.primary_header.transfer_frame_version_number for a in aos])\n\ncollections.Counter([a.primary_header.spacecraft_id for a in aos\n if a.primary_header.transfer_frame_version_number == 1])\n\ncollections.Counter([a.primary_header.virtual_channel_id for a in aos\n if a.primary_header.transfer_frame_version_number == 1\n and a.primary_header.spacecraft_id == 108])", "Virtual channel 1\nThe vast majority of frames belong to virtual channel 1, which seems to send real-time telemetry.", "[a.primary_header for a in aos if a.primary_header.virtual_channel_id == 1][:10]\n\nvc1 = [a for a in aos if a.primary_header.virtual_channel_id == 1]\nfc = np.array([a.primary_header.virtual_channel_frame_count for a in vc1])\n[a.insert_zone for a in aos[:10]]\n\nt_vc1 = np.datetime64('2012-08-01') + np.timedelta64(1, 's') * np.array([a.insert_zone.timestamp for a in vc1])\n\nplt.figure(figsize = (10,6), facecolor = 'w')\nplt.plot(t_vc1, fc, '.')\nplt.title(\"Chang'e 5 virtual channel 1 timestamps\")\nplt.xlabel('AOS frame timestamp')\nplt.ylabel('AOS virtual channel frame counter');\n\nplt.figure(figsize = (10,6), facecolor = 'w')\nplt.plot(t_vc1[1:], np.diff(fc)-1, '.')\nplt.title(\"Chang'e 5 spacecraft 91 virtual channel 1 frame loss\")\nplt.xlabel('AOS frame timestamp')\nplt.ylabel('Frame loss')\nplt.ylim((-1,50));", "We need to sort the data, since the different files we've loaded up are not in chronological order.", "vc1_packets = list(ccsds.extract_space_packets(vc1, 108, 1, get_timestamps = True))\n\nvc1_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p[0]) for p in vc1_packets]", "There are space packets in many APIDs. 
The contents of each APID are shown below in plot form, but it's not easy to guess what any of the values mean.", "vc1_apids = collections.Counter([p.APID for p in vc1_sp_headers])\nvc1_apids\n\nvc1_by_apid = {apid : [p for h,p in zip(vc1_sp_headers, vc1_packets)\n if h.APID == apid] for apid in vc1_apids}\n\nplot_apids(vc1_by_apid, 108, 1)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zzsza/Datascience_School
06. 기초 선형대수/01. NumPy 소개.ipynb
mit
[ "NumPy 소개\nNumPy(보통 \"넘파이\"라고 발음한다)는 2005년에 Travis Oliphant가 발표한 수치해석용 Python 패키지이다. 다차원의 행렬 자료구조인 ndarray 를 지원하여 벡터와 행렬을 사용하는 선형대수 계산에 주로 사용된다. 내부적으로는 BLAS 라이브러리와 LAPACK 라이브러리에 기반하고 있어서 C로 구현된 CPython에서만 사용할 수 있으며 Jython, IronPython, PyPy 등의 Python 구현에서는 사용할 수 없다. NumPy의 행렬 연산은 C로 구현된 내부 반복문을 사용하기 때문에 Python 반복문에 비해 속도가 빠르다. 행렬 인덱싱(array indexing)을 사용한 질의(Query) 기능을 이용하여 짧고 간단한 코드로 복잡한 수식을 계산할 수 있다.\n\nNumPy \n수치해석용 Python 라이브러리 \nCPython에서만 사용 가능\nBLAS/LAPACK 기반\nndarray 다차원 행렬 자료 구조 제공\n내부 반복문 사용으로 빠른 행렬 연산 가능\n행렬 인덱싱(array indexing) 기능\n\nndarray 클래스\nNumPy의 핵심은 ndarray라고 하는 클래스 이다. ndarray 클래스는 다차원 행렬 자료 구조를 지원한다. 실제로 ndarray를 사용하여 1차원 행렬(벡터)을 만들어 보자", "import numpy as np\na = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\nprint(type(a))\na", "만들어진 ndarray 객체의 표현식(representation)을 보면 바깥쪽에 array()란 것이 붙어 있을 뿐 리스트와 동일한 구조처럼 보인다. 실제로 0, 1, 2, 3 이라는 원소가 있는 리스트는 다음과 같이 만든다.", "L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\nprint(type(L))\nL", "그러나 ndarray 클래스 객체 a와 리스트 클래스 객체 b는 많은 차이가 있다. 우선 리스트 클래스 객체는 내부적으로 linked list와 같은 형태를 가지므로 각각의 원소가 다른 자료형이 될 수 있다. 그러나 ndarray 클래스 객체는 C언어의 행렬처럼 연속적인 메모리 배치를 가지기 때문에 모든 원소가 같은 자료형이어야 한다. 이러한 제약을 가지는 대신 내부의 원소에 대한 접근과 반복문 실행이 빨라진다.\nndarray 클래스의 또 다른 특성은 행렬의 각 원소에 대한 연산을 한 번에 처리하는 벡터화 연산(vectorized operation)을 지원한다는 점이다. 예를 들어 ndarray 클래스 객체의 원소의 크기를 모두 제곱하기 위해서는 객체 자체를 제곱하는 것만으로 원하는 결과를 얻을 수 있다.", "a = np.arange(1000) #arange : 그냥 array range임 array로 바꿈\n%time a2 = a**2\n\na1 = np.arange(10)\nprint(a1)\nprint(2 * a1)", "리스트 객체의 경우에는 다음과 같이 반복문을 사용해야 한다.", "L = range(1000)\n%time L2 = [i**2 for i in L]", "각각의 코드 실행시에 IPython의 %time 매직 명령을 이용하여 실행 시간을 측정한 결과 ndarray의 유니버설 연산 실행 속도가 리스트 반복문 보다 빠른 것을 볼 수 있다. 
ndarray의 메모리 할당을 한 번에 하는 것도 빨라진 이유의 하나이고 유니버설 연산을 사용하게 되면 NumPy 내부적으로 구현된 반복문을 사용하기 때문에 반복문 실행 자체도 빨라진다.\n따라서 Python의 성능 개선을 위해 반드시 지켜야하는 코딩 관례 중의 하나가 NumPy의 ndarray의 벡터화 연산으로 대체할 수 있는 경우에는 Python 자체의 반복문을 사용하지 않는다는 점이다.(for문)\n\n\nPython 리스트\n\n\n여러가지 타입의 원소\n\nlinked List 구현\n메모리 용량이 크고 속도가 느림\n\n벡터화 연산 불가\n\n\nNumPy ndarray\n\n\n동일 타입의 원소\n\ncontiguous memory layout\n메모리 최적화, 계산 속도 향상\n벡터화 연산 가능\n\n참고로 일반적인 리스트 객체에 정수를 곱하면 객체의 크기가 정수배 만큼으로 증가한다.", "L = range(10)\nprint(L)\nprint(2 * L)", "다차원 행렬의 생성\nndarray 는 N-dimensional Array의 약자이다. 이름 그대로 ndarray 클래스는 단순 리스트와 유사한 1차원 행렬 이외에도 2차원 행렬, 3차원 행렬 등의 다차원 행렬 자료 구조를 지원한다. \n예를 들어 다음과 같이 리스트의 리스트를 이용하여 2차원 행렬을 생성하거나 리스트의 리스트의 리스트를 이용하여 3차원 행렬을 생성할 수 있다.", "a = np.array([0, 1, 2]) \na\n\nb = np.array([[0, 1, 2], [3, 4, 5]]) # 2 x 3 array\nb\n\na = np.array([0, 0, 0, 1])\na\n\nc = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) # 2 x 2 x 2 array\nc", "행렬의 차원 및 크기는 ndim 속성과 shape 속성으로 알 수 있다.", "print(a.ndim)\nprint(a.shape)\n\na = np.array([[1,2,3 ],[3,4,5]])\n\na\n\n\na.ndim\n\n\na.shape\n\nprint(b.ndim)\nprint(b.shape)\n\nprint(c.ndim)\nprint(c.shape)", "다차원 행렬의 인덱싱\nndarray 클래스로 구현한 다차원 행렬의 원소 하나 하나는 다음과 같이 콤마(comma ,)를 사용하여 접근할 수 있다. 콤마로 구분된 차원을 축(axis)이라고도 한다. 플롯의 x축과 y축을 떠올리면 될 것이다.", "a = np.array([[0, 1, 2], [3, 4, 5]])\na\n\na[0,0] # 첫번째 행의 첫번째 열\n\na[0,1] # 첫번째 행의 두번째 열\n\na[-1, -1] # 마지막 행의 마지막 열", "다차원 행렬의 슬라이싱\nndarray 클래스로 구현한 다차원 행렬의 원소 중 복수 개를 접근하려면 일반적인 파이썬 슬라이싱(slicing)과 comma(,)를 함께 사용하면 된다.", "a = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])\na\n\na[0, :] # 첫번째 행 전체\n\na[:, 1] # 두번째 열 전체\n\na[1, 1:] # 두번째 행의 두번째 열부터 끝열까지", "행렬 인덱싱\nNumPy ndarray 클래스의 또다른 강력한 기능은 행렬 인덱싱(fancy indexing)이라고도 부르는 행렬 인덱싱(array indexing) 방법이다. 인덱싱이라는 이름이 붙었지만 사실은 데이터베이스의 질의(Query) 기능을 수행한다.\n행렬 인덱싱에서는 대괄호(Bracket, [])안의 인덱스 정보로 숫자나 슬라이스가 아닌 ndarray 행렬을 받을 수 있다. 여기에서는 이 행렬을 편의상 인덱스 행렬이라고 부르겠다. 
행렬 인덱싱의 방식에은 불리안(Boolean) 행렬 방식과 정수 행렬 방식 두가지가 있다.\n먼저 불리안 행렬 인덱싱 방식은 인덱스 행렬의 원소가 True, False 두 값으로만 구성되며 인덱스 행렬의 크기가 원래 ndarray 객체의 크기와 같아야 한다.\n예를 들어 다음과 같은 1차원 ndarray에서 홀수인 원소만 골라내려면 홀수인 원소에 대응하는 인덱스 값이 True이고 짝수인 원소에 대응하는 인덱스 값이 False인 인덱스 행렬 사용한다.", "a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\nidx = np.array([True, False, True, False, True, False, True, False, True, False])\na[idx]", "이는 다음과 같이 간단하게 쓸 수도 있다.", "a[a % 2 == 0]\n\na[a % 2] # 0이 True, 1이 False", "2차원 이상의 인덱스인 경우에는 다음과 같이", "a = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])\n[a % 2 == 0]\n\n\na[[a % 2 == 0]]\n\na[a % 2]", "정수 행렬 인덱싱에서는 인덱스 행렬의 원소 각각이 원래 ndarray 객체 원소 하나를 가리키는 인덱스 정수이여야 한다.\n예를 들어 1차원 행렬에서 홀수번째 원소만 골라내려만 다음과 같다", "a = np.array([0, 1, 2, 3, 4, 10, 6, 7, 8, 9]) * 10\nidx = np.array([0, 5, 7, 9, 9]) #위치를 뜻함\na[idx]", "정수 행렬 인덱스의 크기는 원래의 행렬 크기와 달라도 상관없다. 같은 원소를 반복해서 가리키는 경우에는 원래의 행렬보다 더 커지기도 한다.", "a = np.array([0, 1, 2, 3]) * 10\nidx = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2])\na[idx]\n\na[0]", "행렬 인덱싱\n\n\n불리안(Boolean) 방식 행렬 인덱싱\n\nTrue인 원소만 선택 \n인덱스의 크기가 행렬의 크기와 같아야 한다.\n\n\n\n위치 지정 방식 행렬 인덱싱\n\n지정된 위치의 원소만 선택\n인덱스의 크기가 행렬의 크기와 달라도 된다.", "joobun = np.array([\"BSY\",\"PJY\",\"PJG\",\"BSJ\"])\nidx = np.array([0,0,0,1,1,1,2,2,2,3,3,3,0,1,2,3])\njoobun[idx]\n\na = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\na[(a % 2 == 0) & (a % 3 == 1)]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xaibeing/cn-deep-learning
first-neural-network/Your_first_neural_network.ipynb
mit
[ "你的第一个神经网络\n在此项目中,你将构建你的第一个神经网络,并用该网络预测每日自行车租客人数。我们提供了一些代码,但是需要你来实现神经网络(大部分内容)。提交此项目后,欢迎进一步探索该数据和模型。", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "加载和准备数据\n构建神经网络的关键一步是正确地准备数据。不同尺度级别的变量使网络难以高效地掌握正确的权重。我们在下方已经提供了加载和准备数据的代码。你很快将进一步学习这些代码!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "数据简介\n此数据集包含的是从 2011 年 1 月 1 日到 2012 年 12 月 31 日期间每天每小时的骑车人数。骑车用户分成临时用户和注册用户,cnt 列是骑车用户数汇总列。你可以在上方看到前几行数据。\n下图展示的是数据集中前 10 天左右的骑车人数(某些天不一定是 24 个条目,所以不是精确的 10 天)。你可以在这里看到每小时租金。这些数据很复杂!周末的骑行人数少些,工作日上下班期间是骑行高峰期。我们还可以从上方的数据中看到温度、湿度和风速信息,所有这些信息都会影响骑行人数。你需要用你的模型展示所有这些数据。", "rides[:24*10].plot(x='dteday', y='cnt', figsize=(10,4))", "查看每天的骑行数据,对比2011年和2012年", "day_rides = pd.read_csv('Bike-Sharing-Dataset/day.csv')\nday_rides = day_rides.set_index(['dteday'])\n\nday_rides.loc['2011-08-01':'2011-12-31'].plot(y='cnt', figsize=(10,4))\nday_rides.loc['2012-08-01':'2012-12-31'].plot(y='cnt', figsize=(10,4))", "虚拟变量(哑变量)\n下面是一些分类变量,例如季节、天气、月份。要在我们的模型中包含这些数据,我们需要创建二进制虚拟变量。用 Pandas 库中的 get_dummies() 就可以轻松实现。", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()\n\nquant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "调整目标变量\n为了更轻松地训练网络,我们将对每个连续变量标准化,即转换和调整变量,使它们的均值为 0,标准差为 1。\n我们会保存换算因子,以便当我们使用网络进行预测时可以还原数据。\n将数据拆分为训练、测试和验证数据集\n我们将大约最后 21 
天的数据保存为测试数据集,这些数据集会在训练完网络后使用。我们将使用该数据集进行预测,并与实际的骑行人数进行对比。", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "我们将数据拆分为两个数据集,一个用作训练,一个在网络训练完后用来验证网络。因为数据是有时间序列特性的,所以我们用历史数据进行训练,然后尝试预测未来数据(验证数据集)。", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "开始构建网络\n下面你将构建自己的网络。我们已经构建好结构和反向传递部分。你将实现网络的前向传递部分。还需要设置超参数:学习速率、隐藏单元的数量,以及训练传递数量。\n<img src=\"assets/neural_network.png\" width=300px>\n该网络有两个层级,一个隐藏层和一个输出层。隐藏层级将使用 S 型函数作为激活函数。输出层只有一个节点,用于递归,节点的输出和节点的输入相同。即激活函数是 $f(x)=x$。这种函数获得输入信号,并生成输出信号,但是会考虑阈值,称为激活函数。我们完成网络的每个层级,并计算每个神经元的输出。一个层级的所有输出变成下一层级神经元的输入。这一流程叫做前向传播(forward propagation)。\n我们在神经网络中使用权重将信号从输入层传播到输出层。我们还使用权重将错误从输出层传播回网络,以便更新权重。这叫做反向传播(backpropagation)。\n\n提示:你需要为反向传播实现计算输出激活函数 ($f(x) = x$) 的导数。如果你不熟悉微积分,其实该函数就等同于等式 $y = x$。该等式的斜率是多少?也就是导数 $f(x)$。\n\n你需要完成以下任务:\n\n实现 S 型激活函数。将 __init__ 中的 self.activation_function 设为你的 S 型函数。\n在 train 方法中实现前向传递。\n在 train 方法中实现反向传播算法,包括计算输出错误。\n在 run 方法中实现前向传递。", "\nclass NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = 
learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n# \n# def sigmoid(x):\n# return 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation here\n# self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n #print('features',features)\n #print('targets',targets)\n# nCount = 0\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n# nCount += 1\n# if(nCount > 1):\n# break\n #print('#######################################')\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n #print('X.shape',X.shape)\n #print('X',X)\n #print('X[None,:].shape', X[None,:].shape)\n #print('X[None,:]', X[None,:])\n #print('y.shape',y.shape)\n #print('y',y)\n #print('weights_input_to_hidden.shape', self.weights_input_to_hidden.shape)\n #print('weights_hidden_to_output.shape', self.weights_hidden_to_output.shape)\n hidden_inputs = X[None,:] @ self.weights_input_to_hidden # signals into hidden layer\n #print('hidden_inputs.shape', hidden_inputs.shape)\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n #print('hidden_outputs.shape',hidden_outputs.shape)\n\n # TODO: 
Output layer - Replace these values with your calculations.\n final_inputs = hidden_outputs @ self.weights_hidden_to_output # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n #print('final_inputs.shape', final_inputs.shape)\n #print('final_outputs.shape', final_outputs.shape)\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n #print('error.shape', error.shape)\n #print('y',y)\n #print('final_outputs',final_outputs)\n #print('error',error)\n output_error_term = error * 1\n #print('output_error_term',output_error_term)\n \n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = output_error_term @ self.weights_hidden_to_output.T\n# hidden_error = output_error_term * self.weights_hidden_to_output.T\n #print('hidden_error.shape',hidden_error.shape)\n# print('hidden_error1',hidden_error1)\n# print('hidden_error',hidden_error)\n \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n #output_error_term = None\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n #print('hidden_error_term.shape', hidden_error_term.shape)\n\n # Weight step (input to hidden)\n tmp = X[:,None] @ hidden_error_term\n #print('X[:,None]',X[:,None])\n #print('hidden_error_term',hidden_error_term)\n #print('tmp.shape',tmp.shape)\n #print('tmp',tmp)\n delta_weights_i_h += tmp\n #print('delta_weights_i_h.shape',delta_weights_i_h.shape)\n #print('-------------------')\n # Weight step (hidden to output)\n #print('hidden_outputs', hidden_outputs)\n #print('output_error_term', output_error_term)\n tmp = hidden_outputs.T * output_error_term\n #print('tmp.shape', tmp.shape)\n #print('tmp', tmp)\n delta_weights_h_o += tmp\n #print('delta_weights_h_o.shape', 
delta_weights_h_o.shape)\n\n # TODO: Update the weights - Replace these values with your calculations.\n #print('self.lr, n_records',self.lr, n_records)\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n #print('self.weights_hidden_to_output', self.weights_hidden_to_output)\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n #print('self.weights_input_to_hidden', self.weights_input_to_hidden)\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n# #print('features.shape', features.shape)\n# #print(features)\n hidden_inputs = features @ self.weights_input_to_hidden # signals into hidden layer\n# #print('hedden_inputs', hidden_inputs)\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n# #print('hidden_outputs', hidden_outputs)\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = hidden_outputs @ self.weights_hidden_to_output # signals into final output layer\n# #print('final_inputs', final_inputs)\n final_outputs = final_inputs # signals from final output layer \n# #print('final_outputs', final_outputs)\n \n return final_outputs\n\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "单元测试\n运行这些单元测试,检查你的网络实现是否正确。这样可以帮助你确保网络已正确实现,然后再开始训练网络。这些测试必须成功才能通过此项目。", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # 
Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "训练网络\n现在你将设置网络的超参数。策略是设置的超参数使训练集上的错误很小但是数据不会过拟合。如果网络训练时间太长,或者有太多的隐藏节点,可能就会过于针对特定训练集,无法泛化到验证数据集。即当训练集的损失降低时,验证集的损失将开始增大。\n你还将采用随机梯度下降 (SGD) 方法训练网络。对于每次训练,都获取随机样本数据,而不是整个数据集。与普通梯度下降相比,训练次数要更多,但是每次时间更短。这样的话,网络训练效率更高。稍后你将详细了解 SGD。\n选择迭代次数\n也就是训练网络时从训练数据中抽样的批次数量。迭代次数越多,模型就与数据越拟合。但是,如果迭代次数太多,模型就无法很好地泛化到其他数据,这叫做过拟合。你需要选择一个使训练损失很低并且验证损失保持中等水平的数字。当你开始过拟合时,你会发现训练损失继续下降,但是验证损失开始上升。\n选择学习速率\n速率可以调整权重更新幅度。如果速率太大,权重就会太大,导致网络无法与数据相拟合。建议从 0.1 
开始。如果网络在与数据拟合时遇到问题,尝试降低学习速率。注意,学习速率越低,权重更新的步长就越小,神经网络收敛的时间就越长。\n选择隐藏节点数量\n隐藏节点越多,模型的预测结果就越准确。尝试不同的隐藏节点的数量,看看对性能有何影响。你可以查看损失字典,寻找网络性能指标。如果隐藏单元的数量太少,那么模型就没有足够的空间进行学习,如果太多,则学习方向就有太多的选择。选择隐藏单元数量的技巧在于找到合适的平衡点。", "import sys\n\n### Set the hyperparameters here ###\niterations = 4000\nlearning_rate = 0.5\nhidden_nodes = 20\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... 
Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\naxes = plt.gca()\naxes.plot(losses['train'], label='Training loss')\naxes.plot(losses['validation'], label='Validation loss')\naxes.legend()\n_ = axes.set_ylim([0,3])", "检查预测结果\n使用测试数据看看网络对数据建模的效果如何。如果完全错了,请确保网络中的每步都正确实现。", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "可选:思考下你的结果(我们不会评估这道题的答案)\n请针对你的结果回答以下问题。模型对数据的预测效果如何?哪里出现问题了?为何出现问题呢?\n\n注意:你可以通过双击该单元编辑文本。如果想要预览文本,请按 Control + Enter\n\n请将你的答案填写在下方\n预测结果与实际数据大致吻合。比较大的差异出现在12月下旬,有可能跟圣诞节有关。圣诞节每年一次,但这里只有2年的数据,所以这种跟年度相关的特征难以被模型学会。虽然数据中有holiday字段,但对于圣诞节也只有25号当天其holiday=1,难以体现这一重大节日的影响。如果有多年的数据可能提升性能,或者改进holiday字段可能也有帮助。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
planet-os/notebooks
api-examples/ERA5_tutorial.ipynb
mit
[ "ERA5 tutorial\nERA5 contains historical weather data, which can be used to analyse very wide range of problems. In this tutorial we briefly demonstrate how to:\n1. Get timerange data for single location and single variable\n2. Analyse possible wind farm production potential for three different locations\nTo read more about the ERA5 dataset, please follow these links:\n1. https://data.planetos.com/datasets/ecmwf_era5\n2. https://software.ecmwf.int/wiki/display/CKB\nTo learn more about Planet OS/Intertrust datahub, please refer to documentation in https://data.planetos.com/datasets", "# Initialize notebook environment.\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# Import Planet OS API\nfrom API_client.python.datahub import datahub_main\nfrom API_client.python.lib.dataset import dataset\nfrom API_client.python import package_api", "Initialize dataset, print available variable names. In order to be able to access the data, save your API key in the file APIKEY and put it into the same folder where you run this notebook.", "apikey = open('APIKEY').readlines()[0].strip()\ndh = datahub_main(apikey)\nds = dataset('ecmwf_era5',dh,debug=True)\n\nds.variable_names()\n\n# Choose location, variable\nlon = 26\nlat = 58\nvariable = '2_metre_temperature_surface'\n\nd1 = ds.get_json_data_in_pandas(count=100000,**{'vars':'2_metre_temperature_surface','lon':lon,'lat':lat})\n\nfig=plt.figure()\nplt.plot(d1['time'],d1['2_metre_temperature_surface']-273.15)\nplt.ylabel('Temperature $C \\degree$')\nplt.grid()\nfig.autofmt_xdate()\nplt.show()", "Wind energy example\nAs a more advanced use case, let's try to analyse potential wind energy production in three different locations. 
\nFor this, we take wind speed at 100 m height, apply a simplified energy production curve to hourly data, and get a statistics for hourly and weekly average (wa) data.", "def production_curve(wind_speed):\n \"\"\" Production curve of a turbine/farm can be simplified as a \n cube of wind speed, with production \n starting from a particular wind speed 'v_cut_in'\n getting maximum output power at wind speed 'v_rated'\n production halted at very strong wind speeds 'v_cut_off'\n Note that this simplified approach is useful for comparing places only.\n \"\"\"\n v_cut_in = 5\n v_rated = 15\n v_cut_off = 25\n max_production = 15**3\n rt = np.zeros_like(wind_speed)\n rt = np.where(wind_speed>v_cut_in, wind_speed**3,wind_speed)\n rt = np.where(wind_speed>v_rated,max_production,rt)\n rt = np.where(wind_speed>v_cut_off,0,rt)\n return rt\n \ndef wind_production_smooth(wspd):\n \"\"\"Weekly average production as a moving average\"\"\"\n def moving_average(a, n=3) :\n ret = np.cumsum(a, dtype=float)\n ret[n:] = ret[n:] - ret[:-n]\n return ret[n - 1:] / n\n return moving_average(np.array(wspd), n=7*24)\n\ndef station_statistics(lonlats, count = 100):\n \"\"\"\n Compute min, max, mean, 5'th and 95'th percentiles for both \n hourly values and weekly averages\n \"\"\"\n ddwd = [(name,ds.get_json_data_in_pandas(count=count, \n **{'vars':'100_metre_U_wind_component_surface,100_metre_V_wind_component_surface',\n 'lon':lon,\n 'lat':lat})) for name,lon,lat in lonlats]\n wspds = [(name,np.sqrt(dd['100_metre_U_wind_component_surface']**2 + dd['100_metre_V_wind_component_surface']**2))\n for name,dd in ddwd]\n pcurves = [(name,production_curve(dd)) for name, dd in wspds]\n weekly_ave = [wind_production_smooth(dd) for name,dd in pcurves] \n retdic = {}\n for dd, wa in zip(pcurves,weekly_ave):\n retdic[dd[0]] = (np.amin(dd[1]),np.amax(dd[1]),np.mean(dd[1]),np.percentile(dd[1],5),np.percentile(dd[1],95),\n np.amin(wa), np.amax(wa), np.mean(wa), np.percentile(wa,5), np.percentile(wa,95),\n 
np.sum(dd[1])) \n \n return pd.DataFrame(retdic, index=['min','max','mean','5 percentile', '95 percentile',\n 'wa min','wa max','wa mean','wa 5 percentile', 'wa 95 percentile', 'sum'])\n\n## Power curve demo\nx = np.arange(0,30, 0.2)\nfig = plt.figure()\nplt.plot(x,production_curve(x))\nplt.xlabel('Wind speed m/s')\nplt.ylabel('Output power')\nplt.title(\"Idealized turbine production curve\")\nplt.show()\n\nstations = [('Hiiumaa',22.1, 59), ## planned off-shore wind farm in North-West Estonia\n ('Tõravere',26+28/60, 58+16/60), ## Inland climate station in South-East Estonia\n ('GYM',-3 -35/60, 53+27/60)] ## existing off-shore wind farm west to Scotland\nabc = station_statistics(stations,count=10000)\n\nabc.transpose()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
AssembleSoftware/IoTPy
examples/ExamplesOfSplit.ipynb
bsd-3-clause
[ "Examples of split\nA split agent has a single input stream and two or more output streams.", "import os\nimport sys\nsys.path.append(\"../\")\n\nfrom IoTPy.core.stream import Stream, run\nfrom IoTPy.agent_types.split import split_element, split_list, split_window\nfrom IoTPy.agent_types.split import unzip, separate, timed_unzip\nfrom IoTPy.agent_types.basics import split_e, fsplit_2e\nfrom IoTPy.helper_functions.recent_values import recent_values", "split_element\n<b>split_element(func, in_stream, out_streams)</b>\n<br>\n<br>\nwhere\n<ol>\n <li><b>func</b> is a function with an argument which is an element of a single input stream and that returns a list with one element for each out_stream. <i>func</i> may have additional keyword arguments and may also have a state.</li>\n <li><b>in_stream</b> is a single input stream.</li>\n <li><b>out_streams</b> is a list of output streams.</li>\n</ol>\nIn the example below, <i>func</i> is <i>f</i> which takes a single argument v (an element of the input stream) and returns a list of two values, one value for each of two output streams.\n<br>\nThe agent split_element has a single input stream, <b>x</b> and a list <b>[y, z]</b> of output streams. The list of output streams correspond to the list of values returned by f. \n<br>\n<br>\n<b>y[n], z[n] = f(x[n])</b>\n<br>\n<br>\nIn this example, \n<br>\ny[n] = x[n]+100 and z[n] = x[n]*2\n<br>\nCode\nThe code creates streams, x, y, and z, creates the split_element agent, and extends stream x. Calling run() executes a step in which all specified agents execute until all inputs have been processed. 
Then recent values of the output streams are printed.", "def simple_example_of_split_element():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n def f(v): return [v+100, v*2]\n\n # Create agent with input stream x and output streams y, z.\n split_element(func=f, in_stream=x, out_streams=[y,z])\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n print ('Finished first run')\n \n # Put more test values in the input streams.\n x.extend(list(range(100, 105)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n print ('Finished second run.')\n\nsimple_example_of_split_element()", "Using Lambda Expressions\nLambda expressions in split_element can be convenient as shown in this example which is essentially the same as the previous one.", "def example_of_split_element_with_lambda():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Create agent with input stream x and output streams y, z.\n split_element(lambda v: [v+100, v*2], x, [y,z])\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\nexample_of_split_element_with_lambda()", "Example of the decorator @split_e\nThe decorator <b>@split_e</b> operates the same as split_element, except that the agent is created by calling the decorated function.\n<br>\nCompare this example with the first example which used <i>split_element</i>. 
The two examples are almost identical. The difference is in the way that the agent is created. In this example, the agent is created by calling (the decorated) function <i>f</i> whereas in the previous example, the agent was created by calling <i>split_element</i>.", "def simple_example_of_split_e():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n @split_e\n def f(v): return [v+100, v*2]\n\n # Create agent with input stream x and output streams y, z.\n f(in_stream=x, out_streams=[y,z])\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nsimple_example_of_split_e()", "Example of functional forms\nYou may want to use a function that returns the streams resulting from a split instead of having the streams specified in out_streams, i.e. you may prefer to write:\n<br>\n<br>\na, b, c = h(u)\n<br>\n<br>\nwhere <i>u</i> is a stream that is split into streams <i>a</i>, <i>b</i>, and <i>c</i>, \ninstead of writing:\n<br>\n<br>\nh(in_stream=u, out_streams=[a, b, c])\n<br>\n<br>\nThis example illustrates how a functional form can be specified and used. Function <i>h</i> creates and returns the three streams <i>x</i>, <i>y</i>, and <i>z</i>. 
Calling the function creates a <i>split_element</i> agent.", "def simple_example_of_functional_form():\n\n # ------------------------------------------------------\n # Specifying a functional form\n # The functional form takes a single input stream and returns\n # three streams.\n def h(w):\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n def f(v): return [v+100, v*2, v**2]\n\n # Create agent with input stream x and output streams y, z.\n split_element(func=f, in_stream=w, out_streams=[x,y,z])\n\n # Return streams created by this function.\n return x, y, z\n # ------------------------------------------------------\n\n # Using the functional form.\n # Specify streams\n w = Stream('w')\n\n # Create agent with input stream x and output streams a, b, c.\n a, b, c = h(w)\n \n # Put test values in the input streams.\n w.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream a are')\n print (recent_values(a))\n print ('recent values of stream b are')\n print (recent_values(b))\n print ('recent values of stream c are')\n print (recent_values(c))\n\nsimple_example_of_functional_form()", "Example with keyword arguments\nThis example shows how to pass keyword arguments to <i>split_element</i>. 
In the example, <i>addend</i> and <i>multiplicand</i> are arguments of <i>f</i> the encapsulated function, and these arguments are passed as keyword arguments to <i>split_element</i>.", "def example_of_split_element_with_keyword_args():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n def f(v, addend, multiplicand): \n return [v+addend, v*multiplicand]\n\n # Create agent with input stream x and output streams y, z.\n split_element(func=f, in_stream=x, out_streams=[y,z], addend=100, multiplicand=2)\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nexample_of_split_element_with_keyword_args()", "Split element with state\nThis example shows how to create an agent with state. The encapsulated function takes two arguments --- an element of the input stream and a <b>state</b> --- and it returns two values: a list of elements corresponding to the output streams and the <b>next state</b>. The function may have additional arguments which are passed as keyword arguments to <i>split_element</i>.\n<br>\n<br>\nThe call <i>split_element(...)</i> to create the agent must have a keyword argument called <b>state</b> with its initial value. For example:\n<br>\nsplit_element(func=f, in_stream=x, out_streams=[y,z], <b>state=0</b>)\n<br>\nIn this example, the sequence of values of <i>state</i> is 0, 1, 2, .... 
which is also the sequence of values of the input stream and hence also of <i>v</i>.", "def example_of_split_element_with_state():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n def f(v, state):\n next_state = state+1\n return ([v+state, v*state], next_state)\n\n # Create agent with input stream x and output streams y, z.\n split_element(func=f, in_stream=x, out_streams=[y,z], state=0)\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nexample_of_split_element_with_state()", "Example with state and keyword arguments\nThis example shows an encapsulated function with a state and an argument called <i>state_increment</i> which is passed as a keyword argument to <i>split_element</i>.", "def example_of_split_element_with_state_and_keyword_args():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n def f(v, state, state_increment):\n next_state = state + state_increment\n return ([v+state, v*state], next_state)\n\n # Create agent with input stream x and output streams y, z.\n split_element(func=f, in_stream=x, out_streams=[y,z], state=0, state_increment=10)\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nexample_of_split_element_with_state_and_keyword_args()", "Example with StreamArray and NumPy arrays", "import numpy as np\nfrom IoTPy.core.stream import StreamArray\n\ndef example_of_split_element_with_stream_array():\n # Specify streams\n x = StreamArray('x')\n y = StreamArray('y')\n z = 
StreamArray('z')\n\n # Specify encapsulated functions\n def f(v, addend, multiplier):\n return [v+addend, v*multiplier]\n\n # Create agent with input stream x and output streams y, z.\n split_element(func=f, in_stream=x, out_streams=[y,z],\n addend=1.0, multiplier=2.0)\n \n # Put test values in the input streams.\n A = np.linspace(0.0, 4.0, 5)\n x.extend(A)\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n assert np.array_equal(recent_values(y), A + 1.0)\n assert np.array_equal(recent_values(z), A * 2.0)\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nexample_of_split_element_with_stream_array()", "Example of split list\nsplit_list is the same as split_element except that the encapsulated function operates on a <i>list</i> of elements of the input stream rather than on a single element. Operating on a list can be more efficient than operating sequentially on each of the elements of the list. This is especially important when working with arrays.\n<br>\n<br>\nIn this example, f operates on a list, <i>lst</i> of elements, and has keyword arguments <i>addend</i> and <i>multiplier</i>. 
It returns two lists corresponding to two output streams of the agent.", "def example_of_split_list():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n def f(lst, addend, multiplier):\n return ([v+addend for v in lst], [v*multiplier for v in lst])\n\n # Create agent with input stream x and output streams y, z.\n split_list(func=f, in_stream=x, out_streams=[y,z], addend=100, multiplier=2)\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nexample_of_split_list()", "Example of split list with arrays\nIn this example, the encapsulated function <i>f</i> operates on an array <i>a</i> which is a segment of the input stream array, <i>x</i>. The operations in <i>f</i> are array operations (not list operations). 
For example, the result of <i>a * multiplier </i> is specified by numpy multiplication of an array with a scalar.", "def example_of_split_list_with_arrays():\n # Specify streams\n x = StreamArray('x')\n y = StreamArray('y')\n z = StreamArray('z')\n\n # Specify encapsulated functions\n def f(a, addend, multiplier):\n # a is an array\n # return two arrays.\n return (a + addend, a * multiplier)\n\n # Create agent with input stream x and output streams y, z.\n split_list(func=f, in_stream=x, out_streams=[y,z], addend=100, multiplier=2)\n \n # Put test values in the input streams.\n x.extend(np.arange(5.0))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nexample_of_split_list_with_arrays()", "Test of unzip\nunzip is the opposite of zip_stream.\n<br>\n<br>\nAn element of the input stream is a list or tuple whose length is the same as the number of output streams; the <i>j</i>-th element of the list is placed in the <i>j</i>-th output stream.\n<br>\n<br>\nIn this example, when the unzip agent receives the triple (1, 10, 100) on the input stream <i>w</i> it puts 1 on stream <i>x</i>, and 10 on stream <i>y</i>, and 100 on stream <i>z</i>.", "def simple_test_unzip():\n # Specify streams\n w = Stream('w')\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Create agent with input stream x and output streams y, z.\n unzip(in_stream=w, out_streams=[x,y,z])\n \n # Put test values in the input streams.\n w.extend([(1, 10, 100), (2, 20, 200), (3, 30, 300)])\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream x are')\n print (recent_values(x))\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nsimple_test_unzip()", "Example of separate\n<b>separate</b> is the opposite of 
<b>mix</b>.\n<br>\nThe elements of the input stream are pairs (index, value). When a pair <i>(i,v)</i> arrives on the input stream the value <i>v</i> is appended to the <i>i</i>-th output stream.\n<br>\n<br>\nIn this example, when (0, 1) and (2, 100) arrive on the input stream <i>x</i>, the value 1 is appended to the 0-th output stream which is <i>y</i> and the value 100 is appended to output stream indexed 2 which is stream <i>w</i>.", "def simple_test_separate():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n w = Stream('w')\n\n # Create agent with input stream x and output streams y, z.\n separate(in_stream=x, out_streams=[y,z,w])\n \n # Put test values in the input streams.\n x.extend([(0,1), (2, 100), (0, 2), (1, 10), (1, 20)])\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n print ('recent values of stream w are')\n print (recent_values(w))\n\nsimple_test_separate()", "Example of separate with stream arrays.\nThis is the same example as the previous case. The only difference is that since the elements of the input stream are pairs, the dimension of <i>x</i> is 2.", "def test_separate_with_stream_array():\n # Specify streams\n x = StreamArray('x', dimension=2)\n y = StreamArray('y')\n z = StreamArray('z')\n\n # Create agent with input stream x and output streams y, z.\n separate(in_stream=x, out_streams=[y,z])\n \n # Put test values in the input streams.\n x.extend(np.array([[1.0, 10.0], [0.0, 2.0], [1.0, 20.0], [0.0, 4.0]]))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\ntest_separate_with_stream_array()", "Example of split window\nThe input stream is broken up into windows. 
In this example, with <i>window_size</i>=2 and <i>step_size</i>=2, the sequence of windows are <i>x[0, 1], x[2, 3], x[4, 5], ....</i>.\n<br>\n<br>\nThe encapsulated function operates on a window and returns <i>n</i> values where <i>n</i> is the number of output streams. In this example, max(window) is appended to the output stream with index 0, i.e. stream <i>y</i>, and min(window) is appended to the output stream with index 1, i.e., stream <i>z</i>.\n<br>\n<br>\nNote: You can also use the lambda function as in:\n<br>\nsplit_window(lambda window: (max(window), min(window)), x, [y,z], 2, 2)", "def simple_example_of_split_window():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Specify encapsulated functions\n def f(window): return (max(window), min(window))\n\n # Create agent with input stream x and output streams y, z.\n split_window(func=f, in_stream=x, out_streams=[y,z],\n window_size=2, step_size=2)\n \n # Put test values in the input streams.\n x.extend(list(range(5)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\nsimple_example_of_split_window()", "Example that illustrates zip followed by unzip is the identity.\nzip_stream followed by unzip returns the initial streams.", "from IoTPy.agent_types.merge import zip_stream\ndef example_zip_plus_unzip():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n u = Stream('u')\n v = Stream('v')\n\n # Create agents\n zip_stream(in_streams=[x,y], out_stream=z)\n unzip(in_stream=z, out_streams=[u,v])\n \n # Put test values in the input streams.\n x.extend(['A', 'B', 'C'])\n y.extend(list(range(100, 1000, 100)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream u are')\n print (recent_values(u))\n print ('recent values of stream v are')\n print 
(recent_values(v))\n\nexample_zip_plus_unzip()", "Example that illustrates that mix followed by separate is the identity.", "from IoTPy.agent_types.merge import mix\ndef example_mix_plus_separate():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n u = Stream('u')\n v = Stream('v')\n\n # Create agents\n mix(in_streams=[x,y], out_stream=z)\n separate(in_stream=z, out_streams=[u,v])\n \n # Put test values in the input streams.\n x.extend(['A', 'B', 'C'])\n y.extend(list(range(100, 1000, 100)))\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream u are')\n print (recent_values(u))\n print ('recent values of stream v are')\n print (recent_values(v))\n\nexample_mix_plus_separate()", "Simple example of timed_unzip\nAn element of the input stream is a pair (timestamp, list). The sequence of timestamps must be increasing. The list has length n where n is the number of output streams. The m-th element of the list is the value of the m-th output stream associated with that timestamp. 
For example, if an element of the input stream <i>x</i> is (5, [\"B\", \"a\"]) then (5, \"B\") is appended to stream <i>y</i> and (5, \"a\") is appended to stream <i>z</i>.", "def test_timed_unzip():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n\n # Create agent with input stream x and output streams y, z.\n timed_unzip(in_stream=x, out_streams=[y,z])\n \n # Put test values in the input streams.\n x.extend([(1, [\"A\", None]), (5, [\"B\", \"a\"]), (7, [None, \"b\"]),\n (9, [\"C\", \"c\"]), (10, [None, \"d\"])])\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream y are')\n print (recent_values(y))\n print ('recent values of stream z are')\n print (recent_values(z))\n\ntest_timed_unzip()", "Example that illustrates that timed_zip followed by timed_unzip is the identity.", "from IoTPy.agent_types.merge import timed_zip\ndef test_timed_zip_plus_timed_unzip():\n # Specify streams\n x = Stream('x')\n y = Stream('y')\n z = Stream('z')\n u = Stream('u')\n v = Stream('v')\n\n # Create agents\n timed_zip(in_streams=[x,y], out_stream=z)\n timed_unzip(in_stream=z, out_streams=[u,v])\n \n # Put test values in the input streams.\n x.extend([[1, 'a'], [3, 'b'], [10, 'd'], [15, 'e'], [17, 'f']])\n y.extend([[2, 'A'], [3, 'B'], [9, 'D'], [20, 'E']])\n\n # Execute a step\n run()\n\n # Look at recent values of streams.\n print ('recent values of stream u are')\n print (recent_values(u))\n print ('recent values of stream v are')\n print (recent_values(v))\n\ntest_timed_zip_plus_timed_unzip()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/csir-csiro/cmip6/models/vresm-1-0/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: CSIR-CSIRO\nSource ID: VRESM-1-0\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:54\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. 
Key Properties --&gt; Model\nName of sea ice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variables in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specified for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involve flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontally discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. 
Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but assume a distribution and compute fluxes accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. 
Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. 
Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. 
Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
moonbury/pythonanywhere
github/MasteringMatplotlib/mmpl-high-level.ipynb
gpl-3.0
[ "High-level Plotting and Data Analysis\nIn the following sections of this IPython Notebook we will be looking at the following:\n\nHigh-level plotting\nHistorical background\nmatplotlib\nNetworkX\nPandas\nGrammar of graphics\nNew styles in matplotlib\nBokeh\nggplot by ŷhat\nSeaborn\nData analysis\nPandas, SciPy, and Seaborn\nExamining and shaping a data set\nAnalysis of Temperature, 1894-2013\nAnalysis of Precipitation, 1894-2013\n\nWarm-up procedures:", "import matplotlib\nmatplotlib.use('nbagg')\n%matplotlib inline", "Let's continue with the necessary imports:", "import calendar\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport networkx as nx\nfrom scipy import stats\nimport pandas as pd \nimport statsmodels.api as sm \nfrom typecheck import typecheck\n\nimport sys\nsys.path.append(\"../lib\")", "High-level plotting\nFor our purposes, we will define high-level plotting as anything in the matplotlib world which:\n\nUtilizes matplotlib APIs under the covers\nIn order to accomplish complicated plotting tasks\nAnd provides an API for those tasks to the user, to employ either in conjunction with matplotlib directly, or instead of it (using matplotlib under the covers)\n\nWe will examine a couple of examples in the following sections.\nNote that some obvious library choices are not discussed in this context as they will be used in different, subsequent sections.\nBackground\nThe world of data visualization has an interesting and eclectic history, covered quite nicely in Michael Friendly's paper Milestones in the History of Data Visualization: A Case Study in Statistical Historiography with a related and quite wonderful graphic outline and dedicated site.\nBelow we have provided some highlights from the history of data visualization.\n1644 - Michael Florent van Langren\nCredited with the first visual representation of statistical data, for his graph of 12 contemporary estimates for the distance from Toledo to Rome:\n<img 
src=\"B02036_05_01.png\" width=\"1000\"/>\n1686 - Edmond Halley\nA theoretical plot predicting barometric pressure vs. altitude, derived from experimental observation:\n<img src=\"B02036_05_02.png\" width=\"1000\" />\n1786 - William Playfair\nThe first line graph, depicting English imports to and from Denmark and Norway over the course of 80 years:\n<img src=\"B02036_05_03.png\" width=\"1000\"/>\n1869 - Charles Minard\nA map of Napoleon's Russian Campaign of 1812 combined with a Sankey diagram depicting the successive losses of soldiers in the French Army. Note that this Sankey diagram actually predates the steam engine diagram of 1898 by Matthew Henry Phineas Riall Sankey, for whom the diagram is named.\n<img src=\"B02036_05_03.5.png\" width=\"1000\"/>\nmatplotlib\nIn matplotlib, the Sankey class provides what is probably one of the better examples of high-level plotting in the base library. Here's a (slightly) modified version of a matplotlib.sankey demonstration from the matplotlib gallery:", "from matplotlib import sankey\n\nfig = plt.figure(figsize=(18, 22))\nax = fig.add_subplot(\n 1, 1, 1,\n xticks=[], yticks=[])\nax.set_title((\"Rankine Power Cycle\\nExample 8.6 from Moran and Shapiro\\n\"\n \"$Fundamentals \\ of \\ Engineering \\ Thermodynamics$, \"\n \"6th ed., 2008\"),\n fontsize=\"30\")\nHdot = [260.431, 35.078, 180.794, 221.115, 22.700,\n 142.361, 10.193, 10.210, 43.670, 44.312,\n 68.631, 10.758, 10.758, 0.017, 0.642,\n 232.121, 44.559, 100.613, 132.168]\nsnky = sankey.Sankey(\n ax=ax, format='%.3G', unit=' MW', gap=0.5, scale=1.0/Hdot[0], margin=0.6, shoulder=0.03, radius=0.25)\nsnky.add(patchlabel='\\n\\nPump 1', rotation=90, facecolor='#8ab88a', linewidth=0,\n flows=[Hdot[13], Hdot[6], -Hdot[7]],\n labels=['Shaft power', '', None],\n pathlengths=[0.4, 0.883, 0.25],\n orientations=[1, -1, 0])\nsnky.add(patchlabel='\\n\\nOpen\\nheater', facecolor='#8ab88a', linewidth=0,\n flows=[Hdot[11], Hdot[7], Hdot[4], -Hdot[8]],\n labels=[None, '', None, None],\n 
pathlengths=[0.25, 0.25, 1.93, 0.25],\n orientations=[1, 0, -1, 0], prior=0, connect=(2, 1))\nsnky.add(patchlabel='\\n\\nPump 2', facecolor='#8ab88a', linewidth=0,\n flows=[Hdot[14], Hdot[8], -Hdot[9]],\n labels=['\\nShaft power', '', None],\n pathlengths=[0.4, 0.25, 0.25],\n orientations=[1, 0, 0], prior=1, connect=(3, 1))\nsnky.add(patchlabel='Closed\\nheater', trunklength=2.914, fc='#8ab88a', linewidth=0,\n flows=[Hdot[9], Hdot[1], -Hdot[11], -Hdot[10]],\n pathlengths=[0.25, 1.543, 0.25, 0.25],\n labels=['', '', None, None],\n orientations=[0, -1, 1, -1], prior=2, connect=(2, 0))\nsnky.add(patchlabel='Trap', facecolor='#8ab88a', linewidth=0, trunklength=5.102,\n flows=[Hdot[11], -Hdot[12]],\n labels=['\\n', None],\n pathlengths=[1.0, 1.01],\n orientations=[1, 1], prior=3, connect=(2, 0))\nsnky.add(patchlabel='Steam\\ngenerator', facecolor='#d9a4a3', linewidth=0,\n flows=[Hdot[15], Hdot[10], Hdot[2], -Hdot[3], -Hdot[0]],\n labels=['Heat rate', '', '', None, None],\n pathlengths=0.25,\n orientations=[1, 0, -1, -1, -1], prior=3, connect=(3, 1))\nsnky.add(patchlabel='\\n\\n\\nTurbine 1\\n', facecolor='#8ab88a', linewidth=0,\n flows=[Hdot[0], -Hdot[16], -Hdot[1], -Hdot[2]],\n labels=['', None, None, None],\n pathlengths=[0.25, 0.153, 1.543, 0.25],\n orientations=[0, 1, -1, -1], prior=5, connect=(4, 0))\nsnky.add(patchlabel='\\n\\n\\nReheat', facecolor='#8ab88a', linewidth=0,\n flows=[Hdot[2], -Hdot[2]],\n labels=[None, None],\n pathlengths=[0.725, 0.25],\n orientations=[-1, 0], prior=6, connect=(3, 0))\nsnky.add(patchlabel='Turbine 2', trunklength=3.212, facecolor='#8ab88a', linewidth=0,\n flows=[Hdot[3], Hdot[16], -Hdot[5], -Hdot[4], -Hdot[17]],\n labels=[None, 'Shaft power', None, '', 'Shaft\\npower'],\n pathlengths=[0.751, 0.15, 0.25, 1.93, 0.25],\n orientations=[0, -1, 0, -1, 1], prior=6, connect=(1, 1))\nsnky.add(patchlabel='Condenser', facecolor='#a7d1de', linewidth=0, trunklength=1.764,\n flows=[Hdot[5], -Hdot[18], -Hdot[6]],\n labels=['', 'Heat rate', 
None],\n pathlengths=[0.45, 0.25, 0.883],\n orientations=[-1, 1, 0], prior=8, connect=(2, 0))\ndiagrams = snky.finish()\n\nfor diagram in diagrams:\n diagram.text.set_fontweight('bold')\n diagram.text.set_fontsize('22')\n for text in diagram.texts:\n text.set_fontsize('22')\n \n# Notice that the explicit connections are handled automatically, but the\n# implicit ones currently are not. The lengths of the paths and the trunks\n# must be adjusted manually, and that is a bit tricky.\n\nplt.show()", "A more common example is code you would write in your own projects. For instance, if you recall from the notebook covering the matplotlib APIs, we created a module to demonstrate the object-oriented API for programmatic workflows. That little module is a perfect example of high-level plotting with matplotlib.\nNetworkX\nAs we saw in the architecture notebook, NetworkX provides high-level matplotlib plotting support for its graphs. Below is another example, this one adapted from the NetworkX gallery. 
Let's look at the code comments in particular right now:", "import lanl\nfrom networkx.drawing.nx_agraph import graphviz_layout\n\n# Set up the plot's figure instance\nplt.figure(figsize=(14,14))\n\n# Generate the data graph structure representing the route relationships\nG = lanl.get_routes_graph(debug=True)\n\n# Perform the high-level plotting operations in NetworkX\npos = graphviz_layout(G, prog=\"twopi\", root=0)\nnx.draw(G, pos,\n node_color=[G.rtt[v] for v in G],\n with_labels=False,\n alpha=0.5,\n node_size=70)\n\n# Update the ranges\nxmax = 1.02 * max(xx for xx, _ in pos.values())\nymax = 1.02 * max(yy for _, yy in pos.values())\n\n# Final matplotlib tweaks and rendering\nplt.xlim(0, xmax)\nplt.ylim(0, ymax)\nplt.show()", "Like we saw when creating a graph that would neatly render the matplotlib module layout in accordance with the philosophy of its architecture, Aric Hagberg had to do something similar when rendering the Internet routes from Los Alamos National Laboratory. We've put this code in the lanl module for this notebook repo; it's where all the logic is defined for converting the route data to graph relationships.\nWe can see how NetworkX acts as a high-level plotting library by taking a look at some of the functions and related objects we used above. Let's start with the layout function. NetworkX provides several possible graph library backends, and to do so in a manner that makes it easier for the end user, some of the imports can be quite obscured. Let's get the location of the graphviz_layout function the easy way:", "graphviz_layout", "Taking a look at that file, we can see that graphviz_layout wraps the pygraphviz_layout function. From there, we see that NetworkX is converting pygraphviz's node data structure to something general that can be used for all backends. 
We're already several layers deep in NetworkX's high-level API internals.\nNext, let's take a look at the function that uses this node data, nx.draw:", "nx.draw", "Now we're getting close to matplotlib! The nx_pylab module's draw function makes direct use of matplotlib.pyplot in order to:\n\nGet the current figure from pyplot\nOr, if it exists, from the axes object\nHold and un-hold the matplotlib figures\nCall a matplotlib draw function\n\nIt also makes a subsequent call to the NetworkX graph backend to draw the actual edges and nodes. These additional calls get node, edge, and label data and make further calls to matplotlib draw functions. We don't have to do any of this ourselves; we simply call nx.draw (with appropriate parameters).\nPandas\nThe next example is from a library whose purpose is to provide Python users and developers extensive support for high-level data analysis. Pandas offers several highly performant data structures for this purpose, in large part built around the NumPy scientific computing library. 
Of these, Pandas incorporates plotting functionality into the Series and DataFrame data structures.\nLet's take a look at a DataFrame example where we generate some random data and then utilize the plot function made available on DataFrame (taken from the Pandas documentation on visualization, with adaptations).\nRandom data samples, using a Rayleigh distribution:", "from scipy.stats import norm, rayleigh\n\na = rayleigh.rvs(loc=5, scale=2, size=1000) + 1\nb = rayleigh.rvs(loc=5, scale=2, size=1000)\nc = rayleigh.rvs(loc=5, scale=2, size=1000) - 1", "Create a Pandas data structure instance:", "data = pd.DataFrame({\"a\": a, \"b\": b, \"c\": c}, columns=[\"a\", \"b\", \"c\"])", "Let's take a look at a histogram plot of this data:", "axes = data.plot(kind=\"hist\", stacked=True, bins=30, figsize=(16, 8))\naxes.set_title(\"Fabricated Wind Speed Data\", fontsize=20)\naxes.set_xlabel(\"Mean Hourly Wind Speed (km/hr)\", fontsize=16)\n_ = axes.set_ylabel(\"Velocity Counts\", fontsize=16)", "So what's going on here? Well, if you take a dive into the Pandas codebase, you'll find that there are wrappers in pandas.tools.plotting that do a bunch of work under the covers to make plotting from the DataFrame object an exercise in simplicity. In particular, look at plotting.plot_frame and plotting._plot.\nGrammar of graphics\nThe Grammar of Graphics has done for the world of statistical data plotting and visualization what Design Patterns did for a subset of programming, and A Pattern Language did for architecture and urban design. The Grammar of Graphics explores the space of data, its graphical representation, the human minds that view these, and the ways in which these are connected, both obviously and subtly. 
The book provides a conceptual framework for the cognitive analysis of our statistical tools and how we can make them better, allowing us to ultimately create visualizations that are clearer, more meaningful, and more revealing of the underlying problem space.\nThe first software implementation that was inspired by the Grammar of Graphics was SPSS's nViZn (based on work done in 1996). This was followed by:\n * R's ggplot by Hadley Wickham (2005)\n * R's ggplot2 also by Hadley Wickham, a complete rewrite of ggplot (2007)\n * Python's Bokeh by Peter Wang (2012; the first commit had a Python ggplot module)\n * Python's Seaborn by Michael Waskom (2012)\n * Python's ggplot by ŷhat (2013; originally named yagg, \"yet another ggplot\")\n * matplotlib released with support for a ggplot style (2014)\nLet's take a look at the Python ones, starting with Bokeh.\nBokeh\nOne of the first Python libraries to explore the space of Grammar of Graphics is the Bokeh project. In many ways, Bokeh views itself as a natural successor to matplotlib, offering its view of improvements in overall architecture, scalability, size of problem set data, APIs, and usability. In contrast to matplotlib, Bokeh focuses its attention on the web browser.\nThe Bokeh project gallery is full of very interesting examples, but one of the most unusual and aesthetically pleasing is the re-creation of the Will Burtin antibiotics chart (a copy of the 1951 original diagram is available in this pdf).\nLet's set up Bokeh for use in an IPython notebook:", "from bokeh.plotting import output_notebook\noutput_notebook()", "The Bokeh example has been saved to this notebook's repo (and modified slightly); you can load it up with the following:", "import burtin", "Since this series of notebooks is focused on matplotlib (and not Bokeh!), we won't go into too much detail, but it is definitely worth mentioning that Bokeh provides a matplotlib compatibility layer. 
It doesn't cover 100% of all matplotlib API usage a given project may entail, but enough so that one should be able to very easily incorporate Bokeh into existing matplotlib projects.\nŷhat ggplot\nThe folks at ŷhat have a great reputation for expertise in machine learning and statistical computing in general. They are users of not only the R programming language, but Python and Julia as well. They created ggplot for Python because they really wanted to have the R ggplot2 experience in Python, not just use libraries that were inspired by it. From the github project README, ggplot has the following goals:\n * same API as ggplot2 for R\n * ability to use both American and British English spellings of aesthetics\n * tight integration with Pandas\n * pip installable\nIn particular, they wanted anyone coming from R to Python to have a nearly identical API experience. We can see this reflected in the code they shared in their blog post about it:\nExample usage in R\n```R\nlibrary(ggplot2)\nggplot(meat, aes(date,beef)) + \n    geom_line(colour='black') + \n    scale_x_date(breaks=date_breaks('7 years'),labels = date_format(\"%b %Y\")) + \n    scale_y_continuous(labels=comma)\n```\nIn Python\n```python\nfrom ggplot import *\nggplot(meat, aes('date','beef')) + \\\n    geom_line(color='black') + \\\n    scale_x_date(breaks=date_breaks('7 years'), labels='%b %Y') + \\\n    scale_y_continuous(labels='comma')\n```\nIn pure matplotlib\n```python\nimport matplotlib.pyplot as plt\nfrom matplotlib.dates import YearLocator, DateFormatter\nfrom ggplot import meat\ntick_every_n = YearLocator(7)\ndate_formatter = DateFormatter('%b %Y')\nx = meat.date\ny = meat.beef\nfig, ax = plt.subplots()\nax.plot(x, y, 'black')\nax.xaxis.set_major_locator(tick_every_n)\nax.xaxis.set_major_formatter(date_formatter)\nfig.autofmt_xdate()\nplt.show()\n```\nFurthermore, the rendered outputs of those two are also nearly identical.\nggplot is a high-level implementation that uses matplotlib under the hood. 
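The pure-matplotlib version above pulls its `meat` dataset from the ggplot package itself. For readers without ggplot installed, here is a self-contained sketch of the same locator/formatter pattern; the monthly series and the output filename are fabricated stand-ins, not the real dataset:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
from matplotlib.dates import YearLocator, DateFormatter

# Fabricated stand-in for ggplot's bundled ``meat`` data: one value per month
dates = np.arange("1944-01", "2013-01", dtype="datetime64[M]")
values = 500 + np.random.default_rng(42).normal(0, 20, len(dates)).cumsum()

fig, ax = plt.subplots()
ax.plot(dates.astype("datetime64[D]"), values, "black")
ax.xaxis.set_major_locator(YearLocator(7))            # a tick every 7 years
ax.xaxis.set_major_formatter(DateFormatter("%b %Y"))  # e.g. "Jan 1951"
fig.autofmt_xdate()
fig.savefig("beef_trend.png")
```

The locator/formatter pair is the whole trick: ggplot's `scale_x_date(breaks=..., labels=...)` compiles down to exactly these two axis calls.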
Let's import it and try out some of their examples in IPython. Since we're not trying to adhere to the exact experience of R, we'll use explicit imports.", "import ggplot\nfrom ggplot import components, geoms, scales, stats\nfrom ggplot import exampledata", "Here's a quick look at some movie data collected over the 20th century:", "data = exampledata.movies\naesthetics = components.aes(x='year', y='budget')\n\n(ggplot.ggplot(aesthetics, data=data) +\n stats.stat_smooth(span=.15, color='red', se=True) +\n geoms.ggtitle(\"Movie Budgets over Time\") +\n geoms.xlab(\"Year\") + \n geoms.ylab(\"Dollars\"))", "Views of data collected on the cuts and qualities of diamonds:", "data = exampledata.diamonds\naesthetics = components.aes(x='price', fill='cut')\n\n(ggplot.ggplot(aesthetics, data=data) +\n geoms.geom_density(alpha=0.25) +\n geoms.facet_wrap(\"clarity\"))", "ggplot offers some nice color options that, when used effectively, can be more revealing of hidden relationships in the data. The following are examples of ggplot's support for ColorBrewer:", "aesthetics = components.aes(x='price', y='carat', color='clarity')\nplot = ggplot.ggplot(aesthetics, data=data)\npoint = geoms.geom_point(alpha=0.6)\n(plot +\n point +\n scales.scale_color_brewer(type='qual', palette='Set1'))\n\n(plot +\n point +\n scales.scale_color_brewer(type='seq', palette='YlOrRd'))\n\n(plot +\n point +\n scales.scale_color_brewer(type='div', palette='BrBg'))", "There is a great deal more to explore with ggplot. Be sure to check out:\n * the main site\n * the docs\n * the Github repo\nNew styles in matplotlib\nmatplotlib has recently added support for ggplot styling. Let's contrast it with the defaults. 
First, import the demo code:", "import mplggplot", "Now let's take a look at the default styling of some plots:", "figure, axes = plt.subplots(ncols=2, nrows=2, figsize=(10, 10))\nmplggplot.demo(axes)\nplt.show()", "Though we are focusing on the ggplot style here, there are actually several styles to choose from. Here's the list of the available ones:", "plt.style.available", "Now we'll select \"ggplot\" and re-render our demo plots:", "plt.style.use('ggplot')\n\nfigure, axes = plt.subplots(ncols=2, nrows=2, figsize=(10, 10))\nmplggplot.demo(axes)\nplt.show()", "It's not really a wonder that ggplot has so much appeal in the community :-)\nSeaborn\nAs noted, the development of seaborn has been greatly inspired by the Grammar of Graphics and R's ggplot in particular. Here's what the notes say on the seaborn site:\n<blockquote>\nSeaborn aims to make visualization a central part of exploring and understanding data. The plotting functions operate on dataframes and arrays containing a whole dataset and internally perform the necessary aggregation and statistical model-fitting to produce informative plots. Seaborn’s goals are similar to those of R’s ggplot, but it takes a different approach with an imperative and object-oriented style that tries to make it straightforward to construct sophisticated plots. 
If matplotlib “tries to make easy things easy and hard things possible”, seaborn aims to make a well-defined set of hard things easy too.\n</blockquote>\n\nWe've already used seaborn in the other notebooks, so it shouldn't be too new to you now, but let's run a couple of the samples from the seaborn gallery to further expose you to some of its visual goodness.\nLet's import seaborn and set up the styles we'll use for the rest of the notebook, and then look at a few seaborn examples.", "import seaborn as sns\n\npalette_name = \"husl\"\ncolors = sns.color_palette(palette_name, 8)\ncolors.reverse()\ncmap = mpl.colors.LinearSegmentedColormap.from_list(palette_name, colors)", "Scatter Plot Matrices\nScatter plots offer a unique view on multivariate data sets, allowing one to see what sorts of correlations exist between variables, if any. Here is the famous \"iris\" data set in a seaborn pairplot scatter plot matrix which focuses on pair-wise relationships:", "sns.set()\ndata_frame = sns.load_dataset(\"iris\")\n_ = sns.pairplot(data_frame, hue=\"species\", size=2.5)", "Facet Grids\nWhen you want to split up a data set by one or more variables, and then group subplots of these separated variables, you probably want to use a facet grid. Another use case is for examining repeated runs of an experiment to reveal potentially conditional relationships between variables. Below is a concocted instance of the latter from the seaborn examples. 
It displays data from a generated data set simulating repeated observations of walking behaviour, examining positions of each step of a multi-step walk.", "import seademo\n\nsns.set(style=\"ticks\")\n\ndata = seademo.get_data_set()\n\ngrid = sns.FacetGrid(data, col=\"walk\", hue=\"walk\", col_wrap=5, size=2)\ngrid.map(plt.axhline, y=0, ls=\":\", c=\".5\")\ngrid.map(plt.plot, \"step\", \"position\", marker=\"o\", ms=4)\ngrid.set(xticks=np.arange(5), yticks=[-3, 3],\n         xlim=(-.5, 4.5), ylim=(-3.5, 3.5))\ngrid.fig.tight_layout(w_pad=1)", "Violin Plots\nViolin plots are similar to box plots (the latter of which we will see more use of below in the \"Data Analysis\" section). Where box plots display variation in samples of a statistical population with different parts of the box indicating the degree of spread (among other things), violin plots provide insight into the probability density at different values.", "sns.set(style=\"whitegrid\")\n\ndf = sns.load_dataset(\"brain_networks\", header=[0, 1, 2], index_col=0)\nused_networks = [1, 3, 4, 5, 6, 7, 8, 11, 12, 13, 16, 17]\nlevel_values = df.columns.get_level_values(\"network\").astype(int)\nused_columns = level_values.isin(used_networks)\ndf = df.loc[:, used_columns]\n\ncorr_df = df.corr().groupby(level=\"network\").mean()\ncorr_df.index = corr_df.index.astype(int)\ncorr_df = corr_df.sort_index().T\n\n(figure, axes) = plt.subplots(figsize=(14, 12))\nsns.violinplot(corr_df, color=\"Set3\", bw=0.1, cut=1.5,\n               lw=1, inner=\"stick\", inner_kws={\"ms\": 6})\naxes.set(ylim=(-0.7, 1.05))\nsns.despine(left=True, bottom=True)", "This concludes our overview of high-level plotting with regard to the topic of the Grammar of Graphics in the Python (and especially matplotlib) world.\nNext we will look at high-level plotting examples in the context of a particular data set and various methods for analyzing trends in that data.\nData analysis\nThis next section will cover the use of matplotlib and some of the related Python 
libraries from the scientific computing ecosystem in order to explore more facets of high-level plotting, but with a focus on the practical, hands-on aspect.\nPandas, SciPy, and Seaborn\nIn this section on data analysis, we will be making heavy use of the Pandas, SciPy, and Seaborn libraries. Here is a quick review of each:\n\nPandas - Python has long been great for data munging and preparation, but less so for data analysis and modeling. pandas helps fill this gap, enabling you to carry out your entire data analysis workflow in Python without having to switch to a more domain specific language like R.\nSciPy - The SciPy library is one of the core packages that make up the SciPy stack. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization; clustering; image analysis and signal processing; and statistics, among others.\nSeaborn - Seaborn aims to make visualization a central part of exploring and understanding data. The plotting functions operate on dataframes and arrays containing a whole dataset and internally perform the necessary aggregation and statistical model-fitting to produce informative plots. Seaborn’s goals are similar to those of R’s ggplot, but it takes a different approach with an imperative and object-oriented style that tries to make it straightforward to construct sophisticated plots. If matplotlib “tries to make easy things easy and hard things possible”, seaborn aims to make a well-defined set of hard things easy too.\n\nExamining and shaping a data set\nLet's do the imports we will need and set the Seaborn style for our plots:", "sns.set(style=\"darkgrid\")", "For the following sections we will be using the precipitation and temperature data for Saint Francis, Kansas, USA, from 1894 to 2013.
You can obtain CSV files for weather stations that interest you from the United States Historical Climatology Network.\nLet's load the CSV data that's been prepared for us, using the Pandas CSV converter:", "data_file = \"../data/KS147093_0563_data_only.csv\"\ndata = pd.read_csv(data_file)", "This will have read the data in and instantiated a Pandas DataFrame object, converting the first row to column data:", "data.columns", "Here's what the data set looks like (well, the first bit of it, anyway):", "data.head()", "We'd like to see the \"Month\" data as names rather than numbers, so let's update that (but let's create a copy of the original, in case we need it later). We will be using month numbers and names later, so we'll set those now as well.", "data_raw = pd.read_csv(data_file)\nmonth_nums = list(range(1, 13))\nmonth_lookup = {x: calendar.month_name[x] for x in month_nums}\nmonth_names = [x[1] for x in sorted(month_lookup.items())]\ndata[\"Month\"] = data[\"Month\"].map(month_lookup)\ndata.head()", "That's better :-)\nWe're going to use some of this data repeatedly, so let's pull those bits out:", "years = data[\"Year\"].values\ntemps_degrees = data[\"Mean Temperature (F)\"].values\nprecips_inches = data[\"Precipitation (in)\"].values", "Let's confirm the date range we're working with:", "years_min = data.get(\"Year\").min()\nyears_min\n\nyears_max = data.get(\"Year\").max()\nyears_max", "Let's get the maximum and minimum values for our mean temperature and precipitation data:", "temp_max = data.get(\"Mean Temperature (F)\").max()\ntemp_max\n\ntemp_min = data.get(\"Mean Temperature (F)\").min()\ntemp_min\n\nprecip_max = data.get(\"Precipitation (in)\").max()\nprecip_max\n\nprecip_min = data.get(\"Precipitation (in)\").min()\nprecip_min", "Next, we'll create a Pandas pivot table, providing us with a convenient view of our data (making some of our analysis tasks much easier).
If we use our converted data frame here (the one where we updated month numbers to names), our table will have the data in alphabetical order by month. As such, we'll want to use the raw data (the copy we made before converting), and only once it has been put into a pivot table (in numerical order) will we update it with month names.\nHere's how:", "temps = data_raw.pivot(\"Month\", \"Year\", \"Mean Temperature (F)\")\ntemps.index = [calendar.month_name[x] for x in temps.index]\ntemps", "Let's do the same thing for precipitation:", "precips = data_raw.pivot(\"Month\", \"Year\", \"Precipitation (in)\")\nprecips.index = [calendar.month_name[x] for x in precips.index]\nprecips", "We've extracted most of the data and views that we'll need for the following sections, which are:\n * Analysis of Temperature, 1894-2013\n * Analysis of Precipitation, 1894-2013\nWe've got everything we need, now; let's get started!\nAnalysis of Temperature, 1894-2013\nWe're going to be analyzing temperatures in this section; let's create an appropriate color map to use in our various plots:", "temps_colors = [\"#FCF8D4\", \"#FAEAB9\", \"#FAD873\", \"#FFA500\", \"#FF8C00\", \"#B22222\"]\nsns.palplot(temps_colors)\n\ntemps_cmap = mpl.colors.LinearSegmentedColormap.from_list(\"temp colors\", temps_colors)", "Now let's take a look at the temperature data we have:", "sns.set(style=\"ticks\")\n\n(figure, axes) = plt.subplots(figsize=(18,6))\nscatter = axes.scatter(years, temps_degrees, s=100, color=\"0.5\", alpha=0.5)\naxes.set_xlim([years_min, years_max])\naxes.set_ylim([temp_min - 5, temp_max + 5])\naxes.set_title(\"Mean Monthly Temperatures from 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\naxes.set_xlabel(\"Years\", fontsize=16)\n_ = axes.set_ylabel(\"Temperature (F)\", fontsize=16)", "Notice something? The banding around the minimum and maximum values looks to be trending upwards. The scatter plot makes it a bit hard to see, though.
We're going to need to do some work to make sure we're not just seeing things.\nSo what do we want to do?\n * get the maximum and minimum values for every year\n * find the best fit line though those points\n * examine the slopes\n * compare the slopes\nLet's do math!\nThere are a couple of conveniences we can take advantage of:\n * SciPy provides several options for linear (and polynomial!) fitting and regression\n * We can create a Pandas Series instance that represents our linear model and use it like the other Pandas objects we're working with in this section.", "def get_fit(series, m, b):\n x = series.index\n y = m * x + b\n return pd.Series(y, x)\n\ntemps_max_x = temps.max().index\ntemps_max_y = temps.max().values\ntemps_min_x = temps.min().index\ntemps_min_y = temps.min().values\n\n(temps_max_slope,\n temps_max_intercept,\n temps_max_r_value,\n temps_max_p_value,\n temps_max_std_err) = stats.linregress(temps_max_x, temps_max_y) \ntemps_max_fit = get_fit(temps.max(), temps_max_slope, temps_max_intercept)\n\n(temps_min_slope,\n temps_min_intercept,\n temps_min_r_value,\n temps_min_p_value,\n temps_min_std_err) = stats.linregress(temps_min_x, temps_min_y)\ntemps_min_fit = get_fit(temps.min(), temps_min_slope, temps_min_intercept)", "Let's look at the slopes of the two:", "(temps_max_slope, temps_min_slope)", "Quick refresher: the slope $m$ is defined as the change in $y$ values over the change in $x$ values:\n\\begin{align}\nm = \\frac{\\Delta y}{\\Delta x} = \\frac{\\text{vertical} \\, \\text{change} }{\\text{horizontal} \\, \\text{change} }\n\\end{align}\nIn our case, the $y$ values are the minimum and maximum mean monthly temperatures in degrees Fahrenheit; the $x$ values are the years these measurements were taken.\nThe slope for the minimum mean monthly temperatures over the last 120 years is about 3 times greater than that of the maximum mean monthly temperatures:", "temps_min_slope/temps_max_slope", "Let's go back to our scatter plot and superimpose our 
linear fits for the maximum and minimum annual means:", "(figure, axes) = plt.subplots(figsize=(18,6))\nscatter = axes.scatter(years, temps_degrees, s=100, color=\"0.5\", alpha=0.5)\ntemps_max_fit.plot(ax=axes, lw=5, color=temps_colors[5], alpha=0.7)\ntemps_min_fit.plot(ax=axes, lw=5, color=temps_colors[3], alpha=0.7)\naxes.set_xlim([years_min, years_max])\naxes.set_ylim([temp_min - 5, temp_max + 5])\naxes.set_title((\"Mean Monthly Temperatures from 1894-2013\\n\"\n \"Saint Francis, KS, USA\\n(with max and min fit)\"), fontsize=20)\naxes.set_xlabel(\"Years\", fontsize=16)\n_ = axes.set_ylabel(\"Temperature (F)\", fontsize=16)", "By looking at the gaps above and below the min and max fits, it seems like there is a greater rise in the minimums. We can get a better visual, though, by superimposing the two lines. Let's remove the vertical distance and compare:", "diff_1894 = temps_max_fit.iloc[0] - temps_min_fit.iloc[0]\ndiff_2013 = temps_max_fit.iloc[-1] - temps_min_fit.iloc[-1]\n(diff_1894, diff_2013)", "So that's the difference between the high and low for 1894 and then the difference in 2013.
As we can see, the trend over the last century for this one weather station has been a lessening in the difference between the maximum and minimum values.\nLet's shift the highs down by the difference in 2013 and compare the slopes overlaid:", "vert_shift = temps_max_fit - diff_2013\n\n(figure, axes) = plt.subplots(figsize=(18,6))\nvert_shift.plot(ax=axes, lw=5, color=temps_colors[5], alpha=0.7)\ntemps_min_fit.plot(ax=axes, lw=5, color=temps_colors[3], alpha=0.7)\naxes.set_xlim([years_min, years_max])\naxes.set_ylim([vert_shift.min() - 5, vert_shift.max() + 1])\naxes.set_title((\"Mean Monthly Temperature Difference from 1894-2013\\n\"\n \"Saint Francis, KS, USA\\n(vertical offset adjusted to converge at 2013)\"), fontsize=20)\naxes.set_xlabel(\"Years\", fontsize=16)\n_ = axes.set_ylabel(\"Temperature\\nDifference (F)\", fontsize=16)", "Now you can see the difference!\nLet's tweak our seaborn style for the next set of plots we'll be doing:", "sns.set(style=\"darkgrid\")", "Seaborn offers some plots that are very useful when looking at lots of data:\n * heat maps\n * cluster maps (and the normalized variant)\nLet's use the first one next, to get a sense of what the mean temperatures look like for each month over the course of the given century -- without any analysis, just a visualization of the raw data.", "(figure, axes) = plt.subplots(figsize=(17,9))\naxes.set_title((\"Heat Map\\nMean Monthly Temperatures, 1894-2013\\n\"\n \"Saint Francis, KS, USA\"), fontsize=20)\nsns.heatmap(temps, cmap=temps_cmap, cbar_kws={\"label\": \"Temperature (F)\"})\nfigure.tight_layout()", "If you want to render your plot as the book has published it, you can do the following instead:\n```python\nsns.set(font_scale=1.8)\n(figure, axes) = plt.subplots(figsize=(17,9))\naxes.set_title((\"Heat Map\\nMean Monthly Temperatures, 1894-2013\\n\"\n \"Saint Francis, KS, USA\"), fontsize=24)\nxticks = temps.columns\nkeptticks = xticks[::int(len(xticks)/36)]\nxticks = ['' for y in
xticks]\nxticks[::int(len(xticks)/36)] = keptticks\nsns.heatmap(temps, linewidth=0, xticklabels=xticks, cmap=temps_cmap,\n cbar_kws={\"label\": \"Temperature (F)\"})\nfigure.tight_layout()\n```\nGiven that this is a town in the Northern hemisphere near the 40th parallel, we don't see any surprises:\n * highest temperatures are in the summer\n * lowest temperatures are in the winter\nThere is some interesting summer banding in the 1930s which indicates several years of hotter-than-normal summers. There also seems to be a wide band of cold Decembers from 1907 through about 1932.\nNext we're going to look at Seaborn's cluster map functionality. Cluster maps of this sort are very useful in sorting out data that may have hidden (or not) hierarchical structure. We don't expect that with this data set, so this is more a demonstration of the plot than anything. However, it might have a few insights for us. We shall see.\nDue to the fact that this is a composite plot, we'll need to access subplot axes as provided by the ClusterMap class.", "clustermap = sns.clustermap(\n temps, figsize=(19, 12), cbar_kws={\"label\": \"Temperature\\n(F)\"}, cmap=temps_cmap)\n_ = clustermap.ax_col_dendrogram.set_title(\n \"Cluster Map\\nMean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=20)", "For the book version:\n```python\nsns.set(font_scale=1.5)\nxticks = temps.columns\nkeptticks = xticks[::int(len(xticks)/36)]\nxticks = ['' for y in xticks]\nxticks[::int(len(xticks)/36)] = keptticks\nclustermap = sns.clustermap(\n temps, figsize=(19, 12), linewidth=0, xticklabels=xticks,\n cmap=temps_cmap, cbar_kws={\"label\": \"Temperature\\n(F)\"})\n_ = clustermap.ax_col_dendrogram.set_title(\n \"Cluster Map\\nMean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=24)\n```\nSo here's what's happened here: while keeping the temperatures for each year together, the $x$ (years) and $y$ (months) values have been sorted/grouped to be close to those with
which they share the most similarity. Here's what we can discern from the graph with regard to our current data set:\n\nThe century's temperature patterns each year can be viewed in two groups: higher and lower temperatures.\nJanuary and December share similar low-temperature patterns, with the next-closest being February.\nThe next grouping of similar temperature patterns is November and March, sibling to the Jan/Dec/Feb grouping.\nThe last grouping of the low-temperature months is the April/October pairing.\n\nA similar analysis (with no surprises) can be done for the high-temperature months.\nLooking across the $x$-axis, we can view patterns/groupings by year. With careful tracing (ideally with a larger rendering of the cluster map), one could identify similar temperature patterns in various years. Though this doesn't reveal anything intrinsically, it could assist in additional analysis (e.g., pointing towards historical records that might explain the trends).\nThere are two distinct bands that show up for two different groups of years. However, when rendering this image at twice its current width, the banding goes away; it's an artifact of this particular resolution (and the decreased spacing between the given years).\nIn the cluster map above, we passed a value for the color map to use, the one we defined at the beginning of this section.
If we leave that out, seaborn will do something quite nice: it will normalize our data and then select a color map that highlights values above and below the mean.\nLet's try that :-)", "clustermap = sns.clustermap(\n temps, z_score=1, figsize=(19, 12),\n cbar_kws={\"label\": \"Normalized\\nTemperature (F)\"})\n_ = clustermap.ax_col_dendrogram.set_title(\n \"Normalized Cluster Map\\nMean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=20)", "For the book version:\n```python\nsns.set(font_scale=1.5)\nclustermap = sns.clustermap(\n temps, z_score=1, figsize=(19, 12), linewidth=0, xticklabels=xticks, \n cbar_kws={\"label\": \"Normalized\\nTemperature (F)\"})\n_ = clustermap.ax_col_dendrogram.set_title(\n \"Normalized Cluster Map\\nMean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=24)\n```\nNote that we get the same grouping as in the previous heat map; the internal values at each coordinate of the map (and the associated color) are all that have changed. This view offers great insight for statistical data: not only do we see the large and obvious grouping between above and below the mean, but the colors give obvious insights as to how far any given point is from the overall mean.\nWith the next plot, we're going to return to two previous plots:\n * the temperature heat map\n * with the previous scatter plot for our temperature data\nSeaborn has an option for heat maps to display a histogram above them. We will see this usage when we examine the precipitation. However, for the temperatures, counts for a year aren't quite as meaningful as the actual values for each month of that year.
As such, we will replace the standard histogram with our scatter plot:", "figure = plt.figure(figsize=(18,13))\ngrid_spec = plt.GridSpec(2, 2,\n width_ratios=[50, 1],\n height_ratios=[1, 3],\n wspace=0.05, hspace=0.05)\nscatter_axes = figure.add_subplot(grid_spec[0])\ncluster_axes = figure.add_subplot(grid_spec[2])\ncolorbar_axes = figure.add_subplot(grid_spec[3])\n\nscatter_axes.scatter(years,\n temps_degrees,\n s=40,\n c=\"0.3\",\n alpha=0.5)\nscatter_axes.set(xticks=[], ylabel=\"Yearly. Temp. (F)\")\nscatter_axes.set_xlim([years_min, years_max])\nscatter_axes.set_title(\n \"Heat Map with Scatter Plot\\nMean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=20)\nsns.heatmap(temps,\n cmap=temps_cmap,\n ax=cluster_axes,\n cbar_ax=colorbar_axes,\n cbar_kws={\"orientation\": \"vertical\"})\n_ = colorbar_axes.set(xlabel=\"Temperature\\n(F)\")", "For the book version:\n```python\nsns.set(font_scale=1.8)\nfigure = plt.figure(figsize=(18,13))\ngrid_spec = plt.GridSpec(2, 2,\n width_ratios=[50, 1],\n height_ratios=[1, 3],\n wspace=0.05, hspace=0.05)\nscatter_axes = figure.add_subplot(grid_spec[0])\ncluster_axes = figure.add_subplot(grid_spec[2])\ncolorbar_axes = figure.add_subplot(grid_spec[3])\nscatter_axes.scatter(years,\n temps_degrees,\n s=40,\n c=\"0.3\",\n alpha=0.5)\nscatter_axes.set(xticks=[], ylabel=\"Yearly. Temp. (F)\")\nscatter_axes.set_xlim([years_min, years_max])\nscatter_axes.set_title(\n \"Heat Map with Scatter Plot\\nMean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=20)\nsns.heatmap(temps,\n cmap=temps_cmap,\n ax=cluster_axes,\n linewidth=0, xticklabels=xticks, \n cbar_ax=colorbar_axes,\n cbar_kws={\"label\": \"Temperature\\n(F)\"})\n```\nNo new insights here, rather a demonstration of combining two views of the same data for easier examination and exploration of trends.\nNext, let's take a closer look at average monthly temperatures by month using a histogram matrix.\nTo do this, we'll need a new pivot. 
Our first one created a pivot with the \"Month\" data being the index; now we want to index by \"Year\". We'll do the same trick of keeping the data in the correct month-order by converting the month numbers to names after we create the pivot table ... but in the case of the histogram matrix plot, that won't actually help us: to keep the sorting correct, we'll need to prepend the zero-filled month number:", "temps2 = data_raw.pivot(\"Year\", \"Month\", \"Mean Temperature (F)\")\ntemps2.columns = [str(x).zfill(2) + \" - \" + calendar.month_name[x] for x in temps2.columns]\nmonthly_means = temps2.mean()\ntemps2.head()", "Now we're ready for our histogram. We'll use the histogram provided by Pandas for this.\nUnfortunately, Pandas does not return the figure and axes that it creates with its hist wrapper. Instead, it returns a NumPy array of subplots. As such, we're left with fewer options than we might like for further tweaking of the plot. Our use below of plt.text is a quick hack (of trial and error) that lets us label the overall figure (instead of the enclosing axes, as we'd prefer).", "axes = temps2.hist(figsize=(16,12))\nplt.text(-20, -10, \"Temperatures (F)\", fontsize=16)\nplt.text(-74, 77, \"Counts\", rotation=\"vertical\", fontsize=16)\n_ = plt.suptitle(\"Temperature Counts by Month, 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)", "This provides a nice view on the number of occurrences for temperature ranges in each month over the course of the century.\nNow what we'd like to do is:\n * look at the mean temperature for all months over the century\n * but also show the constituent data that generated that mean\n * and trace the max, mean, and min temperatures\nLet's tackle that last one first. The min, max, and means are discrete values in our case, one for each month. What we'd like to do is see what a smooth curve through those points might look like (as a visual aid more than anything). SciPy provides just the thing: spline interpolation.
This will give us a smooth curve for our discrete values:", "from scipy.interpolate import UnivariateSpline\n\nsmooth_mean = UnivariateSpline(month_nums, list(monthly_means), s=0.5)\nmeans_xs = np.linspace(0, 13, 2000)\nmeans_ys = smooth_mean(means_xs)\n\nsmooth_maxs = UnivariateSpline(month_nums, list(temps2.max()), s=0)\nmaxs_xs = np.linspace(0, 13, 2000)\nmaxs_ys = smooth_maxs(maxs_xs)\n\nsmooth_mins = UnivariateSpline(month_nums, list(temps2.min()), s=0)\nmins_xs = np.linspace(0, 13, 2000)\nmins_ys = smooth_mins(mins_xs)", "We'll use the raw data from the beginning of this section, since we'll be doing interpolation on our $x$ values (month numbers):", "temps3 = data_raw[[\"Month\", \"Mean Temperature (F)\"]]", "Now we can plot our means for all months, a scatter plot (as lines, in this case) for each month superimposed over each mean, and finally our max/mean/min interpolations:", "(figure, axes) = plt.subplots(figsize=(18,10))\naxes.bar(month_nums, monthly_means, width=0.96, align=\"center\", alpha=0.6)\naxes.scatter(temps3[\"Month\"], temps3[\"Mean Temperature (F)\"], s=2000, marker=\"_\", alpha=0.6)\naxes.plot(means_xs, means_ys, \"b\", linewidth=6, alpha=0.6)\naxes.plot(maxs_xs, maxs_ys, \"r\", linewidth=6, alpha=0.2)\naxes.plot(mins_xs, mins_ys, \"y\", linewidth=6, alpha=0.5)\naxes.axis((0.5, 12.5, temps_degrees.min() - 5, temps_degrees.max() + 5))\naxes.set_title(\"Mean Monthly Temperatures from 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\naxes.set_xticks(month_nums)\naxes.set_xticklabels(month_names)\n_ = axes.set_ylabel(\"Temperature (F)\", fontsize=16)", "When we created our by-month pivot (the one assigned to the temps2 variable), we provided ourselves with the means to easily look at statistical data for each month. We'll print out the highlights below so we can look at the numbers in preparation for sanity checking our visuals on the next plot:", "temps2.max()\n\ntemps2.mean()\n\ntemps2.min()", "We've seen those above (in various forms).
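Incidentally, Pandas can bundle these per-column summaries into a single table with describe(). Here's a minimal sketch on a small hypothetical frame (the values are made up for illustration, not drawn from our data set):

```python
import pandas as pd

# Hypothetical miniature stand-in for the temps2 by-month pivot table.
df = pd.DataFrame({"01 - January": [28.0, 31.5, 25.0],
                   "07 - July": [74.2, 76.8, 73.1]})

# describe() bundles count, mean, std, min, quartiles, and max per column,
# which makes a handy sanity check before plotting.
summary = df.describe()
print(summary.loc["mean"])
```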
We haven't seen the standard deviation for this data yet, though:", "temps2.std()", "Have a good look at those numbers; we're going to use them to make sure that our box plot results make sense in the next plot.\nWhat is a box plot? The box plot was invented by the famous statistical mathematician John Tukey (the inventor of many important concepts, it is often forgotten that he coined the term \"bit\"). Box plots concisely and visually convey the following \"bits\" (couldn't resist) of information:\n * upper part of the box: approximate distribution, 75th percentile\n * line across box: median\n * lower part of the box: approximate distribution, 25th percentile\n * height of the box: fourth spread\n * upper line out of box: greatest non-outlying value\n * lower line out of box: smallest non-outlying value\n * dots above and below: outliers\nSometimes you will see box plots of different widths; the width indicates the relative size of the data sets.\nThe box plot allows one to view data without any assumptions having been made about it; the basic statistics are there to view, in plain sight.\nOur next plot will overlay a box plot on our bar chart of means (and line scatter plot of values).", "(figure, axes) = plt.subplots(figsize=(18,10))\naxes.bar(month_nums, monthly_means, width=0.96, align=\"center\", alpha=0.6)\naxes.scatter(temps3[\"Month\"], temps3[\"Mean Temperature (F)\"], s=2000, marker=\"_\", alpha=0.6)\nsns.boxplot(temps2, ax=axes)\naxes.axis((0.5, 12.5, temps_degrees.min() - 5, temps_degrees.max() + 5))\naxes.set_title(\"Mean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\naxes.set_xticks(month_nums)\naxes.set_xticklabels(month_names)\n_ = axes.set_ylabel(\"Temperature (F)\", fontsize=16)", "Now we can easily identify the spread, the outliers, the area that contains 50% of the distribution, etc.\nThe violin plot, as previously mentioned, is a variation on the box plot, its shape indicating the probability distribution of
the data in that particular set. We will configure it to show our data points as lines (the \"stick\" option), thus combining our use of the line-scatter plot above with the box plot.\nLet's see this same data as a violin plot:", "sns.set(style=\"whitegrid\")\n\n(figure, axes) = plt.subplots(figsize=(18, 10))\nsns.violinplot(temps2, bw=0.2, lw=1, inner=\"stick\")\naxes.set_title((\"Violin Plots\\nMean Monthly Temperatures, 1894-2013\\n\"\n \"Saint Francis, KS, USA\"), fontsize=20)\naxes.set_xticks(month_nums)\naxes.set_xticklabels(month_names)\n_ = axes.set_ylabel(\"Temperature (F)\", fontsize=16)", "With the next plot, Andrews' curves, we reach the end of the section on temperature analysis.\nThe application of Andrews' curves to this particular data set is a bit forced. It's a more useful analysis tool when applied to data sets with higher dimensionality, due to the fact that the computed curves can reveal structure (grouping/clustering) where it might not otherwise be (as) evident.\nWe're essentially looking at just two dimensions here:\n * temperature\n * month\nAs we have already seen above, there is not a lot of unexpected (or unexplained) structure in this data. 
A data set that included wind speed and air pressure might render much more interesting results in an Andrews' curve ...", "months_cmap = sns.cubehelix_palette(8, start=-0.5, rot=0.75, as_cmap=True)\n\n(figure, axes) = plt.subplots(figsize=(18, 10))\ntemps4 = data_raw[[\"Mean Temperature (F)\", \"Month\"]]\naxes.set_xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])\naxes.set_xticklabels([r\"$-{\\pi}$\", r\"$-\\frac{\\pi}{2}$\", r\"$0$\", r\"$\\frac{\\pi}{2}$\", r\"${\\pi}$\"])\naxes.set_title(\"Andrews Curves for\\nMean Monthly Temperatures, 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\naxes.set_xlabel(r\"Data points mapped to lines in the range $[-{\\pi},{\\pi}]$\", fontsize=16)\naxes.set_ylabel(r\"$f_{x}(t)$\", fontsize=16)\npd.tools.plotting.andrews_curves(\n temps4, class_column=\"Month\", ax=axes,\n colormap=months_cmap)\naxes.axis([-np.pi, np.pi] + [x * 1.025 for x in axes.axis()[2:]])\n_ = axes.legend(labels=month_names, loc=(0, 0.67))", "Andrews' curves are groups of lines where each line represents a point in the input data set. The line itself is the plot of a finite Fourier series, as defined below (taken from the paper linked above).\nEach data point $x = \\left \\{ x_1, x_2, \\ldots x_d \\right \\}$ defines a finite Fourier series:\n\\begin{align}\nf_x(t) = \\frac{x_1}{\\sqrt 2} + x_2 \\sin(t) + x_3 \\cos(t) + x_4 \\sin(2t) + x_5 \\cos(2t) + \\ldots\n\\end{align}\nThis function is then plotted for $-\\pi < t < \\pi$. Thus each data point may be viewed as a line between $-\\pi$ and $\\pi$.
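Written out as code, the series above is straightforward; here is a minimal sketch (the function name and sample point are our own, not part of pandas) showing how one data point becomes one curve:

```python
import numpy as np

def andrews_series(x, t):
    """Evaluate f_x(t) = x1/sqrt(2) + x2*sin(t) + x3*cos(t) + x4*sin(2t) + ...

    x is one data point (a 1-D sequence); t is a scalar or array in [-pi, pi].
    """
    t = np.asarray(t, dtype=float)
    result = x[0] / np.sqrt(2) * np.ones_like(t)
    for i, coeff in enumerate(x[1:], start=1):
        freq = (i + 1) // 2  # frequency steps up once every two terms
        term = np.sin(freq * t) if i % 2 == 1 else np.cos(freq * t)
        result += coeff * term
    return result

# One curve per data point, sampled over [-pi, pi]:
t = np.linspace(-np.pi, np.pi, 200)
curve = andrews_series([1.0, 0.5, -0.25], t)
```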
This formula can be thought of as the projection of the data point onto the vector:\n\\begin{align}\n\\left ( \\frac{1}{\\sqrt 2}, \\sin(t), \\cos(t), \\sin(2t), \\cos(2t), \\ldots \\right )\n\\end{align}\nIf we examine the rendered curves, we see the same patterns we identified in the cluster map plots:\n * the temperatures of January and December are similar (thus the light and dark banding)\n * likewise for the temperatures during the summer months\nNotice that the curves preserve the distance between the high and low temperatures. This is another property of the curves. Other preserved properties include:\n * the mean is preserved\n * linear relationships are preserved\n * the variance is preserved\nThings to keep in mind when using Andrews' curves in your projects:\n * the order of the variables matters; changing that order will result in different curves\n * the lower frequencies show up better; as such, put the variables you feel to be more important first\nFor example, if we did have a data set with atmospheric pressure and wind speed, we might have defined our Pandas DataFrame with the columns in this order:\n```python\ntemps4 = data_raw[[\"Mean Temperature (F)\", \"Wind Speed (kn)\", \"Pressure (Pa)\", \"Month\"]]\n```\nThis concludes the section on temperature analysis. Next we will look at precipitation.
For the most part, the notes and comments are the same; as such, we will not repeat the text, but merely run through the examples without interruption or commentary.\nAnalysis of Precipitation, 1894-2013", "sns.set(style=\"darkgrid\")\n\nprecips_colors = [\"#f2d98f\", \"#f8ed39\", \"#a7cf38\", \"#7fc242\", \"#4680c2\", \"#3a53a3\", \"#6e4a98\"]\nsns.palplot(precips_colors)\n\nprecips_cmap = mpl.colors.LinearSegmentedColormap.from_list(\"precip colors\", precips_colors)\n\n(figure, axes) = plt.subplots(figsize=(17,9))\naxes.set_title((\"Heat Map\\nMean Monthly Precipitation, 1894-2013\\n\"\n \"Saint Francis, KS, USA\"), fontsize=20)\nsns.heatmap(precips, cmap=precips_cmap, cbar_kws={\"label\": \"Inches\"})\nfigure.tight_layout()\n\nfigure = plt.figure(figsize=(18, 13))\ngrid_spec = plt.GridSpec(2, 2,\n width_ratios=[50, 1],\n height_ratios=[1, 3],\n wspace=0.05, hspace=0.05)\nhist_axes = figure.add_subplot(grid_spec[0])\ncluster_axes = figure.add_subplot(grid_spec[2])\ncolorbar_axes = figure.add_subplot(grid_spec[3])\n\nprecips_sum = precips.sum(axis=0)\nyears_unique = data[\"Year\"].unique()\nhist_axes.bar(years_unique, precips_sum, 1,\n ec=\"w\", lw=2, color=\"0.5\", alpha=0.5)\nhist_axes.set(xticks=[], ylabel=\"Total Yearly\\nPrecip. 
(in)\")\nhist_axes.set_xlim([years_min, years_max])\nhist_axes.set_title(\n \"Heat Map with Histogram\\nMean Monthly Precipitation, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=20)\n\nsns.heatmap(precips,\n cmap=precips_cmap,\n ax=cluster_axes,\n cbar_ax=colorbar_axes,\n cbar_kws={\"orientation\": \"vertical\"})\n_ = colorbar_axes.set(xlabel=\"Precipitation\\n(in)\")", "For the book version:\n```python\nsns.set(font_scale=1.8)\nfigure = plt.figure(figsize=(18, 13))\ngrid_spec = plt.GridSpec(2, 2,\n width_ratios=[50, 1],\n height_ratios=[1, 3],\n wspace=0.05, hspace=0.05)\nhist_axes = figure.add_subplot(grid_spec[0])\ncluster_axes = figure.add_subplot(grid_spec[2])\ncolorbar_axes = figure.add_subplot(grid_spec[3])\nprecips_sum = precips.sum(axis=0)\nyears_unique = data[\"Year\"].unique()\nhist_axes.bar(years_unique, precips_sum, 1,\n ec=\"w\", lw=2, color=\"0.5\", alpha=0.5)\nhist_axes.set(xticks=[], ylabel=\"Total Yearly\\nPrecip. (in)\")\nhist_axes.set_xlim([years_min, years_max])\nhist_axes.set_title(\n \"Heat Map with Histogram\\nMean Monthly Precipitation, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=24)\nxticks = precips.columns\nkeptticks = xticks[::int(len(xticks)/36)]\nxticks = ['' for y in xticks]\nxticks[::int(len(xticks)/36)] = keptticks\n_ = sns.heatmap(precips,\n cmap=precips_cmap,\n ax=cluster_axes,\n linewidth=0, xticklabels=xticks, \n cbar_ax=colorbar_axes,\n cbar_kws={\"label\": \"Precipitation\\n(in)\"})\n```\nOur historgram gives a nice view of the average precipitation, and we notice immediately that 1923 is the year in this data set with the highest average. A quick google for \"kansas rain 1923\" lands us on this USGS page which discusses major floods along the Arkansas River:\n<blockquote>\n<strong>June 8-9, 1923</strong><br/><br/>\n\nIn June 1923, the entire drainage area between Hutchinson and Arkansas City received excessive rains. On June 8 and 9, Wichita reported 7.06 inches, Newton 5.75 inches, and Arkansas City 2.06 inches. 
Excessive precipitation fell over all of the Little Arkansas, Ninnescah, and Chikaskia River Basins as well as the Arkansas River Valley, and major flooding occurred on all of the affected streams. Wichita and Arkansas City were severely damaged. In Wichita, 6 square miles were inundated. At Arkansas City, two lives were lost, and property damage was estimated in the millions (Kansas Water Resources Board, 1960). Flood stages on the Ninnescah were the highest known.\n</blockquote>", "clustermap = sns.clustermap(\n precips, figsize=(19, 12), cbar_kws={\"label\": \"Precipitation\\n(in)\"}, cmap=precips_cmap)\n_ = clustermap.ax_col_dendrogram.set_title(\n \"Cluster Map\\nMean Monthly Precipitation, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=20)\n\nclustermap = sns.clustermap(\n precips, z_score=1, figsize=(19, 12),\n cbar_kws={\"label\": \"Normalized\\nPrecipitation\\n(in)\"})\n_ = clustermap.ax_col_dendrogram.set_title(\n \"Normalized Cluster Map\\nMean Monthly Precipitation, 1894-2013\\nSaint Francis, KS, USA\",\n fontsize=20)\n\nprecips2 = data_raw.pivot(\"Year\", \"Month\", \"Precipitation (in)\")\nprecips2.columns = [str(x).zfill(2) + \" - \" + calendar.month_name[x] for x in precips2.columns]\nmonthly_means = precips2.mean()\nprecips2.head()\n\naxes = pd.tools.plotting.hist_frame(precips2, figsize=(16,12))\nplt.text(-3.5, -20, \"Precipitation (in)\", fontsize=16)\nplt.text(-9.75, 155, \"Counts\", rotation=\"vertical\", fontsize=16)\n_ = plt.suptitle(\"Precipitation Counts by Month, 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\n\nfrom scipy.interpolate import UnivariateSpline\n\nsmooth_mean = UnivariateSpline(month_nums, list(monthly_means), s=0.5)\nmeans_xs = np.linspace(0, 13, 2000)\nmeans_ys = smooth_mean(means_xs)\n\nsmooth_maxs = UnivariateSpline(month_nums, list(precips2.max()), s=1)\nmaxs_xs = np.linspace(-5, 14, 2000)\nmaxs_ys = smooth_maxs(maxs_xs)\n\nsmooth_mins = UnivariateSpline(month_nums, list(precips2.min()), s=0.25)\nmins_xs =
np.linspace(0, 13, 2000)\nmins_ys = smooth_mins(mins_xs)\n\nprecips3 = data_raw[[\"Month\", \"Precipitation (in)\"]]\n\n(figure, axes) = plt.subplots(figsize=(18,10))\naxes.bar(month_nums, monthly_means, width=0.99, align=\"center\", alpha=0.6)\naxes.scatter(precips3[\"Month\"], precips3[\"Precipitation (in)\"], s=2000, marker=\"_\", alpha=0.6)\naxes.plot(means_xs, means_ys, \"b\", linewidth=6, alpha=0.6)\naxes.plot(maxs_xs, maxs_ys, \"r\", linewidth=6, alpha=0.2)\naxes.plot(mins_xs, mins_ys, \"y\", linewidth=6, alpha=0.5)\naxes.axis((0.5, 12.5, precips_inches.min(), precips_inches.max() + 0.25))\naxes.set_title(\"Mean Monthly Precipitation from 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\naxes.set_xticks(month_nums)\naxes.set_xticklabels(month_names)\n_ = axes.set_ylabel(\"Precipitation (in)\", fontsize=16)\n\nprecips2.max()\n\nprecips2.mean()\n\nprecips2.min()\n\nprecips2.std()\n\n(figure, axes) = plt.subplots(figsize=(18,10))\naxes.bar(month_nums, monthly_means, width=0.99, align=\"center\", alpha=0.6)\naxes.scatter(precips3[\"Month\"], precips3[\"Precipitation (in)\"], s=2000, marker=\"_\", alpha=0.6)\nsns.boxplot(precips2, ax=axes)\naxes.axis((0.5, 12.5, precips_inches.min(), precips_inches.max() + 0.25))\naxes.set_title(\"Mean Monthly Precipitation from 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\naxes.set_xticks(month_nums)\naxes.set_xticklabels(month_names)\n_ = axes.set_ylabel(\"Precipitation (in)\", fontsize=16)\n\nsns.set(style=\"whitegrid\")\n\n(figure, axes) = plt.subplots(figsize=(18, 10))\nsns.violinplot(precips2, bw=0.2, lw=1, inner=\"stick\")\naxes.set_title((\"Violin Plots\\nMean Monthly Precipitation from 1894-2013\\n\"\n \"Saint Francis, KS, USA\"), fontsize=20)\naxes.set_xticks(month_nums)\naxes.set_xticklabels(month_names)\n_ = axes.set_ylabel(\"Precipitation (in)\", fontsize=16)\n\nsns.set(style=\"darkgrid\")\n\n(figure, axes) = plt.subplots(figsize=(18, 10))\nprecips4 = data_raw[[\"Precipitation (in)\", 
\"Month\"]]\naxes.set_xlim([-np.pi, np.pi])\naxes.set_xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])\naxes.set_xticklabels([r\"$-{\\pi}$\", r\"$-\\frac{\\pi}{2}$\", r\"$0$\", r\"$\\frac{\\pi}{2}$\", r\"${\\pi}$\"])\naxes.set_title(\"Andrews Curves for\\nMean Monthly Precipitation, 1894-2013\\nSaint Francis, KS, USA\", fontsize=20)\naxes.set_xlabel(r\"Data points mapped to lines in the range $[-{\\pi},{\\pi}]$\", fontsize=16)\naxes.set_ylabel(r\"$f_{x}(t)$\", fontsize=16)\naxes = pd.tools.plotting.andrews_curves(\n precips4, class_column=\"Month\", ax=axes,\n colormap=sns.cubehelix_palette(8, start=0.5, rot=-0.75, as_cmap=True))\naxes.axis([-np.pi, np.pi] + [x * 1.025 for x in axes.axis()[2:]])\n_ = axes.legend(labels=month_names, loc=(0, 0.67))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
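The `xticks[::int(len(xticks)/36)] = keptticks` slice trick in the book-version listing above deserves a note: it blanks out every label except each k-th one so a dense axis stays readable. A dependency-free sketch of the same idea (the helper name `thin_labels` is ours, not from the notebook):

```python
def thin_labels(labels, keep_every):
    """Blank out all labels except every `keep_every`-th one."""
    kept = labels[::keep_every]          # the labels we want to keep
    thinned = ["" for _ in labels]       # start with every label blank
    thinned[::keep_every] = kept         # put the kept labels back in place
    return thinned

print(thin_labels(["1894", "1895", "1896", "1897", "1898"], 2))
# -> ['1894', '', '1896', '', '1898']
```

Passing the result to `xticklabels=` keeps the tick positions but hides most of the text, which is what the heat map code above does with `int(len(xticks)/36)` as the step.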
mne-tools/mne-tools.github.io
0.19/_downloads/04c2d1e64afcdd4e5032afb2212a74e5/plot_objects_from_arrays.ipynb
bsd-3-clause
[ "%matplotlib inline", "Creating MNE objects from data arrays\nIn this simple example, the creation of MNE objects from\nnumpy arrays is demonstrated. In the last example case, a\nNEO file format is used as a source for the data.", "# Author: Jaakko Leppakangas <jaeilepp@student.jyu.fi>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport neo\n\nimport mne\n\nprint(__doc__)", "Create arbitrary data", "sfreq = 1000 # Sampling frequency\ntimes = np.arange(0, 10, 0.001) # Use 10000 samples (10s)\n\nsin = np.sin(times * 10) # Multiplied by 10 for shorter cycles\ncos = np.cos(times * 10)\nsinX2 = sin * 2\ncosX2 = cos * 2\n\n# Numpy array of size 4 X 10000.\ndata = np.array([sin, cos, sinX2, cosX2])\n\n# Definition of channel types and names.\nch_types = ['mag', 'mag', 'grad', 'grad']\nch_names = ['sin', 'cos', 'sinX2', 'cosX2']", "Create an :class:info &lt;mne.Info&gt; object.", "# It is also possible to use info from another raw object.\ninfo = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)", "Create a dummy :class:mne.io.RawArray object", "raw = mne.io.RawArray(data, info)\n\n# Scaling of the figure.\n# For actual EEG/MEG data different scaling factors should be used.\nscalings = {'mag': 2, 'grad': 2}\n\nraw.plot(n_channels=4, scalings=scalings, title='Data from arrays',\n show=True, block=True)\n\n# It is also possible to auto-compute scalings\nscalings = 'auto' # Could also pass a dictionary with some value == 'auto'\nraw.plot(n_channels=4, scalings=scalings, title='Auto-scaled Data from arrays',\n show=True, block=True)", "EpochsArray", "event_id = 1 # This is used to identify the events.\n# First column is for the sample number.\nevents = np.array([[200, 0, event_id],\n [1200, 0, event_id],\n [2000, 0, event_id]]) # List of three arbitrary events\n\n# Here a data set of 700 ms epochs from 2 channels is\n# created from sin and cos data.\n# Any data in shape (n_epochs, n_channels, n_times) can be used.\nepochs_data = np.array([[sin[:700], 
cos[:700]],\n [sin[1000:1700], cos[1000:1700]],\n [sin[1800:2500], cos[1800:2500]]])\n\nch_names = ['sin', 'cos']\nch_types = ['mag', 'mag']\ninfo = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)\n\nepochs = mne.EpochsArray(epochs_data, info=info, events=events,\n event_id={'arbitrary': 1})\n\npicks = mne.pick_types(info, meg=True, eeg=False, misc=False)\n\nepochs.plot(picks=picks, scalings='auto', show=True, block=True)", "EvokedArray", "nave = len(epochs_data) # Number of averaged epochs\nevoked_data = np.mean(epochs_data, axis=0)\n\nevokeds = mne.EvokedArray(evoked_data, info=info, tmin=-0.2,\n comment='Arbitrary', nave=nave)\nevokeds.plot(picks=picks, show=True, units={'mag': '-'},\n titles={'mag': 'sin and cos averaged'}, time_unit='s')", "Create epochs by windowing the raw data.", "# The events are spaced evenly every 1 second.\nduration = 1.\n\n# create a fixed size events array\n# start=0 and stop=None by default\nevents = mne.make_fixed_length_events(raw, event_id, duration=duration)\nprint(events)\n\n# for fixed size events no start time before and after event\ntmin = 0.\ntmax = 0.99 # inclusive tmax, 1 second epochs\n\n# create :class:`Epochs <mne.Epochs>` object\nepochs = mne.Epochs(raw, events=events, event_id=event_id, tmin=tmin,\n tmax=tmax, baseline=None, verbose=True)\nepochs.plot(scalings='auto', block=True)", "Create overlapping epochs using :func:mne.make_fixed_length_events (50 %\noverlap). 
This also roughly doubles the number of events compared to the\nprevious event list.", "duration = 0.5\nevents = mne.make_fixed_length_events(raw, event_id, duration=duration)\nprint(events)\nepochs = mne.Epochs(raw, events=events, tmin=tmin, tmax=tmax, baseline=None,\n verbose=True)\nepochs.plot(scalings='auto', block=True)", "Extracting data from NEO file", "# The example here uses the ExampleIO object for creating fake data.\n# For actual data and different file formats, consult the NEO documentation.\nreader = neo.io.ExampleIO('fakedata.nof')\nbl = reader.read(lazy=False)[0]\n\n# Get data from first (and only) segment\nseg = bl.segments[0]\ntitle = seg.file_origin\n\nch_names = list()\ndata = list()\nfor ai, asig in enumerate(seg.analogsignals):\n # Since the data does not contain channel names, channel indices are used.\n ch_names.append('Neo %02d' % (ai + 1,))\n # We need the ravel() here because Neo < 0.5 gave 1D, Neo 0.5 gives\n # 2D (but still a single channel).\n data.append(asig.rescale('V').magnitude.ravel())\n\ndata = np.array(data, float)\n\nsfreq = int(seg.analogsignals[0].sampling_rate.magnitude)\n\n# By default, the channel types are assumed to be 'misc'.\ninfo = mne.create_info(ch_names=ch_names, sfreq=sfreq)\n\nraw = mne.io.RawArray(data, info)\nraw.plot(n_channels=4, scalings={'misc': 1}, title='Data from NEO',\n show=True, block=True, clipping='clamp')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
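The fixed-length events produced by `mne.make_fixed_length_events` in the notebook above boil down to simple sample-index arithmetic: one `[sample, 0, event_id]` row every `duration` seconds. A hedged, plain-Python sketch of that arithmetic (not MNE's actual implementation):

```python
sfreq = 1000       # sampling frequency in Hz, as in the example above
duration = 1.0     # seconds between consecutive events
n_samples = 10000  # 10 s of data, matching the raw object above
event_id = 1

# One event row per `duration` seconds; the first column is the sample number.
step = int(duration * sfreq)
events = [[start, 0, event_id] for start in range(0, n_samples, step)]

print(len(events))  # 10 events for 10 s of data
print(events[:2])   # [[0, 0, 1], [1000, 0, 1]]
```

Halving `duration` to 0.5 doubles the number of event rows, which is exactly the 50% overlap case shown above (since `tmax` stays at 0.99 s).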
jorisvandenbossche/geopandas
doc/source/gallery/matplotlib_scalebar.ipynb
bsd-3-clause
[ "Adding a scale bar to a matplotlib plot\nWhen making a geospatial plot in matplotlib, you can use the matplotlib-scalebar library to add a scale bar.", "import geopandas as gpd\nfrom matplotlib_scalebar.scalebar import ScaleBar", "Creating a ScaleBar object\nThe only required parameter for creating a ScaleBar object is dx. This is the real-world size of one pixel, and its value depends on the units of your CRS.\nProjected coordinate system (meters)\nThe easiest way to add a scale bar is using a projected coordinate system with meters as units. Just set dx = 1:", "nybb = gpd.read_file(gpd.datasets.get_path('nybb'))\nnybb = nybb.to_crs(32619) # Convert the dataset to a coordinate\n# system which uses meters\n\nax = nybb.plot()\nax.add_artist(ScaleBar(1))", "Geographic coordinate system (degrees)\nWith a geographic coordinate system with degrees as units, dx should be equal to the distance in meters between two points with the same latitude (Y coordinate) which are one full degree of longitude (X) apart. You can calculate this distance with an online calculator (e.g. the Great Circle calculator) or in geopandas.\\\n\\\nFirstly, we will create a GeoSeries with two points that have roughly the coordinates of NYC. They are located on the same latitude but one degree of longitude from each other. Their initial coordinates are specified in a geographic coordinate system (geographic WGS 84). They are then converted to a projected system for the calculation:", "from shapely.geometry.point import Point\n\npoints = gpd.GeoSeries([Point(-73.5, 40.5), Point(-74.5, 40.5)], crs=4326) # Geographic WGS 84 - degrees\npoints = points.to_crs(32619) # Projected WGS 84 - meters", "After the conversion, we can calculate the distance between the points.
The result slightly differs from the Great Circle Calculator but the difference is insignificant (84,921 and 84,767 meters):", "distance_meters = points[0].distance(points[1])", "Finally, we are able to use geographic coordinate system in our plot. We set value of dx parameter to a distance we just calculated:", "nybb = gpd.read_file(gpd.datasets.get_path('nybb'))\nnybb = nybb.to_crs(4326) # Using geographic WGS 84\n\nax = nybb.plot()\nax.add_artist(ScaleBar(distance_meters))", "Using other units\nThe default unit for dx is m (meter). You can change this unit by the units and dimension parameters. There is a list of some possible units for various values of dimension below:\n| dimension | units |\n| ----- |:-----:|\n| si-length | km, m, cm, um|\n| imperial-length |in, ft, yd, mi|\n|si-length-reciprocal|1/m, 1/cm|\n|angle|deg|\nIn the following example, we will leave the dataset in its initial CRS which uses feet as units. The plot shows scale of 2 leagues (approximately 11 kilometers):", "nybb = gpd.read_file(gpd.datasets.get_path('nybb'))\n\nax = nybb.plot()\nax.add_artist(ScaleBar(1, dimension=\"imperial-length\", units=\"ft\"))", "Customization of the scale bar", "nybb = gpd.read_file(gpd.datasets.get_path('nybb')).to_crs(32619)\nax = nybb.plot()\n\n# Position and layout\nscale1 = ScaleBar(\ndx=1, label='Scale 1',\n location='upper left', # in relation to the whole plot\n label_loc='left', scale_loc='bottom' # in relation to the line\n)\n\n# Color\nscale2 = ScaleBar(\n dx=1, label='Scale 2', location='center', \n color='#b32400', box_color='yellow',\n box_alpha=0.8 # Slightly transparent box\n)\n\n# Font and text formatting\nscale3 = ScaleBar(\n dx=1, label='Scale 3',\n font_properties={'family':'serif', 'size': 'large'}, # For more information, see the cell below\n scale_formatter=lambda value, unit: f'> {value} {unit} <'\n)\n\nax.add_artist(scale1)\nax.add_artist(scale2)\nax.add_artist(scale3)", "Note: Font is specified by six properties: family, style, 
variant, stretch, weight, size (and math_fontfamily). See more.\\\n\\\nFor more information about matplotlib-scalebar library, see the PyPI or GitHub page." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
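The dx computation above (the distance covered by one degree of longitude at a fixed latitude) can also be approximated without geopandas, using the spherical haversine formula. This is a hedged sketch under a spherical-Earth assumption — the `haversine_m` helper and the 6,371 km mean radius are ours, and the result differs slightly from the projected computation above:

```python
import math

def haversine_m(lon1, lat1, lon2, lat2, radius_m=6371000.0):
    """Great-circle distance in meters between two lon/lat points (spherical Earth)."""
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# One degree of longitude at the latitude of the NYC example above.
dx = haversine_m(-73.5, 40.5, -74.5, 40.5)
print(round(dx))  # roughly 84.5 km -- the same ballpark as the ~84,921 m projected result
```

For a scale bar this level of agreement is plenty; `ScaleBar(dx)` would then be used exactly as in the geopandas version above.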
BigDataRepublic/bdr-analytics-py
notebooks/bdr-imbalanced-classification.ipynb
apache-2.0
[ "Classification model\nHere we use machine learning techniques to create and validate a model that can predict the probability of a relatively rare event (imbalanced classes problem).", "import sys\nsys.path.append('../')\n\n# import generic packages\nimport numpy as np\nimport pandas as pd\n# pd.options.display.max_columns = None\n# pd.options.display.max_colwidth = 100\nfrom IPython.display import display\n\n# visualization packages\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport seaborn as sns\nsns.set(style=\"white\")\n%matplotlib inline\n\n# module loading settings\n%load_ext autoreload\n%autoreload 2\n\n# load to data frame\ndf = pd.read_csv('')\n\n# extract and remove timestamps from data frame\ntimestamps = df['timestamp']\ndf.drop('timestamp', axis=1, inplace=True)\n\n# determine categoricals\nhigh_capacity = df.columns.values[~np.array(df.dtypes == np.number)].tolist()\nprint \"high capacity categorical feature columns:\"\nprint high_capacity\n\n# print some info\nprint \"{:d} observations\".format(len(df))\ndf.head()", "Model specification\nHere we set some specifications for the model: type, how it should be fitted, optimized and validated.", "model_type = 'rf' # the classification algorithm\ntune_model = False # optimize hyperparameters\n\ncross_val_method = 'temporal' # cross-validation routine\n\ncost_fp = 1000 # preferably in euros!\nbenefit_tp = 3000\nclass_weights = {0: cost_fp, 1: benefit_tp} # costs for fn and fp", "Cross-validation procedure\nTo validate whether the model makes sensible predictions, we need to perform cross-validation. The exact procedure for this is specified below. 
Random cross-validation (set-aside a random sample for testing) is fast, but temporal cross-validation (set-aside a time period for testing) gives the most realistic results due to its resemblance to real-world model usage.", "from sklearn.model_selection import StratifiedShuffleSplit, GridSearchCV, train_test_split\n\n#source: https://github.com/BigDataRepublic/bdr-analytics-py\n#! pip install -e git+ssh://git@github.com/BigDataRepublic/bdr-analytics.git#egg=bdranalytics-0.1\nfrom bdranalytics.pipeline.encoders import WeightOfEvidenceEncoder\nfrom bdranalytics.model_selection.growingwindow import IntervalGrowingWindow\n\nfrom sklearn.metrics import average_precision_score, make_scorer, roc_auc_score\n\nif cross_val_method is 'random':\n \n # split train data into stratified random folds\n cv_dev = StratifiedShuffleSplit(test_size=0.1, train_size=0.1, n_splits=5, random_state=1)\n \n cv_test = StratifiedShuffleSplit(test_size=0.33, n_splits=1, random_state=2)\n\nelif cross_val_method is 'temporal':\n \n train_size = pd.Timedelta(days=365 * 4 )\n \n # create a cross-validation routine for parameter tuning\n cv_dev = IntervalGrowingWindow(timestamps=timestamps,\n test_start_date=pd.datetime(year=2015, month=1, day=1),\n test_end_date=pd.datetime(year=2015, month=12, day=31),\n test_size=pd.Timedelta(days=30), \n train_size=train_size)\n \n # create a cross-validation routine for model evaluation\n cv_test = IntervalGrowingWindow(timestamps=timestamps,\n test_start_date=pd.datetime(year=2016, month=1, day=1),\n test_end_date=pd.datetime(year=2016, month=8, day=31),\n test_size=pd.Timedelta(days=2*30),\n train_size=train_size) \n\n# number of parallel jobs for cross-validation\nn_jobs = 1\n\n# two functions for advanced performance evaluation metrics\ndef roc_auc(y_true, y_pred):\n return roc_auc_score(pd.get_dummies(y_true), y_pred)\n\nroc_auc_scorer = make_scorer(roc_auc, needs_proba=True)\n\ndef pr_auc(y_true, y_pred):\n return
average_precision_score(pd.get_dummies(y_true), y_pred, average=\"micro\")\n\npr_auc_scorer = make_scorer(pr_auc, needs_proba=True)\n\nfrom sklearn.preprocessing import StandardScaler, Imputer\n\nfrom sklearn.pipeline import Pipeline\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\nfrom sklearn.dummy import DummyClassifier\nfrom xgboost import XGBClassifier\n\n# convert date frame to bare X and y variables for the model pipeline\ny_col = 'target'\nX = df.copy().drop(y_col, axis=1)\ny = np.array(df[y_col])\nn_features = X.shape[1]\n\n# define preprocessing steps\npreproc_steps = [('woe', WeightOfEvidenceEncoder(cols=high_capacity)),\n ('imputer', Imputer(missing_values='NaN', strategy='median', axis=0)),\n ('standardizer', StandardScaler(with_mean=True, with_std=True))]\n\n# specification of different model types and their defaults\nmodel_steps_dict = {'lr': [('lr', LogisticRegression(C=0.001, penalty='l2', tol=0.01,\n class_weight=class_weights))],\n 'rf': [('rf', RandomForestClassifier(n_estimators=400, max_features='auto',\n class_weight=class_weights))],\n 'gbc': [('gbc', GradientBoostingClassifier(n_estimators=400, max_depth=3))],\n 'xgb': [('xgb', XGBClassifier(scale_pos_weight=class_weights[1],\n n_estimators=100, max_depth=4))],\n 'dummy': [('dummy', DummyClassifier(strategy='prior'))]\n }\n\n# specification of the different model hyperparameters and tuning space\nmodel_params_grid = {'lr': {'lr__C': [1e-4, 1e-3, 1e-2, 1e-1]},\n 'rf': {'rf__max_features': [3, n_features, np.sqrt(n_features)],\n 'rf__n_estimators': [10, 100, 1000]},\n 'gbc': {'gbc__n_estimators': [100, 200]},\n 'xgb': {'xgb__max_depth': [3,6,9],\n 'xgb__reg_alpha': [0,5,15],\n 'xgb__reg_lambda': [0,5,15],\n 'xgb__gamma' : [0,10,50,100]},\n 'dummy': {}}\n\n# store the model step\nmodel_steps = model_steps_dict[model_type]\n\n# combine everything in one pipeline\nestimator = Pipeline(steps=(preproc_steps + 
model_steps))\nprint estimator", "Model parameter tuning\nIf desired, we can optimize the model hyperparameters to get the best possible model.", "# procedure depends on cross-validation type\nif cross_val_method is 'random': \n train_index = next(cv_test.split(X, y))[0]\n X_dev = X.iloc[train_index,:]\n y_dev = y[train_index]\nelif cross_val_method is 'temporal':\n X_dev = X\n y_dev = y\n\n# setting to include class weights in the gradient boosting model\nif model_type is 'gbc':\n sample_weights = np.array(map(lambda x: class_weights[x], y_dev))\n fit_params = {'gbc__sample_weight': sample_weights}\nelse: \n fit_params = {}\n\n# tune model with a parameter grid search if desired\nif tune_model:\n \n grid_search = GridSearchCV(estimator, cv=cv_dev, n_jobs=n_jobs, refit=False,\n param_grid=model_params_grid[model_type],\n scoring=pr_auc_scorer, fit_params=fit_params)\n\n grid_search.fit(X_dev, y_dev)\n \n # show grid search results\n display(pd.DataFrame(grid_search.cv_results_))\n \n # set best parameters for estimator\n estimator.set_params(**grid_search.best_params_)", "Model validation\nThe final test on the holdout.", "y_pred_proba = [] # initialize empty predictions array\ny_true = [] # initialize empty ground-truth array\n\n# loop over the test folds\nfor i_fold, (train_index, test_index) in enumerate(cv_test.split(X, y)):\n \n print \"validation fold {:d}\".format(i_fold)\n \n X_train = X.iloc[train_index,:]\n y_train = y[train_index]\n \n X_test = X.iloc[test_index,:]\n y_test = y[test_index]\n \n if model_type is 'gbc':\n sample_weights = map(lambda x: class_weights[x], y_train)\n fit_params = {'gbc__sample_weight': sample_weights}\n else: \n fit_params = {}\n \n # fit the model\n estimator.fit(X_train, y_train, **fit_params)\n\n # probability outputs for class 1\n y_pred_proba.append(map(lambda x: x[1], estimator.predict_proba(X_test)))\n \n # store the true y labels for each fold\n y_true.append(np.array(y_test))\n\n# postprocess the results\ny_true = 
np.concatenate(y_true)\ny_pred_proba = np.concatenate(y_pred_proba) \ny_pred_bin = (y_pred_proba > 0.5) * 1.\n\n# print some stats\nn_samples_test = len(y_true)\nn_pos_test = sum(y_true)\nn_neg_test = n_samples_test - n_pos_test\nprint \"events: {}\".format(n_pos_test)\nprint \"p_no_event: {}\".format(n_neg_test / n_samples_test)\nprint \"test accuracy: {}\".format((np.equal(y_pred_bin, y_true) * 1.).mean())", "Receiver-operating characteristic\nThe line is constructed by applying various thresholds to the model output. \nY-axis: proportion of events correctly identified, hit-rate\nX-axis: proportion of false positives, usually results in waste of resources \nDotted line is guessing (no model). Blue line above the dotted line means there is information in the features.", "from sklearn.metrics import roc_curve, auc\n\nfpr, tpr, thresholds = roc_curve(y_true, y_pred_proba, pos_label=1)\nroc_auc = auc(fpr, tpr)\n \n# plot ROC curve\nplt.figure()\nplt.plot(fpr, tpr, label=\"ROC curve (area = {:.2f})\".format(roc_auc))\nplt.plot([0, 1], [0, 1], 'k--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.xlabel('False positive rate')\nplt.ylabel('True positive rate')\nplt.title('Receiver-operating characteristic')\nplt.legend(loc=\"lower right\")\nplt.show()", "Costs and benefits\nROC optimization with cost matrix. Critical information: cost of FP and cost of FN (i.e. benefit of TP).
Also used to train the model with class_weights.", "def benefit(tpr, fpr):\n\n n_tp = tpr * n_pos_test # number of true positives (benefits)\n n_fp = fpr * n_neg_test # number of false positives (extra costs)\n \n fp_costs = n_fp * cost_fp\n tp_benefits = n_tp * benefit_tp\n \n return tp_benefits - fp_costs\n\nbenefits = np.zeros_like(thresholds)\nfor i, _ in enumerate(thresholds):\n benefits[i] = benefit(tpr[i], fpr[i])\n\ni_max = np.argmax(benefits)\nprint (\"max benefits: {:.0f}k euros, tpr: {:.3f}, fpr: {:.3f}, threshold: {:.3f}\"\n .format(benefits[i_max]/ 1e3, tpr[i_max], fpr[i_max], thresholds[i_max]))\n\nplt.plot(thresholds, benefits)\nplt.xlim([0,1])\nplt.ylim([0,np.max(benefits)])\nplt.show()\n\n# recalibrate threshold based on benefits (optional, should still be around 0.5)\ny_pred_bin = (y_pred_proba > thresholds[i_max]) * 1.", "Precision-recall curve\nAnother way to look at it. Note that models which perform well in PR-space are necessarily also dominating ROC-space. The opposite is not the case!
The line is constructed by applying various thresholds to the model output.\nY-axis: proportion of events among all positives (precision)\nX-axis: proportion of events correctly identified (recall, hit rate)", "from sklearn.metrics import precision_recall_curve\n\nprecision, recall, thresholds = precision_recall_curve(y_true, y_pred_proba, pos_label=1)\n\naverage_precision = average_precision_score(y_true, y_pred_proba, average=\"micro\")\n\nbaseline = n_pos_test / n_samples_test\n\n# plot PR curve\nplt.figure()\nplt.plot(recall, precision, label=\"PR curve (area = {:.2f})\".format(average_precision))\nplt.plot([0, 1], [baseline, baseline], 'k--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.xlabel('Recall')\nplt.ylabel('Precision')\nplt.title('Precision-recall curve')\nplt.legend(loc=\"lower right\")\nplt.show()\n\nif model_type is 'dummy':\n print 'DummyClassifier only has endpoints in PR-curve'", "Classification report", "from sklearn.metrics import classification_report\n\ntarget_names = ['no event','event']\n\nprint classification_report(y_true, y_pred_bin, target_names=target_names, digits=3)", "Confusion matrix", "from sklearn.metrics import confusion_matrix\n\nconfusion = pd.DataFrame(confusion_matrix(y_true, y_pred_bin), index=target_names, columns=target_names)\nsns.heatmap(confusion, annot=True, fmt=\"d\")\nplt.xlabel('predicted label')\nplt.ylabel('true label')", "Accuracies at different classifier thresholds", "from sklearn.metrics import accuracy_score\n\nthresholds = (np.arange(0,100,1) / 100.)\nacc = map(lambda thresh: accuracy_score(y_true, map(lambda prob: prob > thresh, y_pred_proba)), thresholds)\nplt.hist(acc, bins=20);", "Thresholds versus accuracy", "plt.plot(thresholds, acc);", "Feature importance\nNote that these models are optimized to make accurate predictions, and not to make solid statistical inferences.", "feature_labels = filter(lambda k: y_col not in k, df.columns.values) \n\nif model_type is 'lr':\n weights =
estimator._final_estimator.coef_[0]\nelif model_type in ['rf','gbc']:\n weights = estimator._final_estimator.feature_importances_\nelif model_type is 'dummy':\n print 'DummyClassifier does not have weights'\n weights = np.zeros(len(feature_labels))\n \nfeature_weights = pd.Series(weights, index=feature_labels)\nfeature_weights.plot.barh(title='Feature importance', fontsize=8, figsize=(12,30), grid=True);\n\nfrom sklearn.ensemble.partial_dependence import plot_partial_dependence\n\nif model_type is 'gbc':\n preproc_pipe = Pipeline(steps=preproc_steps)\n X_transformed = preproc_pipe.fit_transform(X_dev, y_dev)\n\n plot_partial_dependence(estimator._final_estimator, X_transformed,\n features=range(n_features), feature_names=feature_labels,\n figsize=(12,180), n_cols=4, percentiles=(0.2,0.8));\nelse:\n print \"No partial dependence plots available for this model type.\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
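The benefit-maximizing threshold search in the notebook above can be reduced to a small, dependency-free sketch. The probabilities and labels below are made-up illustration data; `cost_fp` and `benefit_tp` mirror the notebook's settings:

```python
cost_fp = 1000     # euros lost per false positive, as in the notebook
benefit_tp = 3000  # euros gained per true positive

# Made-up ground truth and predicted probabilities for ten observations.
y_true = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9, 0.2, 0.15, 0.6, 0.05]

def expected_benefit(threshold):
    """Net benefit of acting on every prediction above `threshold`."""
    n_tp = sum(1 for t, p in zip(y_true, y_prob) if p > threshold and t == 1)
    n_fp = sum(1 for t, p in zip(y_true, y_prob) if p > threshold and t == 0)
    return n_tp * benefit_tp - n_fp * cost_fp

thresholds = [i / 100 for i in range(100)]
best = max(thresholds, key=expected_benefit)
print(best, expected_benefit(best))  # -> 0.2 10000
```

With asymmetric costs the optimal cutoff need not be 0.5, which is why the notebook recalibrates `y_pred_bin` with `thresholds[i_max]` after the search.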
rizar/attention-lvcsr
libs/Theano/doc/library/d3viz/index.ipynb
mit
[ "d3viz: Interactive visualization of Theano compute graphs\nRequirements\nd3viz requires the pydot package, which can be installed with pip:\npip install pydot\nOverview\nd3viz extends Theano’s printing module to interactively visualize compute graphs. Instead of creating a static picture, it creates an HTML file, which can be opened with current web-browsers. d3viz allows\n\nto zoom to different regions and to move graphs via drag and drop,\nto position nodes both manually and automatically,\nto retrieve additional information about nodes and edges such as their data type or definition in the source code,\nto edit node labels,\nto visualize profiling information, and\nto explore nested graphs such as OpFromGraph nodes.", "import theano as th\nimport theano.tensor as T\nimport numpy as np", "As an example, consider the following multilayer perceptron with one hidden layer and a softmax output layer.", "ninputs = 1000\nnfeatures = 100\nnoutputs = 10\nnhiddens = 50\n\nrng = np.random.RandomState(0)\nx = T.dmatrix('x')\nwh = th.shared(rng.normal(0, 1, (nfeatures, nhiddens)), borrow=True)\nbh = th.shared(np.zeros(nhiddens), borrow=True)\nh = T.nnet.sigmoid(T.dot(x, wh) + bh)\n\nwy = th.shared(rng.normal(0, 1, (nhiddens, noutputs)))\nby = th.shared(np.zeros(noutputs), borrow=True)\ny = T.nnet.softmax(T.dot(h, wy) + by)\n\npredict = th.function([x], y)", "The function predict outputs probabilities for the 10 classes.
You can visualize it with pydotprint as follows:", "from theano.printing import pydotprint\nimport os\n\nif not os.path.exists('examples'):\n os.makedirs('examples')\npydotprint(predict, 'examples/mlp.png')\n\nfrom IPython.display import Image\nImage('examples/mlp.png', width='80%')", "To visualize it interactively, import the d3viz function from the d3viz module, which can be called as before:", "import theano.d3viz as d3v\nd3v.d3viz(predict, 'examples/mlp.html')", "Open visualization!\nWhen you open the output file mlp.html in your web-browser, you will see an interactive visualization of the compute graph. You can move the whole graph or single nodes via drag and drop, and zoom via the mouse wheel. When you move the mouse cursor over a node, a window will pop up that displays detailed information about the node, such as its data type or definition in the source code. When you left-click on a node and select Edit, you can change the predefined node label. If you are dealing with a complex graph with many nodes, the default node layout may not be perfect. In this case, you can press the Release node button in the top-left corner to automatically arrange nodes. To reset nodes to their default position, press the Reset nodes button.\nProfiling\nTheano allows function profiling via the profile=True flag. After at least one function call, the compute time of each node can be printed in text form with debugprint. However, analyzing complex graphs in this way can be cumbersome.\nd3viz can visualize the same timing information graphically, and hence help to spot bottlenecks in the compute graph more easily! To begin with, we will redefine the predict function, this time by using profile=True flag. 
Afterwards, we capture the runtime on random data:", "predict_profiled = th.function([x], y, profile=True)\n\nx_val = rng.normal(0, 1, (ninputs, nfeatures))\ny_val = predict_profiled(x_val)\n\nd3v.d3viz(predict_profiled, 'examples/mlp2.html')", "Open visualization!\nWhen you open the HTML file in your browser, you will find an additional Toggle profile colors button in the menu bar. By clicking on it, nodes will be colored by their compute time, where red corresponds to a high compute time. You can read out the exact timing information of a node by moving the cursor over it.\nDifferent output formats\nInternally, d3viz represents a compute graph in the Graphviz DOT language, using the pydot package, and defines a front-end based on the d3.js library to visualize it. However, any other Graphviz front-end can be used, which allows to export graphs to different formats.", "formatter = d3v.formatting.PyDotFormatter()\npydot_graph = formatter(predict_profiled)\n\npydot_graph.write_png('examples/mlp2.png');\npydot_graph.write_pdf('examples/mlp2.pdf');\n\nImage('./examples/mlp2.png')", "Here, we used the PyDotFormatter class to convert the compute graph into a pydot graph, and created a PNG and PDF file. You can find all output formats supported by Graphviz here.\nOpFromGraph nodes\nAn OpFromGraph node defines a new operation, which can be called with different inputs at different places in the compute graph. Each OpFromGraph node defines a nested graph, which will be visualized accordingly by d3viz.", "x, y, z = T.scalars('xyz')\ne = T.nnet.sigmoid((x + y + z)**2)\nop = th.OpFromGraph([x, y, z], [e])\n\ne2 = op(x, y, z) + op(z, y, x)\nf = th.function([x, y, z], e2)\n\nd3v.d3viz(f, 'examples/ofg.html')", "Open visualization!\nIn this example, an operation with three inputs is defined, which is used to build a function that calls this operations twice, each time with different input arguments. 
\nIn the d3viz visualization, you will find two OpFromGraph nodes, which correspond to the two OpFromGraph calls. When you double click on one of them, the nested graph appears with the correct mapping of its input arguments. You can move it around by drag and drop in the shaded area, and close it again by double-click.\nAn OpFromGraph operation can be composed of further OpFromGraph operations, which will be visualized as nested graphs as you can see in the following example.", "x, y, z = T.scalars('xyz')\ne = x * y\nop = th.OpFromGraph([x, y], [e])\ne2 = op(x, y) + z\nop2 = th.OpFromGraph([x, y, z], [e2])\ne3 = op2(x, y, z) + z\nf = th.function([x, y, z], [e3])\n\nd3v.d3viz(f, 'examples/ofg2.html')", "Open visualization!\nFeedback\nIf you have any problems or great ideas on how to improve d3viz, please let me know!\n\nChristof Angermueller\n&#99;&#97;&#110;&#103;&#101;&#114;&#109;&#117;&#101;&#108;&#108;&#101;&#114;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;\nhttps://cangermueller.com" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BeatHubmann/17F-U-DLND
gan_mnist/Intro_to_GANs_Solution.ipynb
mit
[ "Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can foold the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.", "%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')", "Model Inputs\nFirst we need to create the inputs for our graph. 
We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.", "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') \n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z", "Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, you can take the outputs from a linear fully connected layer and pass them to tf.maximum. 
Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.", "def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(z, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim, activation=None)\n out = tf.tanh(logits)\n \n return out", "Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.", "def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(x, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1, activation=None)\n out = tf.sigmoid(logits)\n \n return out, logits", "Hyperparameters", "# Size of input image to discriminator\ninput_size = 784\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Smoothing \nsmooth = 0.1", "Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. 
Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).", "tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Build the model\ng_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)\n# g_model is the generator output\n\nd_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)", "Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. 
Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.", "# Calculate losses\nd_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \n labels=tf.ones_like(d_logits_real) * (1 - smooth)))\nd_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n labels=tf.zeros_like(d_logits_fake)))\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,\n labels=tf.ones_like(d_logits_fake)))", "Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. 
Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.", "# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [var for var in t_vars if var.name.startswith('generator')]\nd_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)", "Training", "batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\n# Only save generator variables\nsaver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n 
samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)", "Training loss\nHere we'll check out the training losses for the generator and discriminator.", "fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()", "Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.", "def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)", "These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.", "_ = view_samples(-1, samples)", "Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!", "rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)", "It starts out as all noise. Then it learns to make only the center white and the rest black. 
You can start to see some number-like structures appear out of the noise, like 1s and 9s.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!", "saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n_ = view_samples(0, [gen_samples])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jphall663/GWU_data_mining
09_matrix_factorization/src/py_part_9_kaggle_GLRM_example.ipynb
apache-2.0
[ "License\n\nCopyright (C) 2017 J. Patrick Hall, jphall@gwu.edu\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nKaggle House Prices with GLRM Matrix Factorization Example\nImports and inits", "import h2o\nfrom h2o.estimators.glrm import H2OGeneralizedLowRankEstimator\nfrom h2o.estimators.glm import H2OGeneralizedLinearEstimator\nfrom h2o.grid.grid_search import H2OGridSearch \nh2o.init(max_mem_size='12G') # give h2o as much memory as possible\nh2o.no_progress() # turn off h2o progress bars\n\nimport matplotlib as plt\n%matplotlib inline\nimport numpy as np\nimport pandas as pd", "Helper Functions\nDetermine data types", "def get_type_lists(frame, rejects=['Id', 'SalePrice']):\n\n \"\"\"Creates lists of numeric and categorical variables.\n \n :param frame: The frame from which to determine types.\n :param rejects: Variable names not to be included in returned lists.\n :return: Tuple of lists for numeric and categorical variables in the frame.\n \n \"\"\"\n \n nums, cats = [], []\n for key, val in 
frame.types.items():\n if key not in rejects:\n if val == 'enum':\n cats.append(key)\n else: \n nums.append(key)\n \n print('Numeric =', nums) \n print()\n print('Categorical =', cats)\n \n return nums, cats", "Impute with GLRM", "def glrm_num_impute(role, frame):\n\n \"\"\" Helper function for imputing numeric variables using GLRM.\n \n :param role: Role of frame to be imputed.\n :param frame: H2OFrame to be imputed.\n :return: H2OFrame of imputed numeric features.\n \n \"\"\"\n \n # count missing values in training data numeric columns\n print(role + ' missing:\\n', [cnt for cnt in frame.nacnt() if cnt != 0.0])\n\n # initialize GLRM\n matrix_complete_glrm = H2OGeneralizedLowRankEstimator(\n k=10, # create 10 features \n transform='STANDARDIZE', # <- seems very important\n gamma_x=0.001, # regularization on values in X\n gamma_y=0.05, # regularization on values in Y\n impute_original=True)\n\n # train GLRM\n matrix_complete_glrm.train(training_frame=frame, x=original_nums)\n\n # plot iteration history to ensure convergence\n matrix_complete_glrm.score_history().plot(x='iterations', y='objective', title='GLRM Score History')\n\n # impute numeric inputs by multiplying the calculated xi and yj for the missing values in train\n num_impute = matrix_complete_glrm.predict(frame)\n\n # count missing values in imputed set\n print('imputed ' + role + ' missing:\\n', [cnt for cnt in num_impute.nacnt() if cnt != 0.0])\n \n return num_impute", "Embed with GLRM", "def glrm_cat_embed(frame):\n \n \"\"\" Helper function for embedding categorical variables using GLRM.\n \n :param frame: H2OFrame to be embedded.\n :return: H2OFrame of embedded categorical features.\n \n \"\"\"\n \n # initialize GLRM\n cat_embed_glrm = H2OGeneralizedLowRankEstimator(\n k=50,\n transform='STANDARDIZE',\n loss='Quadratic',\n regularization_x='Quadratic',\n regularization_y='L1',\n gamma_x=0.25,\n gamma_y=0.5)\n\n # train GLRM\n cat_embed_glrm.train(training_frame=frame, x=cats)\n\n # plot iteration 
history to ensure convergence\n cat_embed_glrm.score_history().plot(x='iterations', y='objective', title='GLRM Score History')\n\n # extract embedded features\n cat_embed = h2o.get_frame(cat_embed_glrm._model_json['output']['representation_name'])\n \n return cat_embed", "Import data", "train = h2o.import_file('../../03_regression/data/train.csv')\ntest = h2o.import_file('../../03_regression/data/test.csv')\n\n# bug fix - from Keston\ndummy_col = np.random.rand(test.shape[0])\ntest = test.cbind(h2o.H2OFrame(dummy_col))\ncols = test.columns\ncols[-1] = 'SalePrice'\ntest.columns = cols\nprint(train.shape)\nprint(test.shape)\n\noriginal_nums, cats = get_type_lists(train)", "Split into train and validation (before doing data prep!!!)", "train, valid = train.split_frame([0.7], seed=12345)\nprint(train.shape)\nprint(valid.shape)", "Impute numeric missing using GLRM matrix completion\nTraining data", "train_num_impute = glrm_num_impute('training', train)\n\ntrain_num_impute.head()", "Validation data", "valid_num_impute = glrm_num_impute('validation', valid)", "Test data", "test_num_impute = glrm_num_impute('test', test)", "Embed categorical vars using GLRM\nTraining data", "train_cat_embed = glrm_cat_embed(train)", "Validation data", "valid_cat_embed = glrm_cat_embed(valid)", "Test data", "test_cat_embed = glrm_cat_embed(test)", "Merge imputed and embedded frames", "imputed_embedded_train = train[['Id', 'SalePrice']].cbind(train_num_impute).cbind(train_cat_embed)\nimputed_embedded_valid = valid[['Id', 'SalePrice']].cbind(valid_num_impute).cbind(valid_cat_embed)\nimputed_embedded_test = test[['Id', 'SalePrice']].cbind(test_num_impute).cbind(test_cat_embed)", "Redefine numerics and explore", "imputed_embedded_nums, cats = get_type_lists(imputed_embedded_train)\n\nprint('Imputed and encoded numeric training data:')\nimputed_embedded_train.describe() \nprint('--------------------------------------------------------------------------------')\nprint('Imputed and encoded 
numeric validation data:')\nimputed_embedded_valid.describe() \nprint('--------------------------------------------------------------------------------')\nprint('Imputed and encoded numeric test data:')\nimputed_embedded_test.describe()", "Train model on imputed, embedded features", "h2o.show_progress() # turn on progress bars\n\n# Check log transform - looks good\n%matplotlib inline\nimputed_embedded_train['SalePrice'].log().as_data_frame().hist()\n\n# Execute log transform\nimputed_embedded_train['SalePrice'] = imputed_embedded_train['SalePrice'].log()\nimputed_embedded_valid['SalePrice'] = imputed_embedded_valid['SalePrice'].log()\nprint(imputed_embedded_train[0:3, 'SalePrice'])", "Train GLM on imputed, embedded inputs", "alpha_opts = [0.01, 0.25, 0.5, 0.99] # always keep some L2\nhyper_parameters = {\"alpha\":alpha_opts}\n\n# initialize grid search\ngrid = H2OGridSearch(\n H2OGeneralizedLinearEstimator(\n family=\"gaussian\",\n lambda_search=True,\n seed=12345),\n hyper_params=hyper_parameters)\n \n# train grid\ngrid.train(y='SalePrice',\n x=imputed_embedded_nums, \n training_frame=imputed_embedded_train,\n validation_frame=imputed_embedded_valid)\n\n# show grid search results\nprint(grid.show())\n\nbest = grid.get_grid()[0]\nprint(best)\n \n# plot top frame values\nyhat_frame = imputed_embedded_valid.cbind(best.predict(imputed_embedded_valid))\nprint(yhat_frame[0:10, ['SalePrice', 'predict']])\n\n# plot sorted predictions\nyhat_frame_df = yhat_frame[['SalePrice', 'predict']].as_data_frame()\nyhat_frame_df.sort_values(by='predict', inplace=True)\nyhat_frame_df.reset_index(inplace=True, drop=True)\n_ = yhat_frame_df.plot(title='Ranked Predictions Plot')\n\n# Shutdown H2O - this will erase all your unsaved frames and models in H2O\nh2o.cluster().shutdown(prompt=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cgrudz/cgrudz.github.io
teaching/stat_775_2021_fall/activities/activity-2021-09-15.ipynb
mit
[ "Introduction to Python part IX (And a discussion of stochastic processes)\nActivity 1: Discussion of stochastic processes\n\nHow does a stochastic process extend the idea of a random vector? What are two additional considerations we have to make in this extension?\nWhat is a Gaussian process? What are two well-known examples of Gaussian processes?\nWhat properties define a Wiener process? How is this related to well-known physical models?\n\nActivity 2: Conditionals in Python\nIn our last lesson, we discovered something suspicious was going on in our inflammation data by drawing some plots. How can we use Python to automatically recognize the different features we saw, and take a different action for each? In this lesson, we’ll learn how to write code that runs only when certain conditions are true.", "num = 37\nif num > 100:\n print('greater')\nelse:\n print('not greater')\nprint('done')", "The second line of this code uses the keyword if to tell Python that we want to make a choice. If the test that follows the if statement is true, the body of the if (i.e., the set of lines indented underneath it) is executed, and “greater” is printed. If the test is false, the body of the else is executed instead, and “not greater” is printed. Only one or the other is ever executed before continuing on with program execution to print “done”:\n\nWe can also chain several tests together using elif, which is short for “else if”. 
The following Python code uses elif to print the sign of a number.", "num = -3\n\nif num > 0:\n print(num, 'is positive')\nelif num == 0:\n print(num, 'is zero')\nelse:\n print(num, 'is negative')", "Note that to test for equality we use a double equals sign == rather than a single equals sign = which is used to assign values.\nAlong with the > and == operators we have already used for comparing values in our conditionals, there are a few more options to know about:\n\n&gt;: greater than\n&lt;: less than\n==: equal to\n!=: does not equal\n&gt;=: greater than or equal to\n&lt;=: less than or equal to\n\nWe can also combine tests using and and or. and is only true if both parts are true:", "if (1 > 0) and (-1 >= 0):\n print('both parts are true')\nelse:\n print('at least one part is false')", "while or is true if at least one part is true:", "if (1 < 0) or (1 >= 0):\n print('at least one test is true')", "Activity 3: Checking our Data\nNow that we’ve seen how conditionals work, we can use them to check for the suspicious features we saw in our inflammation data. We are about to use functions provided by the numpy module again.", "import numpy as np\ndata = np.loadtxt(\"./swc-python/data/inflammation-01.csv\", delimiter=\",\")", "From the first couple of plots, we saw that maximum daily inflammation exhibits a strange behavior and raises one unit a day. Wouldn’t it be a good idea to detect such behavior and report it as suspicious? Let’s do that! 
However, instead of checking every single day of the study, let's merely check if maximum inflammation in the beginning (day 0) and in the middle (day 20) of the study are equal to the corresponding day numbers.", "max_inflammation_0 = np.max(data, axis=0)[0]\nmax_inflammation_20 = np.max(data, axis=0)[20]\n\nif max_inflammation_0 == 0 and max_inflammation_20 == 20:\n print('Suspicious looking maxima!')", "We also saw a different problem in the third dataset; the minima per day were all zero (looks like a healthy person snuck into our study). We can also check for this with an elif condition:", "if max_inflammation_0 == 0 and max_inflammation_20 == 20:\n print('Suspicious looking maxima!')\nelif np.sum(np.min(data, axis=0)) == 0:\n print('Minima add up to zero!')", "And if neither of these conditions are true, we can use else to give the all-clear:", "if max_inflammation_0 == 0 and max_inflammation_20 == 20:\n print('Suspicious looking maxima!')\nelif np.sum(np.min(data, axis=0)) == 0:\n print('Minima add up to zero!')\nelse:\n print('Seems OK!')", "Exercise:\nUsing glob, loop over the file names and check each of the files in the loop with the above if / else statements. Print out the file name simultaneously to keep track of which file we are studying, and make sure these are sorted." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Naereen/Lempel-Ziv_Complexity
Short_study_of_the_Lempel-Ziv_complexity.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Short-study-of-the-Lempel-Ziv-complexity\" data-toc-modified-id=\"Short-study-of-the-Lempel-Ziv-complexity-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Short study of the Lempel-Ziv complexity</a></div><div class=\"lev2 toc-item\"><a href=\"#Short-definition\" data-toc-modified-id=\"Short-definition-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Short definition</a></div><div class=\"lev2 toc-item\"><a href=\"#Python-implementation\" data-toc-modified-id=\"Python-implementation-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Python implementation</a></div><div class=\"lev2 toc-item\"><a href=\"#Tests-(1/2)\" data-toc-modified-id=\"Tests-(1/2)-13\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Tests (1/2)</a></div><div class=\"lev2 toc-item\"><a href=\"#Cython-implementation\" data-toc-modified-id=\"Cython-implementation-14\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Cython implementation</a></div><div class=\"lev2 toc-item\"><a href=\"#Numba-implementation\" data-toc-modified-id=\"Numba-implementation-15\"><span class=\"toc-item-num\">1.5&nbsp;&nbsp;</span>Numba implementation</a></div><div class=\"lev2 toc-item\"><a href=\"#Tests-(2/2)\" data-toc-modified-id=\"Tests-(2/2)-16\"><span class=\"toc-item-num\">1.6&nbsp;&nbsp;</span>Tests (2/2)</a></div><div class=\"lev2 toc-item\"><a href=\"#Benchmarks\" data-toc-modified-id=\"Benchmarks-17\"><span class=\"toc-item-num\">1.7&nbsp;&nbsp;</span>Benchmarks</a></div><div class=\"lev2 toc-item\"><a href=\"#Complexity-?\" data-toc-modified-id=\"Complexity-?-18\"><span class=\"toc-item-num\">1.8&nbsp;&nbsp;</span>Complexity ?</a></div><div class=\"lev2 toc-item\"><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-19\"><span class=\"toc-item-num\">1.9&nbsp;&nbsp;</span>Conclusion</a></div><div class=\"lev2 toc-item\"><a href=\"#(Experimental)-Julia-implementation\" 
data-toc-modified-id=\"(Experimental)-Julia-implementation-110\"><span class=\"toc-item-num\">1.10&nbsp;&nbsp;</span>(Experimental) <a href=\"http://julialang.org\" target=\"_blank\">Julia</a> implementation</a></div><div class=\"lev2 toc-item\"><a href=\"#Ending-notes\" data-toc-modified-id=\"Ending-notes-111\"><span class=\"toc-item-num\">1.11&nbsp;&nbsp;</span>Ending notes</a></div>\n\n# Short study of the Lempel-Ziv complexity\n\nIn this short [Jupyter notebook](https://www.Jupyter.org/) aims at defining and explaining the [Lempel-Ziv complexity](https://en.wikipedia.org/wiki/Lempel-Ziv_complexity).\n\n[I](http://perso.crans.org/besson/) will give examples, and benchmarks of different implementations.\n\n- **Reference:** Abraham Lempel and Jacob Ziv, *« On the Complexity of Finite Sequences »*, IEEE Trans. on Information Theory, January 1976, p. 75–81, vol. 22, n°1.\n\n----\n## Short definition\nThe Lempel-Ziv complexity is defined as the number of different substrings encountered as the stream is viewed from begining to the end.\n\nAs an example:\n\n```python\n>>> s = '1001111011000010'\n>>> lempel_ziv_complexity(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010\n8\n```\n\nMarking in the different substrings, this sequence $s$ has complexity $\\mathrm{Lempel}$-$\\mathrm{Ziv}(s) = 6$ because $s = 1001111011000010 = 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010$.\n\n- See the page https://en.wikipedia.org/wiki/Lempel-Ziv_complexity for more details.\n\nOther examples:\n\n```python\n>>> lempel_ziv_complexity('1010101010101010') # 1, 0, 10, 101, 01, 010, 1010\n7\n>>> lempel_ziv_complexity('1001111011000010000010') # 1, 0, 01, 11, 10, 110, 00, 010, 000\n9\n>>> lempel_ziv_complexity('100111101100001000001010') # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101\n10\n```\n\n----\n## Python implementation", "def lempel_ziv_complexity(sequence):\n \"\"\"Lempel-Ziv complexity for a binary sequence, in simple Python code.\"\"\"\n sub_strings = set()\n n = len(sequence)\n ind = 0\n inc = 1\n # 
this while loop runs at most n times\n while True:\n if ind + inc > len(sequence):\n break\n # this can take some time, takes O(inc)\n sub_str = sequence[ind : ind + inc]\n # and this also, takes a O(log |size set|) in worst case\n # max value for inc = n / size set at the end\n # so worst case is that the set contains sub strings of the same size\n # and the worst loop takes a O(n / |S| * log(|S|))\n # ==> so if n/|S| is constant, it gives O(n log(n)) at the end\n # but if n/|S| = O(n) then it gives O(n^2)\n if sub_str in sub_strings:\n inc += 1\n else:\n sub_strings.add(sub_str)\n ind += inc\n inc = 1\n return len(sub_strings)", "Tests (1/2)", "s = '1001111011000010'\nlempel_ziv_complexity(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010\n\n%timeit lempel_ziv_complexity(s)\n\nlempel_ziv_complexity('1010101010101010') # 1, 0, 10, 101, 01, 010, 1010\n\nlempel_ziv_complexity('1001111011000010000010') # 1, 0, 01, 11, 10, 110, 00, 010, 000\n\nlempel_ziv_complexity('100111101100001000001010') # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101\n\n%timeit lempel_ziv_complexity('100111101100001000001010')\n\nimport random\n\ndef random_string(size, alphabet=\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"):\n return \"\".join(random.choices(alphabet, k=size))\n\ndef random_binary_sequence(size):\n return random_string(size, alphabet=\"01\")\n\nrandom_string(100)\nrandom_binary_sequence(100)\n\nfor (r, name) in zip(\n [random_string, random_binary_sequence],\n [\"random strings in A..Z\", \"random binary sequences\"]\n ):\n print(\"\\nFor {}...\".format(name))\n for n in [10, 100, 1000, 10000, 100000]:\n print(\" of sizes {}, Lempel-Ziv complexity runs in:\".format(n))\n %timeit lempel_ziv_complexity(r(n))", "We can start to see that the time complexity of this function seems to grow linearly as the size grows.\n\nCython implementation\nAs this blog post explains it, we can easily try to use Cython in a notebook cell.\n\nSee the Cython documentation for more information.", "%load_ext 
cython\n\n%%cython\nimport cython\n\nctypedef unsigned int DTYPE_t\n\n@cython.boundscheck(False) # turn off bounds-checking for entire function, quicker but less safe\ndef lempel_ziv_complexity_cython(str sequence not None):\n \"\"\"Lempel-Ziv complexity for a string, in simple Cython code (C extension).\"\"\"\n \n cdef set sub_strings = set()\n cdef str sub_str = \"\"\n cdef DTYPE_t n = len(sequence)\n cdef DTYPE_t ind = 0\n cdef DTYPE_t inc = 1\n while True:\n if ind + inc > len(sequence):\n break\n sub_str = sequence[ind : ind + inc]\n if sub_str in sub_strings:\n inc += 1\n else:\n sub_strings.add(sub_str)\n ind += inc\n inc = 1\n return len(sub_strings)", "Let's try it!", "s = '1001111011000010'\nlempel_ziv_complexity_cython(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010\n\n%timeit lempel_ziv_complexity(s)\n%timeit lempel_ziv_complexity_cython(s)\n\nlempel_ziv_complexity_cython('1010101010101010') # 1, 0, 10, 101, 01, 010, 1010\n\nlempel_ziv_complexity_cython('1001111011000010000010') # 1, 0, 01, 11, 10, 110, 00, 010, 000\n\nlempel_ziv_complexity_cython('100111101100001000001010') # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101", "Now for a speed test:", "for (r, name) in zip(\n [random_string, random_binary_sequence],\n [\"random strings in A..Z\", \"random binary sequences\"]\n ):\n print(\"\\nFor {}...\".format(name))\n for n in [10, 100, 1000, 10000, 100000]:\n print(\" of size {}, Lempel-Ziv complexity in Cython runs in:\".format(n))\n %timeit lempel_ziv_complexity_cython(r(n))", "$\\implies$ Yay! It seems faster indeed, 
but only about 2x faster...\n\n\nNumba implementation\nAs this blog post explains, we can also try to use Numba in a notebook cell.", "from numba import jit\n\n@jit\ndef lempel_ziv_complexity_numba(sequence : str) -> int:\n \"\"\"Lempel-Ziv complexity for a sequence, in Python code using numba.jit() for automatic speedup (hopefully).\"\"\"\n\n sub_strings = set()\n n : int = len(sequence)\n\n ind : int = 0\n inc : int = 1\n while True:\n if ind + inc > len(sequence):\n break\n sub_str : str = sequence[ind : ind + inc]\n if sub_str in sub_strings:\n inc += 1\n else:\n sub_strings.add(sub_str)\n ind += inc\n inc = 1\n return len(sub_strings)", "Let's try it!", "s = '1001111011000010'\nlempel_ziv_complexity_numba(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010\n\n%timeit lempel_ziv_complexity_numba(s)\n\nlempel_ziv_complexity_numba('1010101010101010') # 1, 0, 10, 101, 01, 010, 1010\n\nlempel_ziv_complexity_numba('1001111011000010000010') # 1, 0, 01, 11, 10, 110, 00, 010, 000\n\nlempel_ziv_complexity_numba('100111101100001000001010') # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101\n\n%timeit lempel_ziv_complexity_numba('100111101100001000001010')", "$\\implies$ Well... 
It doesn't seem much faster than the naive Python code.\nWe specified the signature when calling @numba.jit, and used the most appropriate data structure (strings are probably the smallest, numpy arrays are probably faster).\nBut even these tricks didn't help much.\nI tested, and without specifying the signature, the fastest approach is using strings, compared to using lists or numpy arrays.\nNote that the @jit-powered function is compiled at runtime when first called, so the signature used for the first call determines the signature used by the compiled function.\n\n\nTests (2/2)\nTo test more robustly, let us generate some (uniformly) random binary sequences.", "from numpy.random import binomial\n\ndef bernoulli(p, size=1):\n \"\"\"One or more samples from a Bernoulli of probability p.\"\"\"\n return binomial(1, p, size)\n\nbernoulli(0.5, 20)", "That's probably not optimal, but we can generate a string with:", "''.join(str(i) for i in bernoulli(0.5, 20))\n\ndef random_binary_sequence(n, p=0.5):\n \"\"\"Random binary sequence of size n, where each bit is 1 with probability p.\"\"\"\n return ''.join(str(i) for i in bernoulli(p, n))\n\nrandom_binary_sequence(50)\nrandom_binary_sequence(50, p=0.1)\nrandom_binary_sequence(50, p=0.25)\nrandom_binary_sequence(50, p=0.5)\nrandom_binary_sequence(50, p=0.75)\nrandom_binary_sequence(50, p=0.9)", "And so, this function can be used to check that the three implementations (naive, Cython-powered, Numba-powered) always give the same result.", "def tests_3_functions(n, p=0.5, debug=True):\n s = random_binary_sequence(n, p=p)\n c1 = lempel_ziv_complexity(s)\n if debug:\n print(\"Sequence s = {} ==> complexity C = {}\".format(s, c1))\n c2 = lempel_ziv_complexity_cython(s)\n c3 = lempel_ziv_complexity_numba(s)\n assert c1 == c2 == c3, \"Error: the sequence {} gave different values of the Lempel-Ziv complexity from 3 functions ({}, {}, {})...\".format(s, c1, c2, c3)\n return 
c1\n\ntests_3_functions(5)\n\ntests_3_functions(20)\n\ntests_3_functions(50)\n\ntests_3_functions(500)\n\ntests_3_functions(5000)", "Benchmarks\nOn two examples of strings (binary sequences), we can compare our three implementations.", "%timeit lempel_ziv_complexity('100111101100001000001010')\n%timeit lempel_ziv_complexity_cython('100111101100001000001010')\n%timeit lempel_ziv_complexity_numba('100111101100001000001010')\n\n%timeit lempel_ziv_complexity('10011110110000100000101000100100101010010111111011001111111110101001010110101010')\n%timeit lempel_ziv_complexity_cython('10011110110000100000101000100100101010010111111011001111111110101001010110101010')\n%timeit lempel_ziv_complexity_numba('10011110110000100000101000100100101010010111111011001111111110101001010110101010')", "Let's check the time used by all three functions, for longer and longer sequences:", "%timeit tests_3_functions(10, debug=False)\n%timeit tests_3_functions(20, debug=False)\n%timeit tests_3_functions(40, debug=False)\n%timeit tests_3_functions(80, debug=False)\n%timeit tests_3_functions(160, debug=False)\n%timeit tests_3_functions(320, debug=False)\n\ndef test_cython(n):\n s = random_binary_sequence(n)\n c = lempel_ziv_complexity_cython(s)\n return c\n\n%timeit test_cython(10)\n%timeit test_cython(20)\n%timeit test_cython(40)\n%timeit test_cython(80)\n%timeit test_cython(160)\n%timeit test_cython(320)\n\n%timeit test_cython(640)\n%timeit test_cython(1280)\n%timeit test_cython(2560)\n%timeit test_cython(5120)\n\n%timeit test_cython(10240)\n%timeit test_cython(20480)", "Complexity?\n$\\implies$ The function lempel_ziv_complexity_cython indeed seems to be (almost) linear in $n$, the length of the binary sequence $S$.\nBut let's check more precisely, as it could also have a complexity of $\\mathcal{O}(n \\log n)$.", "import matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nsns.set(context=\"notebook\", style=\"darkgrid\", palette=\"hls\", font=\"sans-serif\", 
font_scale=1.4)\n\nimport numpy as np\nimport timeit\n\nsizes = np.array(np.trunc(np.logspace(1, 6, 30)), dtype=int)\n\ntimes = np.array([\n timeit.timeit(\n stmt=\"lempel_ziv_complexity_cython(random_string({}))\".format(n),\n globals=globals(),\n number=10,\n )\n for n in sizes\n])\n\nplt.figure(figsize=(15, 10))\nplt.plot(sizes, times, 'o-')\nplt.xlabel(\"Length $n$ of the binary sequence $S$\")\nplt.ylabel(r\"Time in $\\mu\\;\\mathrm{s}$\")\nplt.title(\"Time complexity of Lempel-Ziv complexity\")\nplt.show()\n\nplt.figure(figsize=(15, 10))\nplt.loglog(sizes, times, 'o-')\nplt.xlabel(\"Length $n$ of the binary sequence $S$\")\nplt.ylabel(r\"Time in $\\mu\\;\\mathrm{s}$\")\nplt.title(\"Time complexity of Lempel-Ziv complexity, loglog scale\")\nplt.show()", "The plot is a straight line in $\\log$-$\\log$ scale, so indeed the algorithm seems to have an (almost) linear complexity.\nTo sum up, for a sequence $S$ of length $n$, it takes $\\mathcal{O}(n)$ basic operations to compute its Lempel-Ziv complexity $\\mathrm{Lempel}-\\mathrm{Ziv}(S)$.\n\nConclusion\n\n\nThe Lempel-Ziv complexity is not too hard to implement, and it indeed represents a certain complexity of a binary sequence, capturing the regularity and reproducibility of the sequence.\n\n\nUsing Cython was quite useful, giving a $\\simeq \\times 100$ speed-up over our naive pure-Python implementation!\n\n\nThe algorithm is not easy to analyze: we have a trivial $\\mathcal{O}(n^2)$ bound, but experiments showed it is more likely to be $\\mathcal{O}(n \\log n)$ in the worst case, and $\\mathcal{O}(n)$ in practice for \"not too complicated sequences\" (or on average, for random sequences).\n\n\n\n(Experimental) Julia implementation\nI want to (quickly) try to see if I can use Julia to write a faster version of this function.\nSee issue #1.", "%%time\n%%script julia\n\n\"\"\"Lempel-Ziv complexity for a sequence, in simple Julia code.\"\"\"\nfunction lempel_ziv_complexity(sequence)\n sub_strings = Set()\n n = length(sequence)\n\n ind = 1\n inc 
= 1\n while true\n if ind + inc - 1 > n\n break\n end\n # Julia ranges are 1-based and inclusive, so this substring has length inc\n sub_str = sequence[ind : ind + inc - 1]\n if sub_str in sub_strings\n inc += 1\n else\n push!(sub_strings, sub_str)\n ind += inc\n inc = 1\n end\n end\n return length(sub_strings)\nend\n\ns = \"1001111011000010\"\nlempel_ziv_complexity(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010\n\nM = 1000;\nN = 10000;\nfor _ in 1:M\n s = join(rand(0:1, N));\n lempel_ziv_complexity(s);\nend\nlempel_ziv_complexity(s)", "And to compare it fairly, let us run the naive Python version with PyPy.", "%%time\n%%pypy\n\ndef lempel_ziv_complexity(sequence):\n \"\"\"Lempel-Ziv complexity for a binary sequence, in simple Python code.\"\"\"\n sub_strings = set()\n n = len(sequence)\n\n ind = 0\n inc = 1\n while True:\n if ind + inc > len(sequence):\n break\n sub_str = sequence[ind : ind + inc]\n if sub_str in sub_strings:\n inc += 1\n else:\n sub_strings.add(sub_str)\n ind += inc\n inc = 1\n return len(sub_strings)\n\ns = \"1001111011000010\"\nlempel_ziv_complexity(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010\n\nfrom random import random\n\nM = 1000\nN = 10000\nfor _ in range(M):\n s = ''.join(str(int(random() < 0.5)) for _ in range(N))\n lempel_ziv_complexity(s)", "So we can check that, on these 1000 random trials on strings of size 10000, the naive Julia version is slower than the naive Python version (executed with PyPy for speed).\n\nEnding notes\n\nThanks for reading!\nMy implementation is now open-source and available on GitHub, on https://github.com/Naereen/Lempel-Ziv_Complexity.\nIt will be available from PyPI very soon, see https://pypi.python.org/pypi/lempel_ziv_complexity.\nSee this repo on GitHub for more notebooks, or on nbviewer.jupyter.org.\nThat's it for this demo! See you, folks!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kubeflow/code-intelligence
Issue_Embeddings/notebooks/04_Inference.ipynb
mit
[ "Location of Model Artifacts\nGoogle Cloud Storage\n\n\nmodel for inference (965 MB): https://storage.googleapis.com/issue_label_bot/model/lang_model/models_22zkdqlr/trained_model_22zkdqlr.hdf\n\n\nencoder (for fine-tuning w/a classifier) (965 MB): \nhttps://storage.googleapis.com/issue_label_bot/model/lang_model/models_22zkdqlr/trained_model_encoder_22zkdqlr.pth\n\n\nfastai.databunch (27.1 GB):\nhttps://storage.googleapis.com/issue_label_bot/model/lang_model/data_save.hdf\n\n\ncheckpointed model (2.29 GB): \nhttps://storage.googleapis.com/issue_label_bot/model/lang_model/models_22zkdqlr/best_22zkdqlr.pth\n\n\nWeights & Biases Run\nhttps://app.wandb.ai/github/issues_lang_model/runs/22zkdqlr/overview\nPart 1: Load Full Model + DataBunch In Order To Save Model For Inference\nA fastai learner comes packaged with the training data and other data; however, for inference we don't need this. There is a way to export just the model weights without the data with learn.export, or just the encoder base with learn.save_encoder. 
Unfortunately, I forgot to do this during model training; therefore, we need to load the full checkpointed model and a databunch, and save these artifacts for inference.", "import os\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"1\"\n\n\nfrom pathlib import Path\nfrom fastai.basic_train import load_learner\nfrom fastai.text import load_data\nfrom fastai.text.learner import language_model_learner\nfrom fastai.text.models import AWD_LSTM, awd_lstm_lm_config\n\nemb_sz=800\nqrnn=False\nbidir=False\nn_layers=4\nn_hid=2400\n\n\n# https://app.wandb.ai/github/issues_lang_model/runs/22zkdqlr/overview\ndata_path = Path('/ds/lang_model')\nmodel_path = data_path/'models_22zkdqlr'\n\n\ndef pass_through(x):\n return x\n\n\nawd_lstm_lm_config.update(dict(emb_sz=emb_sz, qrnn=qrnn, bidir=bidir, n_layers=n_layers, n_hid=n_hid))", "Load learner object\nNote: you don't have to do this over and over again; you just have to call learn.export() to save the learner after you have loaded everything.", "data_lm = load_data(data_path, bs=96)\n\nlearn = language_model_learner(data=data_lm,\n arch=AWD_LSTM,\n model_dir=model_path,\n pretrained=False)", "Load weights of trained model", "learn.load('best_22zkdqlr')", "Export Minimal Model State For Inference", "learn.export('trained_model_22zkdqlr.hdf')\n\nlearn.save_encoder('trained_model_encoder_22zkdqlr')", "The data is very large, so if you are running this notebook, it is best to release memory by deleting these objects and loading the more lightweight inference artifacts that we just saved.", "del learn\ndel data_lm", "Part II: Load Minimal Model For Inference", "from inference import InferenceWrapper, pass_through", "Create an InferenceWrapper object", "wrapper = InferenceWrapper(model_path='/ds/lang_model/models_22zkdqlr/',\n )\n\nissue_string = '# hello abacadabra world \\nA second line **something bold**.'\n\npooledfeat = 
wrapper.get_pooled_features(issue_string)\nprint(pooledfeat)\nprint(pooledfeat.shape)\n\nrawfeat = wrapper.get_raw_features(issue_string)\nprint(rawfeat)\nprint(rawfeat.shape)", "Predict the next 5 words\nWe don't actually use this functionality, but it is interesting, for those who are curious, to see what the output of a language model looks like. Recall that we are using the encoder of the language model to extract features from GitHub issues.", "wrapper.learn.predict('I am having trouble opening a', 5)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]