repo_name
path
license
cells
types
Micromeritics/report-models-python
documentation/isotherm.ipynb
gpl-3.0
[ "What is an Adsorption Isotherm?\nIn a typical isotherm analysis, a small sample of material, which one wants to determine the surface properties of, is placed in a test tube and put under vacuum. A known species of gas such as Nitrogen is then incrementally dosed to the sample tube with each increment followed by a measurement of the equilibrated pressure $P$. Much of the dosed analysis gas just fills the free space in the sample tube resulting in an increase in gas pressure, but some fraction of the dosed gas is adsorbed to the surface of the sample material, also finding its way into any pores on the surface of the sample. Using the measurement of the pressures before and after the dose, and using knowledge of the exact amount of gas that was dosed, one may then infer the quantity of gas $Q_{ads}$ that is adsorbed onto the sample. The series of controlled doses all happens at a nearly constant temperature (typically 77K for Nitrogen analysis), and the convention is to refer to the collected $P$ vs. $Q_{ads}$ data as an \"Isotherm.\" The above description is somewhat simplified, and the details of the experimental method, the apparatus, and the determination of the quantity adsorbed is explained in detail in many references such as [1].\nThe quantity of gas adsorbed onto the sample's surface can be expressed as the number of moles of gas $n$. However the convention is to use the ideal gas law at standard temperature and pressure to represent this quantity as a volume of gas $V = nR T_{STD}/P_{STD}$. Then dividing by the mass of the sample, the quantities adsorbed $Q_{ads}$ are typically reported in units of $\\textrm{cm}^3 / \\textrm{g STP}$. This is the system of units employed here. When discussing particular models however, the number of moles $n$ or the number of molecules $N$ will be employed.\nAnother convention for isotherm data is to convert absolute pressures to relative pressures. Specifically, for analysis gases which can condense to a liquid at the analysis temperature, it is the relative pressure which is typically reported rather than the absolute pressure. The relative pressure is simply $P_{rel} = \\frac{P_{abs}}{P_0}$ where $P_{abs}$ is the absolute pressure measured in millimeters Mercury (mmHg) or some other pressure unit and $P_0$ is the saturation pressure of the analysis gas which is also typically measured in the course of the experiment. The relative pressure is a dimensionless quantity. Many gas adsorption calculations will us relative pressure units rather than the absolute pressure.\n[1] Webb, Paul A., and Clyde Orr. Analytical methods in fine particle technology. Micromeritics Instrument Corp, 1997.\nIsotherm adsorption data shown in a few representations\nA few isotherms from reference data sets are shown below. These example sets are available on github report-models-python in the 'micromeritics' python package. 
Isotherm data may be obtained from other online resources as well.\nFirst we show a few data sets using a linear scale with relative pressure as the independent variable", "%matplotlib inline\nfrom micromeritics import util\nfrom micromeritics import isotherm_examples as ex\nimport matplotlib.pyplot as plt\n\ncarb = ex.carbon_black() # example isotherm of Carbon Black with N2 at 77K\nsial = ex.silica_alumina() # example isotherm of Silica Alumina with N2 at 77K\nmcm = ex.mcm_41() # example isotherm of MCM 41 with N2 at 77K\n\nfig = plt.figure(figsize=(12,5))\naxes = fig.add_subplot(111)\nplt.title('Isotherm Plot')\nplt.ylabel(\"Quantity Adsorbed (cm^3/g STP)\")\nplt.xlabel(\"Relative Pressure\")\nplt.gca().set_xscale('linear')\nplt.plot( carb.Prel, carb.Qads, 'ro', label='Carbon Black with N2 at 77K' )\nplt.plot( sial.Prel, sial.Qads, 'bo-', label='Silica Alumina with N2 at 77K')\nplt.plot( mcm.Prel, mcm.Qads, 'go-', label='MCM 41 with N2 at 77K')\nlegend = axes.legend(loc='upper left', shadow=True)\nplt.show()", "It is also useful to show the isotherm with the Pressure axis scaled as logarithmic.", "fig = plt.figure(figsize=(12,5))\naxes = fig.add_subplot(111)\nplt.title('Isotherm Plot')\nplt.ylabel(\"Quantity Adsorbed (cm^3/g STP)\")\nplt.xlabel(\"Relative Pressure\")\n\nplt.gca().set_xscale('log')\nplt.plot( carb.Prel, carb.Qads, 'ro', label='Carbon Black with N2 at 77K' )\nplt.plot( sial.Prel, sial.Qads, 'bo-', label='Silica Alumina with N2 at 77K')\nplt.plot( mcm.Prel, mcm.Qads, 'go-', label='MCM 41 with N2 at 77K')\nlegend = axes.legend(loc='upper left', shadow=True)\nplt.show()", "While it is more common to show isotherm data using relative pressure, it is also worthwhile to have the absolute pressures available. Below is an example data set for ZSM-5 analyzed with argon gas at 87K shown with absolute pressure as the independent variable.", "zsm = ex.zsm_5() # example isotherm of ZSM-5 with Ar at 87K\nfig = plt.figure(figsize=(12,5))\naxes = fig.add_subplot(111)\nplt.title('Isotherm Plot')\nplt.ylabel(\"Quantity Adsorbed (cm^3/g STP)\")\nplt.xlabel(\"Absolute Pressure (mmHg)\")\nplt.gca().set_xscale('log')\nplt.plot( zsm.Pabs, zsm.Qads, 'ro', label='ZSM-5 with Ar at 87K' )\nlegend = axes.legend(loc='upper left', shadow=True)\nplt.show()", "A note about relative pressures\nConverting between absolute and relative pressure is often simply a matter of multiplying or dividing by a single measurement of the average saturation pressure, but in some cases the saturation pressure will deviate slightly from the average from one data point to the next, owing to small deviations in the actual analysis temperature at the time of these measurements. In cases where a high level of precision is required, the saturation pressure can be determined at every pressure measurement in the isotherm, and the relative pressure is thus computed uniquely for each data point." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
colour-science/colour-hdri
colour_hdri/examples/examples_merge_from_raw_files.ipynb
bsd-3-clause
[ "Colour - HDRI - Examples: Merge from Raw Files\nThrough this example, some Canon EOS 5D Mark II CR2 files will be merged together in order to create a single radiance image.\nThe following steps will be taken:\n\nConversion of the CR2 files to DNG files using Adobe DNG Converter.\nConversion of the DNG files to intermediate demosaiced linear Tiff files using Dave Coffin's dcraw.\nCreation of an image stack using DNG and intermediate Tiff files:\nReading of the DNG files Exif metadata using Phil Harvey's ExifTool.\nReading of the intermediate Tiff files pixel data using OpenImageIO.\nWhite balancing of the intermediate Tiff files.\nConversion of the intermediate Tiff files to RGB display colourspace.\n\n\nMerging of the image stack into a radiance image.\nDisplay of the final resulting radiance image.\n\n\nNote: Some steps can be performed using alternative methods or simplified, for instance the DNG conversion can be entirely avoided. Our interest here is to retrieve the camera levels and the Adobe DNG camera colour profiling data.\n\nCR2 Files Conversion to DNG and Intermediate Files", "import logging\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\n\nimport colour\n\nfrom colour_hdri import (\n EXAMPLES_RESOURCES_DIRECTORY,\n Image,\n ImageStack,\n camera_space_to_sRGB,\n convert_dng_files_to_intermediate_files,\n convert_raw_files_to_dng_files,\n filter_files,\n read_exif_tag,\n image_stack_to_radiance_image,\n update_exif_tags,\n weighting_function_Debevec1997,\n)\nfrom colour_hdri.plotting import plot_radiance_image_strip\n\nlogging.basicConfig(level=logging.INFO)\n\n\nRESOURCES_DIRECTORY = os.path.join(\n EXAMPLES_RESOURCES_DIRECTORY, \"frobisher_001\"\n)\n\ncolour.plotting.colour_style()\n\ncolour.utilities.describe_environment();\n\nRAW_FILES = filter_files(RESOURCES_DIRECTORY, (\"CR2\",))\n\nDNG_FILES = convert_raw_files_to_dng_files(RAW_FILES, RESOURCES_DIRECTORY)\n\nXYZ_TO_CAMERA_SPACE_MATRIX = colour.utilities.as_float_array(\n [\n float(M_c)\n for M_c in read_exif_tag(DNG_FILES[-2], \"ColorMatrix2\").split()\n ]\n).reshape((3, 3))\n\n# In order to avoid artefacts, white balancing should be peformed before\n# demosaicing thus we need to pass appropriate gains to *dcraw*.\nWHITE_BALANCE_MULTIPLIERS = colour.utilities.as_float_array(\n [\n float(M_c)\n for M_c in read_exif_tag(DNG_FILES[-2], \"AsShotNeutral\").split()\n ]\n)\n\nWHITE_BALANCE_MULTIPLIERS = 1 / WHITE_BALANCE_MULTIPLIERS\n\nRAW_CONVERTER_ARGUMENTS = (\n '-t 0 -H 1 -r {0} {1} {2} {1} -4 -q 3 -o 0 -T \"{{raw_file}}\"'.format(\n *WHITE_BALANCE_MULTIPLIERS\n )\n)\n\nINTERMEDIATE_FILES = convert_dng_files_to_intermediate_files(\n DNG_FILES,\n RESOURCES_DIRECTORY,\n raw_converter_arguments=RAW_CONVERTER_ARGUMENTS,\n)\n\nupdate_exif_tags(zip(DNG_FILES, INTERMEDIATE_FILES))\n\ncolour.plotting.plot_image(\n colour.cctf_encoding(\n colour.read_image(str(INTERMEDIATE_FILES[-2]))[\n 1250:2250, 3000:4000, ...\n ]\n ),\n text_kwargs={\"text\": os.path.basename(INTERMEDIATE_FILES[-2])},\n);", "Radiance Image Merge", "def merge_from_raw_files(\n dng_files,\n output_directory,\n batch_size=5,\n white_balance_multipliers=None,\n weighting_function=weighting_function_Debevec1997,\n):\n paths = []\n for dng_files in colour.utilities.batch(dng_files, batch_size):\n image_stack = ImageStack()\n for dng_file in dng_files:\n image = Image(dng_file)\n image.read_metadata()\n image.path = str(dng_file.replace(\"dng\", \"tiff\"))\n image.read_data()\n\n image.data = camera_space_to_sRGB(\n image.data * 
np.max(WHITE_BALANCE_MULTIPLIERS),\n XYZ_TO_CAMERA_SPACE_MATRIX,\n )\n image_stack.append(image)\n\n path = os.path.join(\n output_directory,\n \"{0}_{1}_MRF.{2}\".format(\n os.path.splitext(os.path.basename(image_stack.path[0]))[0],\n batch_size,\n \"exr\",\n ),\n )\n paths.append(path)\n\n logging.info('Merging \"{0}\"...'.format(path))\n logging.info(\n '\\tImage stack \"F Number\" (Exif): {0}'.format(image_stack.f_number)\n )\n logging.info(\n '\\tImage stack \"Exposure Time\" (Exif): {0}'.format(\n image_stack.exposure_time\n )\n )\n logging.info('\\tImage stack \"ISO\" (Exif): {0}'.format(image_stack.iso))\n image = image_stack_to_radiance_image(\n image_stack, weighting_function, weighting_average=True\n )\n\n logging.info('Writing \"{0}\"...'.format(path))\n colour.write_image(image, path)\n\n return paths\n\n\nPATHS = merge_from_raw_files(DNG_FILES, RESOURCES_DIRECTORY)", "Radiance Image Display", "plot_radiance_image_strip(colour.read_image(PATHS[0]));" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
oseledets/talks-online
kaiserslautern-2018/lecture-1.ipynb
cc0-1.0
[ "from traitlets.config.manager import BaseJSONConfigManager\n#path = \"/home/damian/miniconda3/envs/rise_latest/etc/jupyter/nbconfig\"\ncm = BaseJSONConfigManager()\ncm.update('livereveal', {\n 'theme': 'sky',\n 'transition': 'zoom',\n 'start_slideshow_at': 'selected',\n 'scroll': True\n})", "Multivariate function approximation: methods and tools\nOverall plan\n\n\n(Today) Multivariate function approximation: curse of dimensionality, polynomial chaos, optimal experiment design, connection to linear algebra.\n\n\nTensor decompositions:\n\n\nDeep learning methods.\n\n\nUncertainty quantification\n<img src='entire_process.png'>\nUncertainty quantification\nNumerical simulations assume models of real world; these models pick uncertainties\n\nIn coefficients\nIn right-hand sides\nModels are approximate\n\nForward and inverse problems\nUQ splits into two major branches: forward and inverse problems\nRoughly speaking, UQ divides into two major branches, forward and inverse problems. \nForward problems\nIn the forward propagation of uncertainty, we have a known model F for a system of interest. \nWe model its inputs $X$ as a random variable and wish to understand the output random variable \n$$Y = F(X)$$\n(also denoted $Y \\vert X$) and reads $Y$ given $X$. \nAlso, this is related to sensitivity analysis (how random variations in $X$ influence variation in $Y$).\nInverse problems\nIn inverse problems, $F$ is a forward model, but $Y$ is observed data, and we want to find the input data $X$ \nsuch that $F(X) = Y$, i.e. we want $X \\vert Y$ instead of $Y \\vert X$. \nInverse problems are typically ill-posed in the usual sense, so we need an expert (or prior) \nabout what a good solution $X$ might be.\nBayesian perspective becomes the method of choice, but this requires the representation of high-dimensional distributions.\n$$p(X \\vert Y) = \\frac{p(Y \\vert X) p(X)}{p(Y)}.$$\nApproximation of multivariate functions\nIf we want to do efficient UQ (not only Monte-Carlo) we need efficient tools for the approximation of multivariate functions.\nCurse of dimensionality\nComplexity to approximation a $d$-variate function grows exponentially with $d$.\nMethods that one case use\n\nPolynomial / Fourier type approximations \nSparse polynomial approximations / best N-term approximations\nANOVA decomposition / sparse grids\nGaussian process regression\nTensor decompositions (Friday)\nDeep learning (Friday)\n\nConsider orthogonal polynomials ${p_n}$\n$$\n\\langle p_n,\\, p_m \\rangle = \\int_a^bp_n(x)p_m(x)w(x)\\,dx=\\delta_{nm}h_n.\n$$\n - Chebyshev polynomials of the first kind, $(a,\\,b)=(-1,\\,1)$, $w(x)=\\left(1-x^2\\right)^{-1/2}$\n - Hermite polynomials (mathematical or probabilistic), $(a,\\,b)=(-\\infty,\\,+\\infty)$, $w(x)=\\frac1{\\sqrt{2\\pi}}\\exp\\left(-x^2/2\\right)$", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.polynomial import Chebyshev as T\nfrom numpy.polynomial.hermite import hermval\n%matplotlib inline\n\ndef p_cheb(x, n):\n \"\"\"\n RETURNS T_n(x)\n value of not normalized Chebyshev polynomials\n $\\int \\frac1{\\sqrt{1-x^2}}T_m(x)T_n(x) dx = \\frac\\pi2\\delta_{nm}$\n \"\"\"\n return T.basis(n)(x)\n\ndef p_herm(x, n):\n \"\"\"\n RETURNS H_n(x)\n value of non-normalized Probabilistic polynomials\n \"\"\"\n cf = np.zeros(n+1)\n cf[n] = 1\n return (2**(-float(n)*0.5))*hermval(x/np.sqrt(2.0), cf)\n\ndef system_mat(pnts, maxn, poly):\n \"\"\"\n RETURNS system matrix\n \"\"\"\n A = np.empty((pnts.size, maxn), dtype=float)\n for i in range(maxn):\n A[:, i] = poly(pnts, 
i)\n return A\n\nx = np.linspace(-1, 1, 1000)\ndata = []\nfor i in range(5):\n data.append(x)\n data.append(p_cheb(x, i))\n \n\nplt.plot(*data)\nplt.legend([\"power = {}\".format(i) for i in range(len(data))]);\n\ndef complex_func(x):\n return np.sin(2.0*x*np.pi)*np.cos(0.75*(x+0.3)*np.pi)\n\nplt.plot(x, complex_func(x));", "Now, let's approximate the function with polynomials taking different maximal power $n$ and the corresponding number of node points\n$$\nf(x)\\approx\\hat f(x)=\\sum_{i=0}^n\\alpha_i p_i(x)\n$$", "n = 6\nM = n\nnodes = np.linspace(-1, 1, M)\nRH = complex_func(nodes)\nA = system_mat(nodes, n, p_cheb)\nif n == M:\n alpha = np.linalg.solve(A, RH)\nelse:\n alpha = np.linalg.lstsq(A, RH)[0]\nprint(\"α = {}\".format(alpha))\n\ndef calc_apprximant(poly, alpha, x):\n \"\"\"\n RETURNS values of approximant in points x\n \"\"\"\n n = len(alpha)\n y = np.zeros_like(x)\n for i in range(n):\n y[...] += poly(x, i)*alpha[i]\n \n return y\n\ny = complex_func(x)\napprox_y = calc_apprximant(p_cheb, alpha, x)\nplt.plot(x, y, x, approx_y, nodes, RH, 'ro');", "Approximate value of the error\n$$\n\\varepsilon=\n\\|f-\\hat f\\|\\infty\\approx\\max{x\\in \\mathcal X}\\bigl|f(x)-\\hat f(x)\\bigr|\n$$", "epsilon = np.linalg.norm(y - approx_y, np.inf)\nprint(\"ε = {}\".format(epsilon))", "If we take another set of polynomials, the result of the approximation will be the same (coefficients $\\alpha$ will be different of course).", "A = system_mat(nodes, n, p_herm)\nif n == M:\n alpha = np.linalg.solve(A, RH)\nelse:\n alpha = np.linalg.lstsq(A, RH)[0]\nprint(\"α = {}\".format(alpha))\n\napprox_y = calc_apprximant(p_herm, alpha, x)\nplt.plot(x, y, x, approx_y, nodes, RH, 'ro')\n\nepsilon = np.linalg.norm(y - approx_y, np.inf)\nprint(\"ε = {}\".format(epsilon))", "Now, what will change if we take another set of node points?", "nodes = np.cos((2.0*np.arange(M) + 1)/M*0.5*np.pi)\nRH = complex_func(nodes)\nA = system_mat(nodes, n, p_herm)\nalpha = np.linalg.solve(A, RH)\nprint(\"α = {}\".format(alpha))\n\napprox_y = calc_apprximant(p_herm, alpha, x)\nplt.plot(x, y, x, approx_y, nodes, RH, 'ro')\n\nepsilon_cheb = np.linalg.norm(y - approx_y, np.inf)\nprint(\"ε_cheb = {}\".format(epsilon_cheb))\n\n# All in one. We can play with maximum polynomial power\ndef plot_approx(f, n, distrib='unif', poly='cheb'):\n def make_nodes(n, distrib='unif'):\n return {'unif' : lambda : np.linspace(-1, 1, n),\n 'cheb' : lambda : np.cos((2.0*np.arange(n) + 1.0)/n*0.5*np.pi)}[distrib[:4].lower()]\n \n poly_f = {'cheb' : p_cheb, 'herm' : p_herm}[poly[:4].lower()]\n \n #solve\n nodes = make_nodes(n, distrib)()\n RH = f(nodes)\n A = system_mat(nodes, n, p_herm)\n alpha = np.linalg.solve(A, RH)\n \n # calc values\n x = np.linspace(-1, 1, 2**10)\n y = f(x)\n approx_y = calc_apprximant(p_herm, alpha, x)\n \n #plot\n plt.figure(figsize=(14,6.5))\n plt.plot(x, y, x, approx_y, nodes, RH, 'ro')\n plt.show()\n\n # calc error\n epsilon_cheb = np.linalg.norm(y - approx_y, np.inf)\n print(\"ε = {}\".format(epsilon_cheb))\n \nfrom ipywidgets import interact, fixed, widgets\n\ninteract(plot_approx, \n f=fixed(complex_func), \n n=widgets.IntSlider(min=1,max=15,step=1,value=4,continuous_update=True,description='# of terms (n)'),\n distrib=widgets.ToggleButtons(options=['Uniform', 'Chebyshev roots'],description='Points distr.'), \n poly=widgets.ToggleButtons(options=['Chebyshev polynomials', 'Hermite polynomials'],description='Poly. type')\n );", "Random input\nLet input $x$ is random with known probability density function $\\rho$. 
\nWe want to know statistical properties of the output\n- mean value \n- variance\n- risk estimation\nHow to find them using polynomial expansion?\nAssume the function $f$ is analytical\n$$\nf(x)=\\sum_{i=0}^\\infty \\alpha_i p_i(x).\n$$\nThe mean value of $f$ is\n$$\n\\mathbb E f = \\int_a^bf(\\tau)\\rho(\\tau)\\,d\\tau = \n\\sum_{i=0}^\\infty \\int_a^b\\alpha_i p_i(\\tau)\\rho(\\tau)\\,d\\tau.\n$$\nIf the set of orthogonal polynomials ${p_n}$ have the same wight function as $\\rho$,\nand the first polynomial is constant $p_0(x)=h_0$,\nthen $\\mathbb Ef=\\alpha_0h_0$. \nUsually, $h_0=1$ and we get a simple relation\n$$\n\\mathbb Ef = \\alpha_0\n$$\nThe variance is equal to\n$$\n\\text{Var } f=\\mathbb E\\bigl(f-\\mathbb E f\\bigr)^2=\n\\int_a^b \\left(\\sum_{i=1}^\\infty\\alpha_ip_i(\\tau)\\right)^2\\rho(\\tau)\\,d\\tau ,\n$$\nnote, that the summation begins with 1. Assume we can interchange the sum and the integral, then\n$$\n\\text{Var } f=\n\\sum_{i=1}^\\infty\\sum_{j=1}^\\infty\\int_a^b !!\\alpha_ip_i(\\tau)\\,\\alpha_jp_j(\\tau)\\,\\rho(\\tau)\\,d\\tau =\n\\sum_{i=1}^\\infty \\alpha_i^2h_i.\n$$\nThe formula is very simple if all the coefficients ${h_i}$ are equal to 1\n$$\n\\text{Var } f = \\sum_{i=1}^\\infty \\alpha_i^2 \n$$\nLet us check the formulas for the mean and variance by calculating them using the Monte-Carlo method.\nNormal distribution\nFirst, consider the case of normal distrubution of the input $x\\sim\\mathcal N(0,1)$, \n$\\rho(x)=\\frac1{\\sqrt{2\\pi}}\\exp(-x^2/2)$, \nso we take Hermite polynomials.", "# Scale the function a little\nscale = 5.0\n\nbig_x = np.random.randn(int(1e6))\nbig_y = complex_func(big_x/scale)\nmean = np.mean(big_y)\nvar = np.std(big_y)**2\nprint (\"mean = {}, variance = {}\".format(mean, var))\n\ndef p_herm_snorm(n):\n \"\"\"\n Square norm of \"math\" Hermite (w = exp(-x^2/2)/sqrt(2*pi))\n \"\"\"\n return np.math.factorial(n)\n\n\nn = 15\nM = n\n\nnodes = np.linspace(-scale, scale, M)\nRH = complex_func(nodes/scale)\nA = system_mat(nodes, n, p_herm)\n\nif n == M:\n alpha = np.linalg.solve(A, RH)\nelse:\n W = np.diag(np.exp( -nodes**2*0.5))\n alpha = np.linalg.lstsq(W.dot(A), W.dot(RH))[0]\n \nh = np.array([p_herm_snorm(i) for i in range(len(alpha))])\nvar = np.sum(alpha[1:]**2*h[1:])\n\nprint (\"mean = {}, variance = {}\".format(alpha[0]*h[0], var))", "Note, that the precise values are\n$$\n\\mathbb E f = -0.16556230699\\ldots,\n\\qquad \n\\text{Var }f= 0.23130350880\\ldots\n$$\nso, the method based on polynomial expansion is more precise than Monte-Carlo.", "ex = 2\nx = np.linspace(-scale - ex, scale + ex, 10000)\ny = complex_func(x/scale)\napprox_y = calc_apprximant(p_herm, alpha, x)\nplt.plot(x, y, x, approx_y, nodes, RH, 'ro');", "Linear model\nThe model described above is a special case of linear model: we fix a basis set and obtain\n$$f(x) \\approx \\sum_{k=1}^M c_k \\phi_k(x).$$\nFor $x \\in \\mathbb{R}^d$ what basis set to choose?\nWhy tensor-product polynomials are bad for large $d$ ? 
\nWhat are the alternatives to tensor-product basis?\nGood approach: sparse polynomial bases\nInstead of taking all possible $\\mathbf{x}^{\\mathbf{\\alpha}}$, we take only a subset, such as:\n\nTotal degree: $\\vert \\mathbf{\\alpha} \\vert \\leq T$\nHyperbolic cross scheme\n\nFor very smooth functions, such approximations work really well (and are simple to use!).\nExperiment design\nGiven a linear model, \n$$f(x) \\approx \\sum_{k=1}^M c_k \\phi_k(x).$$\nHow to find coefficients?\nSampling\nAnswer: do sampling, \nand solve linear least squares\n$$f(x_i) \\approx \\sum_{k=1}^M c_k \\phi_k(x_i).$$\nHow to sample?\nSampling methods\n$$f(x_i) \\approx \\sum_{k=1}^M c_k \\phi_k(x_i).$$\n\nNon-adaptive schemes: Monte-Carlo, Quasi-Monte Carlo, Latin Hypercube Sampling (LHS)\nAdaptive: optimize for points $x_1, \\ldots, x_N$.\n\nThere are many criteria.\nD-optimality\nIf we select $N = M$ and select points such that \n$$\\vert \\det M \\vert \\rightarrow \\max,$$\nwhere \n$$M_{ik} = \\phi_k(x_i)$$ is the design matrix.\nWhy is it good?\nLinear algebra: maximum volume\nLet $A \\in \\mathbb{R}^{n \\times r}$, $n \\gg r$.\nLet $\\widehat{A}$ be the submatrix of maximum volume.\nThen, all coefficients in $A \\widehat{A}^{-1}$ are less than $1$ in modulus.\nAs a simple consequence, \nwe have\n$$E_{D} \\leq (r+1) E_{best},$$\nwhere $E_D$ is the approximation from optimal design, and $E_{best}$ is the best possible approximation error in the Chebyshev norm.\nProblem setting\n\nWe have an unknown multivariate function $f(\\mathbf{x})$ that we would like to approximate on some specified domain.<br> \nWe have a dataset $\\mathcal{D}$ of $n$ function observations, $\\mathcal{D} = \\{(\\mathbf{x}_i,y_i),\\, i = 1,\\ldots,n\\}$.\n\nGiven $\\mathcal{D}$ we wish to make predictions for new inputs $\\mathbf{x}_*$ within the domain.\n\n\nTo solve this problem we must make assumptions about the characteristics of $f(\\mathbf{x})$.\n\n\nTwo common approaches\n\n\nrestrict the class of functions that we consider (e.g. considering only linear functions)\n\nProblem: we have to decide upon the richness of the class of functions considered; $f(\\mathbf{x})$ may not be well modelled by this class, so the predictions will be poor.\n\n\n\nthe second approach is (speaking rather loosely) to give a prior probability to every possible function, where higher probabilities are given to functions that we consider to be more likely.\n\n<span style=\"color:red\">Problem: there is an uncountably infinite set of possible functions.</span>\n\n\n\nSecond approach\nThis is where the Gaussian process (GP) arises to cope with the problem mentioned above.\nTypically, there is some knowledge about a function of interest $f(\\mathbf{x})$ even before observing it anywhere.\nFor example, $f(\\mathbf{x})$ cannot exceed, or be smaller than, certain values, or it may be periodic, or show translational invariance.<br>\nSuch knowledge is called prior knowledge.\nPrior knowledge may be precise (e.g., $f(\\mathbf{x})$ is twice differentiable), or it may be vague (e.g., the probability that the periodicity is $T$ is $p(T)$). When we deal with vague prior knowledge, we refer to it as prior belief. \nPrior beliefs about $f(\\mathbf{x})$ can be modeled by a probability measure on the space of functions from $\\mathcal{F}$ to $\\mathbb{R}$. 
A GP is a great way to represent this probability measure.\nDefinition of GP\nA Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution (in other words GP is a generalization of a multivariate Gaussian distribution to infinite dimensions). \nA GP defines a probability measure on $\\mathcal{F}$. When we say that $f(\\mathbf{x})$ is a GP, we mean that it is a random variable that is actually a function. \nAnalytically, it can be written as\n$$\nf(\\mathbf{x}) \\sim \\mbox{GP}\\left(m(\\mathbf{x}), k(\\mathbf{x},\\mathbf{x'}) \\right),\n$$ where\n* $m:\\mathbb{R}^d \\rightarrow \\mathbb{R}$ is the mean function; \n* $k:\\mathbb{R}^d \\times \\mathbb{R}^d \\rightarrow \\mathbb{R}$ is the covariance function.\nConnection to the multivariate Gaussian distribution\nLet $\\mathbf{x}{1:n}={\\mathbf{x}_1,\\dots,\\mathbf{x}_n}$ be $n$ points in $\\mathbb{R}^d$. Let $\\mathbf{f}\\in\\mathbb{R}^n$ be the outputs of $f(\\mathbf{x})$ on each one of the elements of $\\mathbf{x}{1:n}$,\n$$\n\\mathbf{f} =\n\\left(\n\\begin{array}{c}\nf(\\mathbf{x}1)\\\n\\vdots\\\nf(\\mathbf{x}_n)\n\\end{array}\n\\right).\n$$\nThe fact that $f(\\mathbf{x})$ is a GP with mean and covariance function $m(\\mathbf{x})$ and $k(\\mathbf{x},\\mathbf{x'})$ means that the vector of outputs $\\mathbf{f}$ at the arbitrary inputs is the following multivariate-normal: $$\n\\mathbf{f} \\sim \\mathcal{N}\\bigl(\\mathbf{m}(\\mathbf{x}{1:n}), \\mathbf{K}(\\mathbf{x}{1:n}, \\mathbf{x}{1:n})\\bigr),\n$$ with mean vector: $$\n\\mathbf{m}(\\mathbf{x}{1:n}) =\n\\left(\n\\begin{array}{c}\nm(\\mathbf{x}_1)\\\n\\vdots\\\nm(\\mathbf{x}_n)\n\\end{array}\n\\right),\n$$ and covariance matrix: $$\n\\mathbf{K}(\\mathbf{x}{1:n},\\mathbf{x}_{1:n}) = \\left(\n\\begin{array}{ccc}\nk(\\mathbf{x}_1,\\mathbf{x}_1) & \\dots &k(\\mathbf{x}_1, \\mathbf{x}_n)\\\n\\vdots& \\ddots& \\vdots\\\nk(\\mathbf{x}_n, \\mathbf{x}_1)& \\dots &k(\\mathbf{x}_n, \\mathbf{x}_n)\n\\end{array}\n\\right).\n$$\nNow, since we have defined a GP, let us talk about how do we encode our prior beliefs into a GP. \nWe do so through the mean and covariance functions.\nInterpretation of the mean function\nFor any point $\\mathbf{x}\\in\\mathbb{R}^d$, $m(\\mathbf{x})$ is the expected value of the r.v. $f(\\mathbf{x})$:\n$$\nm(\\mathbf{x}) = \\mathbb{E}[f(\\mathbf{x})].\n$$\nThe mean function can be any arbitrary function. Essentially, it tracks generic trends in the response as the input is varied.<br> \nIn practice, we try and make a suitable choice for the mean function that is easy to work with. Such choices include:\n* a constant, $m(\\mathbf{x}) = c,$ where $c$ is a parameter (in many cases $c=0$).\n* linear, $m(\\mathbf{x}) = c_0 + \\sum_{i=1}^dc_ix_i,$ where $c_i, i=0,\\dots,d$ are parameters.\n* using a set of $m$ basis functions (generalized linear model), $m(\\mathbf{x}) = \\sum_{i=1}^mc_i\\phi_i(\\mathbf{x})$, where $c_i$ and $\\phi_i(\\cdot)$ are parameters and basis functions.\n* generalized polynomial chaos (gPC), using a set of $d$ polynomial basis functions upto a given degree $\\rho$, $m(\\mathbf{x}) = \\sum_{i=1}^{d}c_i\\phi_i(\\mathbf{x})$ where the basis functions $\\phi_i$ are mutually orthonormal: $$\n\\int \\phi_{i}(\\mathbf{x}) \\phi_{j}(\\mathbf{x}) dF(\\mathbf{x}) = \\delta_{ij}.\n$$\nSquared exponential covariance function\nSquared exponential (SE) is widely used covariance function. 
Its has the form: \n$$\nk(\\mathbf{x}, \\mathbf{x}') = v\\exp\\left{-\\frac{1}{2}\\sum_{i=1}^d\\frac{(x_i - x_i')^2}{\\ell_i^2}\\right},\n$$ \nwhere \n* $v>0$ – signal strength. The bigger it is, the more the GP $f(\\mathbf{x})$ will vary about the mean.\n* $\\ell_i>0, i=1,\\dots,d$ – length-scale of the $i$-th input dimension of the GP. The bigger it is, the smoother the samples of $f(\\mathbf{x})$ appear along the $i$-th input dimension.", "# 1-D example\nfrom ipywidgets import interactive, interact, widgets\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.spatial as SP\n\n# defining Squared Exponential Kernel and plot it\n\ndef k(length_scale):\n x = np.arange(0., 5., 0.1)\n plt.figure(figsize=(10, 7))\n plt.ylim([0, 1.05])\n plt.xlabel('$x$', fontsize=16)\n plt.ylabel('$k(x,0)$', fontsize=16)\n plt.plot(x, np.exp(-.5 * x**2/length_scale**2), 'b-')\n plt.show()\n\n\ncontrols = {r'length_scale': widgets.FloatSlider(\n min=0.01, max=5.0, step=0.1, value=1., continuous_update=False, description=r'$\\ell$')}\n\nfrom ipywidgets import interactive\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\ndef GP(length_scale, Test, Training, sigma):\n np.random.seed(100)\n\n \"\"\" This is code for simple GP regression. It assumes a zero mean GP Prior \"\"\"\n\n # This is the true unknown function we are trying to approximate\n def f(x): return np.sin(0.9*x.flatten())\n\n # Define the kernel\n def kernel(a, b):\n sqdist = SP.distance.cdist(a, b, 'sqeuclidean')\n return np.exp(-.5 * sqdist/(length_scale**2))\n\n N = Training # number of training points.\n n = Test # number of test points.\n s = sigma # noise variance.\n\n # Sample some input points and noisy versions of the function evaluated at\n # these points.\n X = np.random.uniform(-5, 5, size=(N, 1))\n y = f(X) + s*np.random.randn(N)\n\n K = kernel(X, X)\n L = np.linalg.cholesky(K + s*np.eye(N))\n\n # points we're going to make predictions at.\n Xtest = np.linspace(-5, 5, n)[:, None]\n\n # compute the mean at our test points.\n Lk = np.linalg.solve(L, kernel(X, Xtest))\n mu = np.dot(Lk.T, np.linalg.solve(L, y))\n\n # compute the variance at our test points.\n K_ = kernel(Xtest, Xtest)\n s2 = np.diag(K_) - np.sum(Lk**2, axis=0)\n s = np.sqrt(s2)\n\n # PLOTS:\n plt.figure(figsize=(9, 7))\n plt.clf()\n plt.plot(X, y, 'r+', ms=18, label=\"Training points\")\n plt.plot(Xtest, f(Xtest), 'b-', label=\"Function\")\n plt.gca().fill_between(Xtest.flat, mu-s, mu+s,\n color=\"#dddddd\", label=\"Confidence interval\")\n plt.plot(Xtest, mu, 'r--', lw=2, label=\"Approximation\")\n plt.title(r'Mean prediction plus-minus one s.d.')\n plt.xlabel('$x$', fontsize=16)\n plt.ylabel('$f(x)$', fontsize=16)\n plt.axis([-5, 5, -3, 3])\n plt.legend()\n print(\"Error (inf. norm) = \", np.linalg.norm(f(Xtest)-mu, ord=np.inf)/np.linalg.norm(f(Xtest), ord=np.inf))\n plt.show()\ncontrols = {r'sigma': widgets.FloatSlider(min=5e-4, max=5e-1, step=1e-3, value=1e-3, continuous_update=True, description=r'$\\sigma$'),\n r'length_scale': widgets.FloatSlider(min=0.1, max=2.0, step=0.05, value=0.7, continuous_update=True, description=r'$\\ell$'),\n r'Training': widgets.IntSlider(min=1, max=50, step=1, value=10, continuous_update=True, description=r'$N$ of $f$ evals'),\n r'Test': widgets.IntSlider(min=1, max=100, step=1, value=50, continuous_update=True, description=r'$N$ of GP samples')} \n\ninteract(GP, **controls); ", "Problems with GP\n\nBad scaling with large number of sampling points (data points)\nChoice of covariance is crucial." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
lmcinnes/pynndescent
doc/pynndescent_in_pipelines.ipynb
bsd-2-clause
[ "Working with Scikit-learn pipelines\nNearest neighbor search is a fundamental building block of many machine learning algorithms, including in supervised learning with kNN-classifiers and kNN-regressors, and unsupervised learning with manifold learning, and clustering. It would be useful to be able to bring the speed of PyNNDescent's approximate nearest neighbor search to bear on these problems without having to re-implement everything from scratch. Fortunately Scikit-learn has done most of the work for us with their KNeighborsTransformer, which provides a means to insert nearest neighbor computations into sklearn pipelines, and feed the results to many of their models that make use of nearest neighbor computations. It is worth reading through the documentation they have, because we are going to use PyNNDescent as a drop in replacement.\nTo make this as simple as possible PyNNDescent implements a class PyNNDescentTransformer that acts as a KNeighborsTransformer and can be dropped into all the same pipelines. Let's see an example of this working ...", "from sklearn.manifold import Isomap, TSNE\nfrom sklearn.neighbors import KNeighborsTransformer\nfrom pynndescent import PyNNDescentTransformer\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.datasets import fetch_openml\nfrom sklearn.utils import shuffle\n\nimport seaborn as sns", "As usual we will need some data to play with. In this case let's use a random subsample of MNIST digits.", "def load_mnist(n_samples):\n \"\"\"Load MNIST, shuffle the data, and return only n_samples.\"\"\"\n mnist = fetch_openml(\"mnist_784\")\n X, y = shuffle(mnist.data, mnist.target, random_state=2)\n return X[:n_samples] / 255, y[:n_samples]\n\ndata, target = load_mnist(10000)", "Now we need to make a pipeline that feeds the nearest neighbor results into a downstream task. To demonstrate how this can work we'll try manifold learning. First we will try out Isomap and then t-SNE. In both cases we can provide a \"precomputed\" distance matrix, and if it is a sparse matrix (as output by KNeighborsTransformer) then any entry not explicitly provided as a non-zero element of the matrix will be ignored (or treated as an effectively infinite distance). To make the whole thing work we simple make an sklearn pipeline (and could easily include pre-processing steps such as categorical encoding, or data scaling and standardisation as earlier steps if we wished) that first uses the KNeighborsTransformer to process the raw data into a nearest neighbor graph, and then passes that on to either Isomap or TSNE. For comparison we'll drop in a PyNNDescentTransformer instead and see how that effects the results.", "sklearn_isomap = make_pipeline(\n KNeighborsTransformer(n_neighbors=15),\n Isomap(metric='precomputed')\n)\npynnd_isomap = make_pipeline(\n PyNNDescentTransformer(n_neighbors=15),\n Isomap(metric='precomputed')\n)\nsklearn_tsne = make_pipeline(\n KNeighborsTransformer(n_neighbors=92),\n TSNE(metric='precomputed', random_state=42)\n)\npynnd_tsne = make_pipeline(\n PyNNDescentTransformer(n_neighbors=92, early_termination_value=0.05),\n TSNE(metric='precomputed', random_state=42)\n)", "First let's try Isomap. The algorithm first constructs a k-nearest-neighbor graph (which our transformers will handle in the pipeline), then measures distances between points as path lengths in that graph. Finally it performs an eigendecomposition of the resulting distance matrix. 
We can't do much to speed up the latter two steps, which are still non-trivial, but hopefully we can get some speedup by substituting in the approximate nearest neighbor computation.", "%%time\nsklearn_iso_map = sklearn_isomap.fit_transform(data)\n\n%%time\npynnd_iso_map = pynnd_isomap.fit_transform(data)", "A two-times speedup is not bad, especially since we only accelerated one component of the full algorithm. It is quite good considering it was simply a matter of dropping a different class into a pipeline. More importantly, as we scale to larger amounts of data the nearest neighbor search comes to dominate the overall algorithm run-time, so we can expect to only get better speedups for more data. We can plot the results to ensure we are getting qualitatively the same thing.", "sns.scatterplot(x=sklearn_iso_map.T[0], y=sklearn_iso_map.T[1], hue=target, palette=\"Spectral\", size=1);\n\nsns.scatterplot(x=pynnd_iso_map.T[0], y=pynnd_iso_map.T[1], hue=target, palette=\"Spectral\", size=1);", "Now let's try t-SNE. This algorithm requires nearest neighbors as a first step, and then the second major part, in terms of computation time, is the optimization of a layout of a modified k-neighbor graph. We can hope for some improvement in the first part, which usually accounts for around half the overall run-time for small data (and comes to consume a majority of the run-time for large datasets).", "%%time\nsklearn_tsne_map = sklearn_tsne.fit_transform(data)\n\n%%time\npynnd_tsne_map = pynnd_tsne.fit_transform(data)", "Again we have an approximate two-times speedup. Again this was achieved by simply substituting a different class into the pipeline (although in this case we tweaked the early_termination_value so it would stop sooner). Again we can look at the qualitative results and see that we are getting something very similar.", "sns.scatterplot(x=sklearn_tsne_map.T[0], y=sklearn_tsne_map.T[1], hue=target, palette=\"Spectral\", size=1);\n\nsns.scatterplot(x=pynnd_tsne_map.T[0], y=pynnd_tsne_map.T[1], hue=target, palette=\"Spectral\", size=1);", "So the results, in both cases, look pretty good, and we did get a good speed-up. A question remains -- how fast was the nearest neighbor component, and how accurate was it? We can write a simple function to measure the neighbor accuracy: compute the average percentage intersection in the neighbor sets of each sample point. 
Then let's just run the transformers and compare the times as well as computing the actual percentage accuracy.", "import numba\nimport numpy as np\n\n@numba.njit()\ndef arr_intersect(ar1, ar2):\n aux = np.sort(np.concatenate((ar1, ar2)))\n return aux[:-1][aux[:-1] == aux[1:]]\n\n@numba.njit()\ndef neighbor_accuracy_numba(n1_indptr, n1_indices, n2_indptr, n2_indices):\n result = 0.0\n for i in range(n1_indptr.shape[0] - 1):\n indices1 = n1_indices[n1_indptr[i]:n1_indptr[i+1]]\n indices2 = n2_indices[n2_indptr[i]:n2_indptr[i+1]]\n n_correct = np.float64(arr_intersect(indices1, indices2).shape[0])\n result += n_correct / indices1.shape[0]\n return result / (n1_indptr.shape[0] - 1)\n\ndef neighbor_accuracy(neighbors1, neighbors2):\n return neighbor_accuracy_numba(\n neighbors1.indptr, neighbors1.indices, neighbors2.indptr, neighbors2.indices\n )\n\n%time true_neighbors = KNeighborsTransformer(n_neighbors=15).fit_transform(data)\n%time pynnd_neighbors = PyNNDescentTransformer(n_neighbors=15).fit_transform(data)\n\nprint(f\"Neighbor accuracy is {neighbor_accuracy(true_neighbors, pynnd_neighbors) * 100.0}%\")", "So for the Isomap case we went from taking over one and a half minutes down to less than a second. While doing so we still achieved over 99% accuracy in the nearest neighbors. This seems like a good tradeoff.\nBy contrast t-SNE requires a much larger number of neighbors (approximately three times the desired perplexity value, which defaults to 30 in sklearn's implementation). This is a little more of a challenge so we might expect it to take longer.", "%time true_neighbors = KNeighborsTransformer(n_neighbors=92).fit_transform(data)\n%time pynnd_neighbors = PyNNDescentTransformer(n_neighbors=92, early_termination_value=0.05).fit_transform(data)\n\nprint(f\"Neighbor accuracy is {neighbor_accuracy(true_neighbors, pynnd_neighbors) * 100.0}%\")", "We see that the KNeighborsTransformer takes the same amount of time for this -- this is because it is making the choice, given the dataset size and dimensionality, to compute nearest neighbors by effectively computing the full distance matrix. That means regardless of how many neighbors we ask for it will take a largely constant amount of time.\nIn contrast we see that the PyNNDescentTransformer is having to work harder, taking almost eight seconds (still a lot better than one and a half minutes!). The increased early_termination_value (the default is 0.001) stops the computation early, but even with this we are still getting over 99.9% accuracy! Certainly the minute and a half saved in computation time at this step is worth the drop of 0.033% accuracy in nearest neighbors. And these differences in computation time will only increase as dataset sizes get larger." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
deepchem/deepchem
examples/tutorials/Creating_a_high_fidelity_model_from_experimental_data.ipynb
mit
[ "Creating a High Fidelity Dataset from Experimental Data\nIn this tutorial, we will look at what is involved in creating a new Dataset from experimental data. As we will see, the mechanics of creating the Dataset object is only a small part of the process. Most real datasets need significant cleanup and QA before they are suitable for training models.\nColab\nThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.", "!pip install --pre deepchem\nimport deepchem\ndeepchem.__version__", "Working With Data Files\nSuppose you were given data collected by an experimental collaborator. You would like to use this data to construct a machine learning model. \nHow do you transform this data into a dataset capable of creating a useful model?\nBuilding models from novel data can present several challenges. Perhaps the data was not recorded in a convenient manner. Additionally, perhaps the data contains noise. This is a common occurrence with, for example, biological assays due to the large number of external variables and the difficulty and cost associated with collecting multiple samples. This is a problem because you do not want your model to fit to this noise.\nHence, there are two primary challenges:\n* Parsing data\n* De-noising data\nIn this tutorial, we will walk through an example of curating a dataset from an excel spreadsheet of experimental drug measurements. Before we dive into this example though, let's do a brief review of DeepChem's input file handling and featurization capabilities.\nInput Formats\nDeepChem supports a whole range of input files. For example, accepted input formats include .csv, .sdf, .fasta, .png, .tif and other file formats. The loading for a particular file format is governed by the Loader class associated with that format. For example, to load a .csv file we use the CSVLoader class. Here's an example of a .csv file that fits the requirements of CSVLoader.\n\nA column containing SMILES strings.\nA column containing an experimental measurement.\n(Optional) A column containing a unique compound identifier.\n\nHere's an example of a potential input file.\n|Compound ID | measured log solubility in mols per litre | smiles |\n|---------------|-------------------------------------------|----------------|\n| benzothiazole | -1.5 | c2ccc1scnc1c2 |\nHere the \"smiles\" column contains the SMILES string, the \"measured log\nsolubility in mols per litre\" contains the experimental measurement, and\n\"Compound ID\" contains the unique compound identifier.\nData Featurization\nMost machine learning algorithms require that input data form vectors. However, input data for drug-discovery datasets routinely come in the form of lists of molecules and associated experimental readouts. To load the data, we use a subclass of dc.data.DataLoader such as dc.data.CSVLoader or dc.data.SDFLoader. Users can subclass dc.data.DataLoader to load arbitrary file formats. All loaders must be passed a dc.feat.Featurizer object, which specifies how to transform molecules into vectors. DeepChem provides a number of different subclasses of dc.feat.Featurizer.\nParsing data\nIn order to read in the data, we will use the pandas data analysis library. \nIn order to convert the drug names into smiles strings, we will use pubchempy. 
This isn't a standard DeepChem dependency, but you can install this library with conda install pubchempy.", "!conda install pubchempy\n\nimport os\nimport pandas as pd\nfrom pubchempy import get_cids, get_compounds", "Pandas is magic but it doesn't automatically know where to find your data of interest. You likely will have to look at it first using a GUI. \nWe will now look at a screenshot of this dataset as rendered by LibreOffice.\nTo do this, we will import Image and os.", "import os\nfrom IPython.display import Image, display\ncurrent_dir = os.path.dirname(os.path.realpath('__file__'))\ndata_screenshot = os.path.join(current_dir, 'assets/dataset_preparation_gui.png')\ndisplay(Image(filename=data_screenshot))", "We see the data of interest is on the second sheet, and contained in columns \"TA ID\", \"N #1 (%)\", and \"N #2 (%)\".\nAdditionally, it appears much of this spreadsheet was formatted for human readability (multicolumn headers, column labels with spaces and symbols, etc.). This makes the creation of a neat dataframe object harder. For this reason we will cut everything that is unnecesary or inconvenient.", "import deepchem as dc\ndc.utils.download_url(\n 'https://github.com/deepchem/deepchem/raw/master/datasets/Positive%20Modulators%20Summary_%20918.TUC%20_%20v1.xlsx',\n current_dir,\n 'Positive Modulators Summary_ 918.TUC _ v1.xlsx'\n)\n\nraw_data_file = os.path.join(current_dir, 'Positive Modulators Summary_ 918.TUC _ v1.xlsx')\nraw_data_excel = pd.ExcelFile(raw_data_file)\n\n# second sheet only\nraw_data = raw_data_excel.parse(raw_data_excel.sheet_names[1])\n\n# preview 5 rows of raw dataframe\nraw_data.loc[raw_data.index[:5]]", "Note that the actual row headers are stored in row 1 and not 0 above.", "# remove column labels (rows 0 and 1), as we will replace them\n# only take data given in columns \"TA ID\" \"N #1 (%)\" (3) and \"N #2 (%)\" (4)\nraw_data = raw_data.iloc[2:, [2, 6, 7]]\n\n# reset the index so we keep the label but number from 0 again\nraw_data.reset_index(inplace=True)\n\n## rename columns\nraw_data.columns = ['label', 'drug', 'n1', 'n2']\n\n# preview cleaner dataframe\nraw_data.loc[raw_data.index[:5]]", "This formatting is closer to what we need.\nNow, let's take the drug names and get smiles strings for them (format needed for DeepChem).", "drugs = raw_data['drug'].values", "For many of these, we can retreive the smiles string via the canonical_smiles attribute of the get_compounds object (using pubchempy)", "get_compounds(drugs[1], 'name')\n\nget_compounds(drugs[1], 'name')[0].canonical_smiles", "However, some of these drug names have variables spaces and symbols (·, (±), etc.), and names that may not be readable by pubchempy. \nFor this task, we will do a bit of hacking via regular expressions. Also, we notice that all ions are written in a shortened form that will need to be expanded. For this reason we use a dictionary, mapping the shortened ion names to versions recognizable to pubchempy. 
\nUnfortunately you may have several corner cases that will require more hacking.", "import re\n\nion_replacements = {\n 'HBr': ' hydrobromide',\n '2Br': ' dibromide',\n 'Br': ' bromide',\n 'HCl': ' hydrochloride',\n '2H2O': ' dihydrate',\n 'H20': ' hydrate',\n 'Na': ' sodium'\n}\n\nion_keys = ['H20', 'HBr', 'HCl', '2Br', '2H2O', 'Br', 'Na']\n\ndef compound_to_smiles(cmpd):\n # remove spaces and irregular characters\n compound = re.sub(r'([^\\s\\w]|_)+', '', cmpd)\n \n # replace ion names if needed\n for ion in ion_keys:\n if ion in compound:\n compound = compound.replace(ion, ion_replacements[ion])\n\n # query for cid first in order to avoid timeouterror\n cid = get_cids(compound, 'name')[0]\n smiles = get_compounds(cid)[0].canonical_smiles\n\n return smiles", "Now let's actually convert all these compounds to smiles. This conversion will take a few minutes so might not be a bad spot to go grab a coffee or tea and take a break while this is running! Note that this conversion will sometimes fail so we've added some error handling to catch these cases below.", "smiles_map = {}\nfor i, compound in enumerate(drugs):\n try:\n smiles_map[compound] = compound_to_smiles(compound)\n except:\n print(\"Errored on %s\" % i)\n continue\n\nsmiles_data = raw_data\n# map drug name to smiles string\nsmiles_data['drug'] = smiles_data['drug'].apply(lambda x: smiles_map[x] if x in smiles_map else None)\n\n# preview smiles data\nsmiles_data.loc[smiles_data.index[:5]]", "Hooray, we have mapped each drug name to its corresponding smiles code.\nNow, we need to look at the data and remove as much noise as possible.\nDe-noising data\nIn machine learning, we know that there is no free lunch. You will need to spend time analyzing and understanding your data in order to frame your problem and determine the appropriate model framework. Treatment of your data will depend on the conclusions you gather from this process.\nQuestions to ask yourself:\n* What are you trying to accomplish?\n* What is your assay?\n* What is the structure of the data?\n* Does the data make sense?\n* What has been tried previously?\nFor this project (respectively):\n* I would like to build a model capable of predicting the affinity of an arbitrary small molecule drug to a particular ion channel protein\n* For an input drug, data describing channel inhibition\n* A few hundred drugs, with n=2\n* Will need to look more closely at the dataset*\n* Nothing on this particular protein\n*This will involve plotting, so we will import matplotlib and seaborn. We will also need to look at molecular structures, so we will import rdkit. We will also use the seaborn library which you can install with conda install seaborn.", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nimport seaborn as sns\nsns.set_style('white')\n\nfrom rdkit import Chem\nfrom rdkit.Chem import AllChem\nfrom rdkit.Chem import Draw, PyMol, rdFMCS\nfrom rdkit.Chem.Draw import IPythonConsole\nfrom rdkit import rdBase\nimport numpy as np", "Our goal is to build a small molecule model, so let's make sure our molecules are all small. This can be approximated by the length of each smiles string.", "smiles_data['len'] = [len(i) if i is not None else 0 for i in smiles_data['drug']]\nsmiles_lens = [len(i) if i is not None else 0 for i in smiles_data['drug']]\nsns.histplot(smiles_lens)\nplt.xlabel('len(smiles)')\nplt.ylabel('probability')", "Some of these look rather large, len(smiles) > 150. 
Let's see what they look like.", "# indices of large looking molecules\nsuspiciously_large = np.where(np.array(smiles_lens) > 150)[0]\n\n# corresponding smiles string\nlong_smiles = smiles_data.loc[smiles_data.index[suspiciously_large]]['drug'].values\n\n# look\nDraw._MolsToGridImage([Chem.MolFromSmiles(i) for i in long_smiles], molsPerRow=6)", "As suspected, these are not small molecules, so we will remove them from the dataset. The argument here is that these molecules could register as inhibitors simply because they are large. They are more likely to sterically blocks the channel, rather than diffuse inside and bind (which is what we are interested in).\nThe lesson here is to remove data that does not fit your use case.", "# drop large molecules\nsmiles_data = smiles_data[~smiles_data['drug'].isin(long_smiles)]", "Now, let's look at the numerical structure of the dataset.\nFirst, check for NaNs.", "nan_rows = smiles_data[smiles_data.isnull().T.any().T]\nnan_rows[['n1', 'n2']]", "I don't trust n=1, so I will throw these out. \nThen, let's examine the distribution of n1 and n2.", "df = smiles_data.dropna(axis=0, how='any')\n# seaborn jointplot will allow us to compare n1 and n2, and plot each marginal\nsns.jointplot(x='n1', y='n2', data=smiles_data) ", "We see that most of the data is contained in the gaussian-ish blob centered a bit below zero. We see that there are a few clearly active datapoints located in the bottom left, and one on the top right. These are all distinguished from the majority of the data. How do we handle the data in the blob? \nBecause n1 and n2 represent the same measurement, ideally they would be of the same value. This plot should be tightly aligned to the diagonal, and the pearson correlation coefficient should be 1. We see this is not the case. This helps gives us an idea of the error of our assay.\nLet's look at the error more closely, plotting in the distribution of (n1-n2).", "diff_df = df['n1'] - df['n2']\n\nsns.histplot(diff_df)\nplt.xlabel('difference in n')\nplt.ylabel('probability')", "This looks pretty gaussian, let's get the 95% confidence interval by fitting a gaussian via scipy, and taking 2*the standard deviation", "from scipy import stats\nmean, std = stats.norm.fit(np.asarray(diff_df, dtype=np.float32))\nci_95 = std*2\nci_95", "Now, I don't trust the data outside of the confidence interval, and will therefore drop these datapoints from df. \nFor example, in the plot above, at least one datapoint has n1-n2 > 60. This is disconcerting.", "noisy = diff_df[abs(diff_df) > ci_95]\ndf = df.drop(noisy.index)\nsns.jointplot(x='n1', y='n2', data=df) ", "Now that data looks much better!\nSo, let's average n1 and n2, and take the error bar to be ci_95.", "avg_df = df[['label', 'drug']].copy()\nn_avg = df[['n1', 'n2']].mean(axis=1)\navg_df['n'] = n_avg\navg_df.sort_values('n', inplace=True)", "Now, let's look at the sorted data with error bars.", "plt.errorbar(np.arange(avg_df.shape[0]), avg_df['n'], yerr=ci_95, fmt='o')\nplt.xlabel('drug, sorted')\nplt.ylabel('activity')", "Now, let's identify our active compounds. \nIn my case, this required domain knowledge. Having worked in this area, and having consulted with professors specializing on this channel, I am interested in compounds where the absolute value of the activity is greater than 25. 
This relates to the desired drug potency we would like to model.\nIf you are not certain how to draw the line between active and inactive, this cutoff could potentially be treated as a hyperparameter.", "actives = avg_df[abs(avg_df['n'])-ci_95 > 25]['n']\n\nplt.errorbar(np.arange(actives.shape[0]), actives, yerr=ci_95, fmt='o')\n\n# summary\nprint (raw_data.shape, avg_df.shape, len(actives.index))", "In summary, we have:\n* Removed data that did not address the question we hope to answer (small molecules only)\n* Dropped NaNs\n* Determined the noise of our measurements\n* Removed exceptionally noisy datapoints\n* Identified actives (using domain knowledge to determine a threshold)\nDetermine model type, final form of dataset, and sanity load\nNow, what model framework should we use? \nGiven that we have 392 datapoints and 6 actives, this data will be used to build a low data one-shot classifier (10.1021/acscentsci.6b00367). If there were datasets of similar character, transfer learning could potentially be used, but this is not the case at the moment.\nLet's apply logic to our dataframe in order to cast it into a binary format, suitable for classification.", "# 1 if condition for active is met, 0 otherwise\navg_df.loc[:, 'active'] = (abs(avg_df['n'])-ci_95 > 25).astype(int)", "Now, save this to file.", "avg_df.to_csv('modulators.csv', index=False)", "Now, we will convert this dataframe to a DeepChem dataset.", "dataset_file = 'modulators.csv'\ntask = ['active']\nfeaturizer_func = dc.feat.ConvMolFeaturizer()\n\nloader = dc.data.CSVLoader(tasks=task, feature_field='drug', featurizer=featurizer_func)\ndataset = loader.create_dataset(dataset_file)", "Lastly, it is often advantageous to numerically transform the data in some way. For example, sometimes it is useful to normalize the data, or to zero the mean. This depends in the task at hand.\nBuilt into DeepChem are many useful transformers, located in the deepchem.transformers.transformers base class. \nBecause this is a classification model, and the number of actives is low, I will apply a balancing transformer. I treated this transformer as a hyperparameter when I began training models. It proved to unambiguously improve model performance.", "transformer = dc.trans.BalancingTransformer(dataset=dataset)\ndataset = transformer.transform(dataset)", "Now let's save the balanced dataset object to disk, and then reload it as a sanity check.", "dc.utils.save_to_disk(dataset, 'balanced_dataset.joblib')\nbalanced_dataset = dc.utils.load_from_disk('balanced_dataset.joblib')", "Tutorial written by Keri McKiernan (github.com/kmckiern) on September 8, 2016\nCongratulations! Time to join the Community!\nCongratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\nStar DeepChem on GitHub\nThis helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\nJoin the DeepChem Gitter\nThe DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!\nBibliography\n[2] Anderson, Eric, Gilman D. Veith, and David Weininger. \"SMILES, a line\nnotation and computerized interpreter for chemical structures.\" US\nEnvironmental Protection Agency, Environmental Research Laboratory, 1987." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bbfamily/abu
abupy_lecture/31-资金仓位管理与买入策略的搭配.ipynb
gpl-3.0
[ "ABU量化系统使用文档\n<center>\n <img src=\"./image/abu_logo.png\" alt=\"\" style=\"vertical-align:middle;padding:10px 20px;\"><font size=\"6\" color=\"black\"><b>第31节 资金仓位管理与买入策略的搭配</b></font>\n</center>\n\n作者: 阿布\n阿布量化版权所有 未经允许 禁止转载\nabu量化系统github地址 (欢迎+star)\n本节ipython notebook\n上一节讲解了趋势跟踪与均值回复的长短线搭配的示例,本节讲解资金仓位管理与买入策略的搭配。\n首先导入本节需要使用的abupy中的模块:", "# 基础库导入\n\nfrom __future__ import print_function\nfrom __future__ import division\n\nimport warnings\nwarnings.filterwarnings('ignore')\nwarnings.simplefilter('ignore')\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport os\nimport sys\n# 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题\nsys.path.insert(0, os.path.abspath('../'))\nimport abupy\n\n# 使用沙盒数据,目的是和书中一样的数据环境\nabupy.env.enable_example_env_ipython()\n\nfrom abupy import AbuDownUpTrend, AbuPtPosition, ABuMarketDrawing\nfrom abupy import AbuFactorCloseAtrNStop, AbuFactorAtrNStop, AbuFactorPreAtrNStop, tl\nfrom abupy import abu, ABuProgress, AbuMetricsBase, EMarketTargetType", "abupy中的买策,卖策,选股,资管模块在内部都是面向对象的独立存在,只有使用者在顶层的调度是面向过程的,这种设计的目标是为上层使用者提供最大限度的自由,即通过代码实现整体策略的自由度,以及降低整体策略的复杂耦合度,便于后期维护升级扩张的需要。\n在abupy定制整体策略的步骤如下:\n1. 定性整体策略风格,必需(定性趋势策略或者回复策略) \n2. 定制买入信号策略,必需\n3. 定制卖出信号策略,必需\n4. 定制选股策略,可选\n5. 定制资金仓位管理策略,可选\n6. 定制滑点买入策略,可选(中低频回测可不涉及)\n在abpy中对于一个完整的策略其中1,2,3是必需要做的,4,5,6是可以选择进行的工作,从abupy提供的回测ui可交互界面大体可看到组合一个整体个性化策略步骤需要,如下图:\n\n上一节讲解的通过买入和卖出策略的搭配定性整体策略风格为趋势跟踪策略或均值回复策略,即通过买入策略和卖出策略定性整体策略的交易风格,选股策略和资金管理策略的意义更多在与配合整体策略提高稳定可靠性,简单如下所示:\n\n备注:abupy中更关键技术是使用交易结果预测拦截ump模块对策略交易进行深度优化定制,本节暂不涉及,请阅读‘第15节 量化交易和搜索引擎’或之后的教程\n关于定制选股策略相关内容在‘第27节 狗股选股策略与择时策略的配合’有完整示例,请阅读相关内容。\n本节主要讲解针对整体策略风格体制资金仓位管理策略,abupy默认的仓位管理策略为atr资管策略,详请阅读‘第4节 多支股票择时回测与仓位管理’。\n上一节使用了abupy内置的一个长短线买入策略AbuDownUpTrend:\n1. 寻找长线下跌的股票,比如一个季度(4个月)整体趋势为下跌趋势\n2. 短线走势上涨的股票,比如一个月整体趋势为上涨趋势\n3. 
Finally, use the turtle-style N-day breakout strategy as the strategy's final buy signal\n\nThis section customizes a matching capital-management strategy for this example strategy.\nThe idea behind the position strategy in this example is very simple, as shown in the figure below:\nIf the AbuDownUpTrend strategy is configured with different parameters, it may emit a buy signal at either position buy A or position buy B. The position management strategy then uses the earlier high at 3067 as the value '100' to locate the current buy price. Clearly the buy B entry is higher than the buy A entry, so:\n\nthe position allocated at buy A will be larger (mean reversion is assumed to have plenty of room to move up)\nthe position allocated at buy B will be smaller (mean reversion is assumed to have relatively little room to move up)\n\n\nThe position-sizing strategy does not need to judge whether the buy point is reasonable or when to sell; it only needs to decide how to allocate capital at buy A or buy B, keeping the coupling between strategy modules as low as possible.\nA simple coded illustration of the above idea:\n\nthe price curve falls from 10 to 5 and then rises to 9\nif the buy point buy_a is at 7, its price position relative to the highest point is 45.0\nif the buy point buy_b is at 9, its price position relative to the highest point is 85.0\nthe position allocated at buy A will be larger (mean reversion is assumed to have plenty of room to move up)\nthe position allocated at buy B will be smaller (mean reversion is assumed to have relatively little room to move up)\n\nas shown below:", "from scipy import stats\nprice = [10, 9, 8, 7, 6, 5, 6, 7, 8, 9]\nbuy_a = 7\nbuy_b = 9\nplt.plot(price, label='price')\nplt.axhline(buy_a, c='r', label='buy_a')\nplt.axhline(buy_b, c='g', label='buy_b')\nplt.legend(loc='best')\nprint('price position of buy_a relative to the highest point: {}'.format(stats.percentileofscore(price, buy_a)))\nprint('price position of buy_b relative to the highest point: {}'.format(stats.percentileofscore(price, buy_b)))", "First, backtest the AbuDownUpTrend strategy from the previous section with the default position management strategy and print the trade orders; you can see that the position management strategy is AbuAtrPosition, as shown below:", "# initial capital\ncash = 3000000\ndef run_loo_back(ps=None, n_folds=3, start=None, end=None):\n    us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS']\n    abu_result_tuple, _ = abu.run_loop_back(cash,\n                                            buy_factors,\n                                            sell_factors,\n                                            ps,\n                                            start=start,\n                                            end=end,\n                                            n_folds=n_folds,\n                                            choice_symbols=us_choice_symbols)\n    ABuProgress.clear_output()\n    return abu_result_tuple\n    \n# buy strategy: AbuDownUpTrend\nbuy_factors = [{'class': AbuDownUpTrend}]\n# sell strategies: profit-protecting take-profit + downside risk stop-loss + a wide take-profit level\nsell_factors = [{'stop_loss_n': 1.0, 'stop_win_n': 3.0,\n                 'class': AbuFactorAtrNStop},\n                {'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},\n                {'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}]\n# run the backtest\nabu_result_tuple = run_loo_back()\n# keep only orders that have a trade result\norders_pd_atr = abu_result_tuple.orders_pd[abu_result_tuple.orders_pd.result != 0]\norders_pd_atr.filter(['buy_cnt', 'buy_pos', 'buy_price', 'profit', 'result'])", "The built-in AbuPtPosition strategy in abupy implements the idea described above; please read the source code for full details. The key strategy code is:\n    def fit_position(self, factor_object):\n        \"\"\"\n        Position management for mean-reversion style strategies:\n        the position is decided by the rank of the current buy price within a recent window of the price series.\n        fit_position returns how many units (shares, lots, tons, contracts) to buy.\n        :param factor_object: an instance of an ABuFactorBuyBases subclass\n        :return: how many units (shares, lots, tons, contracts) to buy\n        \"\"\"\n        # self.kl_pd_buy is the data for the buy day; fetch the preceding past_day_cnt days of data\n        last_kl = factor_object.past_today_kl(self.kl_pd_buy, self.past_day_cnt)\n        if last_kl is None or last_kl.empty:\n            precent_pos = self.pos_base\n        else:\n            # use percentileofscore to rank the buy price within the past past_day_cnt days of prices\n            precent_pos = stats.percentileofscore(last_kl.close, self.bp)\n            precent_pos = (1 + (self.mid_precent - precent_pos) / 100) * self.pos_base\n        # maximum position cap: still limited by the upper-level position control, e.g. a computed full position is reduced to 75%; change the maximum position value to alter this\n        precent_pos = self.pos_max if precent_pos &gt; self.pos_max else precent_pos\n        # the result is how many units (shares, lots, tons, contracts) to buy\n        return self.read_cash * precent_pos / self.bp * self.deposit_rate\n\nNow run the backtest with the same buy and sell strategies, but with AbuPtPosition as the position management strategy, as follows:", "# the buy strategy is still AbuDownUpTrend, but the position field now uses AbuPtPosition as the strategy\nbuy_factors = [{'class': AbuDownUpTrend, 'position': {'class': AbuPtPosition, 'past_day_cnt': 80}}]\nabu_result_tuple = run_loo_back()\n# keep only orders that have a trade result\norders_pd_precent = abu_result_tuple.orders_pd[abu_result_tuple.orders_pd.result != 0]\norders_pd_precent.filter(['buy_cnt', 'buy_pos', 'buy_price', 'profit', 'result'])",
"对比上面两处交易单输出结果可以发现在buy_pos处使用的资管策略不同,导致在buy_cnt上资金仓位配比发生了变化。\nabu量化文档目录章节\n\n择时策略的开发\n择时策略的优化\n滑点策略与交易手续费\n多支股票择时回测与仓位管理\n选股策略的开发\n回测结果的度量\n寻找策略最优参数和评分\nA股市场的回测\n港股市场的回测\n比特币,莱特币的回测\n期货市场的回测\n机器学习与比特币示例\n量化技术分析应用\n量化相关性分析应用\n量化交易和搜索引擎\nUMP主裁交易决策\nUMP边裁交易决策\n自定义裁判决策交易\n数据源\nA股全市场回测\nA股UMP决策\n美股全市场回测\n美股UMP决策\n\nabu量化系统文档教程持续更新中,请关注公众号中的更新提醒。\n《量化交易之路》目录章节及随书代码地址\n\n第二章 量化语言——Python\n第三章 量化工具——NumPy\n第四章 量化工具——pandas\n第五章 量化工具——可视化\n第六章 量化工具——数学:你一生的追求到底能带来多少幸福\n第七章 量化系统——入门:三只小猪股票投资的故事\n第八章 量化系统——开发\n第九章 量化系统——度量与优化\n第十章 量化系统——机器学习•猪老三\n第十一章 量化系统——机器学习•ABU\n附录A 量化环境部署\n附录B 量化相关性分析\n附录C 量化统计分析及指标应用\n\n更多阿布量化量化技术文章\n更多关于量化交易相关请阅读《量化交易之路》\n更多关于量化交易与机器学习相关请阅读《机器学习之路》\n更多关于abu量化系统请关注微信公众号: abu_quant" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BinRoot/TensorFlow-Book
ch02_basics/Concept01_defining_tensors.ipynb
mit
[ "Ch 02: Concept 01\nDefining tensors\nImport TensorFlow and Numpy:", "import tensorflow as tf\nimport numpy as np", "Now, define a 2x2 matrix in different ways:", "m1 = [[1.0, 2.0], \n [3.0, 4.0]]\n\nm2 = np.array([[1.0, 2.0], \n [3.0, 4.0]], dtype=np.float32)\n\nm3 = tf.constant([[1.0, 2.0], \n [3.0, 4.0]])", "Let's see what happens when we print them:", "print(type(m1))\nprint(type(m2))\nprint(type(m3))", "So, that's what we're dealing with. Interesting. \nBy the way, there's a function called convert_to_tensor(...) that does exactly what you might expect. \nLet's use it to create tensor objects out of various types:", "t1 = tf.convert_to_tensor(m1, dtype=tf.float32)\nt2 = tf.convert_to_tensor(m2, dtype=tf.float32)\nt3 = tf.convert_to_tensor(m3, dtype=tf.float32)", "Ok, ok! Time for the reveal:", "print(type(t1))\nprint(type(t2))\nprint(type(t3))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_forward_sensitivity_maps.ipynb
bsd-3-clause
[ "%matplotlib inline", "Display sensitivity maps for EEG and MEG sensors\nSensitivity maps can be produced from forward operators that\nindicate how well different sensor types will be able to detect\nneural currents from different regions of the brain.\nTo get started with forward modeling see ref:tut_forward.", "# Author: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\nimport matplotlib.pyplot as plt\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\n\nsubjects_dir = data_path + '/subjects'\n\n# Read the forward solutions with surface orientation\nfwd = mne.read_forward_solution(fwd_fname, surf_ori=True)\nleadfield = fwd['sol']['data']\nprint(\"Leadfield size : %d x %d\" % leadfield.shape)", "Compute sensitivity maps", "grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')\nmag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')\neeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')", "Show gain matrix a.k.a. leadfield matrix with sensitivity map", "picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)\npicks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)\n\nfig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)\nfig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)\nfor ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):\n im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',\n cmap='RdBu_r')\n ax.set_title(ch_type.upper())\n ax.set_xlabel('sources')\n ax.set_ylabel('sensors')\n plt.colorbar(im, ax=ax, cmap='RdBu_r')\nplt.show()\n\nplt.figure()\nplt.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],\n bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],\n color=['c', 'b', 'k'])\nplt.legend()\nplt.title('Normal orientation sensitivity')\nplt.xlabel('sensitivity')\nplt.ylabel('count')\nplt.show()\n\ngrad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,\n clim=dict(lims=[0, 50, 100]))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
stubz/deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\n\nprint(text[3920:3960])", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n vocab = set(text)\n vocab_to_int = {c: i for i, c in enumerate(vocab)}\n int_to_vocab = dict(enumerate(vocab))\n \n return (vocab_to_int, int_to_vocab)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? 
)\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n    \"\"\"\n    Generate a dict to turn punctuation into a token.\n    :return: Tokenize dictionary where the key is the punctuation and the value is the token\n    \"\"\"\n    # TODO: Implement Function\n    dict_punc = {\n        '.': '||period||',\n        ',': '||comma||',\n        '\"': '||quotation_mark||',\n        ';': '||semicolon||',\n        '!': '||exclamation_mark||',\n        '?': '||question_mark||',\n        '(': '||left_parentheses||',\n        ')': '||right_parentheses||',\n        '--': '||dash||',\n        '\\n': '||return||'\n    }\n    return dict_punc\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build an RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n    warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network.
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # tf.reset_default_graph()\n # Declare placeholders we'll feed into the graph\n input = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n \n # \n learning_rate = tf.placeholder(tf.float32, name='learning_rate')\n\n # TODO: Implement Function\n return (input, targets, learning_rate)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=True)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)\n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([drop]*2) # In Anna Karina example, it is multiplied by num_layers, and num_layers was set 2.\n initial_state = cell.zero_state(batch_size, tf.float32)\n initial_state = tf.identity(initial_state, name='initial_state')\n\n # TODO: Implement Function\n return (cell, initial_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n \n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n \n # TODO: Implement Function\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n final_state = tf.identity(final_state, name='final_state')\n return (outputs, final_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n embed = get_embed(input_data, vocab_size, rnn_size) # embed_dim can be rnn_size? should we use something else?\n outputs, final_state = build_rnn(cell, embed)\n logits = tf.contrib.layers.fully_connected(outputs,vocab_size, \n weights_initializer=tf.truncated_normal_initializer(mean=0.0,stddev=0.01),\n biases_initializer=tf.zeros_initializer(),\n activation_fn=None)\n # TODO: Implement Function\n return (logits, final_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). 
Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n #n_batches = len(int_text)//batch_size\n # ignore texts that do not fit into the last batch size\n #mytext = int_text[:n_batches*batch_size]\n \n n_batches = int(len(int_text) / (batch_size * seq_length))\n\n # Drop the last few characters to make only full batches\n xdata = np.array(int_text[: n_batches * batch_size * seq_length])\n ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n\n x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)\n y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(x_batches, y_batches)))\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 500\n# Batch Size\nbatch_size = 500\n# RNN Size\nrnn_size = 256\n# Embedding Dimension Size\nembed_dim = None\n# Sequence Length\nseq_length = 10\n# Learning Rate\nlearning_rate = 0.005\n# Show stats for every n number of batches\nshow_every_n_batches = 100\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n InputTensor = loaded_graph.get_tensor_by_name('input:0')\n InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')\n FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')\n ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')\n return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n p = np.squeeze(probabilities)\n idx = np.argsort(p)[-1]\n return int_to_vocab[idx]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n    # Load saved model\n    loader = tf.train.import_meta_graph(load_dir + '.meta')\n    loader.restore(sess, load_dir)\n\n    # Get Tensors from loaded model\n    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n    # Sentences generation setup\n    gen_sentences = [prime_word + ':']\n    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n    # Generate sentences\n    for n in range(gen_length):\n        # Dynamic Input\n        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n        dyn_seq_length = len(dyn_input[0])\n\n        # Get Prediction\n        probabilities, prev_state = sess.run(\n            [probs, final_state],\n            {input_text: dyn_input, initial_state: prev_state})\n        \n        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n        gen_sentences.append(pred_word)\n    \n    # Remove tokens\n    tv_script = ' '.join(gen_sentences)\n    for key, token in token_dict.items():\n        ending = ' ' if key in ['\\n', '(', '\"'] else ''\n        tv_script = tv_script.replace(' ' + token.lower(), key)\n    tv_script = tv_script.replace('\\n ', '\\n')\n    tv_script = tv_script.replace('( ', '(')\n    \n    print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kevinsung/OpenFermion
docs/tutorials/circuits_1_basis_change.ipynb
apache-2.0
[ "Copyright 2020 The OpenFermion Developers", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Circuits 1: Compiling arbitrary single-particle basis rotations in linear depth\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/openfermion/tutorials/circuits_1_basis_change\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/circuits_1_basis_change.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/circuits_1_basis_change.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/circuits_1_basis_change.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>\n\nThis is the first of several tutorials demonstrating the compilation of quantum circuits. These tutorials build on one another and should be studied in order. In this tutorial we will discuss the compilation of circuits for implementing arbitrary rotations of the single-particle basis of an electronic structure simulation. As an example, we show how one can use these methods to simulate the evolution of an arbitrary non-interacting fermion model.\nSetup\nInstall the OpenFermion package:", "try:\n import openfermion\nexcept ImportError:\n !pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion", "Background\nSecond quantized fermionic operators\nIn order to represent fermionic systems on a quantum computer one must first discretize space. Usually, one expands the many-body wavefunction in a basis of spin-orbitals $\\varphi_p = \\varphi_p(r)$ which are single-particle basis functions. For reasons of spatial efficiency, all NISQ (and even most error-corrected) algorithms for simulating fermionic systems focus on representing operators in second-quantization. Second-quantized operators are expressed using the fermionic creation and annihilation operators, $a^\\dagger_p$ and $a_p$. The action of $a^\\dagger_p$ is to excite a fermion in spin-orbital $\\varphi_p$ and the action of $a_p$ is to annihilate a fermion from spin-orbital $\\varphi_p$. 
Specifically, if electron $i$ is represented in a space of spin-orbitals ${\\varphi_p(r_i)}$ then $a^\\dagger_p$ and $a_p$ are related to Slater determinants through the equivalence,\n$$\n\\langle r_0 \\cdots r_{\\eta-1} | a^\\dagger_{0} \\cdots a^\\dagger_{\\eta-1} | \\varnothing\\rangle \\equiv \\sqrt{\\frac{1}{\\eta!}}\n\\begin{vmatrix}\n\\varphi_{0}\\left(r_0\\right) & \\varphi_{1}\\left( r_0\\right) & \\cdots & \\varphi_{\\eta-1} \\left( r_0\\right) \\\n\\varphi_{0}\\left(r_1\\right) & \\varphi_{1}\\left( r_1\\right) & \\cdots & \\varphi_{\\eta-1} \\left( r_1\\right) \\\n\\vdots & \\vdots & \\ddots & \\vdots\\\n\\varphi_{0}\\left(r_{\\eta-1}\\right) & \\varphi_{1}\\left(r_{\\eta-1}\\right) & \\cdots & \\varphi_{\\eta-1} \\left(r_{\\eta-1}\\right) \\end{vmatrix}\n$$\nwhere $\\eta$ is the number of electrons in the system, $|\\varnothing \\rangle$ is the Fermi vacuum and $\\varphi_p(r)=\\langle r|\\varphi_p \\rangle$ are the single-particle orbitals that define the basis. By using a basis of Slater determinants, we ensure antisymmetry in the encoded state.\nRotations of the single-particle basis\nVery often in electronic structure calculations one would like to rotate the single-particle basis. That is, one would like to generate new orbitals that are formed from a linear combination of the old orbitals. Any particle-conserving rotation of the single-particle basis can be expressed as\n$$\n\\tilde{\\varphi}p = \\sum{q} \\varphi_q u_{pq}\n\\quad\n\\tilde{a}^\\dagger_p = \\sum_{q} a^\\dagger_q u_{pq}\n\\quad\n\\tilde{a}p = \\sum{q} a_q u_{pq}^*\n$$\nwhere $\\tilde{\\varphi}p$, $\\tilde{a}^\\dagger_p$, and $\\tilde{a}^\\dagger_p$ correspond to spin-orbitals and operators in the rotated basis and $u$ is an $N\\times N$ unitary matrix. From the Thouless theorem, this single-particle rotation\nis equivalent to applying the $2^N \\times 2^N$ operator\n$$\n U(u) = \\exp\\left(\\sum{pq} \\left[\\log u \\right]{pq} \\left(a^\\dagger_p a_q - a^\\dagger_q a_p\\right)\\right) \n$$\nwhere $\\left[\\log u\\right]{pq}$ is the $(p, q)$ element of the matrix $\\log u$.\nThere are many reasons that one might be interested in performing such basis rotations. For instance, one might be interested in preparing the Hartree-Fock (mean-field) state of a chemical system, by rotating from some initial orbitals (e.g. atomic orbitals or plane waves) into the molecular orbitals of the system. Alternatively, one might be interested in rotating from a basis where certain operators are diagonal (e.g. the kinetic operator is diagonal in the plane wave basis) to a basis where certain other operators are diagonal (e.g. the Coulomb operator is diagonal in the position basis). Thus, it is a very useful thing to be able to apply circuits corresponding to $U(u)$ on a quantum computer in low depth.\nCompiling linear depth circuits to rotate the orbital basis\nOpenFermion prominently features routines for implementing the linear depth / linear connectivity basis transformations described in Phys. Rev. Lett. 120, 110501. While we will not discuss this functionality here, we also support routines for compiling the more general form of these transformations which do not conserve particle-number, known as a Bogoliubov transformation, using routines described in Phys. Rev. Applied 9, 044036. We will not discuss the details of how these methods are implemented here and instead refer readers to those papers. 
All that one needs in order to compile the circuit $U(u)$ using OpenFermion is the $N \\times N$ matrix $u$, which we refer to in documentation as the \"basis_transformation_matrix\". Note that if one intends to apply this matrix to a computational basis state with only $\\eta$ electrons, then one can reduce the number of gates required by instead supplying the $\\eta \\times N$ rectangular matrix that characterizes the rotation of the occupied orbitals only. OpenFermion will automatically take advantage of this symmetry.\nOpenFermion example implementation: exact evolution under tight binding models\nIn this example will show how basis transforms can be used to implement exact evolution under a random Hermitian one-body fermionic operator\n\\begin{equation}\nH = \\sum_{pq} T_{pq} a^\\dagger_p a_q.\n\\end{equation}\nThat is, we will compile a circuit to implement $e^{-i H t}$ for some time $t$. Of course, this is a tractable problem classically but we discuss it here since it is often useful as a subroutine for more complex quantum simulations. To accomplish this evolution, we will use basis transformations. Suppose that $u$ is the basis transformation matrix that diagonalizes $T$. Then, we could implement $e^{-i H t}$ by implementing $U(u)^\\dagger (\\prod_{k} e^{-i \\lambda_k Z_k}) U(u)$ where $\\lambda_k$ are the eigenvalues of $T$. \nBelow, we initialize the T matrix characterizing $H$ and then obtain the eigenvalues $\\lambda_k$ and eigenvectors $u_k$ of $T$. We print out the OpenFermion FermionOperator representation of $T$.", "import openfermion\nimport numpy\n\n# Set the number of qubits in our example.\nn_qubits = 3\nsimulation_time = 1.\nrandom_seed = 8317\n\n# Generate the random one-body operator.\nT = openfermion.random_hermitian_matrix(n_qubits, seed=random_seed)\n\n# Diagonalize T and obtain basis transformation matrix (aka \"u\").\neigenvalues, eigenvectors = numpy.linalg.eigh(T)\nbasis_transformation_matrix = eigenvectors.transpose()\n\n# Print out familiar OpenFermion \"FermionOperator\" form of H.\nH = openfermion.FermionOperator()\nfor p in range(n_qubits):\n for q in range(n_qubits):\n term = ((p, 1), (q, 0))\n H += openfermion.FermionOperator(term, T[p, q])\nprint(H)", "Now we're ready to make a circuit! First we will use OpenFermion to generate the basis transform $U(u)$ from the basis transformation matrix $u$ by calling the Bogoliubov transform function (named as such because this function can also handle non-particle conserving basis transformations). Then, we'll apply local $Z$ rotations to phase by the eigenvalues, then we'll apply the inverse transformation. That will finish the circuit. 
We're just going to print out the first rotation to keep things easy-to-read, but feel free to play around with the notebook.", "import openfermion\nimport cirq\nimport cirq_google\n\n# Initialize the qubit register.\nqubits = cirq.LineQubit.range(n_qubits)\n\n# Start circuit with the inverse basis rotation, print out this step.\ninverse_basis_rotation = cirq.inverse(openfermion.bogoliubov_transform(qubits, basis_transformation_matrix))\ncircuit = cirq.Circuit(inverse_basis_rotation)\nprint(circuit)\n\n# Add diagonal phase rotations to circuit.\nfor k, eigenvalue in enumerate(eigenvalues):\n phase = -eigenvalue * simulation_time\n circuit.append(cirq.rz(rads=phase).on(qubits[k]))\n\n# Finally, restore basis.\nbasis_rotation = openfermion.bogoliubov_transform(qubits, basis_transformation_matrix)\ncircuit.append(basis_rotation)", "Finally, we can check whether our circuit applied to a random initial state with the exact result. Print out the fidelity with the exact result.", "# Initialize a random initial state.\ninitial_state = openfermion.haar_random_vector(\n 2 ** n_qubits, random_seed).astype(numpy.complex64)\n\n# Numerically compute the correct circuit output.\nimport scipy\nhamiltonian_sparse = openfermion.get_sparse_operator(H)\nexact_state = scipy.sparse.linalg.expm_multiply(\n -1j * simulation_time * hamiltonian_sparse, initial_state)\n\n# Use Cirq simulator to apply circuit.\nsimulator = cirq.Simulator()\nresult = simulator.simulate(circuit, qubit_order=qubits,\n initial_state=initial_state)\nsimulated_state = result.final_state_vector\n\n# Print final fidelity.\nfidelity = abs(numpy.dot(simulated_state, numpy.conjugate(exact_state)))**2\nprint(fidelity)", "Thus, we see that the circuit correctly effects the intended evolution. We can now use Cirq's compiler to output the circuit using gates native to near-term devices, and then optimize those circuits. We'll output in QASM 2.0 just to demonstrate that functionality.", "xmon_circuit = cirq_google.optimized_for_xmon(circuit)\nprint(xmon_circuit.to_qasm())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
EBIvariation/eva-cttv-pipeline
data-exploration/complex-events/notebooks/repeat-expansion-explore.ipynb
apache-2.0
[ "import os\nimport sys\n\nimport pandas as pd\nimport requests\n\nfrom eva_cttv_pipeline.clinvar_xml_utils.clinvar_xml_utils import *\nfrom eva_cttv_pipeline.clinvar_xml_utils.clinvar_identifier_parsing import *\nfrom consequence_prediction.repeat_expansion_variants.pipeline import *\n\nsys.path.append('../')\nfrom filter_clinvar_xml import filter_xml\n\nPROJECT_ROOT = '/home/april/projects/opentargets'\nclinvar_path = os.path.join(PROJECT_ROOT, 'ClinVarFullRelease_00-latest.xml.gz')\nmicrosat_xml = os.path.join(PROJECT_ROOT, 'repeat-expansion', 'microsatellites.xml.gz')\n\n# Filter XML for microsatellites\ndef microsat(x):\n return x.measure and x.measure.variant_type == 'Microsatellite'\n\n\nfilter_xml(\n input_xml=clinvar_path,\n output_xml=microsat_xml,\n filter_fct=microsat,\n)\n\n# only microsatellite variants\ndataset = ClinVarDataset(microsat_xml)", "Part 1: HGVS parsing\nConfirming HGVS coverage (similar to complex event investigation). Note that the python hgvs module can't parse anything in this dataset, so omitting it from the investigation.", "def try_to_parse_hgvs(hgvs_list):\n one_parseable = False\n for hgvs in hgvs_list:\n try:\n if any(parse_variant_identifier(hgvs)):\n one_parseable = True\n except:\n pass # these are None\n return one_parseable\n\nhgvs_count = 0\ntoplevel_refseq_hgvs_count = 0\nparseable_hgvs_count = 0\nparseable_name_count = 0\nparseable_hgvs_only = 0\nparseable_name_only = 0\n\nhgvs_output_file = open(os.path.join(PROJECT_ROOT, 'repeat-expansion', 'unparseable-hgvs.txt'), 'w+')\n\nfor record in dataset:\n m = record.measure\n can_parse_hgvs = False\n can_parse_name = False\n \n if m.hgvs:\n hgvs_count += 1\n if m.toplevel_refseq_hgvs:\n toplevel_refseq_hgvs_count += 1\n\n # hgvs parseability\n if try_to_parse_hgvs(m.hgvs):\n parseable_hgvs_count += 1\n can_parse_hgvs = True\n else:\n hgvs_output_file.write(f'\\n{record.accession}\\n')\n for hgvs in m.hgvs:\n hgvs_output_file.write(hgvs + '\\n')\n \n # variant name parseability\n if try_to_parse_hgvs(m.all_names):\n parseable_name_count += 1\n can_parse_name = True\n \n # exclusive counts\n if can_parse_hgvs and not can_parse_name:\n parseable_hgvs_only += 1\n if can_parse_name and not can_parse_hgvs:\n parseable_name_only += 1\n \nhgvs_output_file.close()\n\n# collect results - \ncounts = {\n 'Any HGVS': hgvs_count,\n 'Top level refseq HGVS': toplevel_refseq_hgvs_count,\n 'HGVS parseable': parseable_hgvs_count,\n 'Name parseable': parseable_name_count,\n 'Only HGVS parseable': parseable_hgvs_only,\n 'Only name parseable': parseable_name_only,\n}\n\n# total is 18551\ncounts", "Thoughts\n\nWe should probably try parsing everything we can\nOrder of preference: top level refseq > any other HGVS > any variant name\nShould support LRG (assuming Ensembl accepts this)\n\nUnparseable from logs (parsing name only, falls back on hgvs when name missing):\nNC_000004.12:g.(41745972_41746031)ins(15_27)\nNC_000004.12:g.(41745972_41746031)ins(15_27)\nNC_000004.12:g.(41745972_41746031)ins(15_27)\nNG_031977.1:g.(5321_5338)ins(360_?)\nNG_031977.1:g.(5321_5338)ins(360_?)\nNG_054747.1:g.(19392_19426)TTTTA[(7_?)]TTTCA[(n)]\nNR_002717.2(ATXN8OS):n.1103CTG[(107_127)]\nNR_002717.2(ATXN8OS):n.1103CTG[(107_127)]\nNR_002717.2(ATXN8OS):n.1103CTG[(107_127)]\nNR_002717.2(ATXN8OS):n.1103CTG[(107_127)]\nNR_002717.2(ATXN8OS):n.1103CTG[(15_40)]\nNR_002717.2(ATXN8OS):n.1103CTG[(15_40)]\nNR_002717.2(ATXN8OS):n.1103CTG[(15_40)]\nNR_003051.3(RMRP):n.-10_-9insCTCTGTGAAGCCTCTGTGAAGC\nNR_120611.1:n.192CCG[(35_?)]\nfragile site, folic acid 
type, rare, fra(12)(q13.1)\nUnparseable output from above counts (parsing any hgvs):\n```\nRCV000008537\nLRG_863t1:c.589_591CAG(36_38)\nLRG_863p1:p.Gln197_Gln208delinsGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGlnGln\nRCV000761550\nLRG_762t1:c.-128GGC(55_200)\nRCV000761551\nLRG_762t1:c.-128GGC(55_200)\nRCV000853558\nNR_120611.1:n.192CCG[(35_?)]\nRCV000856572\nNG_054747.1:g.(19392_19426)TTTTA[(7_?)]TTTCA[(n)]\n```\nPart 2: Consequence prediction\nSpecifically issues like these:\n* Missing genes: UGT1A, ATXN8, various LOC*\n* Genes mapping to non-standard chromosomes in Ensembl", "pd.set_option('display.max_rows', None)\npd.set_option('display.max_colwidth', None)\n\nvariants, s = load_clinvar_data(microsat_xml)\n\nlen(variants)\n\ns\n\nvariants = annotate_ensembl_gene_info(variants)\n\ndef incomplete(row):\n return not (\n pd.notnull(row['EnsemblGeneID']) and\n pd.notnull(row['EnsemblGeneName']) and\n pd.notnull(row['RepeatType']) and\n row['EnsemblChromosomeName'] in STANDARD_CHROMOSOME_NAMES\n )\n\n\ndef required_cols(variants):\n return display_cols(variants)\n\n\ndef display_cols(variants, add_cols=None):\n if add_cols:\n return variants[add_cols + ['GeneSymbol', 'EnsemblGeneID', 'EnsemblGeneName', 'RepeatType', 'EnsemblChromosomeName']]\n return variants[['GeneSymbol', 'EnsemblGeneID', 'EnsemblGeneName', 'RepeatType', 'EnsemblChromosomeName']]\n\n\ndef gene_symbol_like(variants, s):\n variants_with_names = variants[pd.notna(variants['GeneSymbol'])]\n return variants_with_names[variants_with_names['GeneSymbol'].str.contains(s)]\n\n\ndef nonstandard_chr_name(variants):\n variants_with_chr_name = variants[pd.notna(variants['EnsemblChromosomeName'])]\n return variants_with_chr_name[~variants_with_chr_name['EnsemblChromosomeName'].isin(STANDARD_CHROMOSOME_NAMES)]\n\n# all incomplete variants\nincomplete_variants = variants[variants.apply(incomplete, axis=1)]\n\nprint(len(variants))\nprint(len(incomplete_variants))", "Overview:\n18,550 microsatellites ><sup>(1)</sup> 739 repeat expansion candidates > 1544 annotated variants ><sup>(2)</sup> 1485 complete\n\n(1) is dropping deletions + short expansions which is expected\nI assume we'll eventually also care about microsatellites that aren't expansions but that's maybe a discussion for another day\n\n\n(2) is a combination of parsing issues not extracting variant type, and consequence prediction issues described in this section\n\nMissing genes", "missing_genes = pd.concat([\n gene_symbol_like(incomplete_variants, 'UGT1A'),\n gene_symbol_like(incomplete_variants, 'ATXN8'),\n gene_symbol_like(incomplete_variants, 'LOC'),\n])\n\ndisplay_cols(missing_genes, add_cols=['Name'])\n\n# Check ATXN8 - ATXN8OS always present\natxn8_rcvs = variants[variants['GeneSymbol'] == 'ATXN8']['RCVaccession']\ndisplay_cols(variants[variants['RCVaccession'].isin(atxn8_rcvs)].sort_values(['RCVaccession', 'GeneSymbol']), add_cols=['RCVaccession'])\n\n# Check UGT1A - specific UGT1A* always present\nugt1a_rcvs = variants[variants['GeneSymbol'] == 'UGT1A']['RCVaccession']\ndisplay_cols(variants[variants['RCVaccession'].isin(ugt1a_rcvs)].sort_values(['RCVaccession', 'GeneSymbol']), add_cols=['RCVaccession'])\n\n# Check LOC* - chr/start/stop always present\nloc_vars = incomplete_variants[incomplete_variants['GeneSymbol'].str.contains('LOC')]\nloc_rcvs = loc_vars['RCVaccession'].tolist()\n\nfor r in dataset:\n if r.accession in loc_rcvs:\n m = r.measure\n print(f\"{m.preferred_gene_symbols}: {m.chr}, 
{m.sequence_location_helper('start')}, {m.sequence_location_helper('stop')}\")\n\n# Another approach to LOCs using NCBI esearch\ndef esearch(s):\n eutils_url = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/'\n esearch_url = eutils_url + 'esearch.fcgi'\n esummary_url = eutils_url + 'esummary.fcgi'\n\n payload = {'db': 'Gene', 'term': f'\"{s}\"', 'retmode': 'JSON'}\n data = requests.get(esearch_url, params=payload).json()\n if data:\n result_id_list = data.get('esearchresult').get('idlist')\n payload = {'db': 'Gene', 'id': ','.join(result_id_list), 'retmode': 'JSON'}\n summary_list = requests.get(esummary_url, params=payload).json()\n return result_id_list, summary_list\n return None, None \n\nlocs = gene_symbol_like(incomplete_variants, 'LOC')\nlocs = locs['GeneSymbol'].tolist()\n\nfor l in locs:\n print('=====')\n ids, summary_list = esearch(l)\n for i in ids:\n loc_name = summary_list['result'][i]['name']\n print(loc_name)\n print(f'https://www.ncbi.nlm.nih.gov/gene/{loc_name[3:]}')\n print(summary_list['result'][i]['genomicinfo'])", "Non-standard chromosomes", "nonstandard_chr = nonstandard_chr_name(incomplete_variants)\n\ndisplay_cols(nonstandard_chr)\n\n# Check non-standard chromosome names - also have standard chromosome annotated\nnonstandard_chr_names = nonstandard_chr['EnsemblChromosomeName'].tolist()\nnonstandard_rcvs = variants[variants['EnsemblChromosomeName'].isin(nonstandard_chr_names)]['RCVaccession'].tolist()\n\n(\n variants[variants['RCVaccession'].isin(nonstandard_rcvs)].groupby(['RCVaccession','EnsemblGeneName'])\n .agg({'EnsemblChromosomeName': lambda x: x.tolist()})\n .sort_values(['RCVaccession'])\n)", "Part 3: Correctness\n\nMixed repeats\nClassification of microsatellite events without complete coordinates - are they all indeed expansions?\nSpot-check some weird cases", "# transcript_id, coordinate_span, repeat_unit_length, is_protein_hgvs\n\nparse_variant_identifier('NM_004409.4:c.*224_*283CTG[(173_283)]CCG[1]CTG[8]CCG[2]CTG[2]CCG[1]CTG[4]CCG[1]CTG[30]')\n# 224-283 bases after last coding region, CTG repeated 173-283 times, CCG once, etc.\n# what exactly is the coordinate range telling you?\n\nparse_variant_identifier('NC_000008.10:g.119379055_119379157TGAAA[100_?]TAAAA[40_?]')\n# TGAAA repeated 100+ times, TAAAA repeated 40+ times?\n# again what is the point of the coordinate range?\n\nparse_variant_identifier('NM_000548.3(TSC2):c.5068+27_5069-47dup34')\n# 27 from one end of intron @ 5068, up to 47 from other end - 34 bases duplicated\n# how are you supposed to know how long the intron is??? 
34 + 27 + 47 + 1?\n\nparse_variant_identifier('NM_001243246.1(P3H1):c.2049_2067CGAGCGGGTGAGAGCAGCT[3] (p.Trp696delinsSerSerGlyTer)')\n# 19 bases within coding region repeated 3 times\n\nlen('CGAGCGGGTGAGAGCAGCT') == 2067 - 2049 + 1\n\nparse_variant_identifier('NM_000368.4(TSC1):c.914-88_914-58T(27_30)')\n# 88 from one end of intron up to 58 from same end, T repeated 27-30 times\n# coordinate span seems to be max length of the variant\n\nparse_variant_identifier('NM_001256054.2(C9orf72):c.-45+163_-45+180GGGGCC(2_25)')\n# 45 bases before first coding region - 163-180 bases after this?, GGGGCC repeated 2-25 times", "Thoughts\n\nMeaning of coordinate span not always clear\nMixed repeats\nthe repeat unit is I guess ill-defined, but we currently use the first\nmight need to get all repeat units in the future\n\n\n\nRepeats lacking explicit coordinates", "incomplete_repeats = []\nfor r in dataset:\n if r.measure and not r.measure.has_complete_coordinates and r.measure.is_repeat_expansion_variant:\n incomplete_repeats.append(r.measure)\n\nlen(incomplete_repeats)\n\n# pretty print xml\ndef pprint(x):\n print(ElementTree.tostring(x, encoding='unicode'))\n\ninsertions = []\ndeletions = []\nrepeat_number_multiple = [] # span is some multiple of repeat unit length\nnon_repeat_multiple = [] # span is not a multiple of repeat unit length\nmissing_seq_loc = []\nmultiple_seq_loc = []\nunparseable = []\nfor m in incomplete_repeats:\n if 'del' in m.get_variant_name_or_hgvs():\n deletions.append(m)\n continue \n seq_locs = find_elements(m.measure_xml, './SequenceLocation[@Assembly=\"GRCh38\"]')\n if len(seq_locs) < 1:\n missing_seq_loc.append(m)\n continue\n if len(seq_locs) > 1:\n multiple_seq_loc.append(m)\n continue\n \n sl = seq_locs[0] \n start = int(sl.attrib.get('start'))\n stop = int(sl.attrib.get('stop'))\n loc_span = stop - start + 1\n repeat_unit_len = m.hgvs_properties.repeat_unit_length\n if not repeat_unit_len:\n unparseable.append(m)\n continue\n\n if start == stop:\n insertions.append(m)\n elif loc_span % repeat_unit_len == 0:\n repeat_number_multiple.append((m, loc_span, repeat_unit_len))\n else:\n non_repeat_multiple.append((m, loc_span, repeat_unit_len))\n\nprint('Insertions:', len(insertions))\nprint('Deletions:', len(deletions))\nprint('Multiple of repeat unit length:', len(repeat_number_multiple))\nprint('Not a repeat multiple:', len(non_repeat_multiple))\nprint('Multiple locations:', len(multiple_seq_loc))\nprint('Missing location:', len(missing_seq_loc))\n\n# This one is actually a non-coding HGVS that we should otherwise be able to parse:\n# NR_002717.2(ATXN8OS):n.1103CTG[(15_40)]\n# start is 70139384, stop is 70139386, span == unit == 3\nprint('Unparseable:', len(unparseable))\n\nfor m, span, unit in repeat_number_multiple:\n print(m.get_variant_name_or_hgvs())\n print('span:', span)\n print('repeat unit length:', unit)\n print()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
syednasar/datascience
deeplearning/sentiment-analysis/sentiment_network/.ipynb_checkpoints/Sentiment Classification - Mini Project 2-checkpoint.ipynb
mit
[ "Sentiment Classification & How To \"Frame Problems\" for a Neural Network\nby Andrew Trask\n\nTwitter: @iamtrask\nBlog: http://iamtrask.github.io\n\nWhat You Should Already Know\n\nneural networks, forward and back-propagation\nstochastic gradient descent\nmean squared error\nand train/test splits\n\nWhere to Get Help if You Need it\n\nRe-watch previous Udacity Lectures\nLeverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)\nShoot me a tweet @iamtrask\n\nTutorial Outline:\n\n\nIntro: The Importance of \"Framing a Problem\"\n\n\nCurate a Dataset\n\nDeveloping a \"Predictive Theory\"\n\nPROJECT 1: Quick Theory Validation\n\n\nTransforming Text to Numbers\n\n\nPROJECT 2: Creating the Input/Output Data\n\n\nPutting it all together in a Neural Network\n\n\nPROJECT 3: Building our Neural Network\n\n\nUnderstanding Neural Noise\n\n\nPROJECT 4: Making Learning Faster by Reducing Noise\n\n\nAnalyzing Inefficiencies in our Network\n\n\nPROJECT 5: Making our Network Train and Run Faster\n\n\nFurther Noise Reduction\n\n\nPROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary\n\n\nAnalysis: What's going on in the weights?\n\n\nLesson: Curate a Dataset", "def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r') # What we know!\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r') # What we WANT to know!\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()\n\nlen(reviews)\n\nreviews[0]\n\nlabels[0]", "Lesson: Develop a Predictive Theory", "print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)", "Project 1: Quick Theory Validation", "from collections import Counter\nimport numpy as np\n\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()\n\nfor i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1\n\npositive_counts.most_common()\n\npos_neg_ratios = Counter()\n\nfor term,cnt in list(total_counts.most_common()):\n if(cnt > 100):\n pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n pos_neg_ratios[term] = pos_neg_ratio\n\nfor word,ratio in pos_neg_ratios.most_common():\n if(ratio > 1):\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))\n\n# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]", "Transforming Text into Numbers", "from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')\n\nreview = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nusdbsystem/incubator-singa
doc/en/docs/notebook/rnn.ipynb
apache-2.0
[ "Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements; and to You under the Apache License, Version 2.0. \nRNN for Character Level Language Modeling\nDataset pre-processing\nsample data", "from __future__ import division\nfrom __future__ import print_function\nfrom future import standard_library\nstandard_library.install_aliases()\nfrom builtins import zip\nfrom builtins import range\nfrom builtins import object\nfrom past.utils import old_div\nimport pickle as pickle\nimport numpy as np\nimport argparse\nimport sys\nfrom tqdm import tnrange, tqdm_notebook\n\n# sys.path.append(os.path.join(os.path.dirname(__file__), '../../build/python'))\nfrom singa import layer\nfrom singa import loss\nfrom singa import device\nfrom singa import tensor\nfrom singa import optimizer\nfrom singa import initializer\nfrom singa.proto import model_pb2\nfrom singa import utils\n\nclass Data(object):\n\n def __init__(self, fpath, batch_size=32, seq_length=100, train_ratio=0.8):\n '''Data object for loading a plain text file.\n\n Args:\n fpath, path to the text file.\n train_ratio, split the text file into train and test sets, where\n train_ratio of the characters are in the train set.\n '''\n self.raw_data = open(fpath, 'r').read() # read text file\n chars = list(set(self.raw_data))\n self.vocab_size = len(chars)\n self.char_to_idx = {ch: i for i, ch in enumerate(chars)}\n self.idx_to_char = {i: ch for i, ch in enumerate(chars)}\n data = [self.char_to_idx[c] for c in self.raw_data]\n # seq_length + 1 for the data + label\n nsamples = old_div(len(data), (1 + seq_length))\n data = data[0:nsamples * (1 + seq_length)]\n data = np.asarray(data, dtype=np.int32)\n data = np.reshape(data, (-1, seq_length + 1))\n # shuffle all sequences\n np.random.shuffle(data)\n self.train_dat = data[0:int(data.shape[0]*train_ratio)]\n self.num_train_batch = old_div(self.train_dat.shape[0], batch_size)\n self.val_dat = data[self.train_dat.shape[0]:]\n self.num_test_batch = old_div(self.val_dat.shape[0], batch_size)\n self.batch_size = batch_size\n self.seq_length = seq_length\n print('train dat', self.train_dat.shape)\n print('val dat', self.val_dat.shape)\n\n\ndef numpy2tensors(npx, npy, dev):\n '''batch, seq, dim -- > seq, batch, dim'''\n tmpx = np.swapaxes(npx, 0, 1)\n tmpy = np.swapaxes(npy, 0, 1)\n inputs = []\n labels = []\n for t in range(tmpx.shape[0]):\n x = tensor.from_numpy(tmpx[t])\n y = tensor.from_numpy(tmpy[t])\n x.to_device(dev)\n y.to_device(dev)\n inputs.append(x)\n labels.append(y)\n return inputs, labels\n\n\ndef convert(batch, batch_size, seq_length, vocab_size, dev):\n '''convert a batch of data into a sequence of input tensors'''\n y = batch[:, 1:]\n x1 = batch[:, :seq_length]\n x = np.zeros((batch_size, seq_length, vocab_size), dtype=np.float32)\n for b in range(batch_size):\n for t in range(seq_length):\n c = x1[b, t]\n x[b, t, c] = 1\n return numpy2tensors(x, y, dev)", "Prepare the dataset. Download all works of Shakespeare concatenated. Other plain text files can also be used. 
\nCreate the network", "def get_lr(epoch):\n return old_div(0.001, float(1 << (old_div(epoch, 50))))\n\nhidden_size=32\nnum_stacks=1\ndropout=0.5\n\ndata = Data('static/shakespeare_input.txt')\n# SGD with L2 gradient normalization\nopt = optimizer.RMSProp(constraint=optimizer.L2Constraint(5))\ncuda = device.create_cuda_gpu()\nrnn = layer.LSTM(name='lstm', hidden_size=hidden_size, num_stacks=num_stacks, dropout=dropout, input_sample_shape=(data.vocab_size,))\nrnn.to_device(cuda)\nrnn_w = rnn.param_values()[0]\nrnn_w.uniform(-0.08, 0.08) \n\ndense = layer.Dense('dense', data.vocab_size, input_sample_shape=(32,))\ndense.to_device(cuda)\ndense_w = dense.param_values()[0]\ndense_b = dense.param_values()[1]\nprint('dense w ', dense_w.shape)\nprint('dense b ', dense_b.shape)\ninitializer.uniform(dense_w, dense_w.shape[0], 0)\nprint('dense weight l1 = %f' % (dense_w.l1()))\ndense_b.set_value(0)\nprint('dense b l1 = %f' % (dense_b.l1()))\n\ng_dense_w = tensor.Tensor(dense_w.shape, cuda)\ng_dense_b = tensor.Tensor(dense_b.shape, cuda)", "Conduct SGD", "lossfun = loss.SoftmaxCrossEntropy()\ntrain_loss = 0\nfor epoch in range(3):\n bar = tnrange(data.num_train_batch, desc='Epoch %d' % 0)\n for b in bar:\n batch = data.train_dat[b * data.batch_size: (b + 1) * data.batch_size]\n inputs, labels = convert(batch, data.batch_size, data.seq_length, data.vocab_size, cuda)\n inputs.append(tensor.Tensor())\n inputs.append(tensor.Tensor())\n\n outputs = rnn.forward(model_pb2.kTrain, inputs)[0:-2]\n grads = []\n batch_loss = 0\n g_dense_w.set_value(0.0)\n g_dense_b.set_value(0.0)\n for output, label in zip(outputs, labels):\n act = dense.forward(model_pb2.kTrain, output)\n lvalue = lossfun.forward(model_pb2.kTrain, act, label)\n batch_loss += lvalue.l1()\n grad = lossfun.backward()\n grad /= data.batch_size\n grad, gwb = dense.backward(model_pb2.kTrain, grad)\n grads.append(grad)\n g_dense_w += gwb[0]\n g_dense_b += gwb[1]\n # print output.l1(), act.l1()\n bar.set_postfix(train_loss=old_div(batch_loss, data.seq_length))\n train_loss += batch_loss\n\n grads.append(tensor.Tensor())\n grads.append(tensor.Tensor())\n g_rnn_w = rnn.backward(model_pb2.kTrain, grads)[1][0]\n dense_w, dense_b = dense.param_values()\n opt.apply_with_lr(epoch, get_lr(epoch), g_rnn_w, rnn_w, 'rnnw')\n opt.apply_with_lr(epoch, get_lr(epoch), g_dense_w, dense_w, 'dense_w')\n opt.apply_with_lr(epoch, get_lr(epoch), g_dense_b, dense_b, 'dense_b')\n print('\\nEpoch %d, train loss is %f' % (epoch, train_loss / data.num_train_batch / data.seq_length))", "Checkpoint", "model_path= 'static/model_' + str(epoch) + '.bin'\n\nwith open(model_path, 'wb') as fd:\n print('saving model to %s' % model_path)\n d = {}\n for name, w in zip(['rnn_w', 'dense_w', 'dense_b'],[rnn_w, dense_w, dense_b]):\n d[name] = tensor.to_numpy(w)\n d['idx_to_char'] = data.idx_to_char\n d['char_to_idx'] = data.char_to_idx\n d['hidden_size'] = hidden_size\n d['num_stacks'] = num_stacks\n d['dropout'] = dropout\n pickle.dump(d, fd)\nfd.close()", "Sample", "nsamples = 300\nseed_text = \"Before we proceed any further, hear me speak.\"\ndo_sample = True\n\nwith open(model_path, 'rb') as fd:\n d = pickle.load(fd)\n rnn_w = tensor.from_numpy(d['rnn_w'])\n idx_to_char = d['idx_to_char']\n char_to_idx = d['char_to_idx']\n vocab_size = len(idx_to_char)\n dense_w = tensor.from_numpy(d['dense_w'])\n dense_b = tensor.from_numpy(d['dense_b'])\n hidden_size = d['hidden_size']\n num_stacks = d['num_stacks']\n dropout = d['dropout']\n\nrnn = layer.LSTM(name='lstm', hidden_size=hidden_size,\n 
num_stacks=num_stacks, dropout=dropout,\n input_sample_shape=(len(idx_to_char),))\nrnn.to_device(cuda)\nrnn.param_values()[0].copy_data(rnn_w)\ndense = layer.Dense('dense', vocab_size, input_sample_shape=(hidden_size,))\ndense.to_device(cuda)\ndense.param_values()[0].copy_data(dense_w)\ndense.param_values()[1].copy_data(dense_b)\nhx = tensor.Tensor((num_stacks, 1, hidden_size), cuda)\ncx = tensor.Tensor((num_stacks, 1, hidden_size), cuda)\nhx.set_value(0.0)\ncx.set_value(0.0)\nif len(seed_text) > 0:\n for c in seed_text:\n x = np.zeros((1, vocab_size), dtype=np.float32)\n x[0, char_to_idx[c]] = 1\n tx = tensor.from_numpy(x)\n tx.to_device(cuda)\n inputs = [tx, hx, cx]\n outputs = rnn.forward(model_pb2.kEval, inputs)\n y = dense.forward(model_pb2.kEval, outputs[0])\n y = tensor.softmax(y)\n hx = outputs[1]\n cx = outputs[2]\n sys.stdout.write(seed_text)\nelse:\n y = tensor.Tensor((1, vocab_size), cuda)\n y.set_value(old_div(1.0, vocab_size))\n\nfor i in range(nsamples):\n y.to_host()\n prob = tensor.to_numpy(y)[0]\n if do_sample:\n cur = np.random.choice(vocab_size, 1, p=prob)[0]\n else:\n cur = np.argmax(prob)\n sys.stdout.write(idx_to_char[cur])\n x = np.zeros((1, vocab_size), dtype=np.float32)\n x[0, cur] = 1\n tx = tensor.from_numpy(x)\n tx.to_device(cuda)\n inputs = [tx, hx, cx]\n outputs = rnn.forward(model_pb2.kEval, inputs)\n y = dense.forward(model_pb2.kEval, outputs[0])\n y = tensor.softmax(y)\n hx = outputs[1]\n cx = outputs[2]\nprint('')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
thmp/qmc
Diffusion Monte Carlo.ipynb
mit
[ "Diffusion Monte Carlo Method\nThe goal of the diffusion Monte Carlo method (DMC) is to get to the ground state wave function of a quantum system. Using the eigenstates $\\psi_n$ of a system, every state can be represented as a linear combination of these eigenstates.\n$$\\Psi(x,0) = \\sum_n c_n \\psi_n(x)$$\nUsing the time propagation operator $U(t, t_0) = e^{-i H (t-t_0)}$, the solution for the real time Schrödinger equation reads\n$$\\Psi(x,t) = U(t,0)\\Psi(x,0) = \\sum_n c_n e^{-i E_n t / \\hbar } \\psi_n (x).$$\nWhen this equation is continued into imaginary time via $t \\rightarrow -i \\tau$, the resulting equation\n$$\\Psi(x,\\tau) = \\sum_n c_n e^{- E_n \\tau / \\hbar } \\psi_n (x)$$\nis an exponentially damped equation with the eigenenergies $E_n$ as specific damping coefficients for the separate eigenstates. Taking the limit $\\tau \\to \\infty$ in the damping equation leads to every state's amplitude being damped, but the ground state being damped the least\n$$\\lim_{\\tau \\to \\infty} U(\\tau, 0) \\Psi(x,0) = \\lim_{\\tau\\to\\infty} \\sum_n c_n e^{-E_n \\tau / \\hbar} \\psi_n(x, \\tau) = c_0\\psi_0$$ \nOne method to numerically reach the ground state, the time propagation operator in imaginary time step width $\\Delta \\tau$ is repeatedly applied to a random initial state and the state normalized thereafter until we are left with the ground state.\n$$c_0\\psi_0 = \\prod_n U(n\\Delta \\tau, (n-1)\\Delta\\tau) \\Psi(x,0)$$\nA different approach, presented in the following, is to model the time evolution as a diffusion process using an ensemble of diffusive particles, reaching the ground state through increased damping of higher-energy states as well. \n\nProblem with imaginary time propagation: solving the Schrödinger equation is not stable\nUse diffusion instead, therefore...\n\nModeling Imaginary Time Propagation as Diffusion\nThe diffusion Monte Carlo Method uses a restated form of the Schrödinger equation as a diffusion equation to calculate the ground state wave function via a Monte Carlo process. The Schrödinger equation for a free particle in one dimension is\n$$i\\hbar \\frac{\\partial \\psi (x,t)}{\\partial t} = - \\frac{\\hbar ^2}{2m} \\frac{\\partial ^2 \\psi (x,t)}{\\partial x^2}$$\nand can be rewritten as a diffusion equation \n$$\\frac{\\partial \\psi (x,t)}{\\partial t} = \\frac{i \\hbar ^2}{2m} \\frac{\\partial ^2 \\psi (x,t)}{\\partial x^2} = \\gamma_{im} \\frac{\\partial ^2 \\psi (x,t)}{\\partial x^2},$$\nwhere $\\gamma_{im}$ is the imaginary diffusion constant \n$$\\gamma_{im} = \\frac{i \\hbar ^2}{2m}.$$\nIn order to model the diffusion process and exploit the simulation possibilities, the imaginary diffusion constant has to be turned into a rel constant. Therefore, the operator in real time is analytically continued in imaginary time via\n$$t \\rightarrow i \\tau,$$\nleading to a diffusion equation in real space and imaginary time\n$$\\frac{\\partial \\psi (x,\\tau)}{\\partial \\tau} = \\frac{\\hbar}{2m} \\frac{\\partial ^2 \\psi (x,\\tau)}{\\partial x^2}.$$\nWe can therefore model the motion of a quantum particle in real space by simulation the diffusion of a cloud of particles in imaginary time. //TODO: explain\nDiffusion In a Potential\nImplementing Diffusion Monte Carlo\nUsing an ensemble of random walkers, the diffusion Monte Carlo algorithm is implemented. Here, the motion of the particles based on the diffusion equation models the kinetic energy part of the diffusion equation. 
The potential part is modeled by adding or removing walkers from the ensemble.\nIn the following, a diffusion Monte Carlo simulation for the 3-dimensional harmonic oscillator is developed, which leads to the ground state energy $E_0$ and ground state wave function $\\psi_0$. Since this problem, can be solved analytically, the results can be checked against the analytical results\n$$E_0 = \\frac{2}{3}, \\hspace{2em} \\psi_0 = \\frac{e^{-r^2/2}}{(2\\pi)^{3/2}}$$\nusing $m=\\omega=\\hbar=1.$", "import numpy as np\nimport numpy.random as random\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndim = 3", "Potential of the harmonic oscillator in three dimensions\n$$V(\\mathbf x) = \\frac{1}{2}\\mathbf x ^2$$", "def pot(x):\n return 0.5 * x.dot(x)", "Set the time step size, the target number of walkers as well as the target number of time steps", "dt = 0.05\nN_T = 300\nT_steps = 4000\nalpha = 0.2", "The first 20% of the time steps are used as thermalization steps, after that, the measured values are reset and the real simulation continues\nIn a timestep\n1. Compute one DMC step on each walker\n2. Remove dead walkers\n3. Adjust $E_T$ to drive $N$ towards $N_T$\n4. Accumulate data to measure $\\left<E\\right>$, its variance and the ground state wave function\nIn the diffusion step, each walker is moved randomly in the search space, in the physical picture due to the kinetic energy term. The random step is choosen from a Gaussian distribution with variance $\\Delta t$.", "def diffusion(r, dt):\n return r + random.normal(size=(r.shape[0],3)) * np.sqrt(dt)", "In the branching step, the number of walkers is modified by the parameter $E_T$, in the physical picture due to the potential energy term. The branching is implemented by computing the branching factor $q$ via\n$$q=e^{-\\Delta\\tau (V(x) - E_T)}$$\nwhich then determines if a walker is cloned, survives or dies. If $q<1$, the walker dies with a probability of $1-q$. If $q > 1$, the walker is copied.", "def branching(r, E_T, dt):\n \n r_new = np.zeros((0,3))\n q = np.zeros((r.shape[0],))\n \n for j in range(r.shape[0]):\n \n # branching factor\n q[j] = np.exp(-dt * (pot(r[j,:]) - E_T))\n \n # branching\n if q[j] - int(q[j]) > random.uniform():\n count_new = int(q[j]) + 1\n else:\n count_new = int(q[j])\n \n # generate new walker array\n for c in range(count_new):\n r_new = np.append(r_new,r[j:j+1,:],axis=0)\n \n return r_new, r_new.shape[0]", "In the adjustment step, the value of $E_T$ is adjusted to drive the number of walkers towards the desired number $N_T$. If $N > N_T$, the number of walkers is too high. We therefore increase $E_T$, reducing $q$ and the number of walkers. If, on the other hand, $N < N_T$, walkers need to be created. Then, $E_T$ is decreased, increasing $q$ as well as the number of walkers. 
The factor $\\alpha$ controls the rate of change and has to be adjusted according to the simulation settings.\n$$E_T \\rightarrow E_T + \\alpha \\ln \\left( \\frac{N_T}{N} \\right)$$", "def adjust(N_T, N):\n return alpha * np.log(N_T / float(N))", "During each step of the production phase, the data is accumulated to evaluate the energy, energy variance and wave function $\\psi$.", "class Accumulator(object):\n def __init__(self):\n self.E_sum = 0\n self.E_squared_sum = 0\n self.r_max = 4.0\n self.N_psi = 100\n self.psi = np.zeros((self.N_psi,))\n \n def reset(self):\n self.E_sum = 0\n self.E_squared_sum = 0\n self.psi = np.zeros((self.N_psi,))\n \n def handle_data(self, E_T, r):\n self.E_sum += E_T\n self.E_squared_sum += E_T**2\n \n for j in range(r.shape[0]):\n r_squared = r[j,:].dot(r[j,:])\n i_bin = int(np.sqrt(r_squared) / self.r_max * self.N_psi)\n if i_bin < self.N_psi:\n self.psi[i_bin] += 1\n\n# Initialize walkers\nr = np.zeros((N_T, 3))\nN = N_T\nE_T = 0 # initial guess for the ground state energy\n\nthermal_steps = int(0.2*T_steps)\n\nacc = Accumulator()\n\n# Time step\nfor i in range(thermal_steps+T_steps):\n # Diffusion step\n r = diffusion(r, dt)\n \n # Branching step\n r, N = branching(r, E_T, dt)\n \n # Adjustment\n E_T += adjust(N_T, N)\n \n # Accumulation\n if i == thermal_steps:\n acc.reset() \n acc.handle_data(E_T, r)\n\nE_avg = acc.E_sum/T_steps\nE_var = acc.E_squared_sum/T_steps - E_avg**2\n\nprint E_avg\nprint E_var\nprint acc.psi\n\nplt.plot(acc.psi)\nplt.show()", "Diffusion Monte Carlo for Optimization" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/bigquery-notebooks
notebooks/community/analytics-componetized-patterns/retail/recommendation-system/bqml-scann/tfx01_interactive.ipynb
apache-2.0
[ "Create an interactive TFX pipeline\nThis notebook is the first of two notebooks that guide you through automating the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution with a pipeline.\nUse this notebook to create and run a TFX pipeline that performs the following steps:\n\nCompute PMI on item co-occurrence data by using a custom Python function component.\nTrain a BigQuery ML matrix factorization model on the PMI data to learn item embeddings by using a custom Python function component.\nExtract the embeddings from the model to a BigQuery table by using a custom Python function component.\nExport the embeddings in TFRecord format by using the standard BigQueryExampleGen component.\nImport the schema for the embeddings by using the standard ImporterNode component.\nValidate the embeddings against the imported schema by using the standard StatisticsGen and ExampleValidator components. \nCreate an embedding lookup SavedModel by using the standard Trainer component.\nPush the embedding lookup model to a model registry directory by using the standard Pusher component.\nBuild the ScaNN index by using the standard Trainer component.\nEvaluate and validate the ScaNN index latency and recall by implementing a TFX custom component.\nPush the ScaNN index to a model registry directory by using the standard Pusher component.\n\nThe tfx_pipeline directory contains the source code for the TFX pipeline implementation. \nBefore starting this notebook, you must run the 00_prep_bq_procedures notebook to complete the solution prerequisites.\nAfter completing this notebook, run the tfx02_deploy_run notebook to deploy the pipeline.\nSetup\nImport the required libraries, configure the environment variables, and authenticate your GCP account.", "%load_ext autoreload\n%autoreload 2\n\n!pip install -U -q tfx", "Import libraries", "import logging\nimport os\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_data_validation as tfdv\nimport tfx\nfrom tensorflow_transform.tf_metadata import schema_utils\n\nlogging.getLogger().setLevel(logging.INFO)\n\nprint(\"Tensorflow Version:\", tf.__version__)\nprint(\"TFX Version:\", tfx.__version__)", "Configure GCP environment settings\nUpdate the following variables to reflect the values for your GCP environment:\n\nPROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.\nBUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.", "PROJECT_ID = \"yourProject\" # Change to your project.\nBUCKET = \"yourBucket\" # Change to the bucket you created.\nBQ_DATASET_NAME = \"recommendations\"\nARTIFACT_STORE = f\"gs://{BUCKET}/tfx_artifact_store\"\nLOCAL_MLMD_SQLLITE = \"mlmd/mlmd.sqllite\"\nPIPELINE_NAME = \"tfx_bqml_scann\"\nEMBEDDING_LOOKUP_MODEL_NAME = \"embeddings_lookup\"\nSCANN_INDEX_MODEL_NAME = \"embeddings_scann\"\n\nPIPELINE_ROOT = os.path.join(ARTIFACT_STORE, f\"{PIPELINE_NAME}_interactive\")\nMODEL_REGISTRY_DIR = os.path.join(ARTIFACT_STORE, \"model_registry_interactive\")\n\n!gcloud config set project $PROJECT_ID", "Authenticate your GCP account\nThis is required if you run the notebook in Colab. 
If you use an AI Platform notebook, you should already be authenticated.", "try:\n from google.colab import auth\n\n auth.authenticate_user()\n print(\"Colab user is authenticated.\")\nexcept:\n pass", "Instantiate the interactive context\nInstantiate an interactive context so that you can execute the TFX pipeline components interactively in the notebook. The interactive context creates a local SQLite database in the LOCAL_MLMD_SQLLITE directory to use as its ML Metadata (MLMD) store.", "CLEAN_ARTIFACTS = True\nif CLEAN_ARTIFACTS:\n if tf.io.gfile.exists(PIPELINE_ROOT):\n print(\"Removing previous artifacts...\")\n tf.io.gfile.rmtree(PIPELINE_ROOT)\n if tf.io.gfile.exists(\"mlmd\"):\n print(\"Removing local mlmd SQLite...\")\n tf.io.gfile.rmtree(\"mlmd\")\n\nif not tf.io.gfile.exists(\"mlmd\"):\n print(\"Creating mlmd directory...\")\n tf.io.gfile.mkdir(\"mlmd\")\n\nprint(f\"Pipeline artifacts directory: {PIPELINE_ROOT}\")\nprint(f\"Model registry directory: {MODEL_REGISTRY_DIR}\")\nprint(f\"Local metadata SQLlit path: {LOCAL_MLMD_SQLLITE}\")\n\nimport ml_metadata as mlmd\nfrom ml_metadata.proto import metadata_store_pb2\nfrom tfx.orchestration.experimental.interactive.interactive_context import \\\n InteractiveContext\n\nconnection_config = metadata_store_pb2.ConnectionConfig()\nconnection_config.sqlite.filename_uri = LOCAL_MLMD_SQLLITE\nconnection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE\nmlmd_store = mlmd.metadata_store.MetadataStore(connection_config)\n\ncontext = InteractiveContext(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n metadata_connection_config=connection_config,\n)", "Executing the pipeline steps\nThe components that implement the pipeline steps are in the tfx_pipeline/bq_components.py module.", "from tfx_pipeline import bq_components", "Step 1: Compute PMI\nRun the pmi_computer step, which is an instance of the compute_pmi custom Python function component. This component executes the sp_ComputePMI stored procedure in BigQuery and returns the name of the resulting table as a custom property.", "pmi_computer = bq_components.compute_pmi(\n project_id=PROJECT_ID,\n bq_dataset=BQ_DATASET_NAME,\n min_item_frequency=15,\n max_group_size=100,\n)\n\ncontext.run(pmi_computer)\n\npmi_computer.outputs.item_cooc.get()[0].get_string_custom_property(\"bq_result_table\")", "Step 2: Train the BigQuery ML matrix factorization model\nRun the bqml_trainer step, which is an instance of the train_item_matching_model custom Python function component. This component executes the sp_TrainItemMatchingModel stored procedure in BigQuery and returns the name of the resulting model as a custom property.", "bqml_trainer = bq_components.train_item_matching_model(\n project_id=PROJECT_ID,\n bq_dataset=BQ_DATASET_NAME,\n item_cooc=pmi_computer.outputs.item_cooc,\n dimensions=50,\n)\n\ncontext.run(bqml_trainer)\n\nbqml_trainer.outputs.bq_model.get()[0].get_string_custom_property(\"bq_model_name\")", "Step 3: Extract the trained embeddings\nRun the embeddings_extractor step, which is an instance of the extract_embeddings custom Python function component. 
This component executes the sp_ExractEmbeddings stored procedure in BigQuery and returns the name of the resulting table as a custom property.", "embeddings_extractor = bq_components.extract_embeddings(\n project_id=PROJECT_ID,\n bq_dataset=BQ_DATASET_NAME,\n bq_model=bqml_trainer.outputs.bq_model,\n)\n\ncontext.run(embeddings_extractor)\n\nembeddings_extractor.outputs.item_embeddings.get()[0].get_string_custom_property(\n \"bq_result_table\"\n)", "Step 4: Export the embeddings in TFRecord format\nRun the embeddings_exporter step, which is an instance of the BigQueryExampleGen standard component. This component uses a SQL query to read the embedding records from BigQuery and produces an Examples artifact containing training and evaluation datasets as an output. It then exports these datasets in TFRecord format by using a Beam pipeline. This pipeline can be run using the DirectRunner or DataflowRunner. Note that in this interactive context, the embedding records to read is limited to 1000, and the runner of the Beam pipeline is set to DirectRunner.", "from tfx.extensions.google_cloud_big_query.example_gen.component import \\\n BigQueryExampleGen\nfrom tfx.proto import example_gen_pb2\n\nquery = f\"\"\"\n SELECT item_Id, embedding, bias,\n FROM {BQ_DATASET_NAME}.item_embeddings\n LIMIT 1000\n\"\"\"\n\noutput_config = example_gen_pb2.Output(\n split_config=example_gen_pb2.SplitConfig(\n splits=[example_gen_pb2.SplitConfig.Split(name=\"train\", hash_buckets=1)]\n )\n)\n\nembeddings_exporter = BigQueryExampleGen(query=query, output_config=output_config)\n\nbeam_pipeline_args = [\n \"--runner=DirectRunner\",\n f\"--project={PROJECT_ID}\",\n f\"--temp_location=gs://{BUCKET}/bqml_scann/beam/temp\",\n]\n\ncontext.run(embeddings_exporter, beam_pipeline_args=beam_pipeline_args)", "Step 5: Import the schema for the embeddings\nRun the schema_importer step, which is an instance of the ImporterNode standard component. This component reads the schema.pbtxt file from the solution's schema directory, and produces a Schema artifact as an output. 
The schema is used to validate the embedding files exported from BigQuery, and to parse the embedding records in the TFRecord files when they are read in the training components.", "schema_importer = tfx.components.ImporterNode(\n source_uri=\"tfx_pipeline/schema\",\n artifact_type=tfx.types.standard_artifacts.Schema,\n instance_name=\"SchemaImporter\",\n)\n\ncontext.run(schema_importer)\n\ncontext.show(schema_importer.outputs.result)", "Read a sample embedding from the exported TFRecord files using the schema", "schema_file = schema_importer.outputs.result.get()[0].uri + \"/schema.pbtxt\"\nschema = tfdv.load_schema_text(schema_file)\nfeature_sepc = schema_utils.schema_as_feature_spec(schema).feature_spec\n\ndata_uri = embeddings_exporter.outputs.examples.get()[0].uri + \"/train/*\"\n\n\ndef _gzip_reader_fn(filenames):\n return tf.data.TFRecordDataset(filenames, compression_type=\"GZIP\")\n\n\ndataset = tf.data.experimental.make_batched_features_dataset(\n data_uri,\n batch_size=1,\n num_epochs=1,\n features=feature_sepc,\n reader=_gzip_reader_fn,\n shuffle=True,\n)\n\ncounter = 0\nfor _ in dataset:\n counter += 1\nprint(f\"Number of records: {counter}\")\nprint(\"\")\n\nfor batch in dataset.take(1):\n print(f'item: {batch[\"item_Id\"].numpy()[0][0].decode()}')\n print(f'embedding vector: {batch[\"embedding\"].numpy()[0]}')", "Step 6: Validate the embeddings against the imported schema\nRuns the stats_generator, which is an instance of the StatisticsGen standard component. This component accepts the output Examples artifact from the embeddings_exporter step and computes descriptive statistics for these examples by using an Apache Beam pipeline. The component produces a Statistics artifact as an output.", "stats_generator = tfx.components.StatisticsGen(\n examples=embeddings_exporter.outputs.examples,\n)\n\ncontext.run(stats_generator)", "Run the stats_validator, which is an instance of the ExampleValidator component. This component validates the output statistics against the schema. It accepts the Statistics artifact produced by the stats_generator step and the Schema artifact produced by the schema_importer step, and produces Anomalies artifacts as outputput if any anomalies are found.", "stats_validator = tfx.components.ExampleValidator(\n statistics=stats_generator.outputs.statistics,\n schema=schema_importer.outputs.result,\n)\n\ncontext.run(stats_validator)\n\ncontext.show(stats_validator.outputs.anomalies)", "Step 7: Create an embedding lookup SavedModel\nRuns the embedding_lookup_creator step, which is an instance of the Trainer standard component. 
This component accepts the Schema artifact from the schema_importer step and theExamples artifact from the embeddings_exporter step as inputs, executes the lookup_creator.py module, and produces an embedding lookup Model artifact as an output.", "from tfx.components.base import executor_spec\nfrom tfx.components.trainer import executor as trainer_executor\n\n_module_file = \"tfx_pipeline/lookup_creator.py\"\n\nembedding_lookup_creator = tfx.components.Trainer(\n custom_executor_spec=executor_spec.ExecutorClassSpec(\n trainer_executor.GenericExecutor\n ),\n module_file=_module_file,\n train_args={\"splits\": [\"train\"], \"num_steps\": 0},\n eval_args={\"splits\": [\"train\"], \"num_steps\": 0},\n schema=schema_importer.outputs.result,\n examples=embeddings_exporter.outputs.examples,\n)\n\ncontext.run(embedding_lookup_creator)", "Validate the lookup model\nUse the TFX InfraValidator to make sure the created model is mechanically fine and can be loaded successfully.", "from tfx.proto import infra_validator_pb2\n\nserving_config = infra_validator_pb2.ServingSpec(\n tensorflow_serving=infra_validator_pb2.TensorFlowServing(tags=[\"latest\"]),\n local_docker=infra_validator_pb2.LocalDockerConfig(),\n)\n\nvalidation_config = infra_validator_pb2.ValidationSpec(\n max_loading_time_seconds=60,\n num_tries=3,\n)\n\ninfra_validator = tfx.components.InfraValidator(\n model=embedding_lookup_creator.outputs.model,\n serving_spec=serving_config,\n validation_spec=validation_config,\n)\n\ncontext.run(infra_validator)\n\ntf.io.gfile.listdir(infra_validator.outputs.blessing.get()[0].uri)", "Step 8: Push the embedding lookup model to the model registry\nRun the embedding_lookup_pusher step, which is an instance of the Pusher standard component. This component accepts the embedding lookup Model artifact from the embedding_lookup_creator step, and stores the SavedModel in the location specified by the MODEL_REGISTRY_DIR variable.", "embedding_lookup_pusher = tfx.components.Pusher(\n model=embedding_lookup_creator.outputs.model,\n infra_blessing=infra_validator.outputs.blessing,\n push_destination=tfx.proto.pusher_pb2.PushDestination(\n filesystem=tfx.proto.pusher_pb2.PushDestination.Filesystem(\n base_directory=os.path.join(MODEL_REGISTRY_DIR, EMBEDDING_LOOKUP_MODEL_NAME)\n )\n ),\n)\n\ncontext.run(embedding_lookup_pusher)\n\nlookup_savedmodel_dir = embedding_lookup_pusher.outputs.pushed_model.get()[\n 0\n].get_string_custom_property(\"pushed_destination\")\n!saved_model_cli show --dir {lookup_savedmodel_dir} --tag_set serve --signature_def serving_default\n\nloaded_model = tf.saved_model.load(lookup_savedmodel_dir)\nvocab = [\n token.strip()\n for token in tf.io.gfile.GFile(\n loaded_model.vocabulary_file.asset_path.numpy().decode(), \"r\"\n ).readlines()\n]\n\ninput_items = [vocab[0], \" \".join([vocab[1], vocab[2]]), \"abc123\"]\nprint(input_items)\noutput = loaded_model(input_items)\nprint(f\"Embeddings retrieved: {len(output)}\")\nfor idx, embedding in enumerate(output):\n print(f\"{input_items[idx]}: {embedding[:5]}\")", "Step 9: Build the ScaNN index\nRun the scann_indexer step, which is an instance of the Trainer standard component. 
This component accepts the Schema artifact from the schema_importer step and the Examples artifact from the embeddings_exporter step as inputs, executes the scann_indexer.py module, and produces the ScaNN index Model artifact as an output.", "from tfx.components.base import executor_spec\nfrom tfx.components.trainer import executor as trainer_executor\n\n_module_file = \"tfx_pipeline/scann_indexer.py\"\n\nscann_indexer = tfx.components.Trainer(\n custom_executor_spec=executor_spec.ExecutorClassSpec(\n trainer_executor.GenericExecutor\n ),\n module_file=_module_file,\n train_args={\"splits\": [\"train\"], \"num_steps\": 0},\n eval_args={\"splits\": [\"train\"], \"num_steps\": 0},\n schema=schema_importer.outputs.result,\n examples=embeddings_exporter.outputs.examples,\n)\n\ncontext.run(scann_indexer)", "Step 10: Evaluate and validate the ScaNN index\nRuns the index_evaluator step, which is an instance of the IndexEvaluator custom TFX component. This component accepts the Examples artifact from the embeddings_exporter step, the Schema artifact from the schema_importer step, and ScaNN index Model artifact from the scann_indexer step. The IndexEvaluator component completes the following tasks:\n\nUses the schema to parse the embedding records. \nEvaluates the matching latency of the index.\nCompares the recall of the produced matches with respect to the exact matches.\nValidates the latency and recall against the max_latency and min_recall input parameters.\n\nWhen it is finished, it produces a ModelBlessing artifact as output, which indicates whether the ScaNN index passed the validation criteria or not.\nThe IndexEvaluator custom component is implemented in the tfx_pipeline/scann_evaluator.py module.", "from tfx_pipeline import scann_evaluator\n\nindex_evaluator = scann_evaluator.IndexEvaluator(\n examples=embeddings_exporter.outputs.examples,\n model=scann_indexer.outputs.model,\n schema=schema_importer.outputs.result,\n min_recall=0.8,\n max_latency=0.01,\n)\n\ncontext.run(index_evaluator)", "Step 11: Push the ScaNN index to the model registry\nRuns the embedding_scann_pusher step, which is an instance of the Pusher standard component. This component accepts the ScaNN index Model artifact from the scann_indexer step and the ModelBlessing artifact from the index_evaluator step, and stores the SavedModel in the location specified by the MODEL_REGISTRY_DIR variable.", "embedding_scann_pusher = tfx.components.Pusher(\n model=scann_indexer.outputs.model,\n model_blessing=index_evaluator.outputs.blessing,\n push_destination=tfx.proto.pusher_pb2.PushDestination(\n filesystem=tfx.proto.pusher_pb2.PushDestination.Filesystem(\n base_directory=os.path.join(MODEL_REGISTRY_DIR, SCANN_INDEX_MODEL_NAME)\n )\n ),\n)\n\ncontext.run(embedding_scann_pusher)\n\nfrom index_server.matching import ScaNNMatcher\n\nscann_index_dir = embedding_scann_pusher.outputs.pushed_model.get()[\n 0\n].get_string_custom_property(\"pushed_destination\")\nscann_matcher = ScaNNMatcher(scann_index_dir)\n\nvector = np.random.rand(50)\nscann_matcher.match(vector, 5)", "Check the local MLMD store", "mlmd_store.get_artifacts()", "View the model registry directory", "!gsutil ls {MODEL_REGISTRY_DIR}", "License\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License. 
You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. \nSee the License for the specific language governing permissions and limitations under the License.\nThis is not an official Google product but sample code provided for an educational purpose" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/datasets
docs/overview.ipynb
apache-2.0
[ "TensorFlow Datasets\nTFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.\nIt handles downloading and preparing the data deterministically and constructing a tf.data.Dataset (or np.array).\nNote: Do not confuse TFDS (this library) with tf.data (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around tf.data. If you're not familiar with this API, we encourage you to read the official tf.data guide first.\nCopyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/datasets/overview\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/datasets/blob/master/docs/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/datasets/blob/master/docs/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/datasets/docs/overview.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nInstallation\nTFDS exists in two packages:\n\npip install tensorflow-datasets: The stable version, released every few months.\npip install tfds-nightly: Released every day, contains the last versions of the datasets.\n\nThis colab uses tfds-nightly:", "!pip install -q tfds-nightly tensorflow matplotlib\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\nimport tensorflow_datasets as tfds", "Find available datasets\nAll dataset builders are subclass of tfds.core.DatasetBuilder. To get the list of available builders, use tfds.list_builders() or look at our catalog.", "tfds.list_builders()", "Load a dataset\ntfds.load\nThe easiest way of loading a dataset is tfds.load. It will:\n\nDownload the data and save it as tfrecord files.\nLoad the tfrecord and create the tf.data.Dataset.", "ds = tfds.load('mnist', split='train', shuffle_files=True)\nassert isinstance(ds, tf.data.Dataset)\nprint(ds)", "Some common arguments:\n\nsplit=: Which split to read (e.g. 'train', ['train', 'test'], 'train[80%:]',...). See our split API guide.\nshuffle_files=: Control whether to shuffle the files between each epoch (TFDS store big datasets in multiple smaller files).\ndata_dir=: Location where the dataset is saved (\ndefaults to ~/tensorflow_datasets/)\nwith_info=True: Returns the tfds.core.DatasetInfo containing dataset metadata\ndownload=False: Disable download\n\ntfds.builder\ntfds.load is a thin wrapper around tfds.core.DatasetBuilder. You can get the same output using the tfds.core.DatasetBuilder API:", "builder = tfds.builder('mnist')\n# 1. Create the tfrecord files (no-op if already exists)\nbuilder.download_and_prepare()\n# 2. Load the `tf.data.Dataset`\nds = builder.as_dataset(split='train', shuffle_files=True)\nprint(ds)", "tfds build CLI\nIf you want to generate a specific dataset, you can use the tfds command line. 
For example:\nsh\ntfds build mnist\nSee the doc for available flags.\nIterate over a dataset\nAs dict\nBy default, the tf.data.Dataset object contains a dict of tf.Tensors:", "ds = tfds.load('mnist', split='train')\nds = ds.take(1) # Only take a single example\n\nfor example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`\n print(list(example.keys()))\n image = example[\"image\"]\n label = example[\"label\"]\n print(image.shape, label)", "To find out the dict key names and structure, look at the dataset documentation in our catalog. For example: mnist documentation.\nAs tuple (as_supervised=True)\nBy using as_supervised=True, you can get a tuple (features, label) instead for supervised datasets.", "ds = tfds.load('mnist', split='train', as_supervised=True)\nds = ds.take(1)\n\nfor image, label in ds: # example is (image, label)\n print(image.shape, label)", "As numpy (tfds.as_numpy)\nUses tfds.as_numpy to convert:\n\ntf.Tensor -> np.array\ntf.data.Dataset -> Iterator[Tree[np.array]] (Tree can be arbitrary nested Dict, Tuple)", "ds = tfds.load('mnist', split='train', as_supervised=True)\nds = ds.take(1)\n\nfor image, label in tfds.as_numpy(ds):\n print(type(image), type(label), label)", "As batched tf.Tensor (batch_size=-1)\nBy using batch_size=-1, you can load the full dataset in a single batch.\nThis can be combined with as_supervised=True and tfds.as_numpy to get the the data as (np.array, np.array):", "image, label = tfds.as_numpy(tfds.load(\n 'mnist',\n split='test',\n batch_size=-1,\n as_supervised=True,\n))\n\nprint(type(image), image.shape)", "Be careful that your dataset can fit in memory, and that all examples have the same shape.\nBenchmark your datasets\nBenchmarking a dataset is a simple tfds.benchmark call on any iterable (e.g. tf.data.Dataset, tfds.as_numpy,...).", "ds = tfds.load('mnist', split='train')\nds = ds.batch(32).prefetch(1)\n\ntfds.benchmark(ds, batch_size=32)\ntfds.benchmark(ds, batch_size=32) # Second epoch much faster due to auto-caching", "Do not forget to normalize the results per batch size with the batch_size= kwarg.\nIn the summary, the first warmup batch is separated from the other ones to capture tf.data.Dataset extra setup time (e.g. buffers initialization,...).\nNotice how the second iteration is much faster due to TFDS auto-caching.\ntfds.benchmark returns a tfds.core.BenchmarkResult which can be inspected for further analysis.\n\nBuild end-to-end pipeline\nTo go further, you can look:\n\nOur end-to-end Keras example to see a full training pipeline (with batching, shuffling,...).\nOur performance guide to improve the speed of your pipelines (tip: use tfds.benchmark(ds) to benchmark your datasets).\n\nVisualization\ntfds.as_dataframe\ntf.data.Dataset objects can be converted to pandas.DataFrame with tfds.as_dataframe to be visualized on Colab.\n\nAdd the tfds.core.DatasetInfo as second argument of tfds.as_dataframe to visualize images, audio, texts, videos,...\nUse ds.take(x) to only display the first x examples. 
pandas.DataFrame will load the full dataset in-memory, and can be very expensive to display.", "ds, info = tfds.load('mnist', split='train', with_info=True)\n\ntfds.as_dataframe(ds.take(4), info)", "tfds.show_examples\ntfds.show_examples returns a matplotlib.figure.Figure (only image datasets supported now):", "ds, info = tfds.load('mnist', split='train', with_info=True)\n\nfig = tfds.show_examples(ds, info)", "Access the dataset metadata\nAll builders include a tfds.core.DatasetInfo object containing the dataset metadata.\nIt can be accessed through:\n\nThe tfds.load API:", "ds, info = tfds.load('mnist', with_info=True)", "The tfds.core.DatasetBuilder API:", "builder = tfds.builder('mnist')\ninfo = builder.info", "The dataset info contains additional informations about the dataset (version, citation, homepage, description,...).", "print(info)", "Features metadata (label names, image shape,...)\nAccess the tfds.features.FeatureDict:", "info.features", "Number of classes, label names:", "print(info.features[\"label\"].num_classes)\nprint(info.features[\"label\"].names)\nprint(info.features[\"label\"].int2str(7)) # Human readable version (8 -> 'cat')\nprint(info.features[\"label\"].str2int('7'))", "Shapes, dtypes:", "print(info.features.shape)\nprint(info.features.dtype)\nprint(info.features['image'].shape)\nprint(info.features['image'].dtype)", "Split metadata (e.g. split names, number of examples,...)\nAccess the tfds.core.SplitDict:", "print(info.splits)", "Available splits:", "print(list(info.splits.keys()))", "Get info on individual split:", "print(info.splits['train'].num_examples)\nprint(info.splits['train'].filenames)\nprint(info.splits['train'].num_shards)", "It also works with the subsplit API:", "print(info.splits['train[15%:75%]'].num_examples)\nprint(info.splits['train[15%:75%]'].file_instructions)", "Troubleshooting\nManual download (if download fails)\nIf download fails for some reason (e.g. offline,...). You can always manually download the data yourself and place it in the manual_dir (defaults to ~/tensorflow_datasets/download/manual/.\nTo find out which urls to download, look into:\n\nFor new datasets (implemented as folder): tensorflow_datasets/&lt;type&gt;/&lt;dataset_name&gt;/checksums.tsv. For example: tensorflow_datasets/text/bool_q/checksums.tsv.\n\nYou can find the dataset source location in our catalog.\n * For old datasets: tensorflow_datasets/url_checksums/&lt;dataset_name&gt;.txt\nFixing NonMatchingChecksumError\nTFDS ensure determinism by validating the checksums of downloaded urls.\nIf NonMatchingChecksumError is raised, might indicate:\n\nThe website may be down (e.g. 503 status code). Please check the url.\nFor Google Drive URLs, try again later as Drive sometimes rejects downloads when too many people access the same URL. See bug\nThe original datasets files may have been updated. In this case the TFDS dataset builder should be updated. Please open a new Github issue or PR:\nRegister the new checksums with tfds build --register_checksums\nEventually update the dataset generation code.\nUpdate the dataset VERSION\nUpdate the dataset RELEASE_NOTES: What caused the checksums to change ? 
Did some examples change?\nMake sure the dataset can still be built.\nSend us a PR\n\n\n\nNote: You can also inspect the downloaded file in ~/tensorflow_datasets/download/.\nCitation\nIf you're using tensorflow-datasets for a paper, please include the following citation, in addition to any citation specific to the used datasets (which can be found in the dataset catalog).\n@misc{TFDS,\n title = { {TensorFlow Datasets}, A collection of ready-to-use datasets},\n howpublished = {\\url{https://www.tensorflow.org/datasets}},\n}" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
arongdari/almc
notebooks/Growth_Rate_of_Knowledge_Graph.ipynb
gpl-2.0
[ "import itertools\nimport time\nfrom collections import defaultdict\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nfrom scipy.sparse import csr_matrix, csc_matrix, dok_matrix, lil_matrix\n\n%matplotlib inline", "Load Freebase Datafile\nConstruct tensor (list of sparse matrix where each matrix represent a certain relation between entities) from triple dataset.\nMaintaining the same tensor as a collection of csr matrices and csc matrices help to optimise time complexity.", "def construct_freebase(shuffle = True):\n e_file = '../data/freebase/entities.txt'\n r_file = '../data/freebase/relations.txt'\n datafile = '../data/freebase/train_single_relation.txt'\n\n with open(e_file, 'r') as f:\n e_list = [line.strip() for line in f.readlines()]\n with open(r_file, 'r') as f:\n r_list = [line.strip() for line in f.readlines()]\n\n n_e = len(e_list) # number of entities\n n_r = len(r_list) # number of relations\n\n if shuffle:\n np.random.shuffle(e_list)\n np.random.shuffle(r_list)\n\n entities = {e_list[i]:i for i in range(n_e)}\n relations = {r_list[i]:i for i in range(n_r)}\n\n row_list = defaultdict(list)\n col_list = defaultdict(list)\n\n with open(datafile, 'r') as f:\n for line in f.readlines():\n start, relation, end = line.split('\\t')\n rel_no = relations[relation.strip()]\n en1_no = entities[start.strip()]\n en2_no = entities[end.strip()]\n row_list[rel_no].append(en1_no)\n col_list[rel_no].append(en2_no)\n\n rowT = list()\n colT = list()\n for k in range(n_r):\n mat = csr_matrix((np.ones(len(row_list[k])), (row_list[k], col_list[k])), shape=(n_e, n_e))\n rowT.append(mat)\n mat = csc_matrix((np.ones(len(row_list[k])), (row_list[k], col_list[k])), shape=(n_e, n_e))\n colT.append(mat)\n return n_e, n_r, rowT, colT", "Growth of the number of triples with respect to the number of entities\nFirst, we will see how the number of triples will be changed as we randomly add entities into tensor starting from zero entities.", "n_triple = defaultdict(list)\nn_sample = 10 # repeat counting n_sample times\n\nfor s in range(n_sample):\n tic = time.time()\n n_triple[0].append(0) \n\n n_e, n_r, _rowT, _colT = construct_freebase()\n \n for i in range(1, n_e):\n # counting triples by expanding tensor\n cnt = 0\n for k in range(n_r):\n cnt += _rowT[k].getrow(i)[:,:i].nnz\n cnt += _colT[k].getcol(i)[:i-1,:].nnz \n n_triple[i].append(n_triple[i-1][-1] + cnt)\n \n print(time.time()-tic)\navg_cnt = [np.mean(n_triple[i]) for i in range(n_e)]", "Size of tensor:", "print(n_e**2*n_r)\n\nplt.figure(figsize=(8,6))\nplt.plot(avg_cnt)\nplt.title('# of entities vs # of triples')\nplt.xlabel('# entities')\nplt.ylabel('# triples')\n\nimport pickle\npickle.dump(n_triple, open('growth_freebase.pkl', 'wb'))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DfAC/MachineLearningFoundations
getting_started_with_graphlab_create.ipynb
mit
[ "Getting Started with GraphLab Create\nFirst a note about IPython Notebook\nMost of our tutorials are written as IPython notebooks. This allows you to download and run the tutorials on your own machine, either as notebooks (.ipynb) or Python files (.py). To run the notebooks you'll need to install IPython and IPython Notebook; for installation details, visit www.ipython.org. A couple of the notebooks depend on matplotlib for custom plots; this library can be installed with the terminal command 'pip install matplotlib'.\nOverview\nIn this tutorial, you'll get a good flavor of some of the fundamental tasks that GraphLab Create is built for.\nYou will learn how to:\n\nload data into SFrames\ncreate a Graph data structure from these frames\nwrite simple graph queries\napply a machine learning model from the Graph Analytics Toolkit\n\nWe also have many other toolkits to explore from including recommender systems, data matching, graph analytics and more. Explore these and the rest of Graphlab Create in our User Guide. \n...oh yeah, you'll also learn that some of us at Dato have a thing for Bond...yes...James Bond...", "import graphlab as gl\ngl.canvas.set_target('ipynb') # use IPython Notebook output for GraphLab Canvas", "Load data into an SFrame\nGraphLab Create uses two scalable data structures:\n\nthe SFrame, a tabular structure ideal for data munging & feature building\nthe Graph, a structure ideal for sparse data", "vertices = gl.SFrame.read_csv('http://s3.amazonaws.com/dato-datasets/bond/bond_vertices.csv')\nedges = gl.SFrame.read_csv('http://s3.amazonaws.com/dato-datasets/bond/bond_edges.csv')\n\n# SFrame has a number of methods to explore and transform your data\nvertices.show()\n\n# this shows the summary of the edges SFrame\nedges.show()", "Create a graph object", "g = gl.SGraph()", "Add vertices and edges to this graph", "# add some vertices in a dataflow-ish way\ng = g.add_vertices(vertices=vertices, vid_field='name')\n\n# more dataflow\ng = g.add_edges(edges=edges, src_field='src', dst_field='dst')", "Do some basic graph querying", "# Show all the vertices\ng.get_vertices()\n\n# Show all the edges\ng.get_edges()\n\n# Get all the \"friend\" edges\ng.get_edges(fields={'relation': 'friend'})", "Apply the pagerank algorithm to our graph", "pr = gl.pagerank.create(g)\n\npr.get('pagerank').topk(column_name='pagerank')", "We see, not unexpectedly, that James Bond is a very important person, and that bad guys aren't that popular...\n(Looking for more details about the modules and functions? Check out the <a href=\"https://dato.com/products/create/docs/\">API docs</a>.)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jdhp-docs/python_notebooks
nb_sci_maths/maths_montecarlo_compute_integral_fr.ipynb
mit
[ "Calcule numérique avec la méthode Monte-Carlo (première partie)\nTODO\n- traduire en français certaines phrases restées en anglais\nApproximation numérique d'une surface avec la méthode Monte-Carlo\n$\\newcommand{\\ounk}{{\\color{red}{O_1}}}$\n$\\newcommand{\\aunk}{{\\color{red}{\\mathcal{A}_1}}}$\n$\\newcommand{\\nunk}{{\\color{red}{\\mathcal{N}_1}}}$\n$\\newcommand{\\okn}{{\\color{blue}{O_2}}}$\n$\\newcommand{\\akn}{{\\color{blue}{\\mathcal{A}_2}}}$\n$\\newcommand{\\nkn}{{\\color{blue}{\\mathcal{N}_2}}}$\nConceptuellement, pour calculer la surface $\\aunk$ d'un objet $\\ounk$ avec la methode Monte-Carlo, il suffit:\n1. de placer cet objet $\\ounk$ entièrement dans une figure géométrique $\\okn$ dont on connait la surface $\\mathcal \\akn$ (par exemple un carré ou un rectangle)\n2. tirer aléatoirement un grand nombre de points dans cette figure $\\okn$ (tirage uniforme)\n3. compter le nombre de points $\\nunk$ tombés dans l'objet $\\ounk$ dont on veut calculer la surface\n4. calculer le rapport $\\frac{\\nunk}{\\nkn}$ où $\\nkn$ est le nombre total de points tiré aléatoirement (en multipliant par 100 ce rapport on obtient le pourcentage de points tombés dans l'objet $\\ounk$)\n5. appliquer ce rapport à la surface $\\mathcal \\akn$ de la figure englobante $\\okn$ (le carré, rectangle, ... dont on connait la surface) pour obtenir la surface $\\aunk$ recherchée: $\\aunk \\simeq \\frac{\\nunk}{\\nkn} \\mathcal \\akn$", "import numpy as np\nimport matplotlib.pyplot as plt\n\nt = np.linspace(0., 2. * np.pi, 100)\nx = np.cos(t) + np.cos(2. * t)\ny = np.sin(t)\n\nN = 100\nrand = np.array([np.random.uniform(low=-3, high=3, size=N), np.random.uniform(low=-3, high=3, size=N)]).T\n\nfig, ax = plt.subplots(1, 1, figsize=(7, 7))\nax.plot(rand[:,0], rand[:,1], '.k')\nax.plot(x, y, \"-r\", linewidth=2)\nax.plot([-3, -3, 3, 3, -3], [-3, 3, 3, -3, -3], \"-b\", linewidth=2)\nax.set_axis_off()\nax.set_xlim([-4, 4])\nax.set_ylim([-4, 4])\n\nplt.show()", "Le même principe peut être appliqué pour calculer un volume.\nCette methode très simple est parfois très utile pour calculer la surface (ou le volume) de figures géometriques complexes. 
En revanche, elle suppose l'existance d'une procedure ou d'une fonction permettant de dire si un point est tombé dans l'objet $O_2$ ou non.\nApplication au calcul d'intégrales\nCalculer l'intégrale d'une fonction, revient à calculer la surface entre la courbe décrivant cette fonction et l'axe des abscisses (les surfaces au dessus de l'axe des abscisses sont ajoutées et les surfaces au dessous sont retranchées).\nExemple written in Python", "import sympy as sp\nsp.init_printing()\n\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt", "The function to integrate", "def f(x):\n return -x**2 + 3.", "Random points", "N = 100000 # The number of random points\n\nx_lower_bound = -4.0\nx_upper_bound = 4.0\ny_lower_bound = -16.0\ny_upper_bound = 16.0\n\nrandom_points = np.array([np.random.uniform(low=x_lower_bound, high=x_upper_bound, size=N),\n np.random.uniform(low=y_lower_bound, high=y_upper_bound, size=N)]).T", "Numerical computation of the integral with Monte-Carlo", "# Points between f and the abscissa\nrandom_points_in_pos = np.array([p for p in random_points if 0 <= p[1] <= f(p[0])])\nrandom_points_in_neg = np.array([p for p in random_points if 0 > p[1] >= f(p[0])])\n\nratio_pos = float(len(random_points_in_pos)) / float(N)\nratio_neg = float(len(random_points_in_neg)) / float(N)\nprint('Percentage of \"positive\" points between f and the abscissa: {:.2f}%'.format(ratio_pos * 100))\nprint('Percentage of \"negative\" points between f and the abscissa: {:.2f}%'.format(ratio_neg * 100))\n\ns2 = (x_upper_bound - x_lower_bound) * (y_upper_bound - y_lower_bound)\nprint(\"Box surface:\", s2)\n\ns1 = ratio_pos * s2 - ratio_neg * s2\nprint(\"Function integral (numerical computation using Monte-Carlo):\", s1)", "The actual integral value", "x = sp.symbols(\"x\")\n\ninteg = sp.Integral(f(x), (x, x_lower_bound, x_upper_bound))\nsp.Eq(integ, integ.doit())", "The error ratio", "actual_s1 = float(integ.doit())\n\nerror = actual_s1 - s1\nprint(\"Error ratio = {:.6f}%\".format(abs(error / actual_s1) * 100.))", "Graphical illustration", "fig, ax = plt.subplots(1, 1, figsize=(12, 8))\n\nx_array = np.arange(x_lower_bound, x_upper_bound, 0.01)\ny_array = f(x_array)\n\nplt.axis([x_lower_bound, x_upper_bound, y_lower_bound, y_upper_bound])\nplt.plot(random_points[:,0], random_points[:,1], ',k')\nplt.plot(random_points_in_pos[:,0], random_points_in_pos[:,1], ',r')\nplt.plot(random_points_in_neg[:,0], random_points_in_neg[:,1], ',r')\nplt.hlines(y=0, xmin=x_lower_bound, xmax=x_upper_bound)\nplt.plot(x_array, y_array, '-r', linewidth=2)\n\nplt.show()", "Suite\nDans le document suivant, nous allons appliquer le même principe pour calculer la constante $\\pi$ : \nhttp://www.jdhp.org/docs/notebook/maths_montecarlo_compute_pi_fr.html." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
antoniomezzacapo/qiskit-tutorial
community/aqua/general/eoh.ipynb
apache-2.0
[ "The EOH (Evolution of Hamiltonian) Algorithm\nThis notebook demonstrates how to use the Qiskit Aqua library to invoke the EOH algorithm and process the result.\nFurther information may be found for the algorithms in the online Aqua documentation.\nFor this particular demonstration, we illustrate the EOH algorithm. First, two Operator instances we created are randomly generated Hamiltonians.", "import numpy as np\nfrom qiskit_aqua.operator import Operator\n\nnum_qubits = 2\ntemp = np.random.random((2 ** num_qubits, 2 ** num_qubits))\nqubitOp = Operator(matrix=temp + temp.T)\ntemp = np.random.random((2 ** num_qubits, 2 ** num_qubits))\nevoOp = Operator(matrix=temp + temp.T)", "For EOH, we would like to evolve some initial state (e.g. the uniform superposition state) with evoOp and do a measurement using qubitOp. Below, we illustrate how such an example dynamics process can be easily prepared.", "from qiskit_aqua.input import get_input_instance\n\nparams = {\n 'problem': {\n 'name': 'eoh'\n },\n 'algorithm': {\n 'name': 'EOH',\n 'num_time_slices': 1\n },\n 'initial_state': {\n 'name': 'CUSTOM',\n 'state': 'uniform'\n },\n 'backend': {\n 'name': 'statevector_simulator'\n }\n}\nalgo_input = get_input_instance('EnergyInput')\nalgo_input.qubit_op = qubitOp\nalgo_input.add_aux_op(evoOp)", "With all the necessary pieces prepared, we can then proceed to run the algorithm and examine the result.", "from qiskit_aqua import run_algorithm\n\nret = run_algorithm(params, algo_input)\nprint('The result is\\n{}'.format(ret))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Naereen/notebooks
Combien_de_numeros_de_mobiles_francais_sont_des_nombres_premiers.ipynb
mit
[ "Combien de numéros de mobiles français sont des nombres premiers ?\nA question simple, réponse simple :\nDépendances", "from sympy import isprime\n\nprint(isprime.__doc__[:180])", "Réponse", "first_number = 6_00_00_00_00\nlast_number = 7_99_99_99_99\n\n# test rapide\n#last_number = first_number + 20\n\nall_numbers = range(first_number, last_number + 1)\n\ndef count_prime_numbers_in_range(some_range):\n count = 0\n for number in some_range:\n if isprime(number):\n count += 1\n return count", "Conclusion", "count = count_prime_numbers_in_range(all_numbers)\n\nprint(f\"Pour des numéros de téléphones, nombres entre {first_number} et {last_number} (inclus), il y a {count} nombres premiers.\")", "Et donc, on peut calculer la part de nombres premiers parmi les numéros de téléphones mobiles français.\nDe souvenir, c'était environ 5.1%, vérifions :", "total_number = len(all_numbers)\n\nprint(f\"Pour des numéros de téléphones, nombres entre {first_number} et {last_number} (inclus), il y a {count/total_number:%} nombres premiers.\")", "Et voilà, c'était simple !" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_stats_spatio_temporal_cluster_sensors.ipynb
bsd-3-clause
[ "%matplotlib inline", "Spatiotemporal permutation F-test on full sensor data\nTests for differential evoked responses in at least\none condition using a permutation clustering test.\nThe FieldTrip neighbor templates will be used to determine\nthe adjacency between sensors. This serves as a spatial prior\nto the clustering. Significant spatiotemporal clusters will then\nbe visualized using custom matplotlib code.", "# Authors: Denis Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom mne.viz import plot_topomap\n\nimport mne\nfrom mne.stats import spatio_temporal_cluster_test\nfrom mne.datasets import sample\nfrom mne.channels import read_ch_connectivity\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = {'Aud_L': 1, 'Aud_R': 2, 'Vis_L': 3, 'Vis_R': 4}\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 30, l_trans_bandwidth='auto', h_trans_bandwidth='auto',\n filter_length='auto', phase='zero')\nevents = mne.read_events(event_fname)", "Read epochs for the channel of interest", "picks = mne.pick_types(raw.info, meg='mag', eog=True)\n\nreject = dict(mag=4e-12, eog=150e-6)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=None, reject=reject, preload=True)\n\nepochs.drop_channels(['EOG 061'])\nepochs.equalize_event_counts(event_id)\n\ncondition_names = 'Aud_L', 'Aud_R', 'Vis_L', 'Vis_R'\nX = [epochs[k].get_data() for k in condition_names] # as 3D matrix\nX = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering", "Load FieldTrip neighbor definition to setup sensor connectivity", "connectivity, ch_names = read_ch_connectivity('neuromag306mag')\n\nprint(type(connectivity)) # it's a sparse matrix!\n\nplt.imshow(connectivity.toarray(), cmap='gray', origin='lower',\n interpolation='nearest')\nplt.xlabel('{} Magnetometers'.format(len(ch_names)))\nplt.ylabel('{} Magnetometers'.format(len(ch_names)))\nplt.title('Between-sensor adjacency')", "Compute permutation statistic\nHow does it work? We use clustering to bind together features which are\nsimilar. Our features are the magnetic fields measured over our sensor\narray at different times. This reduces the multiple comparison problem.\nTo compute the actual test-statistic, we first sum all F-values in all\nclusters. We end up with one statistic for each cluster.\nThen we generate a distribution from the data by shuffling our conditions\nbetween our samples and recomputing our clusters and the test statistics.\nWe test for the significance of a given cluster by computing the probability\nof observing a cluster of that size. For more background read:\nMaris/Oostenveld (2007), \"Nonparametric statistical testing of EEG- and\nMEG-data\" Journal of Neuroscience Methods, Vol. 164, No. 1., pp. 177-190.\ndoi:10.1016/j.jneumeth.2007.03.024", "# set cluster threshold\nthreshold = 50.0 # very high, but the test is quite sensitive on this data\n# set family-wise p-value\np_accept = 0.001\n\ncluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,\n threshold=threshold, tail=1,\n n_jobs=1,\n connectivity=connectivity)\n\nT_obs, clusters, p_values, _ = cluster_stats\ngood_cluster_inds = np.where(p_values < p_accept)[0]", "Note. 
The same functions work with source estimate. The only differences\nare the origin of the data, the size, and the connectivity definition.\nIt can be used for single trials or for groups of subjects.\nVisualize clusters", "# configure variables for visualization\ntimes = epochs.times * 1e3\ncolors = 'r', 'r', 'steelblue', 'steelblue'\nlinestyles = '-', '--', '-', '--'\n\n# grand average as numpy arrray\ngrand_ave = np.array(X).mean(axis=1)\n\n# get sensor positions via layout\npos = mne.find_layout(epochs.info).pos\n\n# loop over significant clusters\nfor i_clu, clu_idx in enumerate(good_cluster_inds):\n # unpack cluster information, get unique indices\n time_inds, space_inds = np.squeeze(clusters[clu_idx])\n ch_inds = np.unique(space_inds)\n time_inds = np.unique(time_inds)\n\n # get topography for F stat\n f_map = T_obs[time_inds, ...].mean(axis=0)\n\n # get signals at significant sensors\n signals = grand_ave[..., ch_inds].mean(axis=-1)\n sig_times = times[time_inds]\n\n # create spatial mask\n mask = np.zeros((f_map.shape[0], 1), dtype=bool)\n mask[ch_inds, :] = True\n\n # initialize figure\n fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))\n title = 'Cluster #{0}'.format(i_clu + 1)\n fig.suptitle(title, fontsize=14)\n\n # plot average test statistic and mark significant sensors\n image, _ = plot_topomap(f_map, pos, mask=mask, axes=ax_topo,\n cmap='Reds', vmin=np.min, vmax=np.max)\n\n # advanced matplotlib for showing image with figure and colorbar\n # in one plot\n divider = make_axes_locatable(ax_topo)\n\n # add axes for colorbar\n ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)\n plt.colorbar(image, cax=ax_colorbar)\n ax_topo.set_xlabel('Averaged F-map ({:0.1f} - {:0.1f} ms)'.format(\n *sig_times[[0, -1]]\n ))\n\n # add new axis for time courses and plot time courses\n ax_signals = divider.append_axes('right', size='300%', pad=1.2)\n for signal, name, col, ls in zip(signals, condition_names, colors,\n linestyles):\n ax_signals.plot(times, signal, color=col, linestyle=ls, label=name)\n\n # add information\n ax_signals.axvline(0, color='k', linestyle=':', label='stimulus onset')\n ax_signals.set_xlim([times[0], times[-1]])\n ax_signals.set_xlabel('time [ms]')\n ax_signals.set_ylabel('evoked magnetic fields [fT]')\n\n # plot significant time range\n ymin, ymax = ax_signals.get_ylim()\n ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],\n color='orange', alpha=0.3)\n ax_signals.legend(loc='lower right')\n ax_signals.set_ylim(ymin, ymax)\n\n # clean up viz\n mne.viz.tight_layout(fig=fig)\n fig.subplots_adjust(bottom=.05)\n plt.show()", "Exercises\n\nWhat is the smallest p-value you can obtain, given the finite number of\n permutations?\nuse an F distribution to compute the threshold by traditional significance\n levels. Hint: take a look at scipy.stats.distributions.f" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
agicquel/dut
M4107C/TP2.ipynb
gpl-3.0
[ "TP2 - Object recognition using neural networks and convolutional neural networks\nM4108C/M4109C - INFOgr2D\nStudent 1: Antoine Gicquel\n<br>\nFor submission: <font style=\"color:blue\"> TP2_nom1_nom2.iypnb </font>, Due: <font style=\"color:blue\"> 18/03/2018 </font>\nIntroduction\nIn this lab, we design and observe the performance of the fully connected neural networks (NNs) as well as the convolutional neural networks (CNNs) for object regconition task. All implementations should be in Keras with Tensorflow backend. This lab includes three parts:\nIn the first part, we perform object recognition using NNs and CNNs on the CIFAR-10 dataset (import from Keras).\nIn the second part, we work on the image data which are imported from disk.\nThe last part includes some advanced exercices.\nRead and response to each question. Use the print() function to show results in code cells and write your comments/responses using Markdown cells. \nIMPORTANT: Every result should be commented!\nNOTE: (max 20 pts)\n- part I: 10 pts\n- part II: 6 pts\n- part III: 2 pts\n- clarity and presentation: 2 pts\nPart I. Object recognition using CIFAR-10 dataset <font color='red'> (10 pts)<font/>\nI-1. The CIFAR-10 data\n1) Load CIFAR dataset and describe its information (number of training/test images, image size, number of classes, class names, etc.) <font color='red'> (1 pts)<font/>", "from __future__ import print_function\nimport numpy as np\nnp.random.seed(7)\n\nimport keras\nfrom keras.datasets import cifar10\n\n# load and split data into training and test sets --> it may take some times with your own laptop\n(x_train, y_train), (x_test, y_test) = cifar10.load_data()\n\n# describe your data (use print function)\nprint(\"train size : \",x_train.shape)\nprint(\"test size : \",x_test.shape)\nprint(\"train label : \",y_train.shape)\nprint(\"test label : \",y_test.shape)\nnclass = len(np.unique(y_train))\nprint(\"number of classes:\",nclass)", "Your response: \nIl y a 50000 images de 32 sur 32 avec 3 caneaux de couleurs pour l'entrainement et 10000 images de test.\n2) Display some image samples with their class labels using matplotlib.pyplot <font color='red'> (1 pts)<font/>", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nlabels = [\"airplane\", \"automobile\", \"bird\", \"cat\", \"deer\", \"dog\", \"frog\", \"horse\", \"ship\", \"truck\"]\n\nfor i in range(0,9):\n plt.subplot(3, 3, i+1)\n plt.imshow(x_train[i], cmap=plt.get_cmap('gray')); plt.axis('off')\n print(labels[y_train[i][0]])", "Your comment:\nLes labels sont donnés du haut vers la droite avec les images correspondantes.\nVoici les 9 images.\n3) (If necessary) Reduce the number of training images (using half of them for example) for quick training and small-GPU computer", "x_train = x_train[0:25000,:]\ny_train = y_train[0:25000]\n\nprint(\"train size : \",x_train.shape)\nprint(\"train label : \",y_train.shape)", "On a divisé par deux le nombre d'image.\nI-2. Fully-connected NNs on CIFAR-10\n1) Design a fully connected NN named 'modelCifar_nn1' including 2 layers of 256 and 512 neurons with the sigmoid activation function. Train this model with 10 epochs and batch_size = 500 (remember to pre-process them before). Test the model and report the following results: \n- number of total parameters (explain how to compute?)\n- training and testing time\n- test loss and accuracy\n- number of iterations to complete one epoch (explain how to compute?) 
\n<font color='red'> (2 pts)<font/>\n<br/>\nExplanation:<br/>\n-> one epoch = one forward pass and one backward pass of all the training examples<br/>\n-> batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.<br/>", "# pre-process your data\nx_train = x_train.reshape(x_train.shape[0], 32*32*3)\nx_test = x_test.reshape(x_test.shape[0], 32*32*3)\n\nx_train = x_train.astype('float32')/255\nx_test = x_test.astype('float32')/255\n\nfrom keras.utils import np_utils\ny_train_cat = np_utils.to_categorical(y_train, nclass)\ny_test_cat = np_utils.to_categorical(y_test, nclass)\ny_train_cat.shape\n\nprint(\"train size : \",x_train.shape)\nprint(\"test size : \",x_test.shape)", "Your comment:\nConversion des labels d'entiers en catégories et conversion des valeurs.", "# Define the model\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout, Activation\nfrom keras.optimizers import RMSprop\n\nmodelCifar_nn1 = Sequential() \nmodelCifar_nn1.add(Dense(256, input_shape=(3072,),activation='sigmoid'))\nmodelCifar_nn1.add(Dense(512, activation='sigmoid'))\nmodelCifar_nn1.add(Dense(10,activation='softmax')) #Last layer has nclass nodes\nmodelCifar_nn1.summary()\n\n# compile and train the model\nimport time\n# compile the model\nmodelCifar_nn1.compile(loss='categorical_crossentropy', optimizer =RMSprop(lr=0.001), metrics=[\"accuracy\"])\n\n# train the model\nstart_t_mod= time.time()\nmodelCifar_nn1.fit(x_train, y_train_cat, batch_size=500, epochs = 10)\nfinish_t_mod = time.time()\n\ntime = finish_t_mod - start_t_mod\nprint(\"training time :\", time)\n\n# evaluate the model\nscore = modelCifar_nn1.evaluate(x_test, y_test_cat)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n", "Your observation and comment:\nL'exactitude est de 43% avec le modele sigmoid et 10 etochs.\n2) Design the NN model named modelCifar_nn2 by replacing the sigmoid activation with the ReLu activation. Train and test this model. Compare to the first one. <font color='red'> (1 pts)<font/>", "# Define the model \nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout, Activation\nfrom keras.optimizers import RMSprop\n\nmodelCifar_nn2 = Sequential() \nmodelCifar_nn2.add(Dense(256, input_shape=(3072,),activation='relu'))\nmodelCifar_nn2.add(Dense(512, activation='relu'))\nmodelCifar_nn2.add(Dense(10,activation='softmax')) #Last layer has nclass nodes\nmodelCifar_nn2.summary()\n\n# compile and train the model\nimport time\n# compile the model\nmodelCifar_nn2.compile(loss = 'categorical_crossentropy', optimizer = RMSprop(lr=0.001), metrics = [\"accuracy\"])\n\n# train the model\nstart_t_mod= time.time()\nmodelCifar_nn2.fit(x_train, y_train_cat, batch_size = 500, epochs = 10)\nfinish_t_mod = time.time()\n\ntime = finish_t_mod - start_t_mod\nprint(\"training time :\", time)\n\n# evaluate the model\nscore = modelCifar_nn2.evaluate(x_test, y_test_cat)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])", "Your observation and comment: L'exactitude est de 20% avec le modele sigmoid et 10 etochs.\nI-2. 
CNNs on CIFAR-10\n1) Now design a CNN named modelCifar_cnn1 consisting of 2 convolutional layers + one fully-connected layer as follows:\n- Conv_1: 16 filters of size 3x3, no padding, no stride, activation Relu\n- maxPool_1: size 2x2\n- Conv_2: 32 filters of size 3x3, no padding, no stride, activation Relu\n- maxPool_2: size 2x2\n- fc layer (Dense) 128 nodes\n- [Do not forget Flatten() and final output dense layer with 'softmax' activation]\nReload and preprocess the data. Train this model with 10 epochs and batch_size = 500. Test the model and report the following results:\n- number of total parameters (explain how to compute?)\n- training and testing time\n- test loss and accuracy\n<font color='red'> (2 pts)<font/>", "# reload and pre-process your data\n(x2_train, y2_train), (x2_test, y2_test) = cifar10.load_data()\n\n#x2_train = x_train[0:25000,:]\n#y2_train = y_train[0:25000]\n\nx2_train = x2_train.astype('float32')\nx2_test = x2_test.astype('float32')\nx2_train = x2_train / 255.0\nx2_test = x2_test / 255.0\n\n# one hot encode outputs\ny2_train = np_utils.to_categorical(y_train)\ny2_test = np_utils.to_categorical(y_test)\n\nprint(\"train 2 size : \",x2_train.shape)\nprint(\"test 2 size : \",x2_test.shape)\nprint(\"train 2 label : \",y2_train.shape)\nprint(\"test 2 label : \",y2_test.shape)\n\n# Define the model\nfrom keras.layers.convolutional import Conv2D, MaxPooling2D\nfrom keras.layers import Flatten\nfrom keras.constraints import maxnorm\n\nmodelCifar_cnn1 = Sequential() \nmodelCifar_cnn1.add(Conv2D(16, (3, 3), input_shape=(32, 32, 3), padding='same', activation='relu', kernel_constraint=maxnorm(y2_test.shape[1])))\nmodelCifar_cnn1.add(MaxPooling2D(pool_size=(2, 2)))\nmodelCifar_cnn1.add(Dropout(0.2))\nmodelCifar_cnn1.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(y2_test.shape[1])))\nmodelCifar_cnn1.add(MaxPooling2D(pool_size=(2, 2)))\nmodelCifar_cnn1.add(Flatten())\nmodelCifar_cnn1.add(Dense(128, activation='relu', kernel_constraint=maxnorm(y2_test.shape[1])))\nmodelCifar_cnn1.add(Dropout(0.5))\nmodelCifar_cnn1.add(Dense(10, activation='softmax'))\n\n# compile and train the model\nimport time\nfrom keras.optimizers import SGD\n# compile the model\n#modelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer =RMSprop(lr=0.001), metrics=[\"accuracy\"])\n#modelCifar_cnn1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\nepochs = 10\nlrate = 0.01\ndecay = lrate/epochs\nsgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)\n\nmodelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['accuracy'])\n#modelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])\n\n# train the model\nstart_t_mod= time.time()\nmodelCifar_cnn1.fit(x2_train, y2_train, validation_data=(x2_test, y2_test), epochs=epochs, batch_size=500)\nfinish_t_mod = time.time()\n\ntime = finish_t_mod - start_t_mod\nprint(\"training time :\", time)\n\n# evaluate the model\nscore = modelCifar_cnn1.evaluate(x2_test, y2_test_cat)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])", "Your observation and comment:\n2) Now modify the modelCifar_cnn1 by changing the filter size of 2 convolutional layers to 5x5. The new model is called modelCifar_cnn2. Train and test the model. Compare to the first CNN. 
<font color='red'> (1 pts)<font/>", "# Define the model \n# modelCifar_cnn2 = Sequential() \n\n", "Your observation and comment:\n*3) Compare the two CNNs with the two NNs in section I-1 in terms of accuracy, loss, number of parameters, calculation time, etc. * <font color='red'> (2 pts)<font/>\nFill the following table for comparison:\n| Models | Number of parameters | Training time | Accuracy | \n| ---------------|:---------------------:|:--------------:|:--------:|\n| modelCifar_nn1 | \n| modelCifar_nn2 | \n| modelCifar_cnn1| \n| modelCifar_cnn2| \nYour observation and comment:\nPart II - Cat and Dog classification <font color='red'> (6 pts)<font/>\nIn this part, we design and train CNNs on our own data (imported from disk). We will work on a small dataset including only 2 classes (cat and dog). Each one has 1000 images for training and 200 for validation.\nYou can download the data from: \n(https://drive.google.com/open?id=15cQfeAuDY1CRuOduF5LZwWZ4koL6Dti9)\n1) Describe the downloaded data: number of training and validation images, number of classes, class names? Do the images have the same size? <font color='red'> (1 pts)<font/>\nYour response:\n2) Show some cat and dog images from the train set. Comment. <font color='red'> (1 pts)<font/>\nNow we import the ImageDataGenerator module of Keras. This module can be used to pre-process the images and to perform data augmentation. We use 'flow_from_directory()' to generate batches of image data (and their labels) directly from our images in their respective folders (from disk).", "from keras.preprocessing.image import ImageDataGenerator\n\nbatchSize = 100\ndatagen = ImageDataGenerator(rescale=1./255)\n\ntrain_datagen = datagen.flow_from_directory(\n 'dataTP2/train', # this is your target directory which includes the training images\n target_size = (50, 50), # all images will be resized to 50x50 pixels for fast computation\n batch_size = batchSize,\n class_mode = 'categorical') \n\nvalidation_datagen = datagen.flow_from_directory(\n 'dataTP2/validation', # this is your target directory which includes the validation images\n target_size = (50, 50), # all images will be resized to 50x50 pixels for fast computation\n batch_size = batchSize,\n class_mode = 'categorical')\n", "3) Now describe your pre-processed data for training and validation: number of training and validation images, number of classes, class names? Do the images have the same size? <font color='red'> (1 pts)<font/>\nYour response:\n4) Redefine, train and validate the 2 CNNs in Part I (namely modelPart2_cnn1, modelPart2_cnn2) on the new data using model.fit_generator instead of model.fit. Observe and compare the results. <font color='red'> (3 pts)<font/>", "# Define the model \n# modelPart2_cnn1 = Sequential() \n\n\n\n# train with .fit_generator\n# modelPart2_cnn1.fit_generator(...)\n\n\n# Define the model \n# modelPart2_cnn2 = Sequential() \n\n\n# train with .fit_generator\n# modelPart2_cnn2.fit_generator(...)\n\n", "Your observation and comments:\nPart III - Advances <font color='red'> (2 pts)<font/>\nIn this part, you are free to improve your CNN performance using data augmentation, Dropout, batch normalization, etc. Define at least 2 more CNNs to improve the classification performance on the CIFAR-10 dataset based on the first CNN (modelCifar_cnn1). That means you are not allowed to add more layers, change the number of filters or filter size, etc. Only the use of data augmentation, Dropout, and batch normalization is allowed.
To use these techniques, further reading is required.\nFor each one, you are required to define the model, train, test and report the results.", "# Define new model \n# modelCifar_cnn3 = Sequential() \n# train and test\n", "Result, observation and comment:", "# Define new model \n# modelCifar_cnn4 = Sequential() \n# train and test", "Result, observation and comment:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sshleifer/object_detection_kitti
object_detection/kitti-inference.ipynb
apache-2.0
[ "Object Detection Demo\nWelcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.\nImports", "import numpy as np\nimport os\nimport pickle\nimport six.moves.urllib as urllib\nimport sys\nsys.path.append(\"..\")\nimport tarfile\nimport tensorflow as tf\nimport zipfile\nfrom object_detection.eval_util import evaluate_detection_results_pascal_voc\n\nfrom collections import defaultdict\nfrom io import StringIO\nfrom matplotlib import pyplot as plt\nfrom PIL import Image\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n# This is needed since the notebook is stored in the object_detection folder.\n\nfrom utils import label_map_util\n\nfrom utils import visualization_utils as vis_util", "Object detection imports\nHere are the imports from the object detection module.", "from utils import label_map_util\n\nfrom utils import visualization_utils as vis_util\ndef get_annotations(image_path):\n img_id = os.path.basename(image_path)[:-4]\n annotation_path = os.path.join(\n os.path.split(os.path.dirname(image_path))[0], 'Annotations',\n '{}.xml'.format(img_id)\n )\n return xml_to_dict(annotation_path)\nfrom utils.kitti import show_groundtruth, create_results_list\nfrom utils.kitti import visualize_predictions\nimport glob", "Model preparation\nVariables\nAny model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file. \nBy default we use an \"SSD with Mobilenet\" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.", "# What model to download.\nFREEZE_DIR = 'atrous_frozen_v2/'\nPATH_TO_CKPT = os.path.join(FREEZE_DIR,\n 'frozen_inference_graph.pb'\n )\n# List of the strings that is used to add correct label for each box.\nPATH_TO_LABELS = os.path.join('data', 'kitti_map.pbtxt')\n\nNUM_CLASSES = 9", "Load a (frozen) Tensorflow model into memory.", "detection_graph = tf.Graph()\nwith detection_graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')", "Loading label map\nLabel maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine", "PATH_TO_LABELS = os.path.join('data', 'kitti_map.pbtxt')\nlabel_map = label_map_util.load_labelmap(PATH_TO_LABELS)\ncategories = label_map_util.convert_label_map_to_categories(label_map, \n max_num_classes=NUM_CLASSES, use_display_name=True)\ncategory_index = label_map_util.create_category_index(categories)", "Helper code", "def load_image_into_numpy_array(image):\n (im_width, im_height) = image.size\n return np.array(image.getdata()).reshape(\n (im_height, im_width, 3)).astype(np.uint8)\n\nwith open('kitti_data/train.txt') as f:\n train_ids = f.readlines()[0].split(',')\nwith open('kitti_data/valid.txt') as f:\n valid_ids = f.readlines()[0].split(',')\n\nlen(train_ids)\n\nlen(valid_ids)", "Detection", "PATH_TO_TEST_IMAGES_DIR = 'voc_kitti_valid/VOC2012/JPEGImages/'\np = 'voc_kitti_valid/VOC2012/JPEGImages/1023.jpg'\nTEST_IMAGE_PATHS = [ p]\nFIGSIZE = (20, 20)\n\nimport glob\ndef glob_base(pat): return list(map(os.path.basename, glob.glob(pat)))\n\nfrom create_dataset import *", "Check that valid files dont overlap with train files", "valid_ids = glob_base(VOC_VALID_DIR + '/VOC2012/JPEGImages/*.jpg')\ntrain_ids = glob_base(VOC_TRAIN_DIR+ '/VOC2012/JPEGImages/*.jpg')\n\nassert len(pd.Index(valid_ids).intersection(train_ids)) == 0\n\ntest_dir = 'voc_kitti_valid/VOC2012/JPEGImages/'\ntest_image_paths = [os.path.join(test_dir, x) for x in valid_ids]\n\nlen(test_image_paths)\n\ntrain_labs= glob.glob('kitti_data/training/label_2/*.txt')\ntest_labs = glob.glob('kitti_data/valid/label_2/*.txt')", "Calculate MaP by category (8 mins roughly)", "%%time\nwith detection_graph.as_default():\n with tf.Session(graph=detection_graph) as sess:\n res = create_results_list(test_image_paths, sess, detection_graph)\n\nimport pandas as pd\nperf = pd.Series(evaluate_detection_results_pascal_voc(res, categories))\n\nperf", "Make nice performance table", "def clean_idx(perf):\n x = list(perf.index.map(lambda x: x[33:]))\n x[-1] = 'Total'\n perf.index = x\n return perf\n\nperf = clean_idx(perf)\n\nperf.to_frame('rcnn_mAP')#.round(3).to_csv('~/Desktop/faster_rcnn_mAP_by_category.csv')\n\ndef get_dict_slice(res, slc_obj):\n '''get a slice of the values for each key in a dict'''\n output = {}\n for k in res.keys():\n output[k] = res[k][slc_obj]\n return output\n \n ", "Calculate MaP for each image", "%%capture\nimg_scores = {image_id: evaluate_detection_results_pascal_voc(\n get_dict_slice(res, slice(i, i+1)), categories)\n for i, image_id in enumerate((res['image_id'][:-1]))}\n \n \n\nOVERALL_PERF_KEY = 'Precision/mAP@0.5IOU'\n\n#pickle.dump(res, open('mobile_net_valid_results_dct.pkl', 'wb'))\n\nres['image_id'][0]\n\nfrom kitti_constants import name_to_id\nfrom object_detection.utils.visualization_utils import visualize_boxes_and_labels_on_image_array\nfrom utils.kitti import get_boxes_scores_classes, visualize_predictions\n\ndef get_img_scores(image_path):\n imageid = os.path.basename(image_path)[:-4]\n return pd.Series(img_scores[imageid]).round(2).dropna()\n \n\n%%time\n%precision 4\nimport time\nwith detection_graph.as_default():\n with tf.Session(graph=detection_graph) as sess:\n image_path = np.random.choice(test_image_paths)\n image = Image.open(image_path)\n image_np = load_image_into_numpy_array(image)\n start = time.time()\n image_process = visualize_predictions(image_np, sess, detection_graph)\n # boxes, scores, classes, num_detections = 
get_boxes_scores_classes(image_np, sess, detection_graph)\n print('inference time: {} seconds'.format(\n np.round(time.time() - start, 2)))\n print ('MaP scores\\n{}'.format(get_img_scores(image_path)))\n plt.figure(figsize=FIGSIZE)\n plt.imshow(image_process)\n plt.title('Model', fontsize=16)\n #plt.imsave(image_process, 'worst_prediction labs.jpg')\n plt.figure(figsize=FIGSIZE)\n truth_img = show_groundtruth(image_path)\n plt.imshow(truth_img)\n plt.title('Human Labels', fontsize=16)\n plt.figure(figsize=FIGSIZE)\n plt.imshow(load_image_into_numpy_array(Image.open(image_path)))\n plt.title('Raw Image')\n #plt.savefig('worst_prediction labs.jpg')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
VVard0g/ThreatHunter-Playbook
docs/notebooks/windows/05_defense_evasion/WIN-190101151110.ipynb
mit
[ "Active Directory Replication User Backdoor\nMetadata\n| Metadata | Value |\n|:------------------|:---|\n| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |\n| creation date | 2019/01/01 |\n| modification date | 2020/09/20 |\n| playbook related | ['WIN-180815210510'] |\nHypothesis\nAdversaries with enough permissions (domain admin) might be adding an ACL to the Root Domain for any user to abuse active directory replication services.\nTechnical Context\nActive Directory replication is the process by which the changes that originate on one domain controller are automatically transferred to other domain controllers that store the same data.\nActive Directory data takes the form of objects that have properties, or attributes.\nEach object is an instance of an object class, and object classes and their respective attributes are defined in the Active Directory schema. The values of the attributes define the object, and a change to a value of an attribute must be transferred from the domain controller on which it occurs to every other domain controller that stores a replica of that object.\nOffensive Tradecraft\nAn adversary with enough permissions (domain admin) can add an ACL to the Root Domain for any user, despite being in no privileged groups, having no malicious sidHistory, and not having local admin rights on the domain controller. This is done to bypass detection rules looking for Domain Admins or the DC machine accounts performing active directory replication requests against a domain controller.\nThe following access rights / permissions are needed for the replication request according to the domain functional level\n| Control access right symbol | Identifying GUID used in ACE |\n| :-----------------------------| :------------------------------|\n| DS-Replication-Get-Changes | 1131f6aa-9c07-11d1-f79f-00c04fc2dcd2 |\n| DS-Replication-Get-Changes-All | 1131f6ad-9c07-11d1-f79f-00c04fc2dcd2 |\n| DS-Replication-Get-Changes-In-Filtered-Set | 89e95b76-444d-4c62-991a-0facbeda640c |\nAdditional reading\n* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/active_directory_replication.md\nSecurity Datasets\n| Metadata | Value |\n|:----------|:----------|\n| docs | https://securitydatasets.com/notebooks/atomic/windows/defense_evasion/SDWIN-190301125905.html |\n| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/empire_powerview_ldap_ntsecuritydescriptor.zip |\nAnalytics\nInitialize Analytics Engine", "from openhunt.mordorutils import *\nspark = get_spark()", "Download & Process Security Dataset", "sd_file = \"https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/empire_powerview_ldap_ntsecuritydescriptor.zip\"\nregisterMordorSQLTable(spark, sd_file, \"sdTable\")", "Analytic I\nLook for any user accessing directory service objects with replication permissions GUIDs\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| Windows active directory | Microsoft-Windows-Security-Auditing | User accessed AD Object | 4662 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, SubjectUserName, ObjectName, OperationType\nFROM sdTable\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4662\n AND ObjectServer = \"DS\"\n AND AccessMask = \"0x40000\"\n AND ObjectType LIKE \"%19195a5b_6da0_11d0_afd3_00c04fd930c9%\"\n'''\n)\ndf.show(10,False)", "Analytic II\nLook for any user modifying directory service 
objects with replication permissions GUIDs\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| Windows active directory | Microsoft-Windows-Security-Auditing | User modified AD Object | 5136 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, SubjectUserName, ObjectDN, AttributeLDAPDisplayName\nFROM sdTable\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 5136\n AND lower(AttributeLDAPDisplayName) = \"ntsecuritydescriptor\"\n AND (AttributeValue LIKE \"%1131f6aa_9c07_11d1_f79f_00c04fc2dcd2%\"\n OR AttributeValue LIKE \"%1131f6ad_9c07_11d1_f79f_00c04fc2dcd2%\"\n OR AttributeValue LIKE \"%89e95b76_444d_4c62_991a_0facbeda640c%\")\n'''\n)\ndf.show(10,False)", "Known Bypasses\nFalse Positives\nNone\nHunter Notes\nNone\nHunt Output\n| Type | Link |\n| :----| :----|\n| Sigma Rule | https://github.com/SigmaHQ/sigma/blob/master/rules/windows/builtin/security/win_ad_object_writedac_access.yml |\n| Sigma Rule | https://github.com/SigmaHQ/sigma/blob/master/rules/windows/builtin/security/win_account_backdoor_dcsync_rights.yml |\nReferences\n\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/1522b774-6464-41a3-87a5-1e5633c3fbbb\nhttps://docs.microsoft.com/en-us/windows/desktop/adschema/c-domain\nhttps://docs.microsoft.com/en-us/windows/desktop/adschema/c-domaindns\nhttp://www.harmj0y.net/blog/redteaming/a-guide-to-attacking-domain-trusts/\nhttps://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc782376(v=ws.10)\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-drsr/f977faaa-673e-4f66-b9bf-48c640241d47" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
spro/practical-pytorch
reinforce-gridworld/reinforce-gridworld.ipynb
mit
[ "Practical PyTorch: Playing GridWorld with Reinforcement Learning (Policy Gradients with REINFORCE)\nIn this project we'll teach a neural network to navigate through a dangerous grid world.\n\nTraining uses policy gradients via the REINFORCE algorithm and a simplified Actor-Critic method. A single network calculates both a policy to choose the next action (the actor) and an estimated value of the current state (the critic). Rewards are propagated through the graph with PyTorch's reinforce method.\nResources\n\nThe Reinforcement learning book from Sutton & Barto\nThe REINFORCE paper from Ronald J. Williams (1992)\nScholarpedia article on policy gradient methods\nA Lecture from David Silver (of UCL, DeepMind) on policy gradients\nThe REINFORCE PyTorch example this tutorial is based on\n\nRequirements\nThe main requirements are PyTorch (of course), and numpy, matplotlib, and iPython for animating the states.", "import numpy as np\nfrom itertools import count\nimport random\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\n\nimport matplotlib.mlab as mlab\nimport matplotlib.pyplot as plt\nimport matplotlib.animation\nfrom IPython.display import HTML\n%pylab inline\n\nfrom helpers import *", "The Grid World, Agent and Environment\nFirst we'll build the training environment, which is a simple square grid world with various rewards and a goal. If you're just interested in the training code, skip down to building the actor-critic network\nThe Grid\nThe Grid class keeps track of the grid world: a 2d array of empty squares, plants, and the goal.\n\nPlants are randomly placed values from -1 to 0.5 (mostly poisonous) and if the agent lands on one, that value is added to the agent's health. The agent's goal is to reach the goal square, placed in one of the corners. As the agent moves around it gradually loses health so it has to move with purpose.\nThe agent can see a surrounding area VISIBLE_RADIUS squares out from its position, so the edges of the grid are padded by that much with negative values. If the agent \"falls off the edge\" it dies instantly.", "MIN_PLANT_VALUE = -1\nMAX_PLANT_VALUE = 0.5\nGOAL_VALUE = 10\nEDGE_VALUE = -10\nVISIBLE_RADIUS = 1\n\nclass Grid():\n def __init__(self, grid_size=8, n_plants=15):\n self.grid_size = grid_size\n self.n_plants = n_plants\n \n def reset(self):\n padded_size = self.grid_size + 2 * VISIBLE_RADIUS\n self.grid = np.zeros((padded_size, padded_size)) # Padding for edges\n \n # Edges\n self.grid[0:VISIBLE_RADIUS, :] = EDGE_VALUE\n self.grid[-1*VISIBLE_RADIUS:, :] = EDGE_VALUE\n self.grid[:, 0:VISIBLE_RADIUS] = EDGE_VALUE\n self.grid[:, -1*VISIBLE_RADIUS:] = EDGE_VALUE\n \n # Randomly placed plants\n for i in range(self.n_plants):\n plant_value = random.random() * (MAX_PLANT_VALUE - MIN_PLANT_VALUE) + MIN_PLANT_VALUE\n ry = random.randint(0, self.grid_size-1) + VISIBLE_RADIUS\n rx = random.randint(0, self.grid_size-1) + VISIBLE_RADIUS\n self.grid[ry, rx] = plant_value\n \n # Goal in one of the corners\n S = VISIBLE_RADIUS\n E = self.grid_size + VISIBLE_RADIUS - 1\n gps = [(E, E), (S, E), (E, S), (S, S)]\n gp = gps[random.randint(0, len(gps)-1)]\n self.grid[gp] = GOAL_VALUE\n \n def visible(self, pos):\n y, x = pos\n return self.grid[y-VISIBLE_RADIUS:y+VISIBLE_RADIUS+1, x-VISIBLE_RADIUS:x+VISIBLE_RADIUS+1]", "The Agent\nThe Agent has a current position and a health. 
All this class does is update the position based on an action (up, right, down or left) and decrement a small STEP_VALUE at every time step, so that it eventually starves if it doesn't reach the goal.\nThe world based effects on the agent's health are handled by the Environment below.", "START_HEALTH = 1\nSTEP_VALUE = -0.02\n\nclass Agent:\n def reset(self):\n self.health = START_HEALTH\n\n def act(self, action):\n # Move according to action: 0=UP, 1=RIGHT, 2=DOWN, 3=LEFT\n y, x = self.pos\n if action == 0: y -= 1\n elif action == 1: x += 1\n elif action == 2: y += 1\n elif action == 3: x -= 1\n self.pos = (y, x)\n self.health += STEP_VALUE # Gradually getting hungrier", "The Environment\nThe Environment encapsulates the Grid and Agent, and handles the bulk of the logic of assigning rewards when the agent acts. If an agent lands on a plant or goal or edge, its health is updated accordingly. Plants are removed from the grid (set to 0) when \"eaten\" by the agent. Every time step there is also a slight negative health penalty so that the agent must keep finding plants or reach the goal to survive.\nThe Environment's main function is step(action) &rarr; (state, reward, done), which updates the world state with a chosen action and returns the resulting state, and also returns a reward and whether the episode is done. The state it returns is what the agent will use to make its action predictions, which in this case is the visible grid area (flattened into one dimension) and the current agent health (to give it some \"self awareness\").\nThe episode is considered done if won or lost - won if the agent reaches the goal (agent.health &gt;= GOAL_VALUE) and lost if the agent dies from falling off the edge, eating too many poisonous plants, or getting too hungry (agent.health &lt;= 0).\nIn this experiment the environment only returns a single reward at the end of the episode (to make it more challenging). 
Values from plants and the step penalty are implicit - they might cause the agent to live longer or die sooner, but they aren't included in the final reward.\nThe Environment also keeps track of the grid and agent states for each step of an episode, for visualization.", "class Environment:\n def __init__(self):\n self.grid = Grid()\n self.agent = Agent()\n\n def reset(self):\n \"\"\"Start a new episode by resetting grid and agent\"\"\"\n self.grid.reset()\n self.agent.reset()\n c = math.floor(self.grid.grid_size / 2)\n self.agent.pos = (c, c)\n \n self.t = 0\n self.history = []\n self.record_step()\n \n return self.visible_state\n \n def record_step(self):\n \"\"\"Add the current state to history for display later\"\"\"\n grid = np.array(self.grid.grid)\n grid[self.agent.pos] = self.agent.health * 0.5 # Agent marker faded by health\n visible = np.array(self.grid.visible(self.agent.pos))\n self.history.append((grid, visible, self.agent.health))\n \n @property\n def visible_state(self):\n \"\"\"Return the visible area surrounding the agent, and current agent health\"\"\"\n visible = self.grid.visible(self.agent.pos)\n y, x = self.agent.pos\n yp = (y - VISIBLE_RADIUS) / self.grid.grid_size\n xp = (x - VISIBLE_RADIUS) / self.grid.grid_size\n extras = [self.agent.health, yp, xp]\n return np.concatenate((visible.flatten(), extras), 0)\n \n def step(self, action):\n \"\"\"Update state (grid and agent) based on an action\"\"\"\n self.agent.act(action)\n \n # Get reward from where agent landed, add to agent health\n value = self.grid.grid[self.agent.pos]\n self.grid.grid[self.agent.pos] = 0\n self.agent.health += value\n \n # Check if agent won (reached the goal) or lost (health reached 0)\n won = value == GOAL_VALUE\n lost = self.agent.health <= 0\n done = won or lost\n \n # Rewards at end of episode\n if won:\n reward = 1\n elif lost:\n reward = -1\n else:\n reward = 0 # Reward will only come at the end\n\n # Save in history\n self.record_step()\n \n return self.visible_state, reward, done", "Visualizing History\nTo visualize an episode the animate(history) function uses Matplotlib to plot the grid state and agent health over time, and turn the resulting frames into a GIF.", "def animate(history):\n frames = len(history)\n print(\"Rendering %d frames...\" % frames)\n fig = plt.figure(figsize=(6, 2))\n fig_grid = fig.add_subplot(121)\n fig_health = fig.add_subplot(243)\n fig_visible = fig.add_subplot(244)\n fig_health.set_autoscale_on(False)\n health_plot = np.zeros((frames, 1))\n\n def render_frame(i):\n grid, visible, health = history[i]\n # Render grid\n fig_grid.matshow(grid, vmin=-1, vmax=1, cmap='jet')\n fig_visible.matshow(visible, vmin=-1, vmax=1, cmap='jet')\n # Render health chart\n health_plot[i] = health\n fig_health.clear()\n fig_health.axis([0, frames, 0, 2])\n fig_health.plot(health_plot[:i + 1])\n\n anim = matplotlib.animation.FuncAnimation(\n fig, render_frame, frames=frames, interval=100\n )\n\n plt.close()\n display(HTML(anim.to_html5_video()))", "Testing the Environment\nLet's test what we have so far with a quick simulation:", "env = Environment()\nenv.reset()\nprint(env.visible_state)\n\ndone = False\nwhile not done:\n _, _, done = env.step(2) # Down\n\nanimate(env.history)", "Actor-Critic network\nValue-based reinforcement learning methods like Q-Learning try to predict the expected reward of the next state(s) given an action. In contrast, a policy method tries to directly choose the best action given a state. 
Policy methods are conceptually simpler but training can be tricky - due to the high variance of rewards, it can easily become unstable or just plateau at a local minimum.\nCombining a value estimation with the policy helps regularize training by establishing a \"baseline\" reward that learns alongside the actor. Subtracting a baseline value from the rewards essentially trains the actor to perform \"better than expected\".\nIn this case, both actor and critic (baseline) are combined into a single neural network with 5 outputs: the probabilities of the 4 possible actions, and an estimated value.\nThe input layer inp transforms the environment state, $(radius*2+1)^2$ squares plus the agent's health and position, into an internal state. The output layer out transforms that internal state to probabilities of possible actions plus the estimated value.", "class Policy(nn.Module):\n def __init__(self, hidden_size):\n super(Policy, self).__init__()\n \n visible_squares = (VISIBLE_RADIUS * 2 + 1) ** 2\n input_size = visible_squares + 1 + 2 # Plus agent health, y, x\n \n self.inp = nn.Linear(input_size, hidden_size)\n self.out = nn.Linear(hidden_size, 4 + 1, bias=False) # For both action and expected value\n\n def forward(self, x):\n x = x.view(1, -1)\n x = F.tanh(x) # Squash inputs\n x = F.relu(self.inp(x))\n x = self.out(x)\n \n # Split last five outputs into scores and value\n scores = x[:,:4]\n value = x[:,4]\n return scores, value", "Selecting actions\nTo select actions we treat the output of the policy as a multinomial distribution over actions, and sample from that to choose a single action. Thanks to the REINFORCE algorithm we can calculate gradients for discrete action samples by calling action.reinforce(reward) at the end of the episode.\nTo encourage exploration in early episodes, here's one weird trick: apply dropout to the action scores, before softmax. Randomly masking some scores will cause less likely scores to be chosen. The dropout percent gradually decreases from 30% to 5% over the first 200k episodes.", "DROP_MAX = 0.3\nDROP_MIN = 0.05\nDROP_OVER = 200000\n\ndef select_action(e, state):\n drop = interpolate(e, DROP_MAX, DROP_MIN, DROP_OVER)\n \n state = Variable(torch.from_numpy(state).float())\n scores, value = policy(state) # Forward state through network\n scores = F.dropout(scores, drop, True) # Dropout for exploration\n scores = F.softmax(scores)\n action = scores.multinomial() # Sample an action\n\n return action, value", "Playing through an episode\nA single episode is the agent moving through the environment from start to finish. We keep track of the chosen action and value outputs from the model, and resulting rewards to reinforce at the end of the episode.", "def run_episode(e):\n state = env.reset()\n actions = []\n values = []\n rewards = []\n done = False\n\n while not done:\n action, value = select_action(e, state)\n state, reward, done = env.step(action.data[0, 0])\n actions.append(action)\n values.append(value)\n rewards.append(reward)\n\n return actions, values, rewards", "Using REINFORCE with a value baseline\nThe policy gradient method is similar to regular supervised learning, except we don't know the \"correct\" action for any given state. Plus we are only getting a single reward at the end of the episode. 
To give rewards to past actions we fake history by copying the final reward (and possibly intermediate rewards) back in time with a discount factor:\n\nThen for every time step, we use action.reinforce(reward) to encourage or discourage those actions.\nWe will use the value output of the network as a baseline, and use the difference of the reward and the baseline with reinforce. The value estimate itself is trained to be close to the actual reward with a MSE loss.", "gamma = 0.9 # Discounted reward factor\n\nmse = nn.MSELoss()\n\ndef finish_episode(e, actions, values, rewards):\n \n # Calculate discounted rewards, going backwards from end\n discounted_rewards = []\n R = 0\n for r in rewards[::-1]:\n R = r + gamma * R\n discounted_rewards.insert(0, R)\n discounted_rewards = torch.Tensor(discounted_rewards)\n\n # Use REINFORCE on chosen actions and associated discounted rewards\n value_loss = 0\n for action, value, reward in zip(actions, values, discounted_rewards):\n reward_diff = reward - value.data[0] # Treat critic value as baseline\n action.reinforce(reward_diff) # Try to perform better than baseline\n value_loss += mse(value, Variable(torch.Tensor([reward]))) # Compare with actual reward\n\n # Backpropagate\n optimizer.zero_grad()\n nodes = [value_loss] + actions\n gradients = [torch.ones(1)] + [None for _ in actions] # No gradients for reinforced values\n autograd.backward(nodes, gradients)\n optimizer.step()\n \n return discounted_rewards, value_loss", "With everything in place we can define the training parameters and create the actual Environment and Policy instances. We'll also use a SlidingAverage helper to keep track of average rewards over time.", "hidden_size = 50\nlearning_rate = 1e-4\nweight_decay = 1e-5\n\nlog_every = 1000\nrender_every = 20000\n\nenv = Environment()\npolicy = Policy(hidden_size=hidden_size)\noptimizer = optim.Adam(policy.parameters(), lr=learning_rate)#, weight_decay=weight_decay)\n\nreward_avg = SlidingAverage('reward avg', steps=log_every)\nvalue_avg = SlidingAverage('value avg', steps=log_every)\n", "Finally, we run a bunch of episodes and wait for some results. The average final reward will help us track whether it's learning. This took about an hour on a 2.8GHz CPU to get some reasonable results.", "e = 0\n\nwhile reward_avg < 0.75:\n actions, values, rewards = run_episode(e)\n final_reward = rewards[-1]\n \n discounted_rewards, value_loss = finish_episode(e, actions, values, rewards)\n \n reward_avg.add(final_reward)\n value_avg.add(value_loss.data[0])\n \n if e % log_every == 0:\n print('[epoch=%d]' % e, reward_avg, value_avg)\n \n if e > 0 and e % render_every == 0:\n animate(env.history)\n \n e += 1\n\n# Plot average reward and value loss\nplt.plot(np.array(reward_avg.avgs))\nplt.show()\nplt.plot(np.array(value_avg.avgs))\nplt.show()", "As you can see from the shape of the rewards graph, training these kinds of networks is a rollercoaster of luck.\nExercises\n\nUncomment the line in the Environment class that returns a reward every step - the agent tends learn a bit quicker because the effects of eating plants are more immediately rewarded.\nTry with a bigger grid size, bigger visible area, bigger network, etc.\nTry with a recurrent network - it will train slower (in clock time) but often reaches higher values in fewer episodes.\nObserve the effects of different learning rates and gamma values." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
lileiting/goatools
notebooks/relatonships_in_the_go.ipynb
bsd-2-clause
[ "Relationships in the GO\nAlex Warwick Vesztrocy, March 2016\nFor some analyses, it is possible to only use the <code>is_a</code> definitions given in the Gene Ontology. \nHowever, it is important to remember that this isn't always the case. As such, <code>GOATOOLS</code> includes the option to load the relationship definitions also.\nLoading GO graph with the relationship tags\nThis is possible by using the <code>optional_attrs</code> argument, upon instantiating a <code>GODag</code>. This can be done with either the full or the basic version of the ontology. Here, the full version shall be used.", "from goatools.obo_parser import GODag\nimport wget\n\ngo_fn = wget.download('http://geneontology.org/ontology/go.obo')\ngo = GODag(go_fn, optional_attrs=['relationship'])", "Viewing relationships in the GO graph\nSo now, when looking at an individual term (which has relationships defined in the GO) these are listed in a nested manner. As an example, look at <code>GO:1901990</code> which has a single <code>has_part</code> relationship, as well as a <code>regulates</code> one.", "eg_term = go['GO:1901990']\n\neg_term", "These different relationship types are stored as a dictionary within the relationship attribute on a GO term.", "print(eg_term.relationship.keys())\n\nprint(eg_term.relationship['regulates'])", "Example use case\nOne example use case for the relationship terms, would be to look for all functions which regulate pseudohyphal growth (<code>GO:0007124</code>). That is:\n\nA pattern of cell growth that occurs in conditions of nitrogen limitation and abundant fermentable carbon source. Cells become elongated, switch to a unipolar budding pattern, remain physically attached to each other, and invade the growth substrate. \nSource: https://www.ebi.ac.uk/QuickGO/GTerm?id=GO:0007124#term=info&info=1", "term_of_interest = go['GO:0007124']", "First, find the relationship types which contain \"regulates\":", "regulates = frozenset([typedef \n for typedef in go.typedefs.keys() \n if 'regulates' in typedef])\nprint(regulates)", "Now, search through the terms in the tree for those with a relationship in this list and add them to a dictionary dependent on the type of regulation.", "from collections import defaultdict\n\nregulating_terms = defaultdict(list)\n\nfor t in go.values():\n if hasattr(t, 'relationship'):\n for typedef in regulates.intersection(t.relationship.keys()):\n if term_of_interest in t.relationship[typedef]:\n regulating_terms['{:s}d_by'.format(typedef[:-1])].append(t)", "Now <code>regulating_terms</code> contains the GO terms which relate to regulating protein localisation to the nucleolus.", "print('{:s} ({:s}) is:'.format(term_of_interest.name, term_of_interest.id))\nfor (k, v) in regulating_terms.items():\n print('\\n - {:s}:'.format(k))\n for t in v:\n print(' -- {:s} {:s}'.format(t.id, t.name))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/sdk/sdk_automl_image_classification_online_export_edge.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex SDK: AutoML training image classification model for export to edge\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_classification_online_export_edge.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_classification_online_export_edge.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_classification_online_export_edge.ipynb\">\n Open in Google Cloud Notebooks\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex SDK to create image classification models to export as an Edge model using a Google Cloud AutoML model.\nDataset\nThe dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.\nObjective\nIn this tutorial, you create a AutoML image classification model from a Python script using the Vertex SDK, and then export the model as an Edge model in TFLite format. You can alternatively create models with AutoML using the gcloud command-line tool or online using the Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nExport the Edge model from the Model resource to Cloud Storage.\nDownload the model locally.\nMake a local prediction.\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. 
The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.environ[\"IS_TESTING\"]:\n ! pip3 install --upgrade tensorflow $USER_FLAG", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already authenticated. 
Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aip", "Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.", "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)", "Tutorial\nNow you are ready to start creating your own AutoML image classification model.\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.", "IMPORT_FILE = (\n \"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv\"\n)", "Quick peek at your data\nThis tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. 
You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.", "if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! gsutil cat $FILE | head", "Create the Dataset\nNext, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:\n\ndisplay_name: The human readable name for the Dataset resource.\ngcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.\nimport_schema_uri: The data labeling schema for the data items.\n\nThis operation may take several minutes.", "dataset = aip.ImageDataset.create(\n display_name=\"Flowers\" + \"_\" + TIMESTAMP,\n gcs_source=[IMPORT_FILE],\n import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,\n)\n\nprint(dataset.resource_name)", "Create and run training pipeline\nTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.\nCreate training pipeline\nAn AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the TrainingJob resource.\nprediction_type: The type task to train the model for.\nclassification: An image classification model.\nobject_detection: An image object detection model.\nmulti_label: If a classification task, whether single (False) or multi-labeled (True).\nmodel_type: The type of model for deployment.\nCLOUD: Deployment on Google Cloud\nCLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.\nCLOUD_LOW_LATENCY_: Optimized for latency over accuracy for deployment on Google Cloud.\nMOBILE_TF_VERSATILE_1: Deployment on an edge device.\nMOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device.\nMOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.\nbase_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.\n\nThe instantiated object is the DAG (directed acyclic graph) for the training job.", "dag = aip.AutoMLImageTrainingJob(\n display_name=\"flowers_\" + TIMESTAMP,\n prediction_type=\"classification\",\n multi_label=False,\n model_type=\"MOBILE_TF_LOW_LATENCY_1\",\n base_model=None,\n)\n\nprint(dag)", "Run the training pipeline\nNext, you run the DAG to start the training job by invoking the method run, with the following parameters:\n\ndataset: The Dataset resource to train the model.\nmodel_display_name: The human readable name for the trained model.\ntraining_fraction_split: The percentage of the dataset to use for training.\ntest_fraction_split: The percentage of the dataset to use for test (holdout data).\nvalidation_fraction_split: The percentage of the dataset to use for validation.\nbudget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).\ndisable_early_stopping: If True, training maybe completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.\n\nThe run method when completed returns the Model resource.\nThe execution of the training pipeline will take upto 20 minutes.", "model = dag.run(\n dataset=dataset,\n model_display_name=\"flowers_\" + TIMESTAMP,\n 
training_fraction_split=0.8,\n validation_fraction_split=0.1,\n test_fraction_split=0.1,\n budget_milli_node_hours=8000,\n disable_early_stopping=False,\n)", "Review model evaluation scores\nAfter your model has finished training, you can review the evaluation scores for it.\nFirst, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.", "# Get model resource ID\nmodels = aip.Model.list(filter=\"display_name=flowers_\" + TIMESTAMP)\n\n# Get a reference to the Model Service client\nclient_options = {\"api_endpoint\": f\"{REGION}-aiplatform.googleapis.com\"}\nmodel_service_client = aip.gapic.ModelServiceClient(client_options=client_options)\n\nmodel_evaluations = model_service_client.list_model_evaluations(\n parent=models[0].resource_name\n)\nmodel_evaluation = list(model_evaluations)[0]\nprint(model_evaluation)", "Export as Edge model\nYou can export an AutoML image classification model as a Edge model which you can then custom deploy to an edge device or download locally. Use the method export_model() to export the model to Cloud Storage, which takes the following parameters:\n\nartifact_destination: The Cloud Storage location to store the SavedFormat model artifacts to.\nexport_format_id: The format to save the model format as. For AutoML image classification there is just one option:\ntf-saved-model: TensorFlow SavedFormat for deployment to a container.\ntflite: TensorFlow Lite for deployment to an edge or mobile device.\nedgetpu-tflite: TensorFlow Lite for TPU\ntf-js: TensorFlow for web client\n\ncoral-ml: for Coral devices\n\n\nsync: Whether to perform operational sychronously or asynchronously.", "response = model.export_model(\n artifact_destination=BUCKET_NAME, export_format_id=\"tflite\", sync=True\n)\n\nmodel_package = response[\"artifactOutputUri\"]", "Download the TFLite model artifacts\nNow that you have an exported TFLite version of your model, you can test the exported model locally, but first downloading it from Cloud Storage.", "! gsutil ls $model_package\n# Download the model artifacts\n! gsutil cp -r $model_package tflite\n\ntflite_path = \"tflite/model.tflite\"", "Instantiate a TFLite interpreter\nThe TFLite version of the model is not a TensorFlow SavedModel format. You cannot directly use methods like predict(). Instead, one uses the TFLite interpreter. You must first setup the interpreter for the TFLite model as follows:\n\nInstantiate an TFLite interpreter for the TFLite model.\nInstruct the interpreter to allocate input and output tensors for the model.\nGet detail information about the models input and output tensors that will need to be known for prediction.", "import tensorflow as tf\n\ninterpreter = tf.lite.Interpreter(model_path=tflite_path)\ninterpreter.allocate_tensors()\n\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\ninput_shape = input_details[0][\"shape\"]\n\nprint(\"input tensor shape\", input_shape)", "Get test item\nYou will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.", "test_items = ! 
gsutil cat $IMPORT_FILE | head -n1\ntest_item = test_items[0].split(\",\")[0]\n\nwith tf.io.gfile.GFile(test_item, \"rb\") as f:\n content = f.read()\ntest_image = tf.io.decode_jpeg(content)\nprint(\"test image shape\", test_image.shape)\n\ntest_image = tf.image.resize(test_image, (224, 224))\nprint(\"test image shape\", test_image.shape, test_image.dtype)\n\ntest_image = tf.cast(test_image, dtype=tf.uint8).numpy()", "Make a prediction with TFLite model\nFinally, you do a prediction using your TFLite model, as follows:\n\nConvert the test image into a batch of a single image (np.expand_dims)\nSet the input tensor for the interpreter to your batch of a single image (data).\nInvoke the interpreter.\nRetrieve the softmax probabilities for the prediction (get_tensor).\nDetermine which label had the highest probability (np.argmax).", "import numpy as np\n\ndata = np.expand_dims(test_image, axis=0)\n\ninterpreter.set_tensor(input_details[0][\"index\"], data)\n\ninterpreter.invoke()\n\nsoftmax = interpreter.get_tensor(output_details[0][\"index\"])\n\nlabel = np.argmax(softmax)\n\nprint(label)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_all = True\n\nif delete_all:\n # Delete the dataset using the Vertex dataset object\n try:\n if \"dataset\" in globals():\n dataset.delete()\n except Exception as e:\n print(e)\n\n # Delete the model using the Vertex model object\n try:\n if \"model\" in globals():\n model.delete()\n except Exception as e:\n print(e)\n\n # Delete the endpoint using the Vertex endpoint object\n try:\n if \"endpoint\" in globals():\n endpoint.delete()\n except Exception as e:\n print(e)\n\n # Delete the AutoML or Pipeline trainig job\n try:\n if \"dag\" in globals():\n dag.delete()\n except Exception as e:\n print(e)\n\n # Delete the custom trainig job\n try:\n if \"job\" in globals():\n job.delete()\n except Exception as e:\n print(e)\n\n # Delete the batch prediction job using the Vertex batch prediction object\n try:\n if \"batch_predict_job\" in globals():\n batch_predict_job.delete()\n except Exception as e:\n print(e)\n\n # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n try:\n if \"hpt_job\" in globals():\n hpt_job.delete()\n except Exception as e:\n print(e)\n\n if \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fangohr/paper-supplement-2016-dmi-nanocylinder-hysteresis
notebooks/data-format-explanations.ipynb
mit
[ "Explantions of the data formats\nThe raw data for figures 1 and 2 are found the directories, data/figure_1/ and data/figure_2 respectively.\nThis notebook explains the different data types found in the these directories.", "import numpy as np", "Figure 1\nhysteresis_loops\nThe data for the hysteresis loops is found in ../data/figure_1/hysteresis_loops/sim_hysteresis_FeGe_nanodisk_d150_h*.npy where * specifies the thickness/height of the nanocylinder for 10-90nm in steps of 5nm.\nThese files contain the components of the average magnetisation of the sample, $m_x$, $m_y$, $m_z$ and the average energy, $E$ of the sample recorded at each field step of the hysteresis (there were 801 field steps in total for the hysteresis field swept up and down).\nThe data was stored as a numpy array in the form ($m_x$, $m_y$, $m_z$, $E$), and thus has a shape, (4, 801).", "mx, my, mz, E = np.load('../data/figure_1/hysteresis_loops/sim_hysteresis_FeGe_nanodisk_d150_h20.npy')", "magnetisation_profiles\nThe magnetisation profile ($m_z$ sampled along the diameter, on the top surface of a nanocylinder sample) data was sampled at 100 equally spaced points along the diameter for the 35nm and 55nm nanocylinders and at 200 equally spaced points along the diameter for the 20nm samples (there were no particular reasons for the more refined sample).\n../data/figure_1/figure_1/magnetisation_profiles/hysteresis_probe_d150_h55_mz*.npy where * indicates the step number in the hysteresis (801 in total), from when the data was sampled.", "mz_profile = np.load('../data/figure_1/magnetisation_profiles/hysteresis_probe_d150_h55_mz210.npy')", "3d_data\nSeveral png files of showing the 3d images of the magnetisation at various stages throughout the hysteresis of a 55nm thick nanocylinder can be found at in the directory, ../data/figure_1/3d_data/images/sim_hysteresis_FeGe_nanodisk_d150_h55_*.png, where * is the step number from the hysteresis from which the state appeared.\nThese images were generated from vtk files, whch can be found at `../data/figure_1/3d_data/vtk/'\nusing the 3D visulisation programme, Paraview (http://www.paraview.org/).\nFigure 2\nAt each point in the hysteresis, the state type was determined through the image detection approach and saved into a multi dimensional array of the shape (H, t), with H being the external hysteresis field and t being the thickness.\nThe array was populated with integers ranging from 0-5, depending on the state type, where:\n\n0: incomplete skyrmion (core down)\n1: transition state (no radial symmetry)\n2: skyrmion (core up)\n3: target state\n4: skyrmion (core down)\n5: incomplete skyrmion (core up)", "state_types = np.load('../data/figure_2/phase_diagram_state_types_demag.npy')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dfm/dfm.io
static/downloads/notebooks/pymc-tensorflow.ipynb
mit
[ "Title: PyMC3 + TensorFlow\nDate: 2018-08-02\nCategory: Data Analysis\nSlug: pymc-tensorflow\nSummary: the most ambitious crossover event in history\nMath: true", "%matplotlib inline\n%config InlineBackend.figure_format = \"retina\"\n\nfrom matplotlib import rcParams\nrcParams[\"savefig.dpi\"] = 100\nrcParams[\"figure.dpi\"] = 100\nrcParams[\"font.size\"] = 20", "In this tutorial, I will describe a hack that let's us use PyMC3 to sample a probability density defined using TensorFlow.\nThis isn't necessarily a Good Idea™, but I've found it useful for a few projects so I wanted to share the method.\nTo start, I'll try to motivate why I decided to attempt this mashup, and then I'll give a simple example to demonstrate how you might use this technique in your own work.\nWhy TensorFlow?\nI recently started using TensorFlow as a framework for probabilistic modeling (and encouraging other astronomers to do the same) because the API seemed stable and it was relatively easy to extend the language with custom operations written in C++.\nThis second point is crucial in astronomy because we often want to fit realistic, physically motivated models to our data, and it can be inefficient to implement these algorithms within the confines of existing probabilistic programming languages.\nTo this end, I have been working on developing various custom operations within TensorFlow to implement scalable Gaussian processes and various special functions for fitting exoplanet data (Foreman-Mackey et al., in prep, ha!).\nThese experiments have yielded promising results, but my ultimate goal has always been to combine these models with Hamiltonian Monte Carlo sampling to perform posterior inference.\nI don't know of any Python packages with the capabilities of projects like PyMC3 or Stan that support TensorFlow out of the box.\nI know that Edward/TensorFlow probability has an HMC sampler, but it does not have a NUTS implementation, tuning heuristics, or any of the other niceties that the MCMC-first libraries provide.\nWhy HMC?\nThe benefit of HMC compared to some other MCMC methods (including one that I wrote) is that it is substantially more efficient (i.e. 
requires less computation time per independent sample) for models with large numbers of parameters.\nTo achieve this efficiency, the sampler uses the gradient of the log probability function with respect to the parameters to generate good proposals.\nThis means that it must be possible to compute the first derivative of your model with respect to the input parameters.\nTo do this in a user-friendly way, most popular inference libraries provide a modeling framework that users must use to implement their model and then the code can automatically compute these derivatives.\nWhy PyMC3?\nAs far as I can tell, there are two popular libraries for HMC inference in Python: PyMC3 and Stan (via the pystan interface).\nI have previously blogged about extending Stan using custom C++ code and a forked version of pystan, but I haven't actually been able to use this method for my research because debugging any code more complicated than the one in that example ended up being far too tedious.\nFurthermore, since I generally want to do my initial tests and make my plots in Python, I always ended up implementing two version of my model (one in Stan and one in Python) and it was frustrating to make sure that these always gave the same results.\nPyMC3 is much more appealing to me because the models are actually Python objects so you can use the same implementation for sampling and pre/post-processing.\nThe catch with PyMC3 is that you must be able to evaluate your model within the Theano framework and I wasn't so keen to learn Theano when I had already invested a substantial amount of time into TensorFlow and since Theano has been deprecated as a general purpose modeling language.\nWhat I really want is a sampling engine that does all the tuning like PyMC3/Stan, but without requiring the use of a specific modeling framework.\nI imagine that this interface would accept two Python functions (one that evaluates the log probability, and one that evaluates its gradient) and then the user could choose whichever modeling stack they want.\nThat being said, my dream sampler doesn't exist (despite my weak attempt to start developing it) so I decided to see if I could hack PyMC3 to do what I wanted.\nThe TensorFlow + Theano mashup\nTo get started on implementing this, I reached out to Thomas Wiecki (one of the lead developers of PyMC3 who has written about a similar MCMC mashups) for tips,\nHe came back with a few excellent suggestions, but the one that really stuck out was to \"...write your logp/dlogp as a theano op that you then use in your (very simple) model definition\".\nThe basic idea here is that, since PyMC3 models are implemented using Theano, it should be possible to write an extension to Theano that knows how to call TensorFlow.\nThen, this extension could be integrated seamlessly into the model.\nThe two key pages of documentation are the Theano docs for writing custom operations (ops) and the PyMC3 docs for using these custom ops.\nAfter starting on this project, I also discovered an issue on GitHub with a similar goal that ended up being very helpful.\nBased on these docs, my complete implementation for a custom Theano op that calls TensorFlow is given below.\nThis implemetation requires two theano.tensor.Op subclasses, one for the operation itself (TensorFlowOp) and one for the gradient operation (_TensorFlowGradOp).\nLike Theano, TensorFlow has support for reverse-mode automatic differentiation, so we can use the tf.gradients function to provide the gradients for the op.", "import numpy as np\n\nimport 
tensorflow as tf\nsession = tf.get_default_session()\nif session is None:\n session = tf.InteractiveSession()\n\nimport theano\nimport theano.tensor as tt\n\ndef _to_tensor_type(shape):\n return tt.TensorType(dtype=\"float64\", broadcastable=[False]*len(shape))\n\n\nclass TensorFlowOp(tt.Op):\n \"\"\"A custom Theano Op uses TensorFlow as the computation engine\n \n Args:\n target (Tensor): The TensorFlow tensor defining the output of\n this operation\n parameters (list(Tensor)): A list of TensorFlow tensors that\n are inputs to this operation\n names (Optional(list)): A list of names for the parameters.\n These are the names that will be used within PyMC3\n feed_dict (Optional(dict)): A \"feed_dict\" that is provided to\n the TensorFlow session when the operation is executed\n session (Optional): A TensorFlow session that can be used to\n evaluate the operation\n \n \"\"\"\n def __init__(self, target, parameters, names=None, feed_dict=None, session=None):\n self.parameters = parameters\n self.names = names\n self._feed_dict = dict() if feed_dict is None else feed_dict\n self._session = session\n self.target = target\n \n # Execute the operation once to work out the shapes of the\n # parameters and the target\n in_values, out_value = self.session.run(\n [self.parameters, self.target], feed_dict=self._feed_dict)\n self.shapes = [np.shape(v) for v in in_values]\n self.output_shape = np.shape(out_value)\n \n # Based on this result, work out the shapes that the Theano op\n # will take in and return\n self.itypes = tuple([_to_tensor_type(shape) for shape in self.shapes])\n self.otypes = tuple([_to_tensor_type(self.output_shape)])\n \n # Build another custom op to represent the gradient (see below)\n self._grad_op = _TensorFlowGradOp(self)\n\n @property\n def session(self):\n \"\"\"The TensorFlow session associated with this operation\"\"\"\n if self._session is None:\n self._session = tf.get_default_session()\n return self._session\n \n def get_feed_dict(self, sample):\n \"\"\"Get the TensorFlow feed_dict for a given sample\n \n This method will only work when a value for ``names`` was provided\n during instantiation.\n \n sample (dict): The specification of a specific sample in the chain\n \n \"\"\"\n if self.names is None:\n raise RuntimeError(\"'names' must be set in order to get the feed_dict\")\n return dict(((param, sample[name])\n for name, param in zip(self.names, self.parameters)),\n **self._feed_dict)\n \n def infer_shape(self, node, shapes):\n \"\"\"A required method that returns the shape of the output\"\"\"\n return self.output_shape,\n\n def perform(self, node, inputs, outputs):\n \"\"\"A required method that actually executes the operation\"\"\"\n # To execute the operation using TensorFlow we must map the inputs from\n # Theano to the TensorFlow parameter using a \"feed_dict\"\n feed_dict = dict(zip(self.parameters, inputs), **self._feed_dict)\n outputs[0][0] = np.array(self.session.run(self.target, feed_dict=feed_dict))\n\n def grad(self, inputs, gradients):\n \"\"\"A method that returns Theano op to compute the gradient\n \n In this case, we use another custom op (see the definition below).\n \n \"\"\"\n op = self._grad_op(*(inputs + gradients))\n # This hack seems to be required for ops with a single input\n if not isinstance(op, (list, tuple)):\n return [op]\n return op\n\nclass _TensorFlowGradOp(tt.Op):\n \"\"\"A custom Theano Op defining the gradient of a TensorFlowOp\n \n Args:\n base_op (TensorFlowOp): The original Op\n \n \"\"\"\n def __init__(self, base_op):\n 
self.base_op = base_op\n \n # Build the TensorFlow operation to apply the reverse mode\n # autodiff for this operation\n # The placeholder is used to include the gradient of the\n # output as a seed\n self.dy = tf.placeholder(tf.float64, base_op.output_shape)\n self.grad_target = tf.gradients(base_op.target,\n base_op.parameters,\n grad_ys=self.dy)\n\n # This operation will take the original inputs and the gradient\n # seed as input\n types = [_to_tensor_type(shape) for shape in base_op.shapes]\n self.itypes = tuple(types + [_to_tensor_type(base_op.output_shape)])\n self.otypes = tuple(types)\n \n def infer_shape(self, node, shapes):\n return self.base_op.shapes\n\n def perform(self, node, inputs, outputs):\n feed_dict = dict(zip(self.base_op.parameters, inputs[:-1]),\n **self.base_op._feed_dict)\n feed_dict[self.dy] = inputs[-1]\n result = self.base_op.session.run(self.grad_target, feed_dict=feed_dict)\n for i, r in enumerate(result):\n outputs[i][0] = np.array(r)", "We can test that our op works for some simple test cases.\nFor example, we can add a simple (read: silly) op that uses TensorFlow to perform an elementwise square of a vector.", "from theano.tests import unittest_tools as utt\nnp.random.seed(42)\n\n# Define the operation in TensorFlow\nx = tf.Variable(np.random.randn(5), dtype=tf.float64)\nsq = tf.square(x)\nsession.run(tf.global_variables_initializer())\n\n# Define the Theano op\nsquare_op = TensorFlowOp(sq, [x])\n\n# Test that the gradient is correct\npt = session.run(square_op.parameters)\nutt.verify_grad(square_op, pt)", "This is obviously a silly example because Theano already has this functionality, but this can also be generalized to more complicated models.\nThis TensorFlowOp implementation will be sufficient for our purposes, but it has some limitations including:\n\nBy design, the output of the operation must be a single tensor. It shouldn't be too hard to generalize this to multiple outputs if you need to, but I haven't tried.\nThe input and output variables must have fixed dimensions. 
When the TensorFlowOp is initialized, the input and output tensors will be evaluated using the current TensorFlow session to work out the shapes.\netc., I'm sure.\n\nAn example\nFor this demonstration, we'll fit a very simple model that would actually be much easier to just fit using vanilla PyMC3, but it'll still be useful for demonstrating what we're trying to do.\nWe'll fit a line to data with the likelihood function:\n$$\np({y_n}\\,|\\,m,\\,b,\\,s) = \\prod_{n=1}^N \\frac{1}{\\sqrt{2\\,\\pi\\,s^2}}\\,\\exp\\left(-\\frac{(y_n-m\\,x_n-b)^2}{s^2}\\right)\n$$\nwhere $m$, $b$, and $s$ are the parameters.\nWe'll choose uniform priors on $m$ and $b$, and a log-uniform prior for $s$.\nTo get started, generate some data:", "import numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(42)\n\ntrue_params = np.array([0.5, -2.3, -0.23])\n\nN = 50\nt = np.linspace(0, 10, 2)\nx = np.random.uniform(0, 10, 50)\ny = x * true_params[0] + true_params[1]\ny_obs = y + np.exp(true_params[-1]) * np.random.randn(N)\n\nplt.plot(x, y_obs, \".k\", label=\"observations\")\nplt.plot(t, true_params[0]*t + true_params[1], label=\"truth\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.legend(fontsize=14);", "Next, define the log-likelihood function in TensorFlow:", "m_tensor = tf.Variable(0.0, dtype=tf.float64, name=\"m\")\nb_tensor = tf.Variable(0.0, dtype=tf.float64, name=\"b\")\nlogs_tensor = tf.Variable(0.0, dtype=tf.float64, name=\"logs\")\n\nt_tensor = tf.constant(t, dtype=tf.float64)\nx_tensor = tf.constant(x, dtype=tf.float64)\ny_tensor = tf.constant(y_obs, dtype=tf.float64)\n\nmean = m_tensor * x_tensor + b_tensor\npred = m_tensor * t_tensor + b_tensor\n\nloglike = -0.5 * tf.reduce_sum(tf.square(y_tensor - mean)) * tf.exp(-2*logs_tensor)\nloglike -= 0.5 * N * logs_tensor\n\nsession.run(tf.global_variables_initializer())", "And then we can fit for the maximum likelihood parameters using an optimizer from TensorFlow:", "params = [m_tensor, b_tensor, logs_tensor]\nopt = tf.contrib.opt.ScipyOptimizerInterface(-loglike, params)\nopt.minimize(session)", "Here is the maximum likelihood solution compared to the data and the true relation:", "plt.plot(x, y_obs, \".k\", label=\"observations\")\nplt.plot(t, true_params[0]*t + true_params[1], label=\"truth\")\nplt.plot(t, pred.eval(), label=\"max.\\ like.\")\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.legend(fontsize=14);", "Finally, let's use PyMC3 to generate posterior samples for this model:", "import pymc3 as pm\n\n# First, expose the TensorFlow log likelihood implementation to Theano\n# so that PyMC3 can use it\n# NOTE: The \"names\" parameter refers to the names that will be used in\n# in the PyMC3 model (see below)\ntf_loglike = TensorFlowOp(loglike, [m_tensor, b_tensor, logs_tensor],\n names=[\"m\", \"b\", \"logs\"])\n\n# Test the gradient\npt = session.run(tf_loglike.parameters)\nutt.verify_grad(tf_loglike, pt)\n\n# Set up the model as usual\nwith pm.Model() as model:\n # Uniform priors on all the parameters\n m = pm.Uniform(\"m\", -5, 5)\n b = pm.Uniform(\"b\", -5, 5)\n logs = pm.Uniform(\"logs\", -5, 5)\n \n # Define a custom \"potential\" to calculate the log likelihood\n pm.Potential(\"loglike\", tf_loglike(m, b, logs))\n \n # NOTE: You *must* use \"cores=1\" because TensorFlow can't deal\n # with being pickled!\n trace = pm.sample(1000, tune=2000, cores=1, nuts_kwargs=dict(target_accept=0.9))", "After sampling, we can make the usual diagnostic plots.\nFirst, the trace plots:", "pm.traceplot(trace);", "Then the \"corner\" plot:", "# 
http://corner.readthedocs.io\nimport corner\n\nsamples = np.vstack([trace[k].flatten() for k in [\"m\", \"b\", \"logs\"]]).T\ncorner.corner(samples, labels=[\"m\", \"b\", \"log(s)\"]);", "And finally the posterior predictions for the line:", "plt.plot(x, y_obs, \".k\", label=\"observations\")\n\nfor j in np.random.randint(len(trace), size=25):\n feed_dict = tf_loglike.get_feed_dict(trace[j])\n plt.plot(t, pred.eval(feed_dict=feed_dict), color=\"C1\", alpha=0.3)\n\nplt.plot(t, true_params[0]*t + true_params[1], label=\"truth\")\nplt.plot([], [], color=\"C1\", label=\"post.\\ samples\")\n\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.legend(fontsize=14);", "Conclusion\nIn this post, I demonstrated a hack that allows us to use PyMC3 to sample a model defined using TensorFlow.\nThis might be useful if you already have an implementation of your model in TensorFlow and don't want to learn how to port it it Theano, but it also presents an example of the small amount of work that is required to support non-standard probabilistic modeling languages with PyMC3.\nIt should be possible (easy?) to implement something similar for TensorFlow probability, PyTorch, autograd, or any of your other favorite modeling frameworks.\nI hope that you find this useful in your research and don't forget to cite PyMC3 in all your papers. Thanks for reading!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joekasp/ionic_liquids
ionic_liquids/examples/.ipynb_checkpoints/Example_Workflow-checkpoint.ipynb
mit
[ "Example of the Workflow\nThis is an example of main.py in the ionic_liquids folder. I will first have to import the libraries that are necessary to run this program, including train_test_split that allows for splitting datasets into training sets and test sets necessary to run machine learning.", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom rdkit import Chem\nfrom rdkit.Chem import AllChem, Descriptors\nfrom rdkit.ML.Descriptors.MoleculeDescriptors import MolecularDescriptorCalculator as Calculator", "For this example, I will utilize the following filename, machine learning model, and directory name to save the model.", "FILENAME = 'inputdata2.xlsx'\nMODEL = 'mlp_regressor'\nDIRNAME = 'my_test'", "The following step prepares the data to be read in the machine_learning methods. First, we need to get the data into a readable form and parse, if necessary. In our case, we need to parse the values and errors in the last column of the FILENAME.", "def read_data(filename):\n \"\"\"\n Reads data in from given file to Pandas DataFrame\n\n Inputs\n -------\n filename : string of path to file\n\n Returns\n ------\n df : Pandas DataFrame\n\n \"\"\"\n cols = filename.split('.')\n name = cols[0]\n filetype = cols[1]\n if (filetype == 'csv'):\n df = pd.read_csv(filename)\n elif (filetype in ['xls', 'xlsx']):\n df = pd.read_excel(filename)\n else:\n raise ValueError('Filetype not supported')\n\n # clean the data if necessary\n df['EC_value'], df['EC_error'] = zip(*df['ELE_COD'].map(lambda x: x.split('±')))\n df = df.drop('EC_error', 1)\n df = df.drop('ELE_COD', 1)\n\n return df\n\ndf = read_data(FILENAME)\n", "Secondly, we will create a X matrix and y vector that are send to the molecular descriptor function in utils.py. 
The X matrix will hold all of our inputs for the machine learning whereas y vector will be the actual electronic conductivity values.", "def molecular_descriptors(data):\n \"\"\"\n Use RDKit to prepare the molecular descriptor\n\n Inputs\n ------\n data: dataframe, cleaned csv data\n\n Returns\n ------\n prenorm_X: normalized input features\n Y: experimental electrical conductivity\n\n \"\"\"\n\n n = data.shape[0]\n # Choose which molecular descriptor we want\n list_of_descriptors = ['NumHeteroatoms', 'ExactMolWt',\n 'NOCount', 'NumHDonors',\n 'RingCount', 'NumAromaticRings', \n 'NumSaturatedRings', 'NumAliphaticRings']\n # Get the molecular descriptors and their dimension\n calc = Calculator(list_of_descriptors)\n D = len(list_of_descriptors)\n d = len(list_of_descriptors)*2 + 4\n\n Y = data['EC_value']\n X = np.zeros((n, d))\n X[:, -3] = data['T']\n X[:, -2] = data['P']\n X[:, -1] = data['MOLFRC_A']\n for i in range(n):\n A = Chem.MolFromSmiles(data['A'][i])\n B = Chem.MolFromSmiles(data['B'][i])\n X[i][:D] = calc.CalcDescriptors(A)\n X[i][D:2*D] = calc.CalcDescriptors(B)\n\n prenorm_X = pd.DataFrame(X,columns=['NUM', 'NumHeteroatoms_A', \n 'MolWt_A', 'NOCount_A','NumHDonors_A', \n 'RingCount_A', 'NumAromaticRings_A', \n 'NumSaturatedRings_A',\n 'NumAliphaticRings_A', \n 'NumHeteroatoms_B', 'MolWt_B', \n 'NOCount_B', 'NumHDonors_B',\n 'RingCount_B', 'NumAromaticRings_B', \n 'NumSaturatedRings_B', \n 'NumAliphaticRings_B',\n 'T', 'P', 'MOLFRC_A'])\n\n prenorm_X = prenorm_X.drop('NumAliphaticRings_A', 1)\n prenorm_X = prenorm_X.drop('NumAliphaticRings_B', 1)\n\n return prenorm_X, Y\nX, y = molecular_descriptors(df)", "We can prepare our testing and training data set for the machine learning calling using train_test_split, a function called from sklearn module of python.", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)", "Followingly, the program will normalize the testing data using the training data set. This will also provide us with the mean value and standard deviation of X.", "def normalization(data, means=None, stdevs=None):\n \"\"\"\n Normalizes the data using the means and standard\n deviations given, calculating them otherwise.\n Returns the means and standard deviations of columns.\n\n Inputs\n ------\n data : Pandas DataFrame\n means : optional numpy argument of column means\n stdevs : optional numpy argument of column st. devs\n\n Returns\n ------\n normed : the normalized DataFrame\n means : the numpy row vector of column means\n stdevs : the numpy row vector of column st. devs\n\n \"\"\"\n cols = data.columns\n data = data.values\n\n if (means is None) or (stdevs is None):\n means = np.mean(data, axis=0)\n stdevs = np.std(data, axis=0, ddof=1)\n else:\n means = np.array(means)\n stdevs = np.array(stdevs)\n\n # handle special case of one row\n if (len(data.shape) == 1) or (data.shape[0] == 1):\n for i in range(len(data)):\n data[i] = (data[i] - means[i]) / stdevs[i]\n else: \n for i in range(data.shape[1]):\n data[:,i] = (data[:,i] - means[i]*np.ones(data.shape[0])) / stdevs[i]\n\n normed = pd.DataFrame(data, columns=cols)\n\n return normed, means, stdevs\n\nX_train, X_mean, X_std = normalization(X_train)\nX_test, trash, trash = normalization(X_test, X_mean, X_std)", "We coded three models into our program: MLP_regressor, LASSO, and SVR. Each of these models are well documented in sklearn, a library in python. In the actual program, you can use all three models, but for the purpose of this example, we chose mlp_regressor. 
The ValueError will only raise if you do not use one of the three models. A good example is if you were to change the MODEL used to 'MLP_classifier'.", "if (MODEL.lower() == 'mlp_regressor'):\n obj = methods.do_MLP_regressor(X_train, y_train)\nelif (MODEL.lower() == 'lasso'):\n obj = methods.do_lasso(X_train, y_train)\nelif (MODEL.lower() == 'svr'):\n obj = methods.do_svr(X_train, y_train)\nelse:\n raise ValueError(\"Model not supported\")", "After the method is called , it will be saved to an objective. This objective is saved along with the mean and standard deviation and the training set in the directory, named DIRNAME. This step is not as important for the workflow but vital to the success of the graphical user interface.", "def save_model(obj, X_mean, X_stdev, X=None, y=None, dirname='default'):\n \"\"\"\n Save the trained regressor model to the file\n\n Input\n ------\n obj: model object\n X_mean : mean for each column of training X\n X_stdev : stdev for each column of training X\n X : Predictor matrix\n y : Response vector\n dirname : the directory to save contents\n\n Returns\n ------\n None\n \"\"\"\n if (dirname == 'default'):\n timestamp = str(datetime.now())[:19]\n dirname = 'model_'+timestamp.replace(' ', '_')\n else:\n pass\n if not os.path.exists(dirname):\n os.makedirs(dirname)\n\n filename = dirname + '/model.pkl'\n joblib.dump(obj, filename)\n\n joblib.dump(X_mean, dirname+'/X_mean.pkl')\n joblib.dump(X_stdev, dirname+'/X_stdev.pkl')\n\n if (X is not None):\n filename = dirname + '/X_data.pkl'\n joblib.dump(X, filename)\n else:\n pass\n\n if (y is not None):\n filename = dirname + '/y_data.pkl'\n joblib.dump(y, filename)\n else:\n pass\n\n return\n\nsave_model(obj, X_mean, X_std, X_train, y_train, dirname=DIRNAME)", "Lastly, the experimental values will be scatter plotted against the predicted values. We will use the parity_plot to do so. plt.show() function will just allow the plot to show up.", "def parity_plot(y_pred, y_act):\n \"\"\"\n Creates a parity plot\n\n Input\n -----\n y_pred : predicted values from the model\n y_act : 'true' (actual) values\n\n Output\n ------\n fig : matplotlib figure\n\n \"\"\"\n\n fig = plt.figure(figsize=FIG_SIZE)\n plt.scatter(y_act, y_pred)\n plt.plot([y_act.min(), y_act.max()], [y_act.min(), y_act.max()],\n lw=4, color='r')\n plt.xlabel('Actual')\n plt.ylabel('Predicted')\n\n return fig\n\nmy_plot = parity_plot(y_train, obj.predict(X_train))\nplt.show(my_plot)", "Feel free to look at the other examples that will be more explicit about the functions. I hope you enjoy our package and use it to fit your needs!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
stsouko/CGRtools
doc/tutorial/5_transformation_rules.ipynb
lgpl-3.0
[ "5. Transformation rules extraction\n\n(c) 2019, 2020 Dr. Ramil Nugmanov;\n(c) 2019 Dr. Timur Madzhidov; Ravil Mukhametgaleev\n\nInstallation instructions of CGRtools package information and tutorial's files see on https://github.com/stsouko/CGRtools\nNOTE: Tutorial should be performed sequentially from the start. Random cell running will lead to unexpected results.", "import pkg_resources\nif pkg_resources.get_distribution('CGRtools').version.split('.')[:2] != ['4', '0']:\n print('WARNING. Tutorial was tested on 4.0 version of CGRtools')\nelse:\n print('Welcome!')\n\n# load data for tutorial\nfrom pickle import load\nfrom traceback import format_exc\n\nwith open('molecules.dat', 'rb') as f:\n molecules = load(f) # list of MoleculeContainer objects\nwith open('reactions.dat', 'rb') as f:\n reactions = load(f) # list of ReactionContainer objects\n\nm3 = molecules[2]\nm7 = m3.copy()\nm7.standardize()\nm7.thiele()\nm8 = m7.substructure([4, 5, 6, 7, 8, 9])\ncgr1 = m7 ^ m8 \n\nfrom CGRtools.containers import *", "CGRtools can be used to generate molecules and reactions based on a given transformation rule.\nHow to extract transformation rule", "cgr1\n\ncgr1.center_atoms # list of atom numbers of reaction center. If several centers exist they will also be added to this list.\n\ncgr1.center_bonds # list of dynamic bonds as tuples of adjacent atom numbers\n\ncgr1.centers_list # list of lists of atom numbers belonging to each reaction center. \n # Distant reaction centers will be split into separate lists\n\nrc1 = cgr1.substructure([13, 7], as_query=True) # get reaction center from CGR and transform reaction into query\nprint(rc1)\nrc1", "rc1 is phenol reduction, phenol is transformed into unsubstituted benzene" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
mohanprasath/Course-Work
coursera/python_for_data_science/3.4_Objects_and_Classes.ipynb
gpl-3.0
[ "<a href=\"http://cocl.us/topNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png\" width = 750, align = \"center\"></a>\n<a href=\"https://www.bigdatauniversity.com\"><img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 300, align = \"center\"></a>\n<h1, align=center>PYTHON OBJECTS AND CLASSES</h1>\nWelcome!\nObjects in programming are like objects in real life. Like life, there are different classes of objects. In this notebook, we will create two classes called Circle and Rectangle. By the end of this notebook, you will have a better idea about :\n-what a class is\n-what an attribute is\n-what a method is\nDon’t worry if you don’t get it the first time, as much of the terminology is confusing. Don’t forget to do the practice tests in the notebook.\nIntroduction\nCreating a Class\nThe first part of creating a class is giving it a name: In this notebook, we will create two classes, Circle and Rectangle. We need to determine all the data that make up that class, and we call that an attribute. Think about this step as creating a blue print that we will use to create objects. In figure 1 we see two classes, circle and rectangle. Each has their attributes, they are variables. The class circle has the attribute radius and colour, while the rectangle has the attribute height and width. Let’s use the visual examples of these shapes before we get to the code, as this will help you get accustomed to the vocabulary.\n<a ><img src = \"https://ibm.box.com/shared/static/h2w03relr84lb8ofto2zk0dp9naiykfg.png\" width = 500, align = \"center\"></a>\n \n<h4 align=center>\n\n\n#### Figure 1: Classes circle and rectangle, and each has their own attributes. The class circle has the attribute radius and colour, the rectangle has the attribute height and width. \n\n\n#### Instances of a Class: Objects and Attributes\n\nAn instance of an object is the realisation of a class, and in figure 2 we see three instances of the class circle. We give each object a name: red circle, yellow circle and green circle. Each object has different attributes, so let's focus on the attribute of colour for each object.\n\n <a ><img src = \"https://ibm.box.com/shared/static/bz20uxc78sbv8knixnl3a52z2u2r74zp.png\" width = 500, align = \"center\"></a>\n <h4 align=center>\n Figure 2: Three instances of the class circle or three objects of type circle. \n\n\n\n\n The colour attribute for the red circle is the colour red, for the green circle object the colour attribute is green, and for the yellow circle the colour attribute is yellow. \n\n\n#### Methods \n\nMethods give you a way to change or interact with the object; they are functions that interact with objects. For example, let’s say we would like to increase the radius by a specified amount of a circle. We can create a method called **add_radius(r)** that increases the radius by **r**. This is shown in figure 3, where after applying the method to the \"orange circle object\", the radius of the object increases accordingly. The “dot” notation means to apply the method to the object, which is essentially applying a function to the information in the object.\n\n <a ><img src = \"https://ibm.box.com/shared/static/53b39xh7snepk0my8z7t9n9wzres4drf.png\" width = 500, align = \"center\"></a>\n <h4 align=center>\n Figure 3: Applying the method “add_radius” to the object orange circle object . 
\n\n\n\n\n# Creating a Class\n\nNow we are going to create a class circle, but first, we are going to import a library to draw the objects:", "import matplotlib.pyplot as plt\n%matplotlib inline \n", "The first step in creating your own class is to use the class keyword, then the name of the class as shown in Figure 4. In this course the class parent will always be object: \n<a ><img src = \"https://ibm.box.com/shared/static/q9394f3aip7lbu4k1yct5pczst5ec3sk.png\" width = 400, align = \"center\"></a>\n \n<h4 align=center>\n Figure 4: Three instances of the class circle or three objects of type circle. \n\n\n\nThe next step is a special method called a constructor **__init__**, which is used to initialize the object. The input are data attributes. The term **self** contains all the attributes in the set. For example the **self.color** gives the value of the attribute colour and **self.radius** will give you the radius of the object. We also have the method **add_radius()** with the parameter **r**, the method adds the value of **r** to the attribute radius. To access the radius we use the sintax **self.radius**. The labeled syntax is summarized in Figure 5:\n\n\n\n <a ><img src = \"https://ibm.box.com/shared/static/25j0jezklf6snhh3ps61d0djzwx8kgwa.png\" width = 600, align = \"center\"></a>\n <h4 align=center>\n Figure 5: Labeled syntax of the object circle.\n\n\n\n\nThe actual object is shown below. We include the method drawCircle to display the image of a circle. We set the default radius to 3 and the default colour to blue:", "\nclass Circle(object):\n \n def __init__(self,radius=3,color='blue'):\n \n self.radius=radius\n self.color=color \n \n def add_radius(self,r):\n \n self.radius=self.radius+r\n return(self.radius)\n def drawCircle(self):\n \n plt.gca().add_patch(plt.Circle((0, 0), radius=self.radius, fc=self.color))\n plt.axis('scaled')\n plt.show() ", "Creating an instance of a class Circle\nLet’s create the object RedCircle of type Circle to do the following:", "RedCircle=Circle(10,'red')", "We can use the dir command to get a list of the object's methods. Many of them are default Python methods.", "dir(RedCircle)", "We can look at the data attributes of the object:", "RedCircle.radius\n\nRedCircle.color", "We can change the object's data attributes:", "RedCircle.radius=1\n\nRedCircle.radius", "We can draw the object by using the method drawCircle():", "RedCircle.drawCircle()", "We can increase the radius of the circle by applying the method add_radius(). Let increases the radius by 2 and then by 5:", "print('Radius of object:',RedCircle.radius)\nRedCircle.add_radius(2)\nprint('Radius of object of after applying the method add_radius(2):',RedCircle.radius)\nRedCircle.add_radius(5)\nprint('Radius of object of after applying the method add_radius(5):',RedCircle.radius)", "Let’s create a blue circle. As the default colour is blue, all we have to do is specify what the radius is:", "BlueCircle=Circle(radius=100)", "As before we can access the attributes of the instance of the class by using the dot notation:", "BlueCircle.radius\n\nBlueCircle.color", "We can draw the object by using the method drawCircle():", "BlueCircle.drawCircle()", "Compare the x and y axis of the figure to the figure for RedCircle; they are different.\nThe Rectangle Class\nLet's create a class rectangle with the attributes of height, width and colour. 
We will only add the method to draw the rectangle object:", "class Rectangle(object):\n \n def __init__(self,width=2,height =3,color='r'):\n self.height=height \n self.width=width\n self.color=color\n \n def drawRectangle(self):\n import matplotlib.pyplot as plt\n plt.gca().add_patch(plt.Rectangle((0, 0),self.width, self.height ,fc=self.color))\n plt.axis('scaled')\n plt.show()", "Let’s create the object SkinnyBlueRectangle of type Rectangle. Its width will be 2 and height will be 3, and the colour will be blue:", "SkinnyBlueRectangle= Rectangle(2,10,'blue')", "As before we can access the attributes of the instance of the class by using the dot notation:", "SkinnyBlueRectangle.height \n\nSkinnyBlueRectangle.width\n\nSkinnyBlueRectangle.color", "We can draw the object:", "SkinnyBlueRectangle.drawRectangle()", "Let’s create the object “FatYellowRectangle” of type Rectangle :", "FatYellowRectangle = Rectangle(20,5,'yellow')", "We can access the attributes of the instance of the class by using the dot notation:", "FatYellowRectangle.height \n\nFatYellowRectangle.width\n\nFatYellowRectangle.color", "We can draw the object:", "FatYellowRectangle.drawRectangle()", "<a href=\"http://cocl.us/bottemNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png\" width = 750, align = \"center\"></a>\nAbout the Authors:\nJoseph Santarcangelo has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.\n<hr>\nCopyright &copy; 2017 Cognitiveclass.ai. This notebook and its source code are released under the terms of the MIT License.​" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bjornaa/ladim
examples/line/animate.ipynb
mit
[ "Animating LADiM output\nThis notebook demonstrates how to animate LADiM output in a jupyter notebook.\nThe example is modified from a notebook from Pål N. Sævik\nImports", "# Basic\nimport numpy as np\nfrom netCDF4 import Dataset\n\n# Plotting\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom IPython.display import HTML\n%matplotlib inline\n\n# Ladim\nfrom postladim import ParticleFile", "Basic settings", "# Files\nladim_file = 'line.nc'\ngrid_file = '../data/ocean_avg_0014.nc'\n\n# Subgrid for plotting\ni0, i1 = 50, 150\nj0, j1 = 60, 140", "Plot initial particle distribution\nThis also prepares for the following animation.", "# Read data for background plot\nwith Dataset(grid_file) as ncid:\n H = ncid.variables['h'][j0:j1, i0:i1]\n M = ncid.variables['mask_rho'][j0:j1, i0:i1]\n lon = ncid.variables['lon_rho'][j0:j1, i0:i1]\n lat = ncid.variables['lat_rho'][j0:j1, i0:i1]\n \n# Cell centers and boundaries\nXcell = np.arange(i0, i1)\nYcell = np.arange(j0, j1)\nXb = np.arange(i0-0.5, i1)\nYb = np.arange(j0-0.5, j1)\n\n# Set up the plot area\nfig = plt.figure(figsize=(8, 6))\nax = plt.axes(xlim=(i0+1, i1-1), ylim=(j0+1, j1-1), aspect='equal')\n\n# Background bathymetry\ncmap = plt.get_cmap('Blues')\nax.contourf(Xcell, Ycell, H, cmap=cmap)\n\n# Lon/lat lines\nax.contour(Xcell, Ycell, lat, levels=range(55, 63),\n colors='grey', linestyles=':')\nax.contour(Xcell, Ycell, lon, levels=range(-6, 10, 2),\n colors='grey', linestyles=':')\n\n# A simple landmask from the ROMS grid\nconstmap = plt.matplotlib.colors.ListedColormap([0.2, 0.6, 0.4])\nM = np.ma.masked_where(M > 0, M)\nax.pcolormesh(Xb, Yb, M, cmap=constmap)\n\n# particle_file\npf = ParticleFile(ladim_file)\n\n# Particle plot\nX, Y = pf.position(0)\ndots, = ax.plot(X, Y, '.', color='red')\n\n# Time stamp, lower left corner\ntimestamp = ax.text(0.03, 0.03, pf.time(0), fontsize=15,\n transform=ax.transAxes)\n\n# Animation update function\ndef plot_dots(timestep):\n X, Y = pf.position(timestep)\n dots.set_data(X, Y)\n timestamp.set_text(pf.time(timestep))\n return dots\n\nplot_dots(0);", "Animation", "anim = animation.FuncAnimation(fig, plot_dots,\n frames=pf.num_times, interval=50, repeat=False)\n\nHTML(anim.to_html5_video())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
UWSEDS-aut17/uwseds-group-city-fynders
examples/city_fynder.ipynb
mit
[ "Which city would you like to live in?\nCreated by City Fynders\n1. Import data", "import pandas as pd\nimport numpy as np\n\nimport cityfynders.data_processing as dp\n\n(natural, human, economy, tertiary) = dp.read_data()", "2. Add ranks in the DataFrame\nExample for getting ranks", "#natural\n\nnatural['Air'] = natural['Air'].rank(ascending=0)\nnatural['Water_quality'] = natural['Water_quality'].rank(ascending=0)\nnatural['Toxics'] = natural['Toxics'].rank(ascending=0)\nnatural['Hazardous'] = natural['Hazardous'].rank(ascending=0)\nnatural['Green_score_rank'] = natural['Green_score'].rank(ascending=1)\nnatural['Green_score_rank'].fillna(natural['Green_score_rank'].max()+1, inplace=True)\nnatural['Sanitation'].fillna(natural['Sanitation'].max()+1, inplace=True)\n\nnatural['Natural_total_score'] = (natural['Air'] + natural['Water_quality'] + natural['Toxics'] \n + natural['Hazardous'] + natural['Green_score_rank'])\nnatural['Natural_total_rank'] = natural['Natural_total_score'].rank(ascending=1)\n\n(natural, human, economy, tertiary) = dp.data_rank(natural, human, economy, tertiary)", "3. Get location information", "import geopy as gy\nfrom geopy.geocoders import Nominatim\n\ndef find_loc(dataframe):\n geolocator = Nominatim()\n lat = []\n lon = []\n for index, row in dataframe.iterrows():\n loc = geolocator.geocode(row['City'] + ' ' + row['State'] + ' United States')\n lat.append(loc.latitude)\n lon.append(loc.longitude)\n return lat, lon\n\n(Lat, Lon) = find_loc(human)", "4. Create a rank DataFrame and save as csv file", "rank = dp.create_rank(natural, human, economy, tertiary, Lat, Lon)", "5. Plot using plotly package", "from cityfynders.plotly_usmap import usmap\n\nusmap(rank)\n\nusmap(rank, 'natural')\n\nimport plotly\nimport plotly.plotly as py\n\n\n# human related rank\ndf = rank\ndf = df.sort_values('Human_related_rank', ascending=1)\ndf['reverse_rank'] = df['Human_related_rank'].rank(ascending=0)\n\ndf['text'] = df['City'] + '<br># Final Rank ' + (df['Human_related_rank']).astype(str) +\\\n '<br># Crime rank ' + (df['Crime_rank']).astype(str)+ '<br># Hospital rank ' +\\\n (df['Hospital_rank']).astype(str)+'<br># Early education rank ' + (df['Early_education_rank']).astype(str)+\\\n '<br># University education rank ' + (df['University_education_rank']).astype(str)\n\n\n\nlimits = [(0,10),(10,20),(20,30),(30,40),(40,50)]\ncolors = [\"rgb(0,116,217)\",\"rgb(255,65,54)\",\"rgb(133,20,75)\",\"rgb(255,133,27)\",\"lightgrey\"]\ncities = []\n\n\nfor i in range(len(limits)):\n lim = limits[i]\n df_sub = df[lim[0]:lim[1]]\n city = dict(\n type = 'scattergeo',\n locationmode = 'USA-states',\n lon = df_sub['Longitude'],\n lat = df_sub['Latitude'],\n text = df_sub['text'],\n marker = dict(\n size = df_sub['reverse_rank']*15,\n color = colors[i],\n line = dict(width=0.5, color='rgb(40,40,40)'),\n sizemode = 'area'\n ),\n name = '{0} - {1}'.format(lim[0],lim[1]) )\n cities.append(city)\n\n layout = dict(\n title = 'The human related ranking of US big cities',\n showlegend = True,\n geo = dict(\n scope='usa',\n projection=dict( type='albers usa' ),\n showland = True,\n landcolor = 'rgb(217, 217, 217)',\n subunitwidth=1,\n countrywidth=1,\n subunitcolor=\"rgb(255, 255, 255)\",\n countrycolor=\"rgb(255, 255, 255)\"\n ),\n )\n\nfig = dict( data=cities, layout=layout )\nplotly.offline.plot( fig, validate=False, filename='human-related-ranking-map.html' )", "6. 
Correlation Analysis", "# Correction Matrix Plot\nimport matplotlib.pyplot as plt\nimport cityfynders.data_processing as dp\n\n(natural, human, economy, tertiary) = dp.read_data()\nalldata = human\nfor i in[natural, economy, tertiary]:\n factors = list(i.columns.values)\n for j in factors:\n alldata[j] = i[j]\n\ndf = alldata[['Population', 'Violent', 'Rape', 'Robbery', 'Colleges',\n 'Percent_graduate_degree', 'AvgSATScore', 'NumTop200UnivInState',\n 'NumHospital', 'Jan_T', 'April_T', 'july_T', 'Oct_T', 'Prep_inch',\n 'Prep_days', 'Snowfall_inch', 'Green_score', 'Air', 'Water_quality',\n 'Toxics', 'Hazardous', 'Sanitation', 'Percent unemployment',\n 'State sale tax rate', 'Local tax rate', 'Total rate', 'Median Income',\n 'AvgTuition', 'Bars', 'Restaurant', 'Museums', 'Libraries',\n 'Pro_sports_team', 'Park_acres_per_1000_residents', 'NumTop200Restau']] \n\n\nnames = ['Pop', 'Violent', 'Colleges', 'Rape', 'Robbery', 'Colleges',\n 'Percent_graduate_degree', 'AvgSATScore', 'NumTop200UnivInState',\n 'NumHospital', 'Jan_T', 'April_T', 'july_T', 'Oct_T', 'Prep_inch',\n 'Prep_days', 'Snowfall_inch', 'Green_score', 'Air', 'Water_quality',\n 'Toxics', 'Hazardous', 'Sanitation', 'Percent unemployment',\n 'State sale tax rate', 'Local tax rate', 'Total rate', 'Median Income',\n 'AvgTuition', 'Bars', 'Restaurant', 'Museums', 'Libraries',\n 'Pro_sports_team', 'Park_acres_per_1000_residents', 'NumTop200Restau']\ndata = df\ncorrelations = data.corr()\n# plot correlation matrix\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\ncax = ax.matshow(correlations, vmin=-1, vmax=1)\nfig.colorbar(cax)\nticks = numpy.arange(0,34,1)\nax.set_xticks(ticks)\nax.set_yticks(ticks)\nax.set_xticklabels(names,rotation = 'vertical')\nax.set_yticklabels(names)\nplt.show()", "Based on the correlation plot, some interesting findings are found: \n1. Bars, restaurants both has positive impact on people with median income, this make sense beacause people with median income may spend more money on the entertainment place\n2. Factors like toxic, hazadous materials negatively correlated with air and water quality, this is also sensible for the city with cleaner air and water quality may have less toxic and hazardous materials" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nimagh/CNN_Implementations
Notebooks/CDAE.ipynb
gpl-3.0
[ "Denoising Autoencoders\nStacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion - Vincent et al. 2010\nUse this code with no warranty and please respect the accompanying license.", "# Imports\n%reload_ext autoreload\n%autoreload 1\n\nimport os, sys\nsys.path.append('../')\nsys.path.append('../common')\nsys.path.append('../GenerativeModels')\n\nfrom tools_general import tf, np\nfrom IPython.display import Image\nfrom tools_train import vis_square\nfrom tools_config import data_dir\nfrom tools_train import get_train_params, plot_latent_variable\nimport matplotlib.pyplot as plt\nimport imageio\nfrom tensorflow.examples.tutorials.mnist import input_data\nfrom tools_train import get_demo_data\n\n# define parameters\nnetworktype = 'CDAE_MNIST'\n\nwork_dir = '../trained_models/%s/' %networktype\nif not os.path.exists(work_dir): os.makedirs(work_dir)", "Network definitions", "from CDAE import create_encoder, create_decoder, create_cdae_trainer", "Training CDAE\nYou can either get the fully trained models from the google drive or train your own models using the CDAE.py script.\nExperiments\nCreate demo networks and restore weights", "iter_num = 30030\nbest_model = work_dir + \"Model_Iter_%.3d.ckpt\"%iter_num\nbest_img = work_dir + 'Rec_Iter_%d.jpg'%iter_num\nImage(filename=best_img)\n\nlatentD = 2\nbatch_size = 128\n\ntf.reset_default_graph() \ndemo_sess = tf.InteractiveSession()\n\nis_training = tf.placeholder(tf.bool, [], 'is_training')\n\nXph = tf.placeholder(tf.float32, [None, 28, 28, 1])\n\nXenc_op = create_encoder(Xph, is_training, latentD, reuse=False, networktype=networktype + '_Enc') \nXrec_op = create_decoder(Xenc_op, is_training, latentD, reuse=False, networktype=networktype + '_Dec')\n \ntf.global_variables_initializer().run()\n\nEnc_varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_Enc') \nDec_varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_Dec')\n \nsaver = tf.train.Saver(var_list=Enc_varlist+Dec_varlist)\nsaver.restore(demo_sess, best_model)", "Organization of the data on the latent space\nHere we encode all the test set data and plot the corresponding 2D values. The color will repsent respective number.", "#Get uniform samples over the labels\nspl = 800 # sample_per_label\ndata = input_data.read_data_sets(data_dir, one_hot=False, reshape=False)\nXdemo, Xdemo_labels = get_demo_data(data, spl)\nZdemo = np.random.normal(size=[spl * 10, latentD], loc=0.0, scale=1.).astype(np.float32)\n\ndecoded_data = demo_sess.run(Xenc_op, feed_dict={Xph:Xdemo, is_training:False})\nplot_latent_variable(decoded_data, Xdemo_labels)", "Generate new data\nSo CDAE is not a generative model per se and complex sampling methods exist that enable generating new data from their latent code. c.f. Generalized Denoising Auto-Encoders as Generative Models" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sassoftware/sas-viya-programming
python/data-mining/Factorization Machine Recommendation Engine Workflow.ipynb
apache-2.0
[ "Recommendation Engine for Movie Reviews\nFactorization Machine (FM) is one of the newest algorithms in the Machine Learning space, and has been developed in SAS. FM is a general prediction algorithm, similar to Support Vector Machines, that can deal with very sparce data, an area where traditional Machine Learning techniques fail.\n<br>\nRecommendation Engines are notoriously difficult due to their sparcity. We have many users and many rated items, but most users have rated very few of the items. Therefore, we will try to use a Factorization Machine to implement new movie recommendations for users\n<br>\nThis notebook has Five parts:\n1. Notebook Setup & Server Connection \n2. Data Exploration\n2. Recommendation Engine Considerations\n3. Train Recommendation Engine\n4. Make Recommendations\nPart I: Notebook Setup & Server Conneciton\nIn this section, we will load the necessary Python Packages, as well as set up a connection to our CAS Server\n<br>\nOur API is contained in our swat package, which will convert Python syntax below into language the CAS server can understand and execute. Results are then brought back to the Python Client", "#Load Packages\nimport swat\nfrom swat import *\nfrom swat.render import render_html\nfrom matplotlib import pyplot as plt\nimport numpy as np\n%matplotlib inline\nfrom IPython.display import HTML\nswat.options.cas.print_messages = True\n\n# Connect to the session\ns = CAS(cashost, casport)\n\n# Define directory and data file name\nindata_dir=\"/viyafiles/ankram/Data\"\nindata='movie_reviews'\nmovie_info= 'Movies_10k_desc_final'\n\n# Create a CAS library called DMLib pointing to the defined directory\n## Note, need to specify the srctype is path, otherwise it defaults to HDFS\ns.table.addCaslib(datasource={'srctype':'path'}, name='DMlib', path=indata_dir);\n\n# Push the relevant table In-Memory if it does not already exist\n## Note, this is a server side data load, not being loaded from the client\na = s.loadTable(caslib='DMlib', path=indata+'.sas7bdat', casout={'name':indata});\nb = s.loadTable(caslib='DMlib', path=movie_info+'.sas7bdat', casout={'name':movie_info});\n\n# Load necessary actionsets\nactions = ['fedSQL', 'transpose','sampling','factmac','astore', 'recommend']\n[s.loadactionset(i) for i in actions]\n\n# Set variables for later use by models\ntarget = 'rating'\nclass_inputs = ['usr_id', 'movie']\nall_inputs = [target] + class_inputs\n\n#Pointer Shortcut\nindata_p = a.casTable\nmovie_info_p = b.casTable", "Part 2: Data Exploration\nWe have the following input datasets/explorations available:\n\nMovie Dataset: Additional metadata, such as year, genre, and parental rating for movies\nUser Ratings: Movie Ratings available for each user\nOverall Average: Average Rating across all users and movie\n<br>\n\nOur goal is to recommend two new movies for each user\nMovie Dataset", "#See Overview Data\nprint(len(indata_p), \"Movies\")\nmovie_info_p[movie_info_p.columns[0:7]].head()\n\n#Distribution of Parental Ratings\nmovie_info_p['parental_rating'].value_counts()", "User Ratings", "print(len(indata_p), \"Ratings\")\nprint(len(indata_p[class_inputs[0]].value_counts()), \"Users\")\nindata_p.head()", "Distribution of Overall Reviews", "freq_table = (s.fedSQL.execDirect('''\n SELECT ''' + target + ''', count(*) as Frequency\n FROM '''+ indata +'''\n GROUP BY ''' + target + ''';\n''')['Result Set'].sort_values(target)).set_index(np.arange(1,6))\n\nplt.figure(figsize = (15, 10))\nplt.bar(np.arange(1,6),freq_table['FREQUENCY'], 
align='center')\nplt.xlabel('Actual Rating', size=20)\nplt.ylabel('# of Ratings (Frequency)', size=20)\nplt.title('Plot of Rating Frequency', size=25)\nplt.xlim([.5,5.5]);\nprint(indata_p[target].mean(), \"Average Review\")", "Part III: Recommendation Engine Considerations\nTwo Items to Consider:\n\nHoldout Sample: For Validation of Model\nModel Bias:\nGlobal Bias: Average rating for all users and movies\nUser Bias: Average rating per user\nMovie Bias: Average rating per movie\n<br>\n\n\n\nHoldout Sample\nFactorization machines need to be validated on users and movies that have been included in the training\n<br>\nTo accomplish this, we will stratify on both user and movie\n<br>\nWe will use a large training sample of ~90% of the data", "# Create a 70/30 stratified Split on Users\ns.sampling.stratified(\n table = dict(name = indata, groupby = class_inputs[0]),\n output = dict(casOut = dict(name = indata + '_prt_' + class_inputs[0], replace = True), copyVars = 'ALL'),\n samppct = 70,\n partind = True,\n seed = 123\n)\n\n# Create a 70/30 split for the movies\ns.sampling.stratified(\n table = dict(name = indata, groupby = class_inputs[1]),\n output = dict(casOut = dict(name = indata + '_prt_' + class_inputs[1], replace = True), copyVars = 'ALL'),\n samppct = 70,\n partind = True,\n seed = 123\n)\n\n# Combine the samples together into one dataset so that it's stratified by user and movie\n# Make the data 'blind' if it is part of the validation set so that it can be assessed\ns.fedSQL.execDirect('''\n CREATE TABLE ''' + indata +'''_prt {options replace=true} AS\n SELECT\n a.''' + class_inputs[0] + ''',\n a.''' + class_inputs[1] + ''',\n a.''' + target + ''',\n CASE WHEN a._PartInd_ + b._PartInd_ > 0 THEN 1 ELSE 0 END AS _PartInd_\n FROM \n ''' + indata + '_prt_' + class_inputs[0] + ''' a\n INNER JOIN ''' + indata + '_prt_' + class_inputs[1] + ''' b \n ON a.''' + class_inputs[0] + ' = b.' + class_inputs[0] + '''\n AND a.''' + class_inputs[1] + ' = b.' + class_inputs[1] + ''';\n''')\n\ns.CASTable(indata + '_prt')[all_inputs].query('_PartInd_=0').head()", "Bias Tables\nBias occurs because users unknowingly rate on different scales. Thus, a four star rating does not mean the same thing for two different users\n<br>\nThe Factorization Machine accounts for this bias by assuming a predicted rating is the sum of:\n1. A global bias (the average rating over all users and movies)\n2. A per-user bias (the average of the ratings given by the user)\n3. A per-item bias (the average of the ratings given to that movie)\n4. 
And a pairwise interaction term between the user and that particular item\n<br>\nFactorization Machines account for these innate biases when making predictions, and are able to estimate the pairwise interactions between specific users and movies in sparse data.\nOverall Bias", "indata_p[target].mean()", "User Bias Table", "# We can use SQL to find this, and further format using Python - sort_values() and head()\nrender_html(\ns.fedSQL.execDirect('''\n SELECT \n ''' + class_inputs[0] + ''', \n COUNT(''' + target + ''') AS num_ratings, \n AVG(''' + target + ''') AS avg_rating,\n AVG(''' + target + ''')-3.55 AS user_bias\n FROM ''' + indata + '''\n GROUP BY usr_id\n''')['Result Set'].sort_values(class_inputs[0]).head()\n)", "Movie Bias Table", "# I could use SQL to find this as well, but decided to use Python built-in functionality - groupby()\nmovie_bias = s.CASTable(indata).groupby(class_inputs[1])[target].summary(subset=['N','Mean']).concat_bygroups().Summary\nmovie_bias['Movie_Bias'] = movie_bias['Mean'] - 3.55\nmovie_bias.head()", "Part IV: Train the Recommendation Engine\n\nFactorization Machine Training\nAssess Model on holdout Sample\nSee Actual Rating vs Predicted", "#Join the Data together\ns.fedSQL.execDirect('''\n CREATE TABLE '''+ indata +'''_model {options replace=True} AS\n SELECT \n t1.*,\n t2.year,\n t2.parental_rating\n FROM\n ''' + indata + '''_prt t1\n LEFT JOIN ''' + movie_info +''' t2\n ON t1.movie = t2.movieId\n''')\n\ns.dataStep.runCode('''\n data '''+ indata +'''_model2 (replace=YES);\n set '''+ indata +'''_model;\n if parental_rating = \"\"\n then parental_rating=\"None\";\n if year ne .;\n run;\n \n ''')\n\ns.CASTable('movie_reviews_model2').head()", "Factorization Machine Training\nThe algorithm runs 5 iterations until converging <br>\nNote: We use Mean Squared Error(MSE) and Root Mean Squared Error(RMSE) to meausure the accuracy of our training. <br>", "# Build the factorization machine\nclass_inputs = ['usr_id', 'movie','parental_rating', 'year']\n\nr = s.factmac.factmac(\n table = dict(name = indata + '_model2', where = '_PartInd_ = 1'),\n inputs = class_inputs,\n nominals = class_inputs,\n target = target,\n maxIter = 5,\n nFactors = 2,\n learnStep = 0.1,\n seed = 12345,\n savestate = dict(name = 'fm_model', replace = True)\n)\n\nr['FinalLoss']", "Assess Model Holdout Sample\nWe want to calculate these fit statistics on the holdout sample to get an unbiased estimate of model performance on new data. We want to ensure that ourengines makes robust predicitons on new data and does not overfit\n<br>\nNote: A RMSE of 1 means that on average we miss the actual rating by 1 star", "# Score the factorization machine\ns.CASTable(indata + '_model2').astore.score(\n rstore = dict(name = 'fm_model'),\n out = dict(name = indata + '_scored', replace = True),\n copyVars = all_inputs + ['_PartInd_']\n)\n\n# Find the (predicted - actual) error rate on the validation set\ns.fedSQL.execDirect('''\n CREATE TABLE eval {options replace=true} AS\n SELECT \n a.*, \n a.P_''' + target + ' - a.' 
+ target + ''' AS error\n FROM\n ''' + indata + '''_scored a\n WHERE a._PartInd_= 0\n''')\n\n# Compute the Mean Squared Error and Root Mean Squared Error\ns.fedSQL.execDirect('''\n SELECT \n AVG(error**2) AS MSE,\n SQRT(AVG(error**2)) AS RMSE\n FROM eval\n''')", "See Actual Rating vs Average Predicted Rating\nWhat we are hoping to see is that the average prediction has a positive correlation with the actual rating.", "rating = (s.fedSQL.execDirect('''\n SELECT ''' + target + ''', \n count(*) AS freqnency,\n AVG(P_''' + target + ''') AS avg_prediction\n FROM eval\n GROUP BY ''' + target + ''';\n''')['Result Set'].sort_values(target)).set_index(np.arange(1,6))\n\nrating", "Let's plot this table using Matplotlib\n\nBars Represent Actual Rating\n\nLine Represents the Average Predicted Rating for each rating level (1-5 stars)\n\n\nThe Relationship is linear, although we have room for improvement on lower end (1 star) or upper end (5 star)", "plt.figure(figsize = (12, 8))\nplt.bar(np.arange(1,6),rating['rating'], color='#eeefff', align='center')\nplt.plot(rating['rating'], rating['AVG_PREDICTION'], linewidth=3, label='Average Prediction')\nplt.xlabel('Actual Rating', size=20)\nplt.ylabel('Average Predicted Rating', size=20)\nplt.title('Plot of Average Predicted vs Actual Ratings', size=25)\nplt.ylim([0,6])\nplt.xlim([.5,5.5]);", "Part V: Make Recommendations for Users\nHere, we display the top two rated movies for ten users in our dataset\n* The recommendations have the highest predicted ratings for that specific user\n* Algorithm only recommends movies the user has not seen", "# Transpose the data using the completely redesigned transpose CAS action - this is running multi-threaded\ntest=s.transpose(\n table = dict(name = indata, groupBy = class_inputs[0], vars = target),\n id = class_inputs[1],\n casOut = dict(name = indata + '_transposed', replace = True)\n)\ns.CASTable(indata + '_transposed').head()\n\n\n# Find the movies the users have not watched and predict their potential rating\ns.transpose(\n table = dict(name = indata + '_transposed', groupBy = class_inputs[0]),\n casOut = dict(name = indata + '_long', replace = True)\n)\n\ns.dataStep.runcode('''\ndata ''' + indata + '''_long;\n set ''' + indata + '''_long;\n ''' + class_inputs[1] + ''' = 1.0*_NAME_;\n drop _NAME_;\n''')\n\ns.fedSQL.execDirect('''\nCREATE TABLE scoring_table{options replace=TRUE} AS\n SELECT \n a.*,\n b.title,\n b.year,\n b.Parental_Rating,\n b.genres\n FROM \n '''+ indata +'''_long a\n INNER JOIN '''+ movie_info +''' b\n ON\n a.'''+ class_inputs[1] +''' = b.movieId\n''')\n\n#Make Recommendations\nastore = s.CASTable('scoring_table')[all_inputs].query(target + ' is null').astore.score(\n rstore = dict(name = 'fm_model'),\n out = dict(name = indata + '_scored_new', replace = True),\n copyVars = class_inputs + ['title']\n)\n\n#See top recommendations per user\ns.CASTable('movie_reviews_scored_new') \\\n .groupby(class_inputs[0]) \\\n .sort_values([class_inputs[0], 'P_' + target], ascending = [True, False]) \\\n .query(\"parental_rating ^= 'NA'\") \\\n .head(2) \\\n .head(14)\n\n#Close the connection\ns.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
TomTranter/OpenPNM
examples/topology/Adding Boundary Pores.ipynb
mit
[ "Adding Boundary Pores", "import numpy as np\nimport openpnm as op\nnp.random.seed(10)\n%matplotlib inline", "Start by creating a Delaunay network. Because it uses random base points it will better illustrate the process of adding boundary pores to arbitrary networks:", "pn = op.network.Delaunay(num_points=200, shape=[1, 1, 0])\nprint(pn)", "As can be seen in the above printout, the Delaunay class predefines many labels including boundaries and sides. In fact, as can be seen in the plot below, the Delaunay class also adds boundary pores to the topology. (Note that the Delaunay network is generated randomly so your's will not look the same, nor have the same number of total pores and throats). In this case, the location of the boundary pores is determined from the Voronoi cell that surrounds each Delaunay point, so the boundary cells apper to be randomly oriented relative to the internal pore they are connected with. In the example that follows, we'll be removing these pores, then adding boundary pores in a manual way.", "#NBVAL_IGNORE_OUTPUT\nfig = op.topotools.plot_connections(network=pn)\nfig = op.topotools.plot_coordinates(network=pn, fig=fig, c='r')\nfig.set_size_inches((7, 7))", "For the purpose of this tutorial, we will trim these boundary pores from the network since we'll be adding our own.", "op.topotools.trim(network=pn, pores=pn.pores('boundary'))\nprint(pn)", "Plotting the network now shows the missing pores. Our goal will be re-add boundary pores to each face.", "#NBVAL_IGNORE_OUTPUT\nfig = op.topotools.plot_connections(network=pn)\nfig = op.topotools.plot_coordinates(network=pn, fig=fig, c='r')\nfig.set_size_inches((7, 7))", "Find surface pores\nThe topotools module in OpenPNM provides many handy helper functions for dealing with topology. We'll first use the find_surface_pores function. It works be specifying the location of a set of marker points outside the domain, then performing a Delaunay tessellation between these markers and the network pores. Any pores that form a simplex with the marker points are considered to be on the surface. By default OpenPNM will place one marker on each edge of the domain in an attempt to find all the surfaces. In our case, we will specify them manually to only find one face. \nSpecifying the markers can be a challenge. If we only specify a single marker, we will only find a limited number of surface pores due to the way the triangulation works.", "#NBVAL_IGNORE_OUTPUT\nmarkers = np.array([[-0.1, 0.5]])\nop.topotools.find_surface_pores(network=pn, markers=markers, label='left_surface')\nfig = op.topotools.plot_connections(network=pn)\nfig = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left_surface'), fig=fig, c='r')\nfig.set_size_inches((7, 7))", "As can be seen, some of the pores in deeper recesses of the surface were not found by this method. If we want to be certain of finding all the surface pores on the left side of the domain we can add more markers:", "#NBVAL_IGNORE_OUTPUT\nmarkers = np.array([[-0.1, 0.2], [-0.1, 0.4], [-0.1, 0.6], [-0.1, 0.8]])\nop.topotools.find_surface_pores(network=pn, markers=markers, label='left_surface')\nfig = op.topotools.plot_connections(network=pn)\nfig = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left_surface'), fig=fig, c='r')\nfig.set_size_inches((7, 7))", "Now we've captured several more pores. In some cases we may actually get more than we wanted, including some that are more correctly on the bottom of the domain. 
This is why finding surfaces requires a careful touch, although this problem becomes less important in domains with more pores.\nCloning surface pores\nNext we want to take the newly labeled surface pores and 'clone' them. This creates new pores in the network that are physically located in the same place as their 'parents'. They are also connected only to their 'parents' by default, which is what we want, though this can be changed using the mode argument. In the following code, we tell the function to clone the 'left_surface' pores and to give them a new label of 'left_boundary'.", "op.topotools.clone_pores(network=pn, pores=pn.pores('left_surface'), labels=['left_boundary'])", "Now that we've cloned the pores, we need to move them. In this case we want them all to sit on the x=0 boundary face. We can do this by directly altering the 'pore.coords' array:", "Ps = pn.pores('left_boundary')\ncoords = pn['pore.coords'][Ps]\ncoords *= [0, 1, 1]\npn['pore.coords'][Ps] = coords\nprint(pn)", "The above code will set the x-coordinate of each of the cloned pores to 0, while keeping the other coordinates the same. The result is:", "#NBVAL_IGNORE_OUTPUT\nfig = op.topotools.plot_connections(network=pn)\nfig = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left_surface'), fig=fig, c='r')\nfig = op.topotools.plot_coordinates(network=pn, pores=pn.pores('left_boundary'), fig=fig, c='g')\nfig.set_size_inches((7, 7))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
metpy/MetPy
v1.1/_downloads/ebb1d490725ed27549701eaf2c36b8b4/declarative_tutorial.ipynb
bsd-3-clause
[ "%matplotlib inline", "MetPy Declarative Syntax Tutorial\nThe declarative syntax that is a part of the MetPy packaged is designed to aid in simple\ndata exploration and analysis needs by simplifying the plotting context from typical verbose\nPython code. The complexity of data wrangling and plotting are hidden behind the simplified\nsyntax to allow a lower barrier to investigating your data.\nImports\nYou'll note that the number of imports is smaller due to using the declarative syntax.\nThere is no need to import Matplotlib or Cartopy to your code as all of that is done\nbehind the scenes.", "from datetime import datetime, timedelta\n\nimport xarray as xr\n\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.io import metar\nfrom metpy.plots.declarative import (BarbPlot, ContourPlot, FilledContourPlot, MapPanel,\n PanelContainer, PlotObs)\nfrom metpy.units import units", "Getting Data\nDepending on what kind of data you are wanting to plot you'll use either Xarray (for gridded\ndata), Pandas (for CSV data), or the MetPy METAR parser (for METAR data).\nWe'll start this tutorial by reading in a gridded dataset using Xarray.", "# Open the netCDF file as a xarray Dataset and parse the full dataset\ndata = xr.open_dataset(get_test_data('GFS_test.nc', False)).metpy.parse_cf()\n\n# View a summary of the Dataset\nprint(data)", "Set Datetime\nSet the date/time of that you desire to plot", "plot_time = datetime(2010, 10, 26, 12)", "Subsetting Data\nMetPy provides wrappers for the usual xarray indexing and selection routines that can handle\nquantities with units. For DataArrays, MetPy also allows using the coordinate axis types\nmentioned above as aliases for the coordinates. And so, if we wanted data to be just over\nthe U.S. for plotting purposes", "ds = data.metpy.sel(lat=slice(70, 10), lon=slice(360 - 150, 360 - 55))", "For full details on xarray indexing/selection, see\nxarray's documentation &lt;http://xarray.pydata.org/en/stable/indexing.html&gt;_.\nCalculations\nIn MetPy 1.0 and later, calculation functions accept Xarray DataArray's as input and the\noutput a DataArray that can be easily added to an existing Dataset.\nAs an example, we calculate wind speed from the wind components and add it as a new variable\nto our Dataset.", "ds['wind_speed'] = mpcalc.wind_speed(ds['u-component_of_wind_isobaric'],\n ds['v-component_of_wind_isobaric'])", "Plotting\nWith that miniaml preparation, we are now ready to use the simplified plotting syntax to be\nable to plot our data and analyze the meteorological situation.\nGeneral Structure\n\n\nSet contour attributes\n\n\nSet map characteristics and collect contours\n\n\nCollect panels and plot\n\n\nShow (or save) the results\n\n\nValid Plotting Types for Gridded Data:\n\n\nContourPlot()\n\n\nFilledContourPlot()\n\n\nImagePlot()\n\n\nPlotBarbs()\n\n\nMore complete descriptions of these and other plotting types, as well as the map panel and\npanel container classes are at the end of this tutorial.\nLet's plot a 300-hPa map with color-filled wind speed, which we calculated and added to\nour Dataset above, and geopotential heights over the CONUS.\nWe'll start by setting attributes for contours of Geopotential Heights at 300 hPa.\nWe need to set at least the data, field, level, and time attributes. 
We'll set a few others\nto have greater control over hour the data is plotted.", "# Set attributes for contours of Geopotential Heights at 300 hPa\ncntr2 = ContourPlot()\ncntr2.data = ds\ncntr2.field = 'Geopotential_height_isobaric'\ncntr2.level = 300 * units.hPa\ncntr2.time = plot_time\ncntr2.contours = list(range(0, 10000, 120))\ncntr2.linecolor = 'black'\ncntr2.linestyle = 'solid'\ncntr2.clabels = True", "Now we'll set the attributes for plotting color-filled contours of wind speed at 300 hPa.\nAgain, the attributes that must be set include data, field, level, and time. We'll also set\na colormap and colorbar to be purposeful for wind speed. Additionally, we'll set the\nattribute to change the units from m/s to knots, which is the common plotting units for\nwind speed.", "# Set attributes for plotting color-filled contours of wind speed at 300 hPa\ncfill = FilledContourPlot()\ncfill.data = ds\ncfill.field = 'wind_speed'\ncfill.level = 300 * units.hPa\ncfill.time = plot_time\ncfill.contours = list(range(10, 201, 20))\ncfill.colormap = 'BuPu'\ncfill.colorbar = 'horizontal'\ncfill.plot_units = 'knot'", "Once we have our contours (and any colorfill plots) set up, we will want to define the map\npanel that we'll plot the data on. This is the place where we can set the view extent,\nprojection of our plot, add map lines like coastlines and states, set a plot title.\nOne of the key elements is to add the data to the map panel as a list with the plots\nattribute.", "# Set the attributes for the map and add our data to the map\npanel = MapPanel()\npanel.area = [-125, -74, 20, 55]\npanel.projection = 'lcc'\npanel.layers = ['states', 'coastline', 'borders']\npanel.title = f'{cfill.level.m}-hPa Heights and Wind Speed at {plot_time}'\npanel.plots = [cfill, cntr2]", "Finally we'll collect all of the panels to plot on the figure, set the size of the figure,\nand ultimately show or save the figure.", "# Set the attributes for the panel and put the panel in the figure\npc = PanelContainer()\npc.size = (15, 15)\npc.panels = [panel]", "All of our setting now produce the following map!", "# Show the image\npc.show()", "That's it! What a nice looking map, with relatively simple set of code.\nAdding Wind Barbs\nWe can easily add wind barbs to the plot we generated above by adding another plot type\nand adding it to the panel. The plot type for wind barbs is PlotBarbs() and has its own\nset of attributes to control plotting a vector quantity.\nWe start with setting the attributes that we had before for our 300 hPa plot including,\nGeopotential Height contours, and color-filled wind speed.", "# Set attributes for contours of Geopotential Heights at 300 hPa\ncntr2 = ContourPlot()\ncntr2.data = ds\ncntr2.field = 'Geopotential_height_isobaric'\ncntr2.level = 300 * units.hPa\ncntr2.time = plot_time\ncntr2.contours = list(range(0, 10000, 120))\ncntr2.linecolor = 'black'\ncntr2.linestyle = 'solid'\ncntr2.clabels = True\n\n# Set attributes for plotting color-filled contours of wind speed at 300 hPa\ncfill = FilledContourPlot()\ncfill.data = ds\ncfill.field = 'wind_speed'\ncfill.level = 300 * units.hPa\ncfill.time = plot_time\ncfill.contours = list(range(10, 201, 20))\ncfill.colormap = 'BuPu'\ncfill.colorbar = 'horizontal'\ncfill.plot_units = 'knot'", "Now we'll set the attributes for plotting wind barbs, with the required attributes of data,\ntime, field, and level. 
The skip attribute is particularly useful for thinning the number of\nwind barbs that are plotted on the map and again we'll convert to units of knots.", "# Set attributes for plotting wind barbs\nbarbs = BarbPlot()\nbarbs.data = ds\nbarbs.time = plot_time\nbarbs.field = ['u-component_of_wind_isobaric', 'v-component_of_wind_isobaric']\nbarbs.level = 300 * units.hPa\nbarbs.skip = (3, 3)\nbarbs.plot_units = 'knot'", "Add all of our plot types to the panel; don't forget to add in the new wind barbs to our plot\nlist!", "# Set the attributes for the map and add our data to the map\npanel = MapPanel()\npanel.area = [-125, -74, 20, 55]\npanel.projection = 'lcc'\npanel.layers = ['states', 'coastline', 'borders']\npanel.title = f'{cfill.level.m}-hPa Heights and Wind Speed at {plot_time}'\npanel.plots = [cfill, cntr2, barbs]\n\n# Set the attributes for the panel and put the panel in the figure\npc = PanelContainer()\npc.size = (15, 15)\npc.panels = [panel]\n\n# Show the figure\npc.show()", "Plot Surface Obs\nWe can also plot surface (or upper-air) observations at point locations using the simplified\nsyntax. Whether it is surface or upper-air data, the PlotObs() class is what you would\nwant to use. Then you would add those observations to a map panel and then collect the panels\nto plot the figure; similar to what you would do for a gridded plot.", "df = metar.parse_metar_file(get_test_data('metar_20190701_1200.txt', False), year=2019,\n month=7)\n\n# Let's take a look at the variables that we could plot coming from our METAR observations.\nprint(df.keys())\n\n# Set the observation time\nobs_time = datetime(2019, 7, 1, 12)", "Setting our attributes for plotting observations is pretty straightforward and just needs\nto be lists for the variables, and a comparable number of items for plot characteristics that\nare specific to the individual fields. For example, the locations around a station plot, the\nplot units, and any plotting formats would all need to have the same number of items as the\nfields attribute.\nPlotting wind barbs is done through the vector_field attribute and you can reduce the number\nof points plotted (especially important for surface observations) with the reduce_points\nattribute.\nFor a very basic plot of one field, the minimum required attributes are the data, time,\nfields, and location attributes.", "# Plot desired data\nobs = PlotObs()\nobs.data = df\nobs.time = obs_time\nobs.time_window = timedelta(minutes=15)\nobs.level = None\nobs.fields = ['cloud_coverage', 'air_temperature', 'dew_point_temperature',\n 'air_pressure_at_sea_level', 'current_wx1_symbol']\nobs.plot_units = [None, 'degF', 'degF', None, None]\nobs.locations = ['C', 'NW', 'SW', 'NE', 'W']\nobs.formats = ['sky_cover', None, None, lambda v: format(v * 10, '.0f')[-3:],\n 'current_weather']\nobs.reduce_points = 0.75\nobs.vector_field = ['eastward_wind', 'northward_wind']", "We use the same classes for plotting our data on a map panel and collecting all of the\npanels on the figure. 
In this case we'll focus in on the state of Indiana for plotting.", "# Panel for plot with Map features\npanel = MapPanel()\npanel.layout = (1, 1, 1)\npanel.projection = 'lcc'\npanel.area = 'in'\npanel.layers = ['states']\npanel.title = f'Surface plot for {obs_time}'\npanel.plots = [obs]\n\n# Bringing it all together\npc = PanelContainer()\npc.size = (10, 10)\npc.panels = [panel]\n\npc.show()", "Detailed Attribute Descriptions\nThis final section contains verbose descriptions of the attributes that can be set by the\nplot types used in this tutorial.\nContourPlot()\nThis class is designed to plot contours of gridded data, most commonly model output from the\nGFS, NAM, RAP, or other gridded dataset (e.g., NARR).\nAttributes:\ndata\nThis attribute must be set with the variable name that contains the xarray dataset.\n(Typically this is the variable ds)\nfield\nThis attribute must be set with the name of the variable that you want to contour.\nFor example, to plot the heights of pressure surfaces from the GFS you would use the name\n‘Geopotential_height_isobaric’\nlevel\nThis attribute sets the level of the data you wish to plot. If it is a pressure level,\nthen it must be set to a unit bearing value (e.g., 500*units.hPa). If the variable does\nnot have any vertical levels (e.g., mean sea-level pressure), then the level attribute must\nbe set to None.\ntime\nThis attribute must be set with a datetime object, just as with the PlotObs() class.\nTo get a forecast hour, you can use the timedelta function from datetime to add the number of\nhours into the future you wish to plot. For example, if you wanted the six hour forecast from\nthe 00 UTC 2 February 2020 model run, then you would set the attribute with:\ndatetime(2020, 2, 2, 0) + timedelta(hours=6)\ncontours\nThis attribute sets the contour values to be plotted with a list. This can be set manually\nwith a list of integers in square brackets (e.g., [5400, 5460, 5520, 5580, 5640, 5700])\nor programmatically (e.g., list(range(0, 10000, 60))). The second method is a way to\neasily set a contour interval (in this case 60).\nclabel\nThis attribute can be set to True if you desire to have your contours labeled.\nlinestyle\nThis attribute can be set to make the contours ‘solid’, ‘dashed’, ‘dotted’,\nor ‘dashdot’. Other linestyles are can be used and are found at:\nhttps://matplotlib.org/3.1.0/gallery/lines_bars_and_markers/linestyles.html\nDefault is ‘solid’.\nlinewidth\nThis attribute alters the width of the contours (defaults to 1). Setting the value greater\nthan 1 will yield a thicker contour line.\nlinecolor\nThis attribute sets the color of the contour lines. Default is ‘black’. All colors from\nmatplotlib are valid: https://matplotlib.org/3.1.0/_images/sphx_glr_named_colors_003.png\nplot_units\nIf you want to change the units for plotting purposes, add the string value of the units\ndesired. For example, if you want to plot temperature in Celsius, then set this attribute\nto ‘degC’.\nFilledContourPlot()\nWorks very similarly to ContourPlot(), except that contours are filled using a colormap\nbetween contour values. All attributes for ContourPlot() work for color-filled plots,\nexcept for linestyle, linecolor, and linewidth. 
Additionally, there are the following\nattributes that work for color-filling:\nAttributes:\ncolormap\nThis attribute is used to set a valid colormap from either Matplotlib or MetPy:\nMatplotlib Colormaps: https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html\nMetPy Colormaps: https://unidata.github.io/MetPy/v1.0/api/generated/metpy.plots.ctables.html\ncolorbar\nThis attribute can be set to ‘vertical’ or ‘horizontal’, which is the location the\ncolorbar will be plotted on the panel.\nimage_range\nA set of values indicating the minimum and maximum for the data being plotted. This\nattribute should be set as (min_value, max_value), where min_value and max_value are\nnumeric values.\nPanelContainer()\nAttributes:\nsize\nThe size of the figure in inches (e.g., (10, 8))\npanels\nA list collecting the panels to be plotted in the figure.\nshow\nShow the plot\nsave\nSave the figure using the Matplotlib arguments/keyword arguments\nMapPanel()\nAttributes:\nlayout\nThe Matplotlib layout of the figure. For a single panel figure the setting should be\n(1, 1, 1)\nprojection\nThe projection can be set with the name of a default projection (‘lcc’, ‘mer’, or\n‘ps’) or it can be set to a Cartopy projection.\nlayers\nThis attribute will add map layers to identify boundaries or features to plot on the map.\nValid layers are 'borders', 'coastline', 'states', 'lakes', 'land',\n'ocean', 'rivers', 'counties'.\narea\nThis attribute sets the geographical area of the panel. This can be set with a predefined\nname of an area including all US state postal abbreviations (e.g., ‘us’, ‘natl’,\n‘in’, ‘il’, ‘wi’, ‘mi’, etc.) or a tuple value that corresponds to\nlongitude/latitude box based on the projection of the map with the format\n(west-most longitude, east-most longitude, south-most latitude, north-most latitude).\nThis tuple defines a box from the lower-left to the upper-right corner.\ntitle\nThis attribute sets a title for the panel.\nplots\nA list collecting the observations to be plotted in the panel.\nBarbPlot()\nThis plot class is used to add wind barbs to the plot with the following\nAttributes:\ndata\nThis attribute must be set to the variable that contains the vector components to be plotted.\nfield\nThis attribute is a list of the vector components to be plotted. For the typical\nmeteorological case it would be the [‘u-compopnent’, ‘v-component’].\ntime\nThis attribute should be set to a datetime object, the same as for all other declarative\nclasses.\nbarblength\nThis attribute sets the length of the wind barbs. The default value is based on the\nfont size.\ncolor\nThis attribute sets the color of the wind barbs, which can be any Matplotlib color.\nDefault color is ‘black’.\nearth_relative\nThis attribute can be set to False if the vector components are grid relative (e.g., for NAM\nor NARR output)\npivot\nThis attribute can be set to a string value about where the wind barb will pivot relative to\nthe grid point. Possible values include ‘tip’ or ‘middle’. 
Default is ‘middle’.\nPlotObs()\nThis class is used to plot point observations from the surface or upper-air.\nAttributes:\ndata\nThis attribute needs to be set to the DataFrame variable containing the fields that you\ndesire to plot.\nfields\nThis attribute is a list of variable names from your DataFrame that you desire to plot at the\ngiven locations around the station model.\nlevel\nFor a surface plot this needs to be set to None.\ntime\nThis attribute needs to be set to subset your data attribute for the time of the observations\nto be plotted. This needs to be a datetime object.\nlocations\nThis attribute sets the location of the fields to be plotted around the surface station\nmodel. The default location is center (‘C’)\ntime_range\nThis attribute allows you to define a window for valid observations (e.g., 15 minutes on\neither side of the datetime object setting. This is important for surface data since actual\nobserved times are not all exactly on the hour. If multiple observations exist in the defined\nwindow, the most recent observations is retained for plotting purposes.\nformats\nThis attribute sets a formatter for text or plotting symbols around the station model. For\nexample, plotting mean sea-level pressure is done in a three-digit code and a formatter can\nbe used to achieve that on the station plot.\nMSLP Formatter: lambda v: format(10 * v, '.0f')[-3:]\nFor plotting symbols use the available MetPy options through their name. Valid symbol formats\nare 'current_weather', 'sky_cover', 'low_clouds', 'mid_clouds',\n'high_clouds', and 'pressure_tendency'.\ncolors\nThis attribute can change the color of the plotted observation. Default is ‘black’.\nAcceptable colors are those available through Matplotlib:\nhttps://matplotlib.org/3.1.1/_images/sphx_glr_named_colors_003.png\nvector_field\nThis attribute can be set to a list of wind component values for plotting\n(e.g., [‘uwind’, ‘vwind’])\nvector_field_color\nSame as colors except only controls the color of the wind barbs. Default is ‘black’.\nreduce_points\nThis attribute can be set to a real number to reduce the number of stations that are plotted.\nDefault value is zero (e.g., no points are removed from the plot)." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
joshamilton/Hamilton_acI_2017
code/02-calculateSeedCompounds.ipynb
mit
[ "Reverse Ecology and Metatranscriptomics of Uncultivated Freshwater Actinobacteria\nCalculating Seed Sets\nOverview of Reverse Ecology\nThe term reverse ecology refer to a set of computational techniques which aim to infer the ecological traits of an organism directly from its metabolic network. The \"flavor\" of reverse ecology used in this work relies on the computation of a metabolic network's seed sets, the minimal set of metabolites which must be exogenously acquired for growth. Seed compounds will reveal auxotrophies and biosynthetic capabilities that define metabolic phenotypes for each clade. The approach for calculating seed sets used here follows the original theory of Borenstein et al [2008].\nReverse ecology represents metabolic networks as graphs (a type of mathematical object). A graph consists of a set of objects (nodes) that are connected to one another (via edges). Graphs may be directed or undirected. In an undirected graph, a connection from A to B implies a connection from B to A. In a directed graph, edges point from A to B and not vice-versa. Metabolic networks are represented as directed graphs.\nAnalyzing Metabolic Network Graphs\nIn graph theory, a connected component of a graph is a subgraph in which all pairs of vertices are connected to each other by paths, and which has no connections to nodes outside the subgraph. \nA graph which has only a single component is fully connected. The seed-detection algorithm used in reverse ecology analysis requires the metabolic network to contain a single, fully connected graph. \nThe code below computes the connected components of the metabolic network graphs for each clade and makes a histogram.", "# Import special features for iPython\nimport matplotlib\n%matplotlib inline\n\n# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import metadataFunctions as mf\nfrom reverseEcology import graphFunctions as gf\n\n# Define local folder structure for data input and processing.\nmergedModelDir = '../models/merged'\nsummaryStatsDir = '../data/mergedModelStats'\n\n# Import the list of models\ndirList = mf.getDirList(mergedModelDir)\n\ngraphStatArray, diGraphStatArray = gf.computeGraphStats(dirList, mergedModelDir, summaryStatsDir)\ngf.plotGraphStats(graphStatArray)", "Reducing Graphs to Their Largest Component\nThe third histogram shows that the largest component of each graph contains at least 80% of the metabolites (nodes) in the graph. This gives two options: \"fill in\" the graph to make it fully connected (e.g., by adding reactions to the metabolic network), or perform reverse ecology analysis on the largest component. \nThe code below reduces the graph for each clade to its largest component.", "# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import metadataFunctions as mf\nfrom reverseEcology import graphFunctions as gf\n\n# Define local folder structure for data input and processing.\nmergedModelDir = '../models/merged'\nsummaryStatsDir = '../data/mergedModelStats'\n\n# Import the list of models\ndirList = mf.getDirList(mergedModelDir)\n\nreducedGraphStatArray = gf.reduceToLargeComponent(dirList, mergedModelDir, summaryStatsDir)", "Computation of Seed Sets\nThe seed set is the set of compounds that, based on metabolic network topology, are exogenously acquired for growth. 
Formally, the seed set of a network is the minimal subset of compounds (nodes) that cannot be synthesized from other compounds in the network, and whose presence in the environment permits the production of all other compounds in the network. In other words, the seed set of a network is a set of nodes from which all other nodes can be reached.\nThe seed set detection algorithm decomposes the metabolic network into its strongly connected components (SCCs), sets of nodes such that each node is reachable from every other. This decomposition enables the seed set detection problem to be reduced to the problem of detecting source components in the condensation of the original network.\nTo find the seed sets, each source component in the condensation is then expanded to its original nodes. Because each vertex of the condensation is an SCC of the original graph, each vertex of the condensation contains a set of \"equivalent nodes\", meaning that each node can be reached from the others.\nThus, the seed set detection algorithm contains four steps.\n1. Identify the SCCs of the (directed) network graph\n2. Use the SCCs to derive the condensation of the original graph\n3. Identify source components in the condensation\n4. Expand each source component of the condensation into its original node\nThe code below performs the four steps shown above for the metabolic network graph of each genome. The seed compounds for each graph are written to a file, with each line in the file containing a set of equivalent seed compounds. The code also plots histograms of the number and size of the seed sets against network size.", "# Import special features for iPython\nimport matplotlib\n%matplotlib inline\n\n# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import metadataFunctions as mf\nfrom reverseEcology import graphFunctions as gf\n\n# Define local folder structure for data input and processing.\nmergedModelDir = '../models/merged'\nseedDir = '../results/seedCompounds'\n\n# Import the list of models\ndirList = mf.getDirList(mergedModelDir)\n\nseedSetList = gf.computeSeedSets(dirList, mergedModelDir, seedDir)\ngf.plotSeedStatsForTribes(seedSetList, reducedGraphStatArray)", "Working with Seed Sets\nTo facilitate analysis, the functions below:\n- write a single matrix for seed compounds for all genomes\n- compute the fraction of genomes in which each seed compound appears", "# Import Python modules \n# These custom-written modules should have been included with the package\n# distribution. \nfrom reverseEcology import metadataFunctions as mf\nfrom reverseEcology import seedFunctions as ef\n\n# Define local folder structure for data input and processing.\nseedDir = '../results/seedCompounds'\nsummaryStatsDir = '../results/seedCompounds'\n\n# Import the list of models\ndirList = mf.getDirList(seedDir)\ndirList.remove('seedMatrixWeighted.csv') # May be present from a previous run\n\nseedMatrixDF = ef.consolidateSeeds(dirList, seedDir, summaryStatsDir)", "References\n\nBorenstein, E., Kupiec, M., Feldman, M. W., & Ruppin, E. (2008). Large-scale reconstruction and phylogenetic analysis of metabolic environments. Proceedings of the National Academy of Sciences, 105(38), 14482–14487. http://doi.org/10.1073/pnas.0806162105" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
enbanuel/phys202-2015-work
assignments/assignment09/IntegrationEx01.ipynb
mit
[ "Integration Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import integrate", "Trapezoidal rule\nThe trapezoidal rule generates a numerical approximation to the 1d integral:\n$$ I(a,b) = \\int_a^b f(x) dx $$\nby dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:\n$$ h = (b-a)/N $$\nNote that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.\nWrite a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).", "def trapz(f, a, b, N):\n \"\"\"Integrate the function f(x) over the range [a,b] with N points.\"\"\"\n # YOUR CODE HERE\n h = (b-a)/N\n v = np.arange(0, N)\n r = h*(0.5*f(a) + 0.5*f(b) + f(a+v*h).sum())\n return r\n\nf = lambda x: x**2\ng = lambda x: np.sin(x)\n\nI = trapz(f, 0, 1, 1000)\nassert np.allclose(I, 0.33333349999999995)\nJ = trapz(g, 0, np.pi, 1000)\nassert np.allclose(J, 1.9999983550656628)", "Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.", "# YOUR CODE HERE\nt, err1 = integrate.quad(f, 0, 1)\np, err2 = integrate.quad(g, 0, np.pi)\nprint(I, t, err1)\nprint(J, p, err2)\n\nassert True # leave this cell to grade the previous one" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Kaggle/learntools
notebooks/sql/raw/ex4.ipynb
apache-2.0
[ "Introduction\nYou've built up your SQL skills enough that the remaining hands-on exercises will use different datasets than you see in the explanations. If you need to get to know a new dataset, you can run a couple of SELECT queries to extract and review the data you need. \nThe next exercises are also more challenging than what you've done so far. Don't worry, you are ready for it!\nRun the code in the following cell to get everything set up:", "# Set up feedback system\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.sql.ex4 import *\nprint(\"Setup Complete\")", "The World Bank has made tons of interesting education data available through BigQuery. Run the following cell to see the first few rows of the international_education table from the world_bank_intl_education dataset.", "from google.cloud import bigquery\n\n# Create a \"Client\" object\nclient = bigquery.Client()\n\n# Construct a reference to the \"world_bank_intl_education\" dataset\ndataset_ref = client.dataset(\"world_bank_intl_education\", project=\"bigquery-public-data\")\n\n# API request - fetch the dataset\ndataset = client.get_dataset(dataset_ref)\n\n# Construct a reference to the \"international_education\" table\ntable_ref = dataset_ref.table(\"international_education\")\n\n# API request - fetch the table\ntable = client.get_table(table_ref)\n\n# Preview the first five lines of the \"international_education\" table\nclient.list_rows(table, max_results=5).to_dataframe()", "Exercises\nThe value in the indicator_code column describes what type of data is shown in a given row. \nOne interesting indicator code is SE.XPD.TOTL.GD.ZS, which corresponds to \"Government expenditure on education as % of GDP (%)\".\n1) Government expenditure on education\nWhich countries spend the largest fraction of GDP on education? \nTo answer this question, consider only the rows in the dataset corresponding to indicator code SE.XPD.TOTL.GD.ZS, and write a query that returns the average value in the value column for each country in the dataset between the years 2010-2017 (including 2010 and 2017 in the average). \nRequirements:\n- Your results should have the country name rather than the country code. You will have one row for each country.\n- The aggregate function for average is AVG(). 
Use the name avg_ed_spending_pct for the column created by this aggregation.\n- Order the results so the countries that spend the largest fraction of GDP on education show up first.\nIn case it's useful to see a sample query, here's a query you saw in the tutorial (using a different dataset):\n```\nQuery to find out the number of accidents for each day of the week\nquery = \"\"\"\n SELECT COUNT(consecutive_number) AS num_accidents, \n EXTRACT(DAYOFWEEK FROM timestamp_of_crash) AS day_of_week\n FROM bigquery-public-data.nhtsa_traffic_fatalities.accident_2015\n GROUP BY day_of_week\n ORDER BY num_accidents DESC\n \"\"\"\n```", "# Your code goes here\ncountry_spend_pct_query = \"\"\"\n SELECT _____\n FROM `bigquery-public-data.world_bank_intl_education.international_education`\n WHERE ____\n GROUP BY ____\n ORDER BY ____\n \"\"\"\n\n# Set up the query (cancel the query if it would use too much of \n# your quota, with the limit set to 1 GB)\nsafe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)\ncountry_spend_pct_query_job = client.query(country_spend_pct_query, job_config=safe_config)\n\n# API request - run the query, and return a pandas DataFrame\ncountry_spending_results = country_spend_pct_query_job.to_dataframe()\n\n# View top few rows of results\nprint(country_spending_results.head())\n\n# Check your answer\nq_1.check()", "For a hint or the solution, uncomment the appropriate line below.", "#q_1.hint()\n#q_1.solution()", "2) Identify interesting codes to explore\nThe last question started by telling you to focus on rows with the code SE.XPD.TOTL.GD.ZS. But how would you find more interesting indicator codes to explore?\nThere are 1000s of codes in the dataset, so it would be time consuming to review them all. But many codes are available for only a few countries. When browsing the options for different codes, you might restrict yourself to codes that are reported by many countries.\nWrite a query below that selects the indicator code and indicator name for all codes with at least 175 rows in the year 2016.\nRequirements:\n- You should have one row for each indicator code.\n- The columns in your results should be called indicator_code, indicator_name, and num_rows.\n- Only select codes with 175 or more rows in the raw database (exactly 175 rows would be included).\n- To get both the indicator_code and indicator_name in your resulting DataFrame, you need to include both in your SELECT statement (in addition to a COUNT() aggregation). This requires you to include both in your GROUP BY clause.\n- Order from results most frequent to least frequent.", "# Your code goes here\ncode_count_query = \"\"\"____\"\"\"\n\n# Set up the query\nsafe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)\ncode_count_query_job = client.query(code_count_query, job_config=safe_config)\n\n# API request - run the query, and return a pandas DataFrame\ncode_count_results = code_count_query_job.to_dataframe()\n\n# View top few rows of results\nprint(code_count_results.head())\n\n# Check your answer\nq_2.check()", "For a hint or the solution, uncomment the appropriate line below.", "#q_2.hint()\n#q_2.solution()", "Keep Going\nClick here to learn how to use AS and WITH to clean up your code and help you construct more complex queries." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
quasars100/Resonance_testing_scripts
python_tutorials/Exceptions.ipynb
gpl-3.0
[ "Catching close encounters and escaping planets using exceptions\nSometimes one is interested in catching a close encounter between two planets or planets escaping the planetary system. This can be easily done with REBOUND. What you do when a close encounter or an escape happens, is up to you.\nSome integrators are better suited to simulate close encounters than other. For example, the non-symplectic integrator IAS15 has an adaptive timestep scheme that resolves close encounters very well. Integrators that use a fixed timestep WHFast are more likely to miss close encounters.\nIn this tutorial we'll start with a two-planet system that will go unstable on a short timescale:", "import rebound\nimport numpy as np\ndef setupSimulation():\n rebound.reset()\n rebound.integrator = \"ias15\" # IAS15 is the default integrator, so we don't need this line\n rebound.add(m=1.)\n rebound.add(m=1e-3,a=1.)\n rebound.add(m=1e-3,a=1.25)\n rebound.move_to_com()", "Let's integrate this system for 100 orbital periods.", "setupSimulation()\nrebound.integrate(100.*2.*np.pi)", "Rebound exits the integration routine normally. We can now explore the final particle orbits:", "for o in rebound.calculate_orbits():\n print(o)", "We see that the orbits of both planets changed significantly and we can already speculate that there was a close encounter.\nLet's redo the simulation, but this time set the minD flag in the integrate routine. If this flag is set, then REBOUND calculates the minimum distance between any two particle pairs. If the distance is less than minD, then the integration is stopped and an exception thrown. The Hill radius is given by $r_{\\rm Hill} \\approx a \\sqrt{\\frac{m}{3M}}$ which is approximately 0.06 in our case. Let's set the breakout flag minD to roughly three Hill radii and see what happens:", "setupSimulation() # Resets everything\nrebound.integrate(100.*2.*np.pi, minD=0.2)", "As you see, we got an exception! Let's redo the simulation once again and store the particle distance while we're integrating. This time we'll also catch the exception with a try/except construct so that our script doesn't break.", "setupSimulation() # Resets everything\nNoutputs = 1000\ntimes = np.linspace(0,100.*2.*np.pi,Noutputs)\ndistances = np.zeros(Noutputs)\nps = rebound.particles # ps is now an array of pointers. It will update as the simulation runs.\ntry:\n for i,time in enumerate(times):\n rebound.integrate(time,minD=0.2)\n dx = ps[1].x - ps[2].x\n dy = ps[1].y - ps[2].y\n dz = ps[1].z - ps[2].z\n distances[i] = np.sqrt(dx*dx+dy*dy+dz*dz)\nexcept rebound.CloseEncounter as e:\n print(\"Close encounter detected at t=%f, between particles %d and %d.\" % (rebound.t, e.id1, e.id2))", "Let plot the distance as a function of time.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(10,5))\nax = plt.subplot(111)\nax.set_xlabel(\"time [orbits]\")\nax.set_xlim([0,rebound.t/(2.*np.pi)])\nax.set_ylabel(\"distance\")\nplt.plot(times/(2.*np.pi), distances);\nplt.plot([0.0,12],[0.2,0.2]) # Plot our close encounter criteria;", "We did indeed find the close enounter correctly. We could now do something with the two particles that collided. \nLet's to the simplest thing, let's merge them. 
To do that we'll first calculate our new merged planet coordinates, then clear all particles from REBOUND and finally add the new particles.", "import copy\ndef mergeParticles(id1,id2):\n old_ps = rebound.particles\n new_ps = []\n for i in range(rebound.N):\n if i!=id1 and i!=id2:\n new_ps.append(copy.deepcopy(old_ps[i])) \n mergedPlanet = rebound.Particle()\n mergedPlanet.m = old_ps[id1].m + old_ps[id2].m\n mergedPlanet.x = (old_ps[id1].m*old_ps[id1].x + old_ps[id2].m*old_ps[id2].x) /mergedPlanet.m\n mergedPlanet.y = (old_ps[id1].m*old_ps[id1].y + old_ps[id2].m*old_ps[id2].y) /mergedPlanet.m\n mergedPlanet.z = (old_ps[id1].m*old_ps[id1].z + old_ps[id2].m*old_ps[id2].z) /mergedPlanet.m\n mergedPlanet.vx = (old_ps[id1].m*old_ps[id1].vx + old_ps[id2].m*old_ps[id2].vx)/mergedPlanet.m\n mergedPlanet.vy = (old_ps[id1].m*old_ps[id1].vy + old_ps[id2].m*old_ps[id2].vy)/mergedPlanet.m\n mergedPlanet.vz = (old_ps[id1].m*old_ps[id1].vz + old_ps[id2].m*old_ps[id2].vz)/mergedPlanet.m\n new_ps.append(mergedPlanet)\n del(rebound.particles)\n rebound.add(new_ps)\n\nsetupSimulation() # Resets everything\nprint(\"Number of particles at the beginning of the simulation: %d.\"%rebound.N)\nfor i,time in enumerate(times):\n try:\n rebound.integrate(time,minD=0.2)\n except rebound.CloseEncounter as e:\n print(\"Close encounter detected at t=%f, between particles %d and %d. Merging.\" % (rebound.t, e.id1, e.id2))\n mergeParticles(e.id1,e.id2)\nprint(\"Number of particles at the end of the simulation: %d.\"%rebound.N)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Ruediger-Braun/compana16
Lektion13.ipynb
gpl-3.0
[ "Lektion 13", "from sympy import *\ninit_printing()\nfrom IPython.display import display", "Besselfunktionen", "x = Symbol('x')\ny = Function('y')\ndgl = Eq(y(x).diff(x, 2), -1/x*y(x).diff(x) + 1/x**2*y(x) +4*y(x))\ndgl\n\n#dsolve(dgl) # NotImplementedError\n\n#N = 8\nN=18\n\na = [Symbol('a'+str(j)) for j in range(N)]\n\nn = Symbol('n')\n\nys = sum([a[j]*x**j for j in range(N)]) \nys \n\ngl = dgl.subs(y(x), ys).doit()\ngl\n\np1 = (gl.lhs - gl.rhs).expand()\np1\n\np1.coeff(x**(-2))\n\np1.coeff(x**(-1))\n\np1.coeff(x, 1)\n\ngls = []\nfor j in range(N+1):\n glg = Eq(p1.coeff(x, j-2), 0)\n if glg != True:\n gls.append(glg)\ngls\n\n#solve(gls) #NotImplementedError\n\nLsg = solve(gls[:-1])\nLsg", "Die ungeraden a werden rückwärts gelöst. Das ist verwirrend.", "var = a.copy() # böse Falle\ndel var[1]\nvar\n\nLsg = solve(gls[:-1], var)\nLsg", "Wir hatten das beim ersten Mal mit $N=8$ gemacht. Das sind zu wenige Daten. Jetzt noch Mal mit $N=18$.\nAus Gründen, die ich nicht verstehe, muss man den Kernel zurücksetzen, bevor man mit dem neuen $N$ startet.", "#raise Unterbrechung\n\nLsg[a[1]] = a[1]\n\nq = [Lsg[a[2*j+3]]/Lsg[a[2*j+1]] for j in range(int(N/2)-2)]\ndisplay(q)\n\nliste = []\nfor j in range(int(N/2-2)):\n m = Lsg[a[2*j+1]]/Lsg[a[2*j+3]]\n liste.append(m/(j+2))\nliste", "Also\n$$ \\frac{a_{2j+3}}{a_{2j+1}} = \\frac1{(j+1)(j+2)} $$\nDas bedeutet\n$$ a_{2n+3} = \\prod_{j=0}^n \\frac1{(j+1)(j+2)} a_1 = \\frac{a_1}{(n+1)!(n+2)!}. $$\nProbe", "for j in range(int(N/2)-2):\n display((Lsg[a[2*j+1]], a[1]/factorial(j)/factorial(j+1)))\n\nS1 = Sum(x**(2*n+1)/factorial(n)/factorial(n+1), (n,0,oo))\nS1\n\nu = S1.doit()\nu\n\nsrepr(u)\n\nbesseli?", "Eine zweite Lösung müsste man erhalten können, indem man einen Ansatz aus einer Potenzreihe und dem Produkt aus dem Logarithmus und einer Potenzreihe macht.\nDas Reduktionsverfahren von d'Alembert führt auf ein schwieriges Integral.", "tmp = dgl.subs(y(x), u).doit()\ntmp", "http://dlmf.nist.gov\n$$ I_{\\nu-1}(z) - I_{\\nu+1}(z) = \\frac{2\\nu}z I_\\nu(z) $$", "(tmp.lhs - tmp.rhs).series(x, 0, 20)", "Pattern matching", "x = Symbol('x')\n\nx1 = Wild('x1')\n\n\npattern = sin(x1)\n\na = sin(2*x+5)\n\nm = a.match(pattern)\nm\n\nb = 2*sin(x1/2)*cos(x1/2)\nb\n\nb.subs(m)\n\ndef expand_sin_x_halbe(term):\n x1 = Wild('x1')\n pattern = sin(x1)\n ersetzung = 2*sin(x1/2)*cos(x1/2)\n m = term.match(pattern)\n if m:\n return ersetzung.subs(m)\n else:\n return term \n\nexpand_sin_x_halbe(sin(x/2))\n\nseries(expand_sin_x_halbe(sin(x)) - sin(x), x, 0, 20)\n\n\na = 2*sin(x)\nexpand_sin_x_halbe(a)\n\na.is_Mul\n\na.args\n\ndef expand_sin_x_halbe(ausdr):\n ausdr = S(ausdr)\n x1 = Wild('x1')\n pattern = sin(x1)\n ersetzung = 2*sin(x1/2)*cos(x1/2)\n m = ausdr.match(pattern)\n if m:\n res = ersetzung.subs(m)\n elif ausdr.is_Mul:\n res = 1\n for term in ausdr.args:\n res = res * expand_sin_x_halbe(term)\n elif ausdr.is_Add:\n res = 0\n for term in ausdr.args:\n res = res + expand_sin_x_halbe(term)\n else:\n res = ausdr\n return res\n\nexpand_sin_x_halbe(sin(2*x))\n\nexpand_sin_x_halbe(sin(2*x)/2)\n\nexpand_sin_x_halbe(1+10*sin((x+1)**2))\n\nausdr = (1 + sin(x/2))**3\n\nexpand_sin_x_halbe(ausdr)\n\nausdr.is_Pow", "Mit etwas Fleiß bekommt man ein vollständiges Ersetzungssystem" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
marcinofulus/PR2014
CUDA/iCSE_PR_Rownanie_Logistyczne.ipynb
gpl-3.0
[ "Diagram bifurkacyjny dla równania logistycznego $x \\to a x (1-x)$\nRównanie logistyczne jest niezwykle prostym równaniem iteracyjnym wykazującym zaskakująco złożone zachowanie. Jego własności są od lat siedemdziesiątych przedmiotem poważnych prac matematycznych. Pomimo tego wciąż wiele własności jest niezbadanych i zachowanie się rozwiązań tego równania jest dostępne tylko do analizy numerycznej.\nPoniższy przykład wykorzystuje pyCUDA do szybkiego obliczenia tak zwanego diagramu bifurkacyjnego równania logistycznego. Uzyskanie takiego diagramu wymaga jednoczesnej symulacji wielu równań z różnymi warunkami początkowymi i różnymi parametrami. Jest to idealne zadanie dla komputera równoległego.\nSposób implementacji\nPierwszą implementacja naszego algorytmu będzie zastosowanie szablonu jądra zwanego \n ElementwiseKernel\nJest to prosty sposób na wykonanie tej samej operacji na dużym wektorze danych.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nimport pycuda.gpuarray as gpuarray\n\nfrom pycuda.curandom import rand as curand\nfrom pycuda.compiler import SourceModule\nimport pycuda.driver as cuda\n\ntry:\n ctx.pop()\n ctx.detach()\nexcept:\n print (\"No CTX!\")\n\ncuda.init()\ndevice = cuda.Device(0)\nctx = device.make_context()\nprint (device.name(), device.compute_capability(),device.total_memory()/1024.**3,\"GB\")\nprint (\"a tak wogóle to mamy tu:\",cuda.Device.count(), \" urządzenia\")\n", "Jądro Elementwise\nZdefiniujemy sobie jądro, które dla wektora stanów początkowych, element po elementcie wykona iteracje rówania logistycznego. Ponieważ będziemy chcieli wykonać powyższe iteracje dla różnych parametrów $a$, zdefiniujemy nasze jądro tak by brało zarówno wektor wartości paramteru $a$ jak i wektor wartości początkowych. 
Ponieważ będziemy mieli tą samą wartość parametru $a$ dla wielu wartości początkowych to wykorzystamy użyteczną w tym przypadku funkcję numpy:\na = np.repeat(a,Nx)", "import numpy as np \nNx = 1024\nNa = 1024\n\na = np.linspace(3.255,4,Na).astype(np.float32)\na = np.repeat(a,Nx)\n\na_gpu = gpuarray.to_gpu(a)\nx_gpu = curand((Na*Nx,))\n\nfrom pycuda.elementwise import ElementwiseKernel\niterate = ElementwiseKernel(\n \"float *a, float *x\",\n \"x[i] = a[i]*x[i]*(1.0f-x[i])\",\n \"iterate\")\n\n\n%%time\nNiter = 1000\nfor i in range(Niter):\n iterate(a_gpu,x_gpu)\nctx.synchronize()\na,x = a_gpu.get(),x_gpu.get()\n\n\n\nplt.figure(num=1, figsize=(10, 6))\n\nevery = 10\nplt.plot(a[::every],x[::every],'.',markersize=1)\nplt.plot([3.83,3.83],[0,1])", "Algorytm z pętlą wewnątrz jądra CUDA\nNapiszmy teraz algorytm, który będzie iterował równanie Niter razy wewnątrz jednego wywołania jądra CUDA.", "import pycuda.gpuarray as gpuarray\n\nfrom pycuda.curandom import rand as curand\nfrom pycuda.compiler import SourceModule\nimport pycuda.driver as cuda\n\ntry:\n ctx.pop()\n ctx.detach()\nexcept:\n print( \"No CTX!\")\n\ncuda.init()\ndevice = cuda.Device(0)\nctx = device.make_context()\n\n\nmod = SourceModule(\"\"\"\n __global__ void logistic_iterations(float *a,float *x,int Niter)\n {\n \n int idx = threadIdx.x + blockDim.x*blockIdx.x;\n float a_ = a[idx];\n float x_ = x[idx];\n int i;\n for (i=0;i<Niter;i++){\n \n x_ = a_*x_*(1-x_);\n }\n \n x[idx] = x_;\n }\n \"\"\")\nlogistic_iterations = mod.get_function(\"logistic_iterations\")\n\n\nblock_size=128\nNx = 10240\nNa = 1024*2\nblocks = Nx*Na//block_size\n\na = np.linspace(3.255,4,Na).astype(np.float32)\na = np.repeat(a,Nx)\n\na_gpu = gpuarray.to_gpu(a)\nx_gpu = curand((Na*Nx,))\n\n%%time\nlogistic_iterations(a_gpu,x_gpu, np.int32(10000),block=(block_size,1,1), grid=(blocks,1,1))\nctx.synchronize()\n\n\na,x = a_gpu.get(),x_gpu.get()\n\nplt.figure(num=1, figsize=(9, 8))\nevery = 100\nplt.plot(a[::every],x[::every],'.',markersize=1,alpha=1)\nplt.plot([3.83,3.83],[0,1])\n\nH, xedges, yedges = np.histogram2d(a,x,bins=(1024,1024))\n\nplt.figure(num=1, figsize=(10,10))\n\nplt.imshow(1-np.log(H.T+5e-1),origin='lower',cmap='gray')", "Porównanie z wersją CPU\nDla porównania napiszemy prosty program, który oblicza iteracje równania logistycznego na CPU. 
Zatosujemy język cython, który umożliwia automatyczne skompilowanie funkcji do wydajnego kodu, którego wydajność jest porównywalna z kodem napisanym w języku C lub podobnym.\nW wyniku działania programu widzimy, że nasze jądro wykonuje obliczenia znacznie szybciej.", "%load_ext Cython\n\n%%cython\ndef logistic_cpu(double a = 3.56994):\n cdef double x\n \n cdef int i\n x = 0.1\n for i in range(1000*1024*1024):\n x = a*x*(1.0-x)\n \n return x\n\n%%time\nlogistic_cpu(1.235)\n\nprint(\"OK\")", "Wizualizacja wyników", "import matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\na1,a2 = 3,3.56994567\n\nNx = 1024\nNa = 1024\n\na = np.linspace(a1,a2,Na).astype(np.float32)\na = np.repeat(a,Nx)\n\na_gpu = gpuarray.to_gpu(a)\nx_gpu = curand((Na*Nx,))\nx = x_gpu.get()\n\nfig = plt.figure()\n\nevery = 1\nNiter = 10000\nfor i in range(Niter):\n if i%every==0:\n \n plt.cla()\n \n plt.xlim(a1,a2)\n\n plt.ylim(0,1)\n fig.suptitle(\"iteracja: %05d\"%i)\n plt.plot(a,x,'.',markersize=1)\n plt.savefig(\"/tmp/%05d.png\"%i)\n if i>10:\n every=2\n if i>30:\n every=10\n if i>100:\n every=50 \n if i>1000:\n every=500 \n iterate(a_gpu,x_gpu)\n ctx.synchronize()\n a,x = a_gpu.get(),x_gpu.get()\n\n\n\n%%sh \ncd /tmp\ntime convert -delay 20 -loop 0 *.png anim_double.gif && rm *.png\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\nblock_size=128\nNx = 1024*5\nNa = 1024*3\nblocks = Nx*Na//block_size\n\nnframes = 22\nfor i,(a1,a2) in enumerate(zip(np.linspace(3,3.77,nframes),np.linspace(4,3.83,nframes))):\n\n a = np.linspace(a1,a2,Na).astype(np.float32)\n a = np.repeat(a,Nx)\n\n a_gpu = gpuarray.to_gpu(a)\n x_gpu = curand((Na*Nx,))\n x = x_gpu.get()\n\n \n logistic_iterations(a_gpu,x_gpu, np.int32(10000),block=(block_size,1,1), grid=(blocks,1,1))\n ctx.synchronize()\n\n a,x = a_gpu.get(),x_gpu.get()\n H, xedges, yedges = np.histogram2d(a,x,bins=(np.linspace(a1,a2,1024),np.linspace(0,1,1024)))\n\n fig, ax = plt.subplots(figsize=[10,7])\n \n ax.imshow(1-np.log(H.T+5e-1),origin='lower',cmap='gray',extent=[a1,a2,0,1])\n #plt.xlim(a1,a2)\n #plt.ylim(0,1)\n ax.set_aspect(7/10*(a2-a1))\n #fig.set_size_inches(8, 5)\n\n fig.savefig(\"/tmp/zoom%05d.png\"%i)\n plt.close(fig)\n\n%%sh \ncd /tmp\ntime convert -delay 30 -loop 0 *.png anim_zoom.gif && rm *.png\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\nblock_size=128\nNx = 1024*5\nNa = 1024*3\nblocks = Nx*Na//block_size\n\na1,a2 = 1,4\nx1,x2 = 0., 1\n\na = np.linspace(a1,a2,Na).astype(np.float32)\na = np.repeat(a,Nx)\n\na_gpu = gpuarray.to_gpu(a)\nx_gpu = curand((Na*Nx,))\nx = x_gpu.get()\n\n\nlogistic_iterations(a_gpu,x_gpu, np.int32(10000),block=(block_size,1,1), grid=(blocks,1,1))\nctx.synchronize()\n\na,x = a_gpu.get(),x_gpu.get()\nH, xedges, yedges = np.histogram2d(a,x,bins=(np.linspace(a1,a2,1024),np.linspace(x1,x2,1024)))\n\nfig, ax = plt.subplots(figsize=[10,7])\n\nax.imshow(1-np.log(H.T+5e-1),origin='lower',cmap='gray',extent=[a1,a2,x1,x2])\n#plt.xlim(a1,a2)\n#plt.ylim(0,1)\nax.set_aspect(7/10*(a2-a1)/(x2-x1))\n#fig.set_size_inches(8, 5)\n\nfig.savefig(\"/tmp/zoom.png\")\nplt.close(fig)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hetaodie/hetaodie.github.io
assets/media/uda-ml/deep/shensd/IMDB数据/.ipynb_checkpoints/IMDB_In_Keras-zh-checkpoint.ipynb
mit
[ "使用 Keras 分析 IMDB 电影数据", "# Imports\nimport numpy as np\nimport keras\nfrom keras.datasets import imdb\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation\nfrom keras.preprocessing.text import Tokenizer\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnp.random.seed(42)", "1. 加载数据\n该数据集预先加载了 Keras,所以一个简单的命令就会帮助我们训练和测试数据。 这里有一个我们想看多少单词的参数。 我们已将它设置为1000,但你可以随时尝试设置为其他数字。", "# Loading the data (it's preloaded in Keras)\n(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)\n\nprint(x_train.shape)\nprint(x_test.shape)", "2. 检查数据\n请注意,数据已经过预处理,其中所有单词都包含数字,评论作为向量与评论中包含的单词一起出现。 例如,如果单词'the'是我们词典中的第一个单词,并且评论包含单词'the',那么在相应的向量中有 1。\n输出结果是 1 和 0 的向量,其中 1 表示正面评论,0 是负面评论。", "print(x_train[0])\nprint(y_train[0])", "3. 输出的 One-hot 编码\n在这里,我们将输入向量转换为 (0,1)-向量。 例如,如果预处理的向量包含数字 14,则在处理的向量中,第 14 个输入将是 1。", "# One-hot encoding the output into vector mode, each of length 1000\ntokenizer = Tokenizer(num_words=1000)\nx_train = tokenizer.sequences_to_matrix(x_train, mode='binary')\nx_test = tokenizer.sequences_to_matrix(x_test, mode='binary')\nprint(x_train[0])", "同时我们将对输出进行 one-hot 编码。", "# One-hot encoding the output\nnum_classes = 2\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\nprint(y_train.shape)\nprint(y_test.shape)", "4. 模型构建\n使用 sequential 在这里构建模型。 请随意尝试不同的层和大小! 此外,你可以尝试添加 dropout 层以减少过拟合。", "# TODO: Build the model architecture\n\n# TODO: Compile the model using a loss function and an optimizer.\n", "5. 训练模型\n运行模型。 你可以尝试不同的 batch_size 和 epoch 数量!", "# TODO: Run the model. Feel free to experiment with different batch sizes and number of epochs.", "6. 评估模型\n你可以在测试集上评估模型,这将为你提供模型的准确性。你得出的结果可以大于 85% 吗?", "score = model.evaluate(x_test, y_test, verbose=0)\nprint(\"Accuracy: \", score[1])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CheungChanDevCoder/blog
python/data_analyze/用pandas做数据分析.ipynb
mit
[ "用pandas做数据分析\n关于数据分析\n根据jetbrains公司2018年对python开发人员的调查, 从事数据分析的python使用者超过了\n web开发和自动化测试.\n\n在诸多数据科学的框架和库中,numpy pandas是最流行的 \n\n而numpy为pandas提供了基础的底层数据结构和处理函数, 用ndarray和ufunc解决了性能问题.\n## pandas的核心数据结构 Series 和 DataFrame\nSeries 是个定长的字典序列, 可以看成是只有一列的Excel, 或者数据库表里面的一行记录\nSeries有两个基本属性:index 和 values\nindex如果不指定默认是<code>[0,1,2,3...]</code> 也可以自己指定索引 <code>index=['a', 'b', 'c', 'd']</code>", "import pandas as pd\nx1 = pd.Series([1,2,3,4])\nx2 = pd.Series(data=[1,2,3,4], index=['a', 'b', 'c', 'd'])\nprint(\"x1\".center(100,\"*\"))\nprint(x1)\nprint(\"x2\".center(100,\"*\"))\nprint(x2)\n\n\nd = {'a':1, 'b':2, 'c':3, 'd':4}\nx3 = pd.Series(d)\nprint(x3)\n", "Dataframe 则类似于excel里面的一张表,或者数据库的一张表. 可以看出是一组相同的index组成的Series组成的一个dict. 或者说一个多列的excel表", "data = {'Chinese': [66, 95, 93, 90,80],'English': [65, 85, 92, 88, 90],'Math': [30, 98, 96, 77, 90]}\ndf1 = pd.DataFrame(data)\ndf2 = pd.DataFrame(data, \n index=['ZhangFei', 'GuanYu', 'ZhaoYun', 'HuangZhong', 'DianWei'], \n columns=['English', 'Math', \"Chinese\"])\nprint(\"df1\".center(100,\"*\"))\nprint(df1)\nprint(\"df2\".center(100,\"*\"))\nprint(df2)\n", "数据的导入和输出\npandas提供了非常简单的方式来读取excel csv 数据库 html pickle 甚至是剪贴板中的的数据成为pandas中的DataFrame类型, 也可以很方便的将DataFrame转换成dict list json 数据库 甚至是html里面", "print(\"列出当前路径\".center(100,\"*\"))\n!ls\nprint(\"用pandas读取csv\".center(100,\"*\"))\ndf = pd.read_csv(\"肉类热量表.csv\")\nprint(df)\n\ndf.to_excel(\"pandas导出的肉类热量表.xlsx\")\n!ls\n\n# 为了保证程序能像预料中那样再次运行, 删除掉生成的excel\n!rm pandas导出的肉类热量表.xlsx\n!ls\ndf", "数据清洗\n比方说有以下场景\n删除不必要的行 pandas提供了一个drop方法", "df[\"测试\"] = \"啦啦啦\"\ndf.loc[\"冰淇淋\"] = \"乱入\"\ndf\n\ndf.drop(index=[\"冰淇淋\"], inplace=True)\nprint(\"删除index\".center(100,\"*\"))\nprint(df)\ndf.drop(columns=[\"测试\"], inplace=True)\nprint(\"删除columns\".center(100,\"*\"))\nprint(df)", "对列名或者行名进行重命名操作, pandas提供了rename方法", "df.rename(columns={\"食品\":\"食品名称\",\"数量\":\"计量单位\"},inplace=True)\ndf", "有时候数据可能有重复的值, 可以使用drop_duplicates方法来去除", "df.loc[17] = [\"烧鸭\",\"1 份 (120 克)\",356]\ndf\n\ndf.drop_duplicates(subset=\"食品名称\",inplace=True)\ndf", "排序可以用sort_values", "df.sort_values(\"热量(大卡)\", inplace=True, ascending=False)\ndf", "做数据清洗的时候,可能由于是爬回来的数据, 数据不完整,有空的情况", "import numpy as np\ndf.loc[15,\"计量单位\"] = np.nan\ndf.isnull()\ndf\n\ndf = df.reset_index()\ndf", "做数据清洗的时候, 有时候可能想根据原有的列,做计算, 然后增加新列. 
我们模拟一下场景", "size = np.random.randint(1,20,size=17)\ndf[\"份数\"] = size\ndf", "我们希望计算出一列总热量来", "df[\"总热量\"] = df[\"热量(大卡)\"] * df[\"份数\"]\ndf", "数据统计\npandas 带了好多数据统计函数, 如果是不能执行的,比如算平均数不是数字的行会自动忽略", "print(\"count\".center(100, \"*\"))\nprint(df.count())\nprint(\"min\".center(100, \"*\"))\nprint(df.min())\nprint(\"sum\".center(100, \"*\"))\nprint(df.sum())\nprint(\"describe\".center(100, \"*\"))\nprint(df.describe())\nprint(df[\"热量(大卡)\"].min())", "数据表合并\nDataFrame就类似于数据库的表, 有时候希望做一些join操作", "df1 = pd.DataFrame({'name':['ZhangFei', 'GuanYu', 'a', 'b', 'c'], 'data1':range(5)})\ndf2 = pd.DataFrame({'name':['ZhangFei', 'GuanYu', 'A', 'B', 'C'], 'data2':range(5)})\nprint(\"df1\".center(100, \"*\"))\nprint(df1)\nprint(\"df2\".center(100, \"*\"))\nprint(df2)", "针对指定列进行连接", "df3 = pd.merge(df1, df2, on='name')\ndf3", "内连接, 左连接, 右连接 , 内连接", "print(\"inner\".center(100,\"*\"))\ndf3 = pd.merge(df1, df2, how='inner')\nprint(df3)\nprint(\"left\".center(100,\"*\"))\ndf3 = pd.merge(df1, df2, how='left')\nprint(df3)\nprint(\"right\".center(100,\"*\"))\ndf3 = pd.merge(df1, df2, how='right')\nprint(df3)\nprint(\"outer\".center(100,\"*\"))\ndf3 = pd.merge(df1, df2, how='outer')\nprint(df3)\n", "用sql操作pandas", "import pandas as pd\nfrom pandas import DataFrame\nfrom pandasql import sqldf\ndf1 = DataFrame({'name':['ZhangFei', 'GuanYu', 'a', 'b', 'c'], 'data1':range(5)})\nprint(\"df1\".center(100, \"*\"))\nprint(df1)\nsql = \"select * from df1 where name ='ZhangFei'\"\nprint(\"执行sql\".center(100, \"*\"))\nprint(sqldf(sql, globals()))\n", "将json导入到mysql", "df = pd.read_json(\"menzhen_jk.json\")\ndf\n\nfrom sqlalchemy import create_engine\n\n# mac下安装mysqlclient失败了, 至今没有装好, 不过可以用pymysql\nSQLALCHEMY_DATABASE_URI = 'mysql+pymysql://root:123456@localhost:3306/data_analyze?charset=utf8mb4'\nconn= create_engine(SQLALCHEMY_DATABASE_URI)\n\ndf.to_sql(\"menzhen_jk\", con=conn,if_exists='replace',index=False, chunksize=100)", "练习\n现在有两个csv, 一个是从s查询的结果, 有两列一个是url , 另一个是黑白 . 另一个csv是从url_detect接口查出来的. 一列是url 另一列是检出威胁的引擎的列表用逗号隔开的字符串, 有可能是空字符串或者Nan. 现在要求汇总这两个csv. 
如果url_detect接口里面的结果不是Nan或者是空字符串或者是字符串safe, 不是这三种情况结果就按黑, 否则就按s的结果.", "!ls\n!head url黑白.csv\n\ndf_url_detect = pd.read_csv(\"url黑白.csv\")\ndf_url_detect\n\ndf_s = pd.read_csv(\"url黑白_from_s.csv\")\ndf_s\n\nimport pandas as pd\nimport numpy as np\n\nnew_df = pd.merge(df_url_detect, df_s, on=\"url\")\n# new_df\n\nnew_df.rename(columns={\"黑白_x\": \"url_detect\", \"黑白_y\": \"s\"}, inplace=True)\n\n\ndef new_bw(df):\n df[\"黑白\"] = df[\"s\"]\n if df[\"黑白\"] != \"黑\":\n if not (df[\"url_detect\"] is np.nan or df[\"url_detect\"] == \"\" or df[\"url_detect\"] == \"safe\"):\n df[\"黑白\"] = \"黑\"\n if df[\"黑白\"] == \"未知\":\n df[\"黑白\"] = \"白\"\n return df\n\n\nnew_df = new_df.apply(new_bw, axis=1)\nnew_df\n\n\n\n# new_df.drop(columns=[\"url_detect\", \"s\"], inplace=True)\n\n\nnew_df.count()\n\nnew_df.to_csv(\"汇总黑白.csv\", index=False)\nnew_df.to_excel(\"汇总黑白.xlsx\", index=False)\n\n!ls\n\n!rm 汇总黑白.csv 汇总黑白.xlsx\n\n!ls", "方法二", "df_url_detect", "将NaN填充为safe就好解决了", "df_url_detect = df_url_detect.fillna(\"safe\")", "再看一下还有没有空白", "df_url_detect[\"黑白\"].unique()", "甚至可以看一下个数有多少", "df_url_detect[\"黑白\"].value_counts()", "实际上我们如果不知道哪个是最多的, 我们填充NAN值也经常用平均值或者出现个数最多的值来填充.怎样用出现次数最多的值填充呢", "max_bk = df_url_detect[\"黑白\"].value_counts().index[0]\nprint(max_bk)\ndf_url_detect[\"黑白\"].fillna(max_bk , inplace= True)\ndf_url_detect\n\nnew_df = pd.merge(df_url_detect, df_s, on=\"url\")\nnew_df.rename(columns={\"黑白_x\": \"url_detect\", \"黑白_y\": \"s\"}, inplace=True)\nnew_df\n\nnew_df[\"黑白\"] = np.where(new_df[\"url_detect\"] != \"safe\", \"黑\", new_df[\"s\"])\nnew_df", "发现黑白这一列里面有未知, 应该改成白", "new_df[new_df[\"黑白\"] == \"未知\"]\n\nnew_df.loc[new_df[\"黑白\"] == \"未知\", \"黑白\"] = \"白\"\nnew_df.drop(columns=[\"url_detect\", \"s\"], inplace=True)\nnew_df", "秒出", "new_df[\"黑白\"].value_counts()\n\nnew_df.to_csv" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GuillaumeDec/machine-learning
deep-lstm-rnn-anomaly-detector/rnn_cloudmle.ipynb
gpl-3.0
[ "<h1> Time series prediction using RNNs, with TensorFlow and Cloud ML Engine </h1>\n\nThis notebook illustrates:\n<ol>\n<li> Creating a Recurrent Neural Network in TensorFlow\n<li> Creating a Custom Estimator in tf.contrib.learn \n<li> Training on Cloud ML Engine\n</ol>\n\n<p>\n\n<h3> Simulate some time-series data </h3>\n\nEssentially a set of sinusoids with random amplitudes and frequencies.", "!pip install --upgrade tensorflow\n\nimport tensorflow as tf\nprint tf.__version__\n\nimport numpy as np\nimport tensorflow as tf\nimport seaborn as sns\nimport pandas as pd\n\nSEQ_LEN = 10\ndef create_time_series():\n freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6\n ampl = np.random.random() + 0.5 # 0.5 to 1.5\n x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl\n return x\n\nfor i in xrange(0, 5):\n sns.tsplot( create_time_series() ); # 5 series\n\ndef to_csv(filename, N):\n with open(filename, 'w') as ofp:\n for lineno in xrange(0, N):\n seq = create_time_series()\n line = \",\".join(map(str, seq))\n ofp.write(line + '\\n')\n\nto_csv('train.csv', 1000) # 1000 sequences\nto_csv('valid.csv', 50)\n\n!head -5 train.csv valid.csv", "<h2> RNN </h2>\n\nFor more info, see:\n<ol>\n<li> http://colah.github.io/posts/2015-08-Understanding-LSTMs/ for the theory\n<li> https://www.tensorflow.org/tutorials/recurrent for explanations\n<li> https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb for sample code\n</ol>\n\nHere, we are trying to predict from 8 values of a timeseries, the next two values.\n<p>\n\n<h3> Imports </h3>\n\nSeveral tensorflow packages and shutil", "import tensorflow as tf\nimport shutil\nimport tensorflow.contrib.learn as tflearn\nimport tensorflow.contrib.layers as tflayers\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nimport tensorflow.contrib.metrics as metrics\nimport tensorflow.contrib.rnn as rnn", "<h3> Input Fn to read CSV </h3>\n\nOur CSV file structure is quite simple -- a bunch of floating point numbers (note the type of DEFAULTS). We ask for the data to be read BATCH_SIZE sequences at a time. The Estimator API in tf.contrib.learn wants the features returned as a dict. We'll just call this timeseries column 'rawdata'.\n<p>\nOur CSV file sequences consist of 10 numbers. We'll assume that 8 of them are inputs and we need to predict the next two.", "DEFAULTS = [[0.0] for x in xrange(0, SEQ_LEN)]\nBATCH_SIZE = 20\nTIMESERIES_COL = 'rawdata'\nN_OUTPUTS = 2 # in each sequence, 1-8 are features, and 9-10 is label\nN_INPUTS = SEQ_LEN - N_OUTPUTS", "Reading data using the Estimator API in tf.learn requires an input_fn. This input_fn needs to return a dict of features and the corresponding labels.\n<p>\nSo, we read the CSV file. The Tensor format here will be batchsize x 1 -- entire line. We then decode the CSV. At this point, all_data will contain a list of Tensors. Each tensor has a shape batchsize x 1. There will be 10 of these tensors, since SEQ_LEN is 10.\n<p>\nWe split these 10 into 8 and 2 (N_OUTPUTS is 2). Put the 8 into a dict, call it features. 
The other 2 are the ground truth, so labels.", "# read data and convert to needed format\ndef read_dataset(filename, mode=tf.contrib.learn.ModeKeys.TRAIN): \n def _input_fn():\n num_epochs = 100 if mode == tf.contrib.learn.ModeKeys.TRAIN else 1\n \n # could be a path to one file or a file pattern.\n input_file_names = tf.train.match_filenames_once(filename)\n \n filename_queue = tf.train.string_input_producer(\n input_file_names, num_epochs=num_epochs, shuffle=True)\n reader = tf.TextLineReader()\n _, value = reader.read_up_to(filename_queue, num_records=BATCH_SIZE)\n\n value_column = tf.expand_dims(value, -1)\n print 'readcsv={}'.format(value_column)\n \n # all_data is a list of tensors\n all_data = tf.decode_csv(value_column, record_defaults=DEFAULTS) \n inputs = all_data[:len(all_data)-N_OUTPUTS] # first few values\n label = all_data[len(all_data)-N_OUTPUTS : ] # last few values\n \n # from list of tensors to tensor with one more dimension\n inputs = tf.concat(inputs, axis=1)\n label = tf.concat(label, axis=1)\n print 'inputs={}'.format(inputs)\n \n return {TIMESERIES_COL: inputs}, label # dict of features, label\n return _input_fn", "<h3> Define RNN </h3>\n\nA recursive neural network consists of possibly stacked LSTM cells.\n<p>\nThe RNN has one output per input, so it will have 8 output cells. We use only the last output cell, but rather use it directly, we do a matrix multiplication of that cell by a set of weights to get the actual predictions. This allows for a degree of scaling between inputs and predictions if necessary (we don't really need it in this problem).\n<p>\nFinally, to supply a model function to the Estimator API, you need to return a ModelFnOps. The rest of the function creates the necessary objects.", "LSTM_SIZE = 3 # number of hidden units in each of the LSTM cells\n\n# create the inference model\ndef simple_rnn(features, targets, mode):\n # 0. Reformat input shape to become a sequence\n x = tf.split(features[TIMESERIES_COL], N_INPUTS, 1)\n #print 'x={}'.format(x)\n \n # 1. configure the RNN\n lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)\n outputs, _ = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)\n\n # slice to keep only the last cell of the RNN\n outputs = outputs[-1]\n #print 'last outputs={}'.format(outputs)\n \n # output is result of linear activation of last layer of RNN\n weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))\n bias = tf.Variable(tf.random_normal([N_OUTPUTS]))\n predictions = tf.matmul(outputs, weight) + bias\n \n # 2. loss function, training/eval ops\n if mode == tf.contrib.learn.ModeKeys.TRAIN or mode == tf.contrib.learn.ModeKeys.EVAL:\n loss = tf.losses.mean_squared_error(targets, predictions)\n train_op = tf.contrib.layers.optimize_loss(\n loss=loss,\n global_step=tf.contrib.framework.get_global_step(),\n learning_rate=0.01,\n optimizer=\"SGD\")\n eval_metric_ops = {\n \"rmse\": tf.metrics.root_mean_squared_error(targets, predictions)\n }\n else:\n loss = None\n train_op = None\n eval_metric_ops = None\n \n # 3. Create predictions\n predictions_dict = {\"predicted\": predictions}\n \n # 4. return ModelFnOps\n return tflearn.ModelFnOps(\n mode=mode,\n predictions=predictions_dict,\n loss=loss,\n train_op=train_op,\n eval_metric_ops=eval_metric_ops)", "<h3> Experiment </h3>\n\nDistributed training is launched off using an Experiment. The key line here is that we use tflearn.Estimator rather than, say tflearn.DNNRegressor. This allows us to provide a model_fn, which will be our RNN defined above. 
Note also that we specify a serving_input_fn -- this is how we parse the input data provided to us at prediction time.", "def get_train():\n return read_dataset('train.csv', mode=tf.contrib.learn.ModeKeys.TRAIN)\n\ndef get_valid():\n return read_dataset('valid.csv', mode=tf.contrib.learn.ModeKeys.EVAL)\n\ndef serving_input_fn():\n feature_placeholders = {\n TIMESERIES_COL: tf.placeholder(tf.float32, [None, N_INPUTS])\n }\n \n features = {\n key: tf.expand_dims(tensor, -1)\n for key, tensor in feature_placeholders.items()\n }\n features[TIMESERIES_COL] = tf.squeeze(features[TIMESERIES_COL], axis=[2])\n \n print 'serving: features={}'.format(features[TIMESERIES_COL])\n \n return tflearn.utils.input_fn_utils.InputFnOps(\n features,\n None,\n feature_placeholders\n )\n\nfrom tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils\ndef experiment_fn(output_dir):\n # run experiment\n return tflearn.Experiment(\n tflearn.Estimator(model_fn=simple_rnn, model_dir=output_dir),\n train_input_fn=get_train(),\n eval_input_fn=get_valid(),\n eval_metrics={\n 'rmse': tflearn.MetricSpec(\n metric_fn=metrics.streaming_root_mean_squared_error\n )\n },\n export_strategies=[saved_model_export_utils.make_export_strategy(\n serving_input_fn,\n default_output_alternative_key=None,\n exports_to_keep=1\n )]\n )\n\nshutil.rmtree('outputdir', ignore_errors=True) # start fresh each time\nlearn_runner.run(experiment_fn, 'outputdir')", "<h3> Standalone Python module </h3>\n\nTo train this on Cloud ML Engine, we take the code in this notebook, make an standalone Python module.", "%bash\n# run module as-is\nREPO=$(pwd)\necho $REPO\nrm -rf outputdir\nexport PYTHONPATH=${PYTHONPATH}:${REPO}/simplernn\npython -m trainer.task \\\n --train_data_paths=\"${REPO}/train.csv*\" \\\n --eval_data_paths=\"${REPO}/valid.csv*\" \\\n --output_dir=${REPO}/outputdir \\\n --job-dir=./tmp", "Try out online prediction. This is how the REST API will work after you train on Cloud ML Engine", "%writefile test.json\n{\"rawdata\": [0.0,0.0527,0.10498,0.1561,0.2056,0.253,0.2978,0.3395]}\n\n%bash\nMODEL_DIR=$(ls ./outputdir/export/Servo/)\ngcloud ml-engine local predict --model-dir=./outputdir/export/Servo/$MODEL_DIR --json-instances=test.json", "<h3> Cloud ML Engine </h3>\n\nNow to train on Cloud ML Engine.", "%bash\n# run module on Cloud ML Engine\nREPO=$(pwd)\nBUCKET=cloud-training-demos-ml # CHANGE AS NEEDED\nOUTDIR=gs://${BUCKET}/simplernn/model_trained\nJOBNAME=simplernn_$(date -u +%y%m%d_%H%M%S)\nREGION=us-central1\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n --region=$REGION \\\n --module-name=trainer.task \\\n --package-path=${REPO}/simplernn/trainer \\\n --job-dir=$OUTDIR \\\n --staging-bucket=gs://$BUCKET \\\n --scale-tier=BASIC \\\n --runtime-version=1.2 \\\n -- \\\n --train_data_paths=\"gs://${BUCKET}/train.csv*\" \\\n --eval_data_paths=\"gs://${BUCKET}/valid.csv*\" \\\n --output_dir=$OUTDIR \\\n --num_epochs=100", "<h2> Variant: long sequence </h2>\n\nTo create short sequences from a very long sequence.", "import tensorflow as tf\nimport numpy as np\n\ndef breakup(sess, x, lookback_len):\n N = sess.run(tf.size(x))\n windows = [tf.slice(x, [b], [lookback_len]) for b in xrange(0, N-lookback_len)]\n windows = tf.stack(windows)\n return windows\n\nx = tf.constant(np.arange(1,11, dtype=np.float32))\nwith tf.Session() as sess:\n print 'input=', x.eval()\n seqx = breakup(sess, x, 5)\n print 'output=', seqx.eval()", "Copyright 2017 Google Inc. 
Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
w4zir/ml17s
assignments/.ipynb_checkpoints/assignment02-logistic-regression-and-neural-network-checkpoint.ipynb
mit
[ "CSAL4243: Introduction to Machine Learning\nMuhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)\nAssignment 2:\nDigits Recognition using Logistic Regression & Neural Networks\nIn this assignment you are going to use Logistic Regression and Neural Networks. You are going to use digits dataset from digits recognition competition on kaggle. First task is to train a logistic regression model from scikit learn on the training dataset and then predict the labels of the given test dataset and submit it to kaggle. Then you are going to play around with the regularization parameter of logistic regression and see if it has any effect on your results. Later you are going to use neural networks from scikit learn and train it on the same dataset and use the trained model to predict the labels of the test dataset and submit the results to kaggle. You will need to report the results of neural networks as well. Lastly you will create some handwritten digits using a drawing software like MS paint or even write it on a paper and take a picture of it and see how good your trained model works on it. \nNote:\nThe given images are grey scale and has digits written in white, make sure your generated digits are of the same format.\nOverview\n\nDigit Recognizer Dataset\nTasks\nResources\nCredits\n\n<br>\n<br>\nDigit Recognizer Dataset\nThe dataset you are going to use in this assignment is called Digit Recognizer, available at kaggle. To download the dataset go to dataset data tab. Download 'train.csv', 'test.csv' and 'sample_submission.csv.gz' files. 'train.csv' is going to be used for training the model. 'test.csv' is used to test the model i.e. generalization. 'sample_submission.csv.gz' contain sample submission file that you need to generate to be submitted to kaggle.\nNote:\nThare are some tutorials available at the dataset tutorial section which you can use as a starting point. Specially the A beginner’s approach to classification which uses scikit learn's SVM classifier. You can replace it with logistic regression and neural network. You can download the notebook by clicking fork notebook first and then download button.\n<br>\nTasks\n\nUse scikit learn logistic regression to train on digit recognizer dataset from kaggle competition. Submit your best result to the competition and report result.\nUse different values of regularization parameter (parameter C which is inverse of regularization parameter i.e. C = $\\frac{1}{\\lambda}$) in logistic regression and report the effect.\nUse scikit learn neural network to train on digit recognizer dataset and subimit your best result.\nHand draw digits using any drawing software with black background and white font and test it on the trained model above and report results.\n\nNote:\n\nIf your system takes too much time on training then reduce training data. Around 5000 examples are enough to get a good classifier.\nIt is a good idea to convert images to binary values i.e. 0's and 1's. \nSince dataset include images of $28\\times 28$ dimensions, you should use opencv libaray for image resize if needed in task 4. You can download it as anaconda package.\n\nImage resize using opencv", "import cv2\nimg = cv2.imread('test.png',0)\nresized_image = cv2.resize(img, (28, 28), interpolation = cv2.INTER_AREA) ", "convert grey scale image to binary\ncovert every non zero value to one.", "test_images[test_images>0]=1\ntrain_images[train_images>0]=1", "Resources\nCourse website: https://w4zir.github.io/ml17s/\nCourse resources\nCredits\nRaschka, Sebastian. 
Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print.\nNg, Andrew. Machine Learning. Coursera.\nScikit-learn: Linear Regression." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
ThomasChauve/aita
Exemple/Documentation.ipynb
gpl-3.0
[ "This documention is made from a jupyter notebook available in 'Exemple/Documentation.ipynb'\nLoad data from G50 analyser\nLoading data from G50 analyser after you convert binary file into ASCII file with 5 columuns.", "import AITAToolbox.loadData_aita as lda\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nimport matplotlib.patches as patches\nimport numpy as np\nimport math\nimport scipy\n%matplotlib widget", "Without 'micro_test.bmp'", "data=lda.aita5col('orientation_test.dat')", "With 'micro_test.bmp'\nThe ‘micro_test.bmp’ file should be a black and white image with the grains boundaries being white. Therefore you should use :\npython\ndata=lda.aita5col('orientation_test.dat','micro.test.bmp')\nBasic treatment\nCroping\nIt can be usefull to select a sub area. Within a juptyter notebook use interactive_crop(). In ipython use just crop function.", "out=data.interactive_crop(new=True)", "Once you did 'Export crop' the position of the rectangle can be find in :\npython\nout.pos=np.array([xmin,xmax,ymin,ymax])", "ss=np.shape(data.phi.field)\nres=data.phi.res\n\nfig,ax=plt.subplots()\ndata.phi1.plot()\nrect=patches.Rectangle((out.pos[0]*res, (ss[0]-out.pos[3])*res), (out.pos[1]-out.pos[0])*res, (out.pos[3]-out.pos[2])*res, linewidth=1, edgecolor='b', facecolor='none')\nplt.title('Full data with crop area')\nax.add_patch(rect)\nplt.subplots()\nout.crop_data.phi1.plot()\nplt.title('Crop data')", "Segmentation of microstrucuture\nIf you want to do it. It should not be use if you already load the microstrucure.", "plt.figure(figsize=(10,10))\nseg=out.crop_data.interactive_segmentation()", "Once you did 'Export AITA' you can save the microstrucuture using :\npython\ndata.micro.save_bmp('micro')\nYou can also see the parameter used for the segmentation in res.", "print('Use scharr:',seg.use_scharr)\nprint('Value scharr:',seg.val_scharr)\nprint('Use canny:',seg.use_canny)\nprint('Value canny:',seg.val_canny)\nprint('Images canny:',seg.img_canny)\nprint('Use quality:',seg.use_quality)\nprint('Value quality:',seg.val_quality)\nprint('Include border:',seg.include_border)", "Filter the data\nThis function filter the bad indexed value. Using G50 analyser a quelity factor is given between 0 and 100. Usualy using data with a quality factor higher than 75 is a good option.", "data.filter(75)", "Colormap\nPlotting the colormap with the grains boundaries\nFull ColorWheel\nAdvantages\n1. The full colorwheel has unique relation between color and orientation.\nInconveniants\n1. The color are discountinous for $v=\\left[x,y,z=0 \\pm \\varepsilon\\right]$", "plt.figure(figsize=(8,8))\ndata.plot()\ndata.micro.plotBoundary(dilatation=2)\nplt.title('Colormap')", "The associated full colorwheel :", "plt.figure(figsize=(2,2))\nplt.imshow(lda.aita.lut())\nplt.axis('off')\nplt.title('LUT')", "Semi ColorWheel\nAdvantage\n1. No color discontinuity for $v=\\left[x,y,z=0 \\pm \\varepsilon\\right]$\nInconvinent\n\nThe semi colorwheel has non unique relation between color and orientation.", "plt.figure(figsize=(8,8))\ndata.plot(semi=True)\ndata.micro.plotBoundary(dilatation=2)\nplt.title('Colormap')", "The associated full colorwheel :", "plt.figure(figsize=(2,2))\nplt.imshow(lda.aita.lut(semi=True))\nplt.axis('off')\nplt.title('LUT')", "Pole figure\nThere is various option to plot the pole figure here we focus on some of them but to see all of them refer to the documentation of plotpdf function.\nThe color coding of the pole figure is obtain using a Kernel Density Estimation (KDE). 
This KDE has to be manipulating carrefully. If you want to have a basic idea of what is a KDE you can look at https://mathisonian.github.io/kde/.\nRepresentation\nPole figure all sample\nHere some of the option are shown as contour plot, and with or without circle for specific angle.\nBe aware that to reduce the computation time we only used by default 10000 orientations selected randomly. You can modify this using 'nbp' value. If you set nbp to 0 it use all the data.", "plt.figure(figsize=(7,7),dpi=160)\nplt.subplot(2,2,1)\ndata.plotpdf(contourf=True,angle=0,cm2=cm.gray)\nplt.subplot(2,2,2)\ndata.plotpdf(contourf=True)\nplt.subplot(2,2,3)\ndata.plotpdf(angle=0)\nplt.subplot(2,2,4)\ndata.plotpdf()", "Kernel Density Estimation\nIf you want to have an idea of a basic KDE in one dimention refer to https://mathisonian.github.io/kde/\nHere there is some specificities du to the fact that we are computing KDE on a sphere. To do so we are using sklearn.neighbors.KernelDensity (https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KernelDensity.html). \nThe 'metric' is set to 'haversine' for spherical computation (for exemple see https://scikit-learn.org/stable/auto_examples/neighbors/plot_species_kde.html#sphx-glr-auto-examples-neighbors-plot-species-kde-py)\nWe are using a gaussian kernel.\nWarning : The 'bandwidth' parameter is crutial parameter to set. It can have a strong influence on your pole figure and you interpretation. You should set it up carefully and be critic on your pole figure. Here we show different pole figure for different bandwidth value using the same data as input.", "plt.figure(figsize=(7,7),dpi=160)\nplt.subplot(2,2,1)\ndata.plotpdf(contourf=True,angle=0,bw=0.05)\nplt.title('bw=0.05')\nplt.subplot(2,2,2)\ndata.plotpdf(contourf=True,angle=0,bw=0.1)\nplt.title('bw=0.1')\nplt.subplot(2,2,3)\ndata.plotpdf(contourf=True,angle=0,bw=0.3)\nplt.title('bw=0.3')\nplt.subplot(2,2,4)\ndata.plotpdf(contourf=True,angle=0,bw=2.0)\nplt.title('bw=2')", "Misorientation profile", "res=data.interactive_misorientation_profile()\n\nplt.figure()\nplt.plot(res.x,res.mis2o,'b-',label='mis2o')\nplt.plot(res.x,res.mis2p,'k-',label='mis2p')\nplt.grid()\nplt.legend()\nplt.xlabel('Distance')\nplt.ylabel('Angle')", "Grelon function\nThe grelon function compute the angle between the c-axis at each pixel and the unit radial vector from the center. The center is given by the user.\nUsing the interactive_grelon function you can click as many time as you want. When you push export, it will compute the angle using the last click (input)", "grelon=data.interactive_grelon()\n\n#You can plot the angle map in degree using.\nplt.figure()\ngrelon.map.plot()\n#You can find the center use for the computation\nplt.plot(grelon.center[0],grelon.center[1],'ks')", "Misorientation function", "angle=out.crop_data.misorientation(filter_angle=5*math.pi/180)\nrandom_angle=out.crop_data.misorientation(filter_angle=5*math.pi/180,random=True)\n\nkernel_a = scipy.stats.gaussian_kde(angle)\nxeval_a=np.linspace(0,math.pi/2,180)\nyeval_a=kernel_a(xeval_a)\n\nkernel_ra = scipy.stats.gaussian_kde(random_angle)\nxeval_ra=np.linspace(0,math.pi/2,180)\nyeval_ra=kernel_ra(xeval_ra)\n\nplt.figure()\nplt.plot(xeval_a,yeval_a)\nplt.plot(xeval_ra,yeval_ra)\n\nangle.shape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.18/_downloads/7bb2e6f1056f5cae3a98ccc12aac266f/plot_eeg_no_mri.ipynb
bsd-3-clause
[ "%matplotlib inline", "EEG forward operator with a template MRI\nThis tutorial explains how to compute the forward operator from EEG data\nusing the standard template MRI subject fsaverage.\n.. important:: Source reconstruction without an individual T1 MRI from the\n subject will be less accurate. Do not over interpret\n activity locations which can be off by multiple centimeters.\n<div class=\"alert alert-info\"><h4>Note</h4><p>`plot_montage` show all the standard montages in MNE-Python.</p></div>\n:depth: 2", "# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Joan Massich <mailsik@gmail.com>\n#\n# License: BSD Style.\n\nimport os.path as op\n\nimport mne\nfrom mne.datasets import eegbci\nfrom mne.datasets import fetch_fsaverage\n\n# Download fsaverage files\nfs_dir = fetch_fsaverage(verbose=True)\nsubjects_dir = op.dirname(fs_dir)\n\n# The files live in:\nsubject = 'fsaverage'\ntrans = op.join(fs_dir, 'bem', 'fsaverage-trans.fif')\nsrc = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')\nbem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')", "Load the data\nWe use here EEG data from the BCI dataset.", "raw_fname, = eegbci.load_data(subject=1, runs=[6])\nraw = mne.io.read_raw_edf(raw_fname, preload=True)\n\n\n# Clean channel names to be able to use a standard 1005 montage\nch_names = [c.replace('.', '') for c in raw.ch_names]\nraw.rename_channels({old: new for old, new in zip(raw.ch_names, ch_names)})\n\n# Read and set the EEG electrode locations\nmontage = mne.channels.read_montage('standard_1005', ch_names=raw.ch_names,\n transform=True)\n\nraw.set_montage(montage)\nraw.set_eeg_reference(projection=True) # needed for inverse modeling\n\n# Check that the locations of EEG electrodes is correct with respect to MRI\nmne.viz.plot_alignment(\n raw.info, src=src, eeg=['original', 'projected'], trans=trans, dig=True)", "Setup source space and compute forward", "fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,\n bem=bem, eeg=True, mindist=5.0, n_jobs=1)\nprint(fwd)\n\n# for illustration purposes use fwd to compute the sensitivity map\neeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')\neeg_map.plot(time_label='EEG sensitivity', subjects_dir=subjects_dir,\n clim=dict(lims=[5, 50, 100]))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
varnion/sabesPy
rascunho.ipynb
bsd-3-clause
[ "# baixei o grafico da noticia em 2015-03-15 às 01:02:03\n#!wget http://s2.glbimg.com/dSOq2KmDstd2L_434azcbAHSPjc=/s.glbimg.com/jo/g1/f/original/2015/03/14/reservatorios1403.jpg", "Acho que esse gráfico do G1 tá errado\nG1 São Paulo\nNível de água do Cantareira vai a 14,5%; todos reservatórios sobem\nlink da notícia\n<!---![g1](reservatorios1403.jpg)-->", "from IPython.display import display, Image\n\n## eis a imagem da notícia\ninfograficoG1 = Image('reservatorios1403.jpg')\ndisplay(infograficoG1)", "A sabesp disponibiliza dados para consulta neste endereço, mas não faço idéia de como pegar os dados com o python... \nainda bem que uma boa alma já fez uma api que dá conta do serviço!", "import urllib.request\nreq = urllib.request.urlopen(\"https://sabesp-api.herokuapp.com/\").read().decode()\n\nimport json\ndata = json.loads(req)\n\nimport datetime as dt\nprint('dados disponibilizados pela sabesb hoje, %s \\n-----' % dt.date.today())\nfor x in data:\n print (x['name'])\n for i in range(len(x['data'])):\n item = x['data'][i]\n print ('item %d) %35s = %s' % (i, item['key'], item['value']))\n \n #print ( [item['value'] for item in x['data'] ])\n print('-----')\n\n## com isso posso usar list comprehension para pegar os dados que me interessam\n[ (x['name'], x['data'][0]['value']) for x in data ] \n\nimport datetime as dt\n# datas usadas no grafico do G1\ntoday = dt.date(2015,3,14)\nyr = dt.timedelta(days=365)\nlast_year = today - yr\n\ntoday=today.isoformat()\nlast_year=last_year.isoformat()\n\ndef getData(date):\n \"\"\"recebe um objeto date ou uma string com a data no \n formato YYYY-MM-DD e retorna uma 'Série' (do pacote pandas)\n com os níveis dos reservatórios da sabesp\"\"\"\n \n# def parsePercent(s):\n# \"\"\"recebe uma string no formato '\\d*,\\d* %' e retorna o float equivalente\"\"\"\n# return float(s.replace(\",\",\".\").replace(\"%\",\"\"))\n# da pra fazer com o lambda tbm, huehue\n fixPercent = lambda s: float(s.replace(\",\",\".\").replace(\"%\",\"\"))\n \n import datetime\n if type(date) == datetime.date:\n date = date.isoformat()\n \n ## requisição\n import urllib.request\n req = urllib.request.urlopen(\"https://sabesp-api.herokuapp.com/\" + date).read().decode()\n \n ## transforma o json em dicionario\n import json\n data = json.loads(req)\n \n ## serie\n dados = [ fixPercent(x['data'][0]['value']) for x in data ]\n sistemas = [ x['name'] for x in data ]\n \n import pandas as pd\n return pd.Series(dados, index=sistemas, name=date) \n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns ## só pra deixar o matplotlib com o estilo bonitão do seaborn ;)\nsns.set_context(\"talk\")\n#pd.options.display.mpl_style = 'default'\n\ndf = pd.DataFrame([getData(today), getData(last_year)]) #, index=[today, last_year])\n\ndf.T.plot(kind='bar', rot=0, figsize=(8,4))\nplt.show()\n\ndf", "OK. Tudo certo. Bate com os gráficos mostrados pelo G1, apenas está sendo mostrado de uma forma diferente. \nSó temos um pequeno problema aí: esses percentuais são em relação à capacidade reservatório na data consultada. Acontece que, pelo menos para o Cantareira e o Alto Tietê, esse volume VARIA (volume morto mandou um oi). \nVejam:", "datas = [last_year,\n '2014-05-15', # pré-volume morto\n '2014-05-16', # estréia da \"primeira reserva técnica\", a.k.a. 
volume morto\n '2014-07-12',\n '2014-10-23',\n '2014-10-24', # \"segunda reserva técnica\" ou \"VOLUME MORTO 2: ELECTRIC BOOGALOO\"\n '2015-01-01', # feliz ano novo ?\n today]\nimport numpy as np\ndf = pd.DataFrame(pd.concat(map(getData, datas), axis=1))\n\ndf = df.T\n\ndf\n\ndef plotSideBySide(dfTupl, cm=['Spectral', 'coolwarm']):\n fig, axes = plt.subplots(1,2, figsize=(17,5))\n\n for i, ax in enumerate(axes):\n dfTupl[i].ix[:].T.plot(\n kind='bar', ax=ax,\n rot=0, colormap=cm[i])\n\n #ax[i].\n\n for j in range(len(dfTupl[i].columns)):\n itens = dfTupl[i].ix[:,j]\n y = 0\n if itens.max() > 0:\n y = itens.max()\n ax.text(j, y +0.5, \n '$\\Delta$\\n{:0.1f}%'.format(itens[1] - itens[0]), \n ha='center', va='bottom', \n fontsize=14, color='k')\n\n plt.show()\n\n#%psource plotaReservatecnica\n\ndados = df.ix[['2014-05-15','2014-05-16']], df.ix[['2014-10-23','2014-10-24']]\n\nplotSideBySide(dados)", "o cantareira tem capacidade total de quase 1 trilhão de litros, segundo a matéria do G1. \nEntão, entre os dias 15 e 16 de março, POOF: 180 bilhões de litros surgiram num passe de mágica!\nDepois, em outubro, POOF. Surgem mais 100 bilhões. \nQUE BRUXARIA É ESSA?!?\nO próprio site da sabesp esclarece:\n\n\nA primeira reserva técnica entrou em operação em 16/05/2014 e acrescentou mais 182,5 bilhões de litros ao sistema - 18,5% de acréscimo;\n\n\nA segunda reserva técnica entrou em operação em 24/10/2014 e acrescentou mais 105,4 bilhões de litros ao sistema - 10,7% de acréscimo \n\n\nOu seja, o grafico do G1 realmente está errado. Alguém avisa os caras.", "def fixCantareira(p, data):\n \"\"\"corrige o percentual divulgado pela sabesp\"\"\"\n \n def str2date(data, format='%Y-%m-%d'):\n \"\"\"converte uma string contendo uma data e retorna um objeto date\"\"\"\n import datetime as dt\n return dt.datetime.strptime(data,format)\n \n vm1day = str2date('16/05/2014', format='%d/%m/%Y')\n vm2day = str2date('24/10/2014', format='%d/%m/%Y')\n \n vm1 = 182.5\n vm2 = 105.4\n \n def percReal(perc,volumeMorto=0):\n a = perc/100\n volMax = 982.07\n volAtual = volMax*a -volumeMorto\n b = 100*volAtual/volMax\n b = np.round(b,1)\n return b\n \n \n if str2date(data) < vm1day:\n print(data, p, end=' ')\n perc = percReal(p)\n print('===>', perc)\n return perc\n \n elif str2date(data) < vm2day:\n print('primeira reserva técnica em uso', data, p, end=' ')\n perc = percReal(p, volumeMorto=vm1)\n print('===>', perc)\n return perc\n \n else:\n print('segunda reserva técnica em uso', data, p, end=' ')\n perc = percReal(p, volumeMorto=vm1+vm2)\n print('===>', perc)\n return perc\n \n\ndFixed = df.copy()\ndFixed.Cantareira = ([fixCantareira(p, dia) for p, dia in zip(df.Cantareira, df.index)])\n\ndados = dFixed.ix[['2014-05-15','2014-05-16']], dFixed.ix[['2014-10-23','2014-10-24']]\n\nplotSideBySide(dados)", "AAAAAAAAH, AGORA SIM! Corrigido. Agora vamos comparar o grafico com os dados usados pelo G1 e o com dados corrigidos", "dias = ['2014-03-14','2015-03-14']\ndados = df.ix[dias,:], dFixed.ix[dias,:]\n\nplotSideBySide(dados,cm=[None,None])", "G1 errou 30%. errou feio, errou rude.\nEstamos muito longe do nível do ano passado. E, mesmo que estivessemos com 15% da capacidade do cantareira, ainda seria uma situação crítica.\nPS: Ainda faltou corrigir o percentual pro Alto Tietê, que também está usando uma \"reserva técnica\".", "dFixed.ix[dias]" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ergosimulation/mpslib
scikit-mps/examples/ex01_mpslib_getting_started.ipynb
lgpl-3.0
[ "MPSlib: Getting started with MPSlib/scikit-mps in Python\nThis a small example getting started with MPSlib through an iPython notebook", "import numpy as np\nimport matplotlib.pyplot as plt\nimport mpslib as mps\n", "Setup MPSLib\nFirst one need to initialize an instance of the mpslib object.", "# Initialize MPSlib using default algortihm, and seetings\nO = mps.mpslib();\n\n# Initialize MPSlib using the mps_snesim_tree algorthm, and a simulation grid of size [80,70,1]\nO = mps.mpslib(method='mps_snesim_tree', simulation_grid_size=[80,70,1])\n\n# specific parameters can be parsed directly when calling mps.mpslib (as abobve), or set by updating the O and O.par structure as \n#O.parameter_filename = 'mps_snesim.txt'\nO.par['debug_level']=-1\nO.par['n_cond']=25\nO.par['n_real']=16\nO.par['n_threads']=5\nO.par['do_entropy']=1\nO.par['simulation_grid_size']=np.array([80,50,1])\n\n# All adjustable parameters for the specifric chosen MPSlib algorithm are\nO.par", "Choose training image", "\nTI, TI_filename = mps.trainingimages.strebelle(di=2, coarse3d=1)\n#TI, TI_filename = mps.trainingimages.rot90()\nO.par['ti_fnam']=TI_filename\nplt.imshow(TI[:,:,0].T)", "Run MPSlib\nThe chosen MPSlib algorithm is run using a single thread by executing \nO.run()\n\nand using multiple threads by executing\nO.run_parallel()", "#O.run()\nO.run_parallel()", "Plot some realizations using matplotlib", "O.plot_reals()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
danlamanna/scratch
notebooks/experimental/Geoserver.ipynb
apache-2.0
[ "Using Geoserver to load data on the map\nIn this notebook we'll take a look at using Geoserver to render raster data to the map. Geoserver is an open source server for sharing geospatial data. It includes a tiling server which the GeoJS map uses to render data efficiently to the map for visualization. Geonotebook comes with a vagrant virtual machine for hosting a local instance of Geoserver. This instance can be used for testing geonotebook. To use it simply install vagrant using your system package manager, in a checked out copy of the source code go to the devops/geoserver/ folder and run vagrant up", "%matplotlib inline\nfrom matplotlib import pylab as plt", "Make sure you have the geoserver VM running\nThe following cell will check whether or not your have a running instance of the geoserver virtual machine available. The following cell should show text to the effect of:\n```\nCurrent machine states:\ngeoserver running (virtualbox)\nThe VM is running. To stop this VM, you can run vagrant halt to\nshut it down forcefully, or you can run vagrant suspend to simply\nsuspend the virtual machine. In either case, to restart it again,\nsimply run vagrant up.\n```\nIf it does not show the geoserver machine in a state of running You can load the machine by going to ../devops/geoserver/ and running vagrant up", "!cd ../devops/geoserver && vagrant status", "Display geoserver status\nThis should ensure the client can successfully connect to your VM, if you do not see the Geoserver 'Status' page then something is wrong and the rest of the notebook may not function correctly.", "from IPython.core.display import display, HTML\nfrom geonotebook.config import Config\ngeoserver = Config().vis_server\ndisplay(HTML(geoserver.c.get(\"/about/status\").text))", "Get the data from S3\nNext get some sample data from S3. This GeoTiff represents NBAR data for September from 2010 covering a section of Washington states Glacier National Park. It is aproximately 200Mb and may take some time to download from Amazon's S3.\nThe tiff itself has been slightly transformed from its original HDF dataset. In particular it only has 4 bands (R,G,B & NDVI) and includes some geotiff tags with band statistics.", "!curl -o /tmp/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff http://golden-tile-geotiffs.s3.amazonaws.com/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff", "Adding an RGB layer to the map\nHere we add our first data layer to the map. To do this we use a RasterData object imported from the geonotebook.wrappers package. By default RasterData objects read tiffs using the rasterio library. RasterData objects are designed to provide a consistent API to raster data across a number of different readers and systems. We will use the add_layer function to add the RasterData object to the map.", "# Set the center of the map to the location the data\nM.set_center(-120.32, 47.84, 7)\n\nfrom geonotebook.wrappers import RasterData\n\nrd = RasterData('data/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff')\nrd", "To add the layer we call M.add_layer passing in a subset of the raster data set's bands. In this case we index into rd with the list [1, 2, 3]. This actually returns a new RasterData object with only three bands available (in this case bands 1, 2 and 3 corrispond to Red, Green and Blue). 
When adding layers you can only add a layer with either 3 bands (R,G,B) or one band (we'll see a one band example in a moment).", "M.add_layer(rd[1, 2, 3], opacity=1.0)\n\nM.layers.annotation.points[0].data.next()\n\nfrom geonotebook.vis.ktile.utils import get_layer_vrt\nprint get_layer_vrt(M.layers[0])", "This should have added an RGB dataset to the map for visualization. You can also see what layers are available via the M.layers attribute.", "M.layers", "The dataset may appear alarmingly dark. This is because the data itself is not well formated. We can see this by looking at band min and max values:", "print(\"Color Min Max\")\nprint(\"Red: {}, {}\".format(rd[1].min, rd[1].max))\nprint(\"Green: {}, {}\".format(rd[2].min, rd[2].max))\nprint(\"Blue: {}, {}\".format(rd[3].min, rd[3].max))", "R,G,B values should be between 0 and 1. We can remedy this by changing some of the styling options that are available on the layers including setting an interval for scaling our data, and setting a gamma to brighten the image. \nFirst we'll demonstrate removing the layer:", "M.remove_layer(M.layers[0])", "Then we can re-add the layer with a color interval of 0 to 1.", "M.add_layer(rd[1, 2, 3], interval=(0,1))", "We can also brighten this up by changing the gamma. \nNote We don't have to remove the layer before updating it's options. Calling M.add_layer(...) with the same rd object will simply replace any existing layer with the same name. By default the layer's name is inferred from the filename.", "M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5)", "Finally, let's add a little opacity to layer so we can see some of the underlying base map features.", "M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5, opacity=0.75)\n\n# Remove the layer before moving on to the next section\nM.remove_layer(M.layers[0])", "Adding a single band Layer\nAdding a single band layer uses the same M.add_layer(...) interface. Keep in mind that several of the styling options are slightly different. By default single band rasters are rendered with a default mapping of colors to band values.", "M.add_layer(rd[4])", "You may find this colormap a little aggressive, in which case you can replace the colormap with any of the built in matplotlib colormaps:", "cmap = plt.get_cmap('winter', 10)\n\nM.add_layer(rd[4], colormap=cmap, opacity=0.8)", "Including custom color maps as in this example. Here we create a linear segmented colormap that transitions from Blue to Beige to Green. When mapped to our NDVI band data -1 will appear blue, 0 will appear beige and 1 will appear green.", "from matplotlib.colors import LinearSegmentedColormap\n\n# Divergent Blue to Beige to Green colormap\ncmap =LinearSegmentedColormap.from_list(\n 'ndvi', ['blue', 'beige', 'green'], 20)\n\n# Add layer with custom colormap\nM.add_layer(rd[4], colormap=cmap, opacity=0.8, min=-1.0, max=1.0)", "What can I do with this data?\nWe will address the use of annotations for analysis and data comparison in a separate notebook. For now Let's focus on a small agricultural area north of I-90:", "M.set_center(-119.25618502500376, 47.349300631765104, 11)", "Go ahead and start a rectangular annotation (Second button to the right of the 'CellToolbar' button - with the square icon). \nPlease annotate a small region of the fields.\n\nWe can access this data from from the annotation's data attribute. 
We'll cover exactly what is going on here in another notebook.", "layer, data = next(M.layers.annotation.rectangles[0].data)\ndata", "As a sanity check we can prove the data is the region we've selected by plotting the data with matplotlib's imshow function:\nNote The scale of the matplotlib image may seem slightly different than the rectangle you've selected on the map. This is because the map is displaying in Web Mercator projection (EPSG:3857) while imshow is simply displaying the raw data, selected out of the geotiff (you can think of it as being in a 'row', 'column' projection).", "import numpy as np\n\nfig, ax = plt.subplots(figsize=(16, 16))\nax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))", "NDVI Segmentation analysis\nOnce we have this data we can run arbitrary analyses on it. In the next cell we use a sobel filter and a watershed transformation to generate a binary mask of the data. We then use an implementation of marching cubes to vectorize the data, effectively segmenting green areas (e.g. fields) from surrounding areas.\nThis next cell requires both scipy and scikit-image. Check your operating system documentation for how best to install these packages.", "# Adapted from the scikit-image segmentation tutorial\n# See: http://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html\nimport numpy as np\n\nfrom skimage import measure\nfrom skimage.filters import sobel\nfrom skimage.morphology import watershed\nfrom scipy import ndimage as ndi\n\n\nTHRESHOLD = 20\nWATER_MIN = 0.2\nWATER_MAX = 0.6\n\nfig, ax = plt.subplots(figsize=(16, 16))\nedges = sobel(data)\n\n\nmarkers = np.zeros_like(data)\nmarkers[data > WATER_MIN] = 2\nmarkers[data > WATER_MAX] = 1\n\n\nmask = (watershed(edges, markers) - 1).astype(bool)\nseg = np.zeros_like(mask, dtype=int)\nseg[~mask] = 1\n\n# Fill holes\nseg = ndi.binary_fill_holes(seg)\n\n# Ignore entities smaller than THRESHOLD\nlabel_objects, _ = ndi.label(seg)\nsizes = np.bincount(label_objects.ravel())\nmask_sizes = sizes > THRESHOLD\nmask_sizes[0] = 0\n\nclean_segs = mask_sizes[label_objects]\n\n\n# Find contours of the segmented data\ncontours = measure.find_contours(clean_segs, 0)\nax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))\n\nax.axis('tight')\n\nfor n, contour in enumerate(contours):\n ax.plot(contour[:, 1], contour[:, 0], linewidth=4)\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cloudmesh/book
notebooks/pandas/DataCleaning-Preparation.ipynb
apache-2.0
[ "Data Cleaning and Preparation\nResources:\nChapter 7 in 'Python for Data Analysis' by Wes McKinney (2017, O'Reilly)\n* https://github.com/wesm/pydata-book\nChapter 3 in 'Python Data Science Handbook' by Jake VanderPlas (2016, O'Reilly)\n* https://jakevdp.github.io/PythonDataScienceHandbook/\nDataset: 2015 NSDUH\n\nNational Survey on Drug Abuse and Health (NSDUH) 2015 \nSubstance Abuse and Mental Health Services Administration \nCenter for Behavioral Health Statistics and Quality, October 27, 2016\nhttp://datafiles.samhsa.gov/study/national-survey-drug-use-and-health-nsduh-2015-nid16893\n\nStep1: Load the data\n\nImport python modules\nload data file and save as DataFrame object\nSubset dataframe by column", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfile = pd.read_table('NSDUH-2015.tsv', low_memory=False)\ndata = pd.DataFrame(file)\n\ndata.shape\n\ndf = pd.DataFrame(data, columns=['QUESTID2', 'CATAG6', 'IRSEX','IRMARITSTAT',\n 'EDUHIGHCAT', 'IRWRKSTAT18', 'COUTYP2', 'HEALTH2','STDANYYR1',\n 'HEPBCEVER1','HIVAIDSEV1','CANCEREVR1','INHOSPYR','AMDELT',\n 'AMDEYR','ADDPR2WK1','ADWRDST1','DSTWORST1','IMPGOUTM1',\n 'IMPSOCM1','IMPRESPM1','SUICTHNK1','SUICPLAN1','SUICTRY1',\n 'PNRNMLIF','PNRNM30D','PNRWYGAMT','PNRNMFLAG','PNRNMYR',\n 'PNRNMMON','OXYCNNMYR','DEPNDPYPNR','ABUSEPYPNR','PNRRSHIGH',\n 'HYDCPDAPYU','OXYCPDAPYU','OXCNANYYR2','TRAMPDAPYU','MORPPDAPYU',\n 'FENTPDAPYU','BUPRPDAPYU','OXYMPDAPYU','DEMEPDAPYU','HYDMPDAPYU',\n 'HERFLAG','HERYR','HERMON','ABODHER', 'MTDNPDAPYU',\n 'IRHERFY','TRBENZAPYU','ALPRPDAPYU','LORAPDAPYU','CLONPDAPYU',\n 'DIAZPDAPYU','SVBENZAPYU','TRIAPDAPYU','TEMAPDAPYU','BARBITAPYU',\n 'SEDOTANYR2','COCFLAG','COCYR','COCMON','CRKFLAG',\n 'CRKYR','AMMEPDAPYU','METHAMFLAG','METHAMYR','METHAMMON',\n 'HALLUCFLAG','LSDFLAG','ECSTMOFLAG','DAMTFXFLAG','KETMINFLAG',\n 'TXYRRESOV1','TXYROUTPT1','TXYRMHCOP1','TXYREMRGN1','TXCURRENT1',\n 'TXLTYPNRL1','TXYRNOSPIL','AUOPTYR1','MHLMNT3','MHLTHER3',\n 'MHLDOC3','MHLCLNC3','MHLDTMT3','AUINPYR1','AUALTYR1'])\ndf.shape\n\ndf.head()\n\ndf.tail()", "Step 2: Recode null and missing values NaN\n\nReplace values for Bad Data, Don't know, Refused, Blank, Skip with NaN\nReplace NaN with 0", "df.replace([83, 85, 91, 93, 94, 97, 98, 99, 991, 993], np.nan, inplace=True)\ndf.fillna(0, inplace=True)\ndf.head()", "Step 3: Recode values for selected features:\nOrder matters here, because some features were saved as new variables\n* Recode 2=0: \n['STDANYYR1','HEPBCEVER1', 'HIVAIDSEV1', 'CANCEREVR1', 'INHOSPYR ',\n 'AMDELT','AMDEYR','ADDPR2WK1','DSTWORST1', 'IMPGOUTM1',\n 'IMPSOCM1','IMPRESPM1','SUICTHNK1','SUICPLAN1','SUICTRY1',\n 'PNRNMLIF','PNRNM30D','PNRWYGAMT','PNRWYGAMT','PNRRSHIGH'\n 'TXYRRESOV1','TXYROUTPT1','TXYRMHCOP1','TXYREMRGN1', 'TXCURRENT1', \n 'TXLTYPNRL1','AUOPTYR1','AUINPYR1','AUALTYR1']\n* Recode 3=1: ['PNRRSHIGH', 'TXLTYPNRL1','TXYREMRGN1', 'AUOPTYR1','AUALTYR1']\n* Recode 5=1: ['TXYRRESOV1', 'TXYROUTPT1','TXYRMHCOP1']\n* Recode 6=0: TXLTYPNRL\n* Recode male=0, female=1: IRSEX \n* Recode 1=4, 2=3, 3=2, 4=1: IRMARITSTAT \n* Recode 5=0: EDUHIGHCAT\n* Recode 1=2, 2=1, 3=0, 4=0: IRWRKSTAT18 \n* Recode 1=3, 3=1: COUTYP2: \n* Recode 1=0, 2=1, 3=2, 4=3: ADWRDST1", "columns = ['STDANYYR1','HEPBCEVER1', 'HIVAIDSEV1', 'CANCEREVR1', 'INHOSPYR ',\n 'AMDELT','AMDEYR','ADDPR2WK1','DSTWORST1', 'IMPGOUTM1',\n 'IMPSOCM1','IMPRESPM1','SUICTHNK1','SUICPLAN1','SUICTRY1',\n 'PNRNMLIF','PNRNM30D','PNRWYGAMT','PNRWYGAMT','PNRRSHIGH'\n 'TXYRRESOV1','TXYROUTPT1','TXYRMHCOP1','TXYREMRGN1', 
'TXCURRENT1', \n 'TXLTYPNRL1','AUOPTYR1','AUINPYR1','AUALTYR1']\n \nfor col in df:\n df[col].replace(2,0,inplace=True)\n\ndf.head()\n\ncol = ['PNRRSHIGH', 'TXLTYPNRL1', 'TXYREMRGN1', 'AUOPTYR1','AUALTYR1']\n\nfor col in df:\n df[col].replace(3,1,inplace=True)\n\ndf.head()\n\ndf['SEX'] = df['IRSEX'].replace([1,2], [0,1])\ndf['MARRIED'] = df['IRMARITSTAT'].replace([1,2,3,4], [4,3,2,1])\ndf['EDUCAT'] = df['EDUHIGHCAT'].replace([1,2,3,4,5], [2,3,4,5,1])\ndf['EMPLOY18'] = df['IRWRKSTAT18'].replace([1,2,3,4], [2,1,0,0])\ndf['CTYMETRO'] = df['COUTYP2'].replace([1,2,3],[3,2,1])\n\ndf['EMODSWKS'] = df['ADWRDST1'].replace([1,2,3,4], [0,1,2,3])\ndf['TXLTPNRL'] = df['TXLTYPNRL1'].replace(6,0)\n\ndf['TXYRRESOV'] = df['TXYRRESOV1'].replace(5,1)\ndf['TXYROUTPT'] = df['TXYROUTPT1'].replace(5,1)\ndf['TXYRMHCOP'] = df['TXYRMHCOP1'].replace(5,1)\n\ndf.head()\n\ndf.shape", "Examine column names", "df.columns", "Step 4: Rename Select Features for Description", "df = df.rename(columns={'QUESTID2':'QID','CATAG6':'AGECAT',\n 'STDANYYR1':'STDPYR','HEPBCEVER1':'HEPEVR','CANCEREVR1':'CANCEVR','INHOSPYR':'HOSPYR', \n 'AMDELT':'DEPMELT','AMDEYR':'DEPMEYR','ADDPR2WK1':'DEPMWKS','DSTWORST1':'DEPWMOS',\n 'IMPGOUTM1':'EMOPGOUT','IMPSOCM1':'EMOPSOC','IMPRESPM1':'EMOPWRK',\n 'SUICTHNK1':'SUICTHT','SUICPLAN1':'SUICPLN','SUICTRY1':'SUICATT',\n 'PNRNMLIF':'PRLUNDR','PNRNM30D':'PRLUNDR30','PNRWYGAMT':'PRLGRTYR',\n 'PNRNMFLAG':'PRLMISEVR','PNRNMYR':'PRLMISYR','PNRNMMON':'PRLMISMO',\n 'OXYCNNMYR':'PRLOXYMSYR','DEPNDPYPNR':'PRLDEPYR','ABUSEPYPNR':'PRLABSRY', \n 'PNRRSHIGH':'PRLHIGH','HYDCPDAPYU':'HYDRCDYR','OXYCPDAPYU':'OXYCDPRYR', \n 'OXCNANYYR2':'OXYCTNYR','TRAMPDAPYU':'TRMADLYR','MORPPDAPYU':'MORPHPRYR',\n 'FENTPDAPYU':'FENTNYLYR','BUPRPDAPYU':'BUPRNRPHN','OXYMPDAPYU':'OXYMORPHN',\n 'DEMEPDAPYU':'DEMEROL','HYDMPDAPYU':'HYDRMRPHN','HERFLAG':'HEROINEVR',\n 'HERYR':'HEROINYR', 'HERMON':'HEROINMO','ABODHER':'HEROINAB',\n 'MTDNPDAPYU':'METHADONE','IRHERFY':'HEROINFQY',\n 'TRBENZAPYU':'TRQBENZODZ','ALPRPDAPYU':'TRQALPRZM','LORAPDAPYU':'TRQLRZPM',\n 'CLONPDAPYU':'TRQCLNZPM','DIAZPDAPYU':'TRQDIAZPM','SVBENZAPYU':'SDBENZDPN',\n 'TRIAPDAPYU':'SDTRZLM','TEMAPDAPYU':'SDTMZPM','BARBITAPYU':'SDBARBTS', \n 'SEDOTANYR2':'SDOTHYR','COCFLAG':'COCNEVR','COCYR':'COCNYR','COCMON':'COCNMO',\n 'CRKFLAG':'CRACKEVR','CRKYR':'CRACKYR','AMMEPDAPYU':'AMPHTMNYR', \n 'METHAMFLAG':'METHEVR','METHAMYR':'METHYR','METHAMMON':'METHMO',\n 'HALLUCFLAG':'HLCNEVR','LSDFLAG':'LSDEVR','ECSTMOFLAG':'MDMAEVR',\n 'DAMTFXFLAG':'DMTEVR','KETMINFLAG':'KETMNEVR', \n 'TXYRRESOV':'TRTRHBOVN','TXYROUTPT':'TRTRHBOUT','TXYRMHCOP':'TRTMHCTR',\n 'TXYREMRGN1':'TRTERYR','TXCURRENT1':'TRTCURRCV','TXLTPNRL':'TRTCURPRL',\n 'TXYRNOSPIL':'TRTGAPYR','AUOPTYR1':'MHTRTOYR','MHLMNT3':'MHTRTCLYR',\n 'MHLTHER3':'MHTRTTHPY','MHLDOC3':'MHTRTDRYR', 'MHLCLNC3':'MHTRTMDOUT',\n 'MHLDTMT3':'MHTRTHPPGM','AUINPYR1':'MHTRTHSPON','AUALTYR1':'MHTRTALT'})\n \ndf.shape", "Step 5: Subset Data Frame with updated features", "df1 = df[['QID','AGECAT','SEX', 'MARRIED', 'EDUCAT', \n 'EMPLOY18','CTYMETRO','HEALTH2','STDPYR','HEPEVR','CANCEVR','HOSPYR', \n 'DEPMELT','DEPMEYR','DEPMWKS','DEPWMOS','EMODSWKS','EMOPGOUT',\n 'EMOPSOC','EMOPWRK','SUICTHT','SUICPLN','SUICATT',\n 'PRLUNDR','PRLUNDR30','PRLGRTYR','PRLMISEVR','PRLMISYR',\n 'PRLMISMO','PRLOXYMSYR','PRLDEPYR','PRLABSRY','PRLHIGH',\n 'HYDRCDYR','OXYCDPRYR','OXYCTNYR','TRMADLYR','MORPHPRYR',\n 'FENTNYLYR','BUPRNRPHN','OXYMORPHN','DEMEROL','HYDRMRPHN',\n 'HEROINEVR','HEROINYR','HEROINMO','HEROINAB','METHADONE','HEROINFQY',\n 
'TRQBENZODZ','TRQALPRZM','TRQLRZPM','TRQCLNZPM','TRQDIAZPM',\n 'SDBENZDPN','SDTRZLM','SDTMZPM','SDBARBTS','SDOTHYR',\n 'COCNEVR','COCNYR','COCNMO','CRACKEVR','CRACKYR',\n 'AMPHTMNYR','METHEVR','METHYR','METHMO',\n 'HLCNEVR','LSDEVR','MDMAEVR','DMTEVR','KETMNEVR', \n 'TRTRHBOVN','TRTRHBOUT','TRTMHCTR','TRTERYR','TRTCURRCV',\n 'TRTCURPRL','TRTGAPYR','MHTRTOYR','MHTRTCLYR','MHTRTTHPY',\n 'MHTRTDRYR','MHTRTMDOUT','MHTRTHPPGM','MHTRTHSPON','MHTRTALT']]\ndf1.shape\n\ndf1.head()", "Step 6: Export data frame to CSV file", "df1.to_csv('nsduh-dataset.csv', sep=',', encoding='utf-8')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ML4DS/ML4all
C1.Intro_Classification/Intro_Classification_student.ipynb
mit
[ "Introduction to Classification.\nNotebook version: 2.3 (Oct 25, 2020)\n\nAuthor: Jesús Cid Sueiro (jcid@tsc.uc3m.es)\n Jerónimo Arenas García (jarenas@tsc.uc3m.es)\n\nChanges: v.1.0 - First version. Extracted from a former notebook on K-NN\n v.2.0 - Adapted to Python 3.0 (backcompatible with Python 2.7)\n v.2.1 - Minor corrections affecting the notation and assumptions\n v.2.2 - Updated index notation\n v.2.3 - Adaptation to slides conversion", "# To visualize plots in the notebook\n%matplotlib inline \n\n# Import some libraries that will be necessary for working with data and displaying plots\nimport csv # To read csv files\nimport random\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import spatial\nfrom sklearn import neighbors, datasets", "1. The Classification problem\nIn a generic classification problem, we are given an observation vector ${\\bf x}\\in \\mathbb{R}^N$ which is known to belong to one and only one category or class, $y$, from the set ${\\mathcal Y} = {0, 1, \\ldots, M-1}$. \nThe goal of a classifier system is to predict $y$ based on ${\\bf x}$.\nTo design the classifier, we are given a collection of labelled observations ${\\mathcal D} = {({\\bf x}k, y_k)}{k=0}^{K-1}$ where, for each observation ${\\bf x}_k$, the value of its true category, $y_k$, is known.\n1.1. Binary Classification\nWe will focus in binary classification problems, where the label set is binary, ${\\mathcal Y} = {0, 1}$. \nDespite its simplicity, this is the most frequent case. Many multi-class classification problems are usually solved by decomposing them into a collection of binary problems.\n1.2. The independence and identical distribution (i.i.d.) assumption.\nThe classification algorithms, as many other machine learning algorithms, are based on two major underlying hypothesis:\n\nIndependence: All samples are statistically independent.\nIdentical distribution: All samples in dataset ${\\mathcal D}$ have been generated by the same distribution $p_{{\\bf X}, Y}({\\bf x}, y)$.\n\nThe i.i.d. assumption is essential to guarantee that a classifier based on ${\\mathcal D}$ has a good perfomance when applied to new input samples. \nThe underlying distribution is unknown (if we knew it, we could apply classic decision theory to make optimal predictions). This is why we need the data in ${\\mathcal D}$ to design the classifier. \n2. A simple classification problem: the Iris dataset\n(Iris dataset presentation is based on this <a href=http://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/> Tutorial </a> by <a href=http://machinelearningmastery.com/about/> Jason Brownlee</a>) \nAs an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository </a>. Quoted from the dataset description:\n\nThis is perhaps the best known database to be found in the pattern recognition literature. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. [...] One class is linearly separable from the other 2; the latter are NOT linearly separable from each other. \n\nThe class is the species, which is one of setosa, versicolor or virginica. 
Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.", "# Taken from Jason Brownlee notebook.\nwith open('datasets/iris.data', 'r') as csvfile:\n\tlines = csv.reader(csvfile)\n\tfor row in lines:\n\t\tprint(','.join(row))", "2.1. Training and test\nNext, we will split the data into two sets:\n\nTraining set, that will be used to learn the classification model\nTest set, that will be used to evaluate the classification performance\n\nThe data partition must be random, in such a way that the statistical distribution of both datasets is the same.\nThe code fragment below defines a function loadDataset that loads the data in a CSV with the provided filename, converts the flower measures (that were loaded as strings) into numbers and, finally, it splits the data into a training and test sets.", "# Adapted from a notebook by Jason Brownlee\ndef loadDataset(filename, split):\n xTrain = []\n cTrain = []\n xTest = []\n cTest = []\n\n with open(filename, 'r') as csvfile:\n lines = csv.reader(csvfile)\n dataset = list(lines)\n for i in range(len(dataset)-1):\n for y in range(4):\n dataset[i][y] = float(dataset[i][y])\n item = dataset[i]\n if random.random() < split:\n xTrain.append(item[0:-1])\n cTrain.append(item[-1])\n else:\n xTest.append(item[0:-1])\n cTest.append(item[-1])\n return xTrain, cTrain, xTest, cTest", "We can use this function to get a data split. An expected ratio of 67/33 samples for train/test will be used. However, note that, because of the way samples are assigned to the train or test datasets, the exact number of samples in each partition will differ if you run the code several times.", "xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('./datasets/iris.data', 0.67)\nnTrain_all = len(xTrain_all)\nnTest_all = len(xTest_all)\nprint('Train:', str(nTrain_all))\nprint('Test:', str(nTest_all))", "2.2. Scatter plots\nTo get some intuition about this four dimensional dataset we can plot 2-dimensional projections taking only two variables each time.", "i = 2 # Try 0,1,2,3\nj = 3 # Try 0,1,2,3 with j!=i\n\n# Take coordinates for each class separately\nxiSe = [xTrain_all[n][i] for n in range(nTrain_all) if cTrain_all[n]=='Iris-setosa']\nxjSe = [xTrain_all[n][j] for n in range(nTrain_all) if cTrain_all[n]=='Iris-setosa']\nxiVe = [xTrain_all[n][i] for n in range(nTrain_all) if cTrain_all[n]=='Iris-versicolor']\nxjVe = [xTrain_all[n][j] for n in range(nTrain_all) if cTrain_all[n]=='Iris-versicolor']\nxiVi = [xTrain_all[n][i] for n in range(nTrain_all) if cTrain_all[n]=='Iris-virginica']\nxjVi = [xTrain_all[n][j] for n in range(nTrain_all) if cTrain_all[n]=='Iris-virginica']\n\nplt.plot(xiSe, xjSe,'bx', label='Setosa')\nplt.plot(xiVe, xjVe,'r.', label='Versicolor')\nplt.plot(xiVi, xjVi,'g+', label='Virginica')\nplt.xlabel('$x_' + str(i) + '$')\nplt.ylabel('$x_' + str(j) + '$')\nplt.legend(loc='best')\nplt.show()", "In the following, we will design a classifier to separate classes \"Versicolor\" and \"Virginica\" using $x_0$ and $x_1$ only. 
To do so, we build a training set with samples from these categories, and a binary label $y^{(k)} = 1$ for samples in class \"Virginica\", and $0$ for \"Versicolor\" data.", "# Select two classes\nc0 = 'Iris-versicolor' \nc1 = 'Iris-virginica'\n\n# Select two coordinates\nind = [0, 1]\n\n# Take training test\nX_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all) \n if cTrain_all[n]==c0 or cTrain_all[n]==c1])\nC_tr = [c for c in cTrain_all if c==c0 or c==c1]\nY_tr = np.array([int(c==c1) for c in C_tr])\nn_tr = len(X_tr)\n\n# Take test set\nX_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all) \n if cTest_all[n]==c0 or cTest_all[n]==c1])\nC_tst = [c for c in cTest_all if c==c0 or c==c1]\nY_tst = np.array([int(c==c1) for c in C_tst])\nn_tst = len(X_tst)\n\n# Separate components of x into different arrays (just for the plots)\nx0c0 = [X_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]\nx1c0 = [X_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]\nx0c1 = [X_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]\nx1c1 = [X_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]\n\n# Scatterplot.\nlabels = {'Iris-setosa': 'Setosa', \n 'Iris-versicolor': 'Versicolor',\n 'Iris-virginica': 'Virginica'}\nplt.plot(x0c0, x1c0,'r.', label=labels[c0])\nplt.plot(x0c1, x1c1,'g+', label=labels[c1])\nplt.xlabel('$x_' + str(ind[0]) + '$')\nplt.ylabel('$x_' + str(ind[1]) + '$')\nplt.legend(loc='best')\n\nplt.show()", "3. A Baseline Classifier: Maximum A Priori.\nFor the selected data set, we have two clases and a dataset with the following class proportions:", "print(f'Class 0 {c0}: {n_tr - sum(Y_tr)} samples')\nprint(f'Class 1 ({c1}): {sum(Y_tr)} samples')", "The maximum a priori classifier assigns any sample ${\\bf x}$ to the most frequent class in the training set. Therefore, the class prediction $y$ for any sample ${\\bf x}$ is", "y = int(2*sum(Y_tr) > n_tr)\nprint(f'y = {y} ({c1 if y==1 else c0})')", "The error rate for this baseline classifier is:", "# Training and test error arrays\nE_tr = (Y_tr != y)\nE_tst = (Y_tst != y)\n\n# Error rates\npe_tr = float(sum(E_tr)) / n_tr\npe_tst = float(sum(E_tst)) / n_tst\nprint('Pe(train):', str(pe_tr))\nprint('Pe(test):', str(pe_tst))", "The error rate of the baseline classifier is a simple benchmark for classification. Since the maximum a priori decision is independent on the observation, ${\\bf x}$, any classifier based on ${\\bf x}$ should have a better (or, at least, not worse) performance than the baseline classifier.\n3. Parametric vs non-parametric classification.\nMost classification algorithms can be fitted to one of two categories:\n\n\nParametric classifiers: to classify any input sample ${\\bf x}$, the classifier applies some function $f_{\\bf w}({\\bf x})$ which depends on some parameters ${\\bf w}$. The training dataset is used to estimate ${\\bf w}$. Once the parameter has been estimated, the training data is no longer needed to classify new inputs.\n\n\nNon-parametric classifiers: the classifier decision for any input ${\\bf x}$ depend on the training data in a direct manner. The training data must be preserved to classify new data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
steinam/teacher
jup_notebooks/datenbanken/.ipynb_checkpoints/Subselects_11FI3-checkpoint.ipynb
mit
[ "Subselect / Unterabfragen)\nZur Durchführung einer Abfrage werden Informationen benötigt, die erst durch eine eigene Abfrage geholt werden müssen.\nSie können stehen\n\nals Vertreter für einen Wert\nals Vertreter für eine Liste\nals Vertreter für eine Tabelle\nals Vertreter für ein Feld", "%load_ext sql\n\n%sql mysql://steinam:steinam@localhost/versicherung_complete", "", "% load_ext sql", "Vertreter für Wert\nNenne alle Mitarbeiter der Abteilung „Schadensabwicklung“.", "%%sql\n\n\nselect Personalnummer, Name, Vorname \nfrom Mitarbeiter \nwhere Abteilung_ID = \n( select ID from Abteilung \nwhere Kuerzel = 'Schadensabwicklung' ); \n", "Lösung", "%%sql\n\nselect Personalnummer, Name, Vorname \nfrom Mitarbeiter \nwhere Abteilung_ID = \n( select ID from Abteilung \nwhere Kuerzel = 'ScAb' );", "Vertreter für Spaltenfunktionen\nDie Ergebnisse von Aggregatfunktionen werden häufig in der WHERE-Klausel benötigt\nBeispiel:\nHole die Schadensfälle mit unterdurchschnittlicher Schadenshöhe. \nLösung\n\nTeil 1: Berechne die durchschnittliche Schadenshöhe aller Schadensfälle. \nTeil 2: Übernimm das Ergebnis als Vergleichswert in die eigentliche Abfrage.", "%%sql\nSELECT ID, Datum, Ort, Schadenshoehe \nfrom Schadensfall \nwhere Schadenshoehe < ( \n select AVG(Schadenshoehe) from Schadensfall \n); ", "Aufgabe\nBestimme alle Schadensfälle, die von der durchschnittlichen Schadenshöhe eines Jahres \nmaximal 300 € abweichen. \nLösung\n\nTeil 1: Bestimme den Durchschnitt aller Schadensfälle innerhalb eines Jahres. \nTeil 2: Hole alle Schadensfälle, deren Schadenshöhe im betreffenden Jahr innerhalb des Bereichs „Durchschnitt plus/minus 300“ liegen.", "%%sql\n\nselect sf.ID, sf.Datum, sf.Schadenshoehe, EXTRACT(YEAR from \nsf.Datum) AS Jahr \nfrom Schadensfall sf \nwhere ABS(Schadenshoehe - ( \n select AVG(sf2.Schadenshoehe) \n from Schadensfall sf2 \n where YEAR(sf2.Datum) = YEAR(sf.Datum) \n ) \n ) <= 300; ", "Bemerkung\nDies ist ein Paradebeispiel dafür, wie Unterabfragen nicht benutzt werden sollen. Für jeden \neinzelnen Datensatz muss in der WHERE-Bedingung eine neue Unterabfrage gestartet werden − mit eigener WHERE-Klausel und Durchschnittsberechnung. Viel besser wäre eine der JOIN-Varianten. \nWeitere Lösungsmöglichkeiten (Lutz (13/14)\n```mysql\nselect beschreibung, schadenshoehe \nfrom schadensfall where \nschadenshoehe <= ( \nselect avg(schadenshoehe) \nfrom schadensfall) + 300 \nand schadenshoehe >= (select avg(schadenshoehe) \nfrom schadensfall) - 300 \nselect beschreibung, schadenshoehe \nfrom schadensfall where \nschadenshoehe between ( \nselect avg(schadenshoehe) \nfrom schadensfall) - 300 \nand (select avg(schadenshoehe) \nfrom schadensfall) + 300 \nselect @average:=avg(schadenshoehe) from schadensfall; \nselect id from schadensfall where abs(schadenshoehe - \n@average) <= 300; \n```\nErgebnis als Liste mehrerer Werte\nDas Ergebnis einer Abfrage kann als Filter für die eigentliche Abfrage benutzt werden. \nAufgabe\nBestimme alle Fahrzeuge eines bestimmten Herstellers. \nLösung\n\nTeil 1: Hole die ID des gewünschten Herstellers. \nTeil 2: Hole alle IDs der Tabelle Fahrzeugtyp zu dieser Hersteller-ID. 
\nTeil 3: Hole alle Fahrzeuge, die zu dieser Liste von Fahrzeugtypen-IDs passen.", "%%sql\n\nselect ID, Kennzeichen, Fahrzeugtyp_ID as TypID \nfrom Fahrzeug \nwhere Fahrzeugtyp_ID in( \n select ID \n from Fahrzeugtyp \n where Hersteller_ID = ( \n select ID \n from Fahrzeughersteller \n where Name = 'Volkswagen' ) ); ", "Aufgabe\nGib alle Informationen zu den Schadensfällen des Jahres 2008, die von der durchschnittlichen Schadenshöhe 2008 maximal 300 € abweichen.\nLösung\n\nTeil 1: Bestimme den Durchschnitt aller Schadensfälle innerhalb von 2008. \nTeil 2: Hole alle IDs von Schadensfällen, deren Schadenshöhe innerhalb des Bereichs „Durchschnitt plus/minus 300“ liegen. \nTeil 3: Hole alle anderen Informationen zu diesen IDs.", "%%sql\n\nselect * \nfrom Schadensfall \nwhere ID in ( SELECT ID \nfrom Schadensfall \nwhere ( ABS(Schadenshoehe - ( \n select AVG(sf2.Schadenshoehe) \n from Schadensfall sf2 \n where YEAR(sf2.Datum) = 2008 \n ) \n ) <= 300 ) \nand ( YEAR(Datum) = 2008 ) \n); ", "Vertreter für eine Tabelle\nDas Ergebnis einer Abfrage kann in der Hauptabfrage überall dort eingesetzt werden, wo \neine Tabelle vorgesehen ist. Die Struktur dieser Situation sieht so aus: \n```mysql\nSELECT <spaltenliste> \nFROM <haupttabelle>, \n (SELECT <spaltenliste> \n FROM <zusatztabellen> \n<weitere Bestandteile der Unterabfrage> \n) <name> \n<weitere Bestandteile der Hauptabfrage> \n```\n\nDie Unterabfrage kann grundsätzlich alle SELECT-Bestandteile enthalten.\nORDER BY kann nicht sinnvoll genutzt werden, weil das Ergebnis der Unterabfrage mit der Haupttabelle oder einer\n anderen Tabelle verknüpft wird wodurch eine Sortierung sowieso verlorenginge. \nEs muss ein Name als Tabellen-Alias angegeben werden, der als Ergebnistabelle in der Hauptabfrage verwendet wird.\n\nAufgabe\nBestimme alle Schadensfälle, die von der durchschnittlichen Schadenshöhe eines Jahres maximal 300 € abweichen. \nLösung\n\n\nTeil 1: Stelle alle Jahre zusammen und bestimme den Durchschnitt aller Schadensfälle innerhalb eines Jahres. \n\n\nTeil 2: Hole alle Schadensfälle, deren Schadenshöhe im jeweiligen Jahr innerhalb des Bereichs „Durchschnitt plus/minus 300“ liegen.", "%sql\n\nSELECT sf.ID, sf.Datum, sf.Schadenshoehe, temp.Jahr, \ntemp.Durchschnitt \nFROM Schadensfall sf, \n ( SELECT AVG(sf2.Schadenshoehe) AS Durchschnitt, \n EXTRACT(YEAR FROM sf2.Datum) as Jahr \n FROM Schadensfall sf2 \n group by EXTRACT(YEAR FROM sf2.Datum) \n ) temp \nWHERE temp.Jahr = EXTRACT(YEAR FROM sf.Datum) \nand ABS(Schadenshoehe - temp.Durchschnitt) <= 300; ", "Durch eine Gruppierung werden alle Jahreszahlen und die durchschnittlichen Schadenshöhen zusammengestellt (Teil 1 der Lösung). \nFür Teil 2 der Lösung muss für jeden Schadensfall nur noch Jahr und Schadenshöhe mit dem betreffenden Eintrag in der Ergebnistabelle temp verglichen werden. \n\nDas ist der wesentliche Unterschied und entscheidende Vorteil zu anderen Lösungen: Die \nDurchschnittswerte werden einmalig zusammengestellt und nur noch abgerufen; sie müs-\nsen nicht bei jedem Datensatz neu (und ständig wiederholt) berechnet werden.\nAufgabe\nBestimme alle Fahrzeuge eines bestimmten Herstellers mit Angabe des Typs. \n\nTeil 1: Hole die ID des gewünschten Herstellers. \nTeil 2: Hole alle IDs und Bezeichnungen der Tabelle Fahrzeugtyp, die zu dieser Hersteller-ID gehören. 
\nTeil 3: Hole alle Fahrzeuge, die zu dieser Liste von Fahrzeugtyp-IDs gehören.", "%%sql\n\nSELECT Fahrzeug.ID, Kennzeichen, Typen.ID As TYP, Typen.Bezeichnung \nFROM Fahrzeug, \n (SELECT ID, Bezeichnung \n FROM Fahrzeugtyp \n WHERE Hersteller_ID = \n (SELECT ID \n FROM Fahrzeughersteller \n WHERE Name = 'Volkswagen' ) \n ) Typen \nWHERE Fahrzeugtyp_ID = Typen.ID; ", "Übungen\n\n\nWelche der folgenden Feststellungen sind richtig, welche sind falsch? \n\nDas Ergebnis einer Unterabfrage kann verwendet werden, wenn es ein einzelner Wert oder eine Liste in Form einer Tabelle ist. Andere Ergebnisse sind nicht möglich. \nEin einzelner Wert als Ergebnis kann durch eine direkte Abfrage oder durch eine Spaltenfunktion erhalten werden. \nUnterabfragen sollten nicht verwendet werden, wenn die WHERE-Bedingung für jede Zeile der Hauptabfrage einen anderen Wert erhält und deshalb die Unterabfrage neu ausgeführt werden muss. \nMehrere Unterabfragen können verschachtelt werden. \nFür die Arbeitsgeschwindigkeit ist es gleichgültig, ob mehrere Unterabfragen oder JOINs verwendet werden. \nEine Unterabfrage mit einer Tabelle als Ergebnis kann GROUP BY nicht sinnvoll nutzen. \nEine Unterabfrage mit einer Tabelle als Ergebnis kann ORDER BY nicht sinnvoll nutzen. \nBei einer Unterabfrage mit einer Tabelle als Ergebnis ist ein Alias-Name für die Tabelle sinnvoll, aber nicht notwendig. \nBei einer Unterabfrage mit einer Tabelle als Ergebnis sind Alias-Namen für die Spalten sinnvoll, aber nicht notwendig. \n\n\n\nWelche Verträge (mit einigen Angaben) hat der Mitarbeiter „Braun, Christian“ abgeschlossen? Ignorieren Sie die Möglichkeit, dass es mehrere Mitarbeiter dieses Namens geben könnte. \n\nZeigen Sie alle Verträge, die zum Kunden 'Heckel Obsthandel GmbH' gehören. Ignorieren Sie die Möglichkeit, dass der Kunde mehrfach gespeichert sein könnte. \nÄndern Sie die Lösung von Übung 3, sodass auch mehrere Kunden mit diesem Namen als Ergebnis denkbar sind. \nZeigen Sie alle Fahrzeuge, die im Jahr 2008 an einem Schadensfall beteiligt waren. \nZeigen Sie alle Fahrzeugtypen (mit ID, Bezeichnung und Name des Herstellers), die im Jahr 2008 an einem Schadensfall beteiligt waren. \nBestimmen Sie alle Fahrzeuge eines bestimmten Herstellers mit Angabe des Typs. \nZeigen Sie zu jedem Mitarbeiter der Abteilung „Vertrieb“ den ersten Vertrag (mit einigen Angaben) an, den er abgeschlossen hat. Der Mitarbeiter soll mit ID und Name/Vorname angezeigt werden. \nVon der Deutschen Post AG wird eine Tabelle PLZ_Aenderung mit folgenden Inhalten geliefert: \n\ncsv\nID PLZalt Ortalt PLZneu Ortneu \n1 45658 Recklinghausen 45659 Recklinghausen \n2 45721 Hamm-Bossendorf 45721 Haltern OT Hamm \n3 45772 Marl 45770 Marl \n4 45701 Herten 45699 Herten\nÄndern Sie die Tabelle Versicherungsnehmer so, dass bei allen Adressen, bei denen PLZ/Ort mit PLZalt/Ortalt \nübereinstimmen, diese Angaben durch PLZneu/Ortneu geändert werden.\n\nHinweise: Beschränken Sie sich auf die Änderung mit der ID=3. (Die vollständige Lösung ist erst mit \nSQL-Programmierung möglich.) 
Bei dieser Änderungsdatei handelt es sich nur um fiktive Daten, keine echten Änderungen.\n\nSommer 2016", "%sql mysql://steinam:steinam@localhost/so_2016\n\n%%sql\n\n-- Original Roth\nSelect \tKurs.KursID, Kursart.Bezeichnung, \n\t\tKurs.DatumUhrzeitBeginn, \n ((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung \n\t\tfrom Kurs, Kursart, Kundekurs \n\t\twhere KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID \n\t\tgroup by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung\n having Auslastung < 50;\n\n%%sql\n\nselect kursid from kurs\nwhere \n((select teilnehmerMax from kursart where kursart.kursartId = kurs.kursartId) * 0.5)\n> \n(count(KundeKurs.kundenid) where KundeKurs.KursID = kurs.KursID);\n\n%%sql\n\nSelect \tKurs.KursID, Kursart.Bezeichnung, \n\t\tKurs.DatumUhrzeitBeginn, \n ((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung \n\t\tfrom Kurs, Kursart, Kundekurs \n\t\twhere KundeKurs.KursID = Kurs.KursID and Kursart.KursartID = Kurs.KursartID \n\t\tgroup by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung\n having Auslastung < 50\n\n%%sql\n\nSelect \tKurs.KursID, Kursart.Bezeichnung, \n\t\tKurs.DatumUhrzeitBeginn, \n ((count(KundeKurs.KundenID)/Kursart.TeilnehmerMax) * 100) as Auslastung \n\t\tfrom kurs left join kundekurs\n \ton kurs.`kursid` = kundekurs.`Kursid`\n inner join kursart\n on `kurs`.`kursartid` = `kursart`.`kursartid`\n\n \n group by Kurs.KursID, Kurs.DatumUhrzeitBeginn, Kursart.Bezeichnung\n having Auslastung < 50", "New heading" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
adamwlev/adamwlev.github.io
notebooks/Twitter_Movies3.ipynb
mit
[ "Can we create a model to predict the opening weekend box office success of a movie using tweets? Theoretically, it seems like it should be possible. A lot of people tweeting about a movie should indicate that it will be successful. Practically, is it possible to abstract meaning from data that is seemingly so noisy?\nIt seems daunting considering that with a simple strategy there will be plenty of instances where tweets are considered that are not even about the movie in question. For example, when you search for the movie 'Wild (2014)', you get some results like this: \nI was undaunted by this challenge and dove straight in. In the end, without doing anything too sophisticated, I was able to produce a fairly decent model.\nObtaining the Box Office Data\nI obtained data from boxofficemojo.com using the requests library and parsed it using BeautifulSoup4. I specifically worked with this data which is all Domestic (U.S.) movies which have revenue data for a Friday-Saturday-Sunday opening weekends. I grabbed the first 1000 from this list sorted by date with recent first. I made this decision based on the fact that Twitter's popularity has grown extreemly rapidly. I made the judgment that by around late 2009 there were been enough tweets to predict the success of a movie.\nObtaining the Twitter Data\nI used Selenium to crawl Twitter's Advanced Search Engine and get the data. Using the API was out of the questions since the API only has tweets from the last 7 days. Scraping was a challenge because of the 'infinite scroll' feature of this site.\nI used the 'This exact phrase' box for searching. Also, I did a little processing to the movies' titles. To avoid penalizing movies with long titles I stripped all years in parenthesis and subtitles. So for \"Precious: Based on the Novel 'Push' by Sapphire\" I searched for 'Precious'.\nBecause of time and resourse contraints (still working on my Linux skills for AWS) I constrained my data to tweets from the week before the opening of the movie. \nIf I made one search for each movie with the date range set to the whole week, I typically obtained a never ending stream of tweets from the first couple days before the opening. To try to capture the difference in volume of tweets across movies, I did 7 seperate searches - a day is as granular of a window you can search for - and told Selenium to scroll down the page at most 25 times for each search.\nI ended up collecting data for 826 of the 1000 movies that I pulled from boxofficemojo.\nA little cleaning\nI want to focus on the modeling decisions with this post so I will skip to the point in which I aggregated all the data into one nice looking csv file.", "import pandas as pd\npd.set_option('display.max_columns',None)\n\ndata = pd.read_csv('./../data/TwitterMovies/all_the_data.csv')\n\ndata.shape", "Most recent data point:", "data.head(1)", "Least recent data point:", "data.tail(1)", "What the hell are all those columns you might ask. Well opening_wknd is the opening weekend gross which has not been adjusted for inflation. Retweets is currently a string that has the number of Retweets for each tweet that had any Retweets joined with a dash. Likes is the same except for Likes. And Text is a massive document of text from all the tweets that were obtained about the particular movie.\nHere's a sample of text:", "data.iloc[0].Text[0:150]", "Number of characters in this particular document:", "len(data.iloc[0].Text)", "Alright, now, let's get the boring stuff out of the way. 
Let's get the Retweets and Likes into a more usable form.", "import re\n\npattern = '[0-9]'\nregex = re.compile(pattern)\n\ndata.Retweets = [[0]*(num_tweets-len([c for c in str(ret) if re.match(regex,c)])) + map(int,[c for c in str(ret) if re.match(regex,c)]) for ret,num_tweets in zip(data.Retweets,data.Num_Tweets)]\n\ndata.Likes = [[0]*(num_tweets-len([c for c in str(lik) if re.match(regex,c)])) + map(int,[c for c in str(lik) if re.match(regex,c)]) for lik,num_tweets in zip(data.Likes,data.Num_Tweets)]\n\ndata.head(1)", "Now let's purge the Text data of all links and Twitter handles.", "def cleanse_doc(doc):\n return ' '.join([word for word in doc.split() \n if 'http://' not in word and 'www.' not in word\n and '@' not in word and 'https://' not in word\n and '.com' not in word and '.net' not in word])\n\ndata.Text = [cleanse_doc(doc) for doc in data.Text]\n\ndata.iloc[0].Text[0:150]", "One more boring but necessary step: adjust the revenue numbers for inflation. Since I have less than 7 years of data, I make the simplifying assumption that there has been a constant rate of inflation of this period (12% in total).", "from datetime import date\n\ndef inflation_adjuster(dates,gross):\n inf_rate = .12/7.0\n dates = [date(*map(int,d.split('-'))) for d in dates]\n base = min(dates)\n today = max(dates)\n return [g * (1+inf_rate)**((today-base).days/365.0) / \\\n (1+inf_rate)**((d-base).days/365.0) for d,g in zip(dates,gross)]\n\ndata.opening_wknd = inflation_adjuster(data.opening_date,data.opening_wknd)\n\ndata.tail(1)", "Feature Engineering\nSince the volume of Twitter data has increased so much during the last 7 years, including the time as a feature and looking at interactions will be essential.", "data.insert(1,'days from today',[(date(2016,7,15) - date(*map(int,d.split('-')))).days for d in data.opening_date])\n\ndata.head(1)", "Now, I don't quite know what summary statistics of the Retweets and Likes I should include. 
So, I'll just include a whole bunch and allow my model to do this work for me.", "import numpy as np\n\ndef trimmed_mean(t):\n t = np.array(t)\n iqr = np.percentile(t,75) - np.percentile(t,25)\n med = np.median(t)\n t = t[np.where((t>=(med-1.5*iqr)) & (t<=(med+1.5*iqr)))]\n return np.mean(t)\n\ndata.insert(5,'Total_Retweets',[sum(ret) for ret in data.Retweets])\ndata.insert(6,'Median_Retweets',[np.median(ret) for ret in data.Retweets])\ndata.insert(7,'Mean_Retweets',data.Total_Retweets/data.Num_Tweets)\ndata.insert(8,'Trimmed_Mean_Retweets',[trimmed_mean(ret) for ret in data.Retweets])\ndata.insert(9,'90th_Percentile_Retweets',[np.percentile(ret,90) for ret in data.Retweets])\ndata.insert(10,'95th_Percentile_Retweets',[np.percentile(ret,95) for ret in data.Retweets])\ndata.insert(11,'98th_Percentile_Retweets',[np.percentile(ret,98) for ret in data.Retweets])\ndata.insert(12,'99th_Percentile_Retweets',[np.percentile(ret,99) for ret in data.Retweets])\ndata.insert(13,'99.5th_Percentile_Retweets',[np.percentile(ret,99.5) for ret in data.Retweets])\ndata.insert(14,'Max_Retweets',[max(ret) for ret in data.Retweets])\n\ndata.insert(16,'Total_Likes',[sum(lik) for lik in data.Likes])\ndata.insert(17,'Median_Likes',[np.median(lik) for lik in data.Likes])\ndata.insert(18,'Mean_Likes',data.Total_Likes/data.Num_Tweets)\ndata.insert(19,'Trimmed_Mean_Likes',[trimmed_mean(lik) for lik in data.Likes])\ndata.insert(20,'90th_Percentile_Likes',[np.percentile(lik,90) for lik in data.Likes])\ndata.insert(21,'95th_Percentile_Likes',[np.percentile(lik,95) for lik in data.Likes])\ndata.insert(22,'98th_Percentile_Likes',[np.percentile(lik,98) for lik in data.Likes])\ndata.insert(23,'99th_Percentile_Likes',[np.percentile(lik,99) for lik in data.Likes])\ndata.insert(24,'99.5th_Percentile_Likes',[np.percentile(lik,99.5) for lik in data.Likes])\ndata.insert(25,'Max_Likes',[max(lik) for lik in data.Likes])\n\ndata.head(1)", "Now I'll use sklearn's TfidfVectorizer to convert my documents into text frequencies. I will use n-grams of lengths one, two, and three. In order to filter out some noise, I'm going to specify that for a feature to be made, the word or phrase must have been in at least 20% of the documents.\nAlso, from previous iterations, I learned that it is a good idea to tokenize the titles of the movies and include these tokens as stop words. The reason being that words that are in the titles of movies are considered to be some of the most important features in the Tf-idf world and including these words as features will make it more difficult for a model to pick out the truly generalizable word features.\nIf a token from the titles appears in at least 10% of the titles, I will allow it to be a feature. If not, it will be a stop word.", "from sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS\n\ntitles = [title.lower() for title in data.title]\n\ntfidf_titles = TfidfVectorizer(use_idf=True,stop_words=None,ngram_range=(1,3),max_df=.10)\n\ntfidf_titles.fit_transform(titles);\n\nstop_words = tfidf_titles.get_feature_names() + titles + list(ENGLISH_STOP_WORDS)\ntfidf = TfidfVectorizer(use_idf=True,stop_words=stop_words,ngram_range=(1,3),min_df=.20)\n\nX_text = tfidf.fit_transform(data.Text).toarray()\n\nX_text.shape", "I'll save the names of the text features for later.", "d = tfidf.get_feature_names()", "Modeling with just the Summary Statistics\nThe first step will be to split the data into a train and test set. 
I'll use a 70%, 30% train, test split.", "np.random.seed(200)\ntrain_inds = np.random.choice(range(len(data)),int(.7*len(data)),replace=False)\ntest_inds = [i for i in range(len(data)) if i not in train_inds]\n\ny_train, y_test = data.opening_wknd[train_inds], data.opening_wknd[test_inds]\n\nprint 'Length of Training Set: %d.' % len(y_train),'Length of Test Set: %d.' % len(y_test)", "Since I have a suspicion that some of my features might be redundant, I'll use the Lasso to encourage sparcity.", "from sklearn.linear_model import LassoCV\n\nlcv = LassoCV(n_alphas=100, fit_intercept=True, normalize=True, max_iter=1e7, cv=50, n_jobs=-1, random_state=3)\n\nsummary_stat_feature_inds = [1] + range(4,15) + range(16,26)\nX_summary_stat = data.iloc[:,summary_stat_feature_inds]\n\nX_summary_stat.head(3)\n\nX_summary_stat.shape\n\nlcv.fit(X_summary_stat.iloc[train_inds,:],y_train);", "Let's take a look at how many of the 22 features have non-zero coefficients.", "np.sum(lcv.coef_!=0)", "Now how did the model perform? Let's look at the model's R-squared using the mean MSE across the 50 folds for the optimal choice of alpha.", "1 - lcv.mse_path_.mean(1).min()/np.var(y_train)", "Now let's visualize how this model did.", "import matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nplt.figure(figsize=(10,5))\nplt.plot(lcv.alphas_,lcv.mse_path_.mean(1),lw=5,c='purple',label='Mean of 50 Folds');\nplt.axhline(np.var(y_train),label='MSE of Prediction with Mean',lw=2,ls='--',c='r');\nplt.legend();\nplt.ylim(0,12e14);\nplt.ylabel('Mean Squared Error');\nplt.xlabel('Lambda');", "This model performed slightly better than using the mean as a predictor. Since we know that at least the interaction between volume of tweets and likes and retweets with the time feature should be predictive, let's try adding linear interaction terms.", "from sklearn.preprocessing import PolynomialFeatures\n\npoly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)\n\nX_summary_stat = poly.fit_transform(X_summary_stat)\n\nX_summary_stat.shape\n\nlcv.fit(X_summary_stat[train_inds],y_train);", "How many of the 253 features have non-zero coefficents?", "sum(lcv.coef_!=0)", "Let's see if the R-squared is any better.", "1 - lcv.mse_path_.mean(1).min()/np.var(y_train)", "It is. And the graph..", "plt.figure(figsize=(10,5))\nplt.plot(lcv.alphas_,lcv.mse_path_.mean(1),lw=5,c='purple',label='Mean of 50 Folds');\nplt.axhline(np.var(y_train),label='MSE of Prediction with Mean',lw=2,ls='--',c='r');\nplt.legend();\nplt.ylim(0,12e14);\nplt.ylabel('Mean Squared Error');\nplt.xlabel('Lambda');", "Modeling with just the Text Features", "lcv.fit(X_text[train_inds],y_train);", "How many non-zero features?", "sum(lcv.coef_!=0)", "R-Squared?", "1 - lcv.mse_path_.mean(1).min()/np.var(y_train)", "What are the most predictive features?", "for ind in np.argsort(np.abs(lcv.coef_))[::-1][0:10]:\n print d[ind],lcv.coef_[ind]", "Putting the Two Sets of Features Together\nI do not want to take only the pre-selected features from the previous models and then run a model with just those features since that is one of the classic over-fitting pitfalls. Instead I will just run a single lasso with both sets of features.\nIn order to implement this strategy while avoiding overfitting, feature-selection would need to happen inside each fold of cross-validation. 
This seems beyond the scope of a simple post so I will not do this here.", "X = np.hstack((X_summary_stat,X_text))\n\nlcv.fit(X[train_inds],y_train);", "Number of non-zero features?", "sum(lcv.coef_!=0)", "R-Squared?", "1 - lcv.mse_path_.mean(1).min()/np.var(y_train)", "Interestingly, the r-squared went down a tiny bit when I included the summary stat features. \nTesting\nI will end by simply predicting the hold-out movies, computing the R-Squared of the prediction and visualizing the predictions.", "lcv.score(X[test_inds],y_test)", "Not bad!\nHere is a scatter plot of truth vs. predictions:", "predictions = lcv.predict(X[test_inds])\nfig,ax = plt.subplots(1,1,figsize=(8,8))\nplt.scatter(y_test/1e8,predictions/1e8);\nplt.plot([0,2.0],[0,2.0])\nplt.xlabel('True Opening Weekend Gross',fontsize=20);\nplt.ylabel('Predicted Opening Weekend Gross',fontsize=20);\nplt.xlim(-.2,2);\nplt.ylim(-.2,2);\nplt.title('True vs. Predicted (Hundreds of Millions of Dollars)',fontsize=16);\nax.tick_params(axis='x', labelsize=14)\nax.tick_params(axis='y', labelsize=14);", "The model seems to be struggling most with the largest grossing movies. This makes some sense since the distribution is very heavy-tailed.\nAnother way of quantifying the results, correlation between test predictions and test outcomes:", "np.corrcoef(y_test,predictions)[0,1]", "Conclusion\nI believe I have demonstrated that Twitter can be a rich data source for predicting movies' success. Making this model better could be done by collecting more data - i.e. going further back than 7 days before the opening, and scrolling down more than 25 times for each search. I wonder if what was holding back the summary statistic features from being predictive was the small amount of data that was collected.\nAs for the text features, a simple improvement might be to add as stop words any words that are too specific to a few movies but are not in the titles of the movies - i.e. 'Bond'. A more complicated improvement might be to try to decipher whether a tweet is about the movie in question on a tweet by tweet basis.\nMore generally, I think I have shown that data from Twitter can definitely have predictive value.\nBonus material\nJust for fun, let's fit a model to the whole data set and see the 20 text features with the largest positive coefficients, and the 20 with the largest negative coefficients.", "lcv.fit(X_text,data.opening_wknd);", "Top 20 positive features:", "print ', '.join([d[ind] for ind in np.argsort(lcv.coef_)[::-1][0:20]])", "Top 20 negative features:", "print ', '.join([d[ind] for ind in np.argsort(lcv.coef_)[0:20]])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jupyter/nbgrader
nbgrader/docs/source/user_guide/submitted/bitdiddle/ps1/problem1.ipynb
bsd-3-clause
[ "Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\\rightarrow$Run All).\nMake sure you fill in any place that says YOUR CODE HERE or \"YOUR ANSWER HERE\", as well as your name and collaborators below:", "NAME = \"Ben Bitdiddle\"\nCOLLABORATORS = \"Alyssa P. Hacker\"", "For this problem set, we'll be using the Jupyter notebook:\n\n\nPart A (2 points)\nWrite a function that returns a list of numbers, such that $x_i=i^2$, for $1\\leq i \\leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.", "def squares(n):\n \"\"\"Compute the squares of numbers from 1 to n, such that the \n ith element of the returned list equals i^2.\n \n \"\"\"\n if n < 1:\n raise ValueError\n s = []\n for i in range(n):\n s.append(i**2)\n return s", "Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:", "squares(10)\n\n# \"\"\"Check that squares returns the correct output for several inputs\"\"\"\n# assert squares(1) == [1]\n# assert squares(2) == [1, 4]\n# assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]\n# assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]", "Part B (1 point)\nUsing your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.", "def sum_of_squares(n):\n \"\"\"Compute the sum of the squares of numbers from 1 to n.\"\"\"\n total = 0\n s = squares(n)\n for i in range(len(s)):\n total += s[i]\n return total", "The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:", "sum_of_squares(10)\n\n\"\"\"Check that sum_of_squares returns the correct answer for various inputs.\"\"\"\nassert sum_of_squares(1) == 1\nassert sum_of_squares(2) == 5\nassert sum_of_squares(10) == 385\nassert sum_of_squares(11) == 506\n\n\"\"\"Check that sum_of_squares relies on squares.\"\"\"\norig_squares = squares\ndel squares\ntry:\n sum_of_squares(1)\nexcept NameError:\n pass\nelse:\n raise AssertionError(\"sum_of_squares does not use squares\")\nfinally:\n squares = orig_squares", "Part C (1 point)\nUsing LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.\n$\\sum_{i=0}^n i^2$\n\nPart D (2 points)\nFind a usecase for your sum_of_squares function and implement that usecase in the cell below.", "# YOUR CODE HERE\nraise NotImplementedError()", "Part E (4 points)\nState the formulae for an arithmetic and geometric sum and verify them numerically for an example of your choice.\n$\\sum x^i = \\frac{1}{1-x}$" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ColeLab/informationtransfermapping
MasterScripts/Manuscript5_RegionInformationEstimate_CPRORules.ipynb
gpl-3.0
[ "Manuscript5 - Calculating information estimate for each brain-region using vertex-level activation patterns for each rule domain\nAnalysis for Fig. 5\nMaster code for Ito et al., 2017¶\nTakuya Ito (takuya.ito@rutgers.edu)", "import sys\nsys.path.append('utils/')\nimport numpy as np\nimport loadGlasser as lg\nimport scripts3_functions as func\nimport scipy.stats as stats\nfrom IPython.display import display, HTML\nimport matplotlib.pyplot as plt\nimport statsmodels.sandbox.stats.multicomp as mc\nimport statsmodels.api as sm\nimport sys\nimport multiprocessing as mp\nimport pandas as pd\nimport multregressionconnectivity as mreg\nimport warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline\nimport nibabel as nib\nimport os\nos.environ['OMP_NUM_THREADS'] = str(1)\nimport permutationTesting as pt\n", "0.0 Basic parameters", "# Set basic parameters\nbasedir = '/projects2/ModalityControl2/'\ndatadir = basedir + 'data/'\nresultsdir = datadir + 'resultsMaster/'\nrunLength = 4648\n\nsubjNums = ['032', '033', '037', '038', '039', '045', \n '013', '014', '016', '017', '018', '021', \n '023', '024', '025', '026', '027', '031', \n '035', '046', '042', '028', '048', '053', \n '040', '049', '057', '062', '050', '030', '047', '034']\n\n# Organized as a 64k vector\nglasserparcels = lg.loadGlasserParcels()\n\nnParcels = 360\n\n# Load in Glasser parcels in their native format\n# Note that this parcel file is actually flipped (across hemispheres), but it doesn't matter since we're using the same exact file to reconstruct the data\nglasser2 = nib.load('/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii')\nglasser2 = glasser2.get_data()\nglasser2 = np.squeeze(glasser2)\n", "1.0 Load in vertex-wise betas across all miniblocks for all brain regions\n1.1 Define some basic functions for RSA pipeline", "def loadBetas(subj):\n datadir = '/projects2/ModalityControl2/data/resultsMaster/glmMiniblockBetaSeries/'\n filename = subj + '_miniblock_taskBetas_Surface64k.csv'\n betas = np.loadtxt(datadir + filename, delimiter=',')\n betas = betas[:,17:].T\n return betas\n\n\ndef setUpRSAMatrix(subj,ruledim):\n \"\"\"\n Sets up basic SVM Matrix for a classification of a particular rule dimension and network\n \"\"\"\n \n betas = loadBetas(subj)\n rules, rulesmb = func.importRuleTimingsV3(subj,ruledim)\n \n svm_mat = np.zeros((betas.shape))\n samplecount = 0\n labels = []\n for rule in rulesmb:\n rule_ind = rulesmb[rule].keys()\n sampleend = samplecount + len(rule_ind)\n svm_mat[samplecount:sampleend,:] = betas[rule_ind,:]\n labels.extend(np.ones(len(rule_ind),)*rule)\n samplecount += len(rule_ind)\n \n labels = np.asarray(labels)\n \n svm_dict = {}\n nParcels = 360\n for roi in range(1,nParcels+1):\n roi_ind = np.where(glasserparcels==roi)[0]\n svm_dict[roi] = svm_mat[:,roi_ind]\n \n return svm_dict, labels\n\ndef rsaCV(svm_mat,labels, subj):\n \"\"\"Runs a leave-4-out CV for a 4 way classification\"\"\"\n \n cvfolds = []\n # 32 folds, if we do a leave 4 out for 128 total miniblocks\n # Want to leave a single block from each rule from each CV\n for rule in np.unique(labels):\n cvfolds.append(np.where(labels==rule)[0])\n cvfolds = np.asarray(cvfolds)\n \n # Number of CVs is columns\n ncvs = cvfolds.shape[1]\n nrules = cvfolds.shape[0]\n # For each CV fold, make sure the fold is constructed randomly\n for i in range(nrules): np.random.shuffle(cvfolds[i,:])\n\n corr_rho_cvs = []\n err_rho_cvs = []\n acc_ind = []\n infoEstimate = []\n for cv 
in range(ncvs):\n # Select a test set from the CV Fold matrix\n test_ind = cvfolds[:,cv].copy()\n # The accuracy array should be the same as test_idn\n acc_ind.extend(cvfolds[:,cv].copy())\n # Delete the CV included from the train set\n train_ind = np.delete(cvfolds,cv,axis=1)\n \n # Identify the train and test sets\n svm_train = svm_mat[np.reshape(train_ind,-1),:]\n svm_test = svm_mat[test_ind,:]\n \n prototype = {}\n # Construct RSA prototypes\n for rule in range(nrules):\n prototype_ind = np.reshape(train_ind[rule,:],-1)\n prototype[rule] = np.mean(svm_mat[prototype_ind],axis=0)\n \n corr_rho = []\n err_rho = []\n for rule1 in range(nrules):\n tmp = []\n for rule2 in range(nrules):\n r = np.arctanh(stats.spearmanr(prototype[rule1],svm_test[rule2,:])[0])\n if rule1==rule2: \n corr_rho.append(r)\n else:\n tmp.append(r)\n err_rho.append(np.mean(tmp))\n\n corr_rho_cvs.append(np.mean(corr_rho))\n err_rho_cvs.append(np.mean(err_rho))\n # Compute miniblock-wise information estimate\n for i in range(len(corr_rho)):\n infoEstimate.append(corr_rho[i] - err_rho[i])\n\n # independent var (constant terms + information estimate)\n infoEstimate = np.asarray(infoEstimate)\n ind_var = np.vstack((np.ones((len(infoEstimate),)),infoEstimate))\n ind_var = ind_var.T\n\n\n return np.mean(corr_rho_cvs), np.mean(err_rho_cvs), np.mean(infoEstimate)\n \n \ndef subjRSACV((subj,ruledim,behav)):\n svm_dict, labels = setUpRSAMatrix(subj,ruledim)\n corr_rhos = {}\n err_rhos = {}\n infoEstimate = {}\n for roi in svm_dict:\n svm_mat = svm_dict[roi].copy()\n # Demean each sample\n svmmean = np.mean(svm_mat,axis=1)\n svmmean.shape = (len(svmmean),1)\n svm_mat = svm_mat - svmmean\n\n# svm_mat = preprocessing.scale(svm_mat,axis=0)\n\n corr_rhos[roi], err_rhos[roi], infoEstimate[roi] = rsaCV(svm_mat, labels, subj)\n \n return corr_rhos, err_rhos, infoEstimate\n\n", "2.0 - Estimate information estimates for all regions for all 3 rule domains", "ruledims = ['logic','sensory','motor']\nbehav='acc'\ncorr_rhos = {}\nerr_rhos = {}\ndiff_rhos = {}\nfor ruledim in ruledims:\n corr_rhos[ruledim] = {}\n err_rhos[ruledim] = {}\n diff_rhos[ruledim] = {}\n \n print 'Running', ruledim\n\n inputs = []\n for subj in subjNums: inputs.append((subj,ruledim,behav))\n\n# pool = mp.Pool(processes=8)\n pool = mp.Pool(processes=16)\n results = pool.map_async(subjRSACV,inputs).get()\n pool.close()\n pool.join()\n\n # Reorganize results\n corr_rhos[ruledim] = np.zeros((nParcels,len(subjNums)))\n err_rhos[ruledim] = np.zeros((nParcels,len(subjNums)))\n diff_rhos[ruledim] = np.zeros((nParcels,len(subjNums)))\n \n scount = 0\n for result in results:\n for roi in range(nParcels):\n corr_rhos[ruledim][roi,scount] = result[0][roi+1]\n err_rhos[ruledim][roi,scount] = result[1][roi+1]\n diff_rhos[ruledim][roi,scount] = result[2][roi+1]\n scount += 1", "Save CSVs for baseline leave-4-out CV on information transfer estimate", "outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\n\nfor ruledim in ruledims:\n filename = 'Regionwise_InformationEstimate_LeaveOneOut_' + ruledim + '.csv'\n np.savetxt(outdir + filename, diff_rhos[ruledim], delimiter=',')\n ", "Run statistics (t-tests and multiple comparisons correction with FDR)", "df_stats = {}\n# Output to CSV matrix\nsig_t = np.zeros((nParcels,len(ruledims)))\nsig_effect = np.zeros((nParcels,len(ruledims)))\neffectsize = {}\nrulecount = 0\nfor ruledim in ruledims:\n df_stats[ruledim] = {}\n df_stats[ruledim]['t'] = np.zeros((nParcels,))\n df_stats[ruledim]['p'] = 
np.zeros((nParcels,))\n effectsize[ruledim] = np.zeros((nParcels,))\n for roi in range(nParcels):\n t, p = stats.ttest_1samp(diff_rhos[ruledim][roi,:], 0)\n# t, p = stats.ttest_rel(corr_rhos[ruledim][roi,:], err_rhos[ruledim][roi,:])\n \n effectsize[ruledim][roi] = np.mean(diff_rhos[ruledim][roi,:])\n\n ps = np.zeros(())\n if t > 0:\n p = p/2.0\n else:\n p = 1.0 - p/2.0\n df_stats[ruledim]['t'][roi] = t\n df_stats[ruledim]['p'][roi] = p\n \n arr = df_stats[ruledim]['p']\n df_stats[ruledim]['q'] = mc.fdrcorrection0(arr)[1]\n\n \n\n qbin = df_stats[ruledim]['q'] < 0.05\n sig_t[:,rulecount] = np.multiply(df_stats[ruledim]['t'],qbin)\n sig_effect[:,rulecount] = np.multiply(effectsize[ruledim],qbin)\n \n rulecount += 1\n ", "Map statistics and results to surface using workbench", "sig_t_vertex = np.zeros((len(glasser2),len(ruledims)))\neffects_vertex = np.zeros((len(glasser2),len(ruledims)))\neffects_vertex_sig = np.zeros((len(glasser2),len(ruledims)))\ncol = 0\nfor cols in range(sig_t_vertex.shape[1]):\n for roi in range(nParcels):\n parcel_ind = np.where(glasser2==(roi+1))[0]\n sig_t_vertex[parcel_ind,col] = sig_t[roi,col]\n effects_vertex[parcel_ind,col] = effectsize[ruledims[col]][roi]\n effects_vertex_sig[parcel_ind,col] = sig_effect[roi,col]\n col += 1\n\n# Write file to csv and run wb_command\noutdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\nfilename = 'RegionwiseIE_FDRThresholded_Tstat.csv'\nnp.savetxt(outdir + filename, sig_t_vertex,fmt='%s')\nwb_file = 'RegionwiseIE_FDRThresholded_Tstat.dscalar.nii'\nglasserfilename = '/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii'\nwb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'\nos.system(wb_command)\n\n# Compute effect size baseline (information content)\noutdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\nfilename = 'RegionwiseIE_InformationEstimate.csv'\nnp.savetxt(outdir + filename, effects_vertex,fmt='%s')\nwb_file = 'RegionwiseIE_InformationEstimate.dscalar.nii'\nwb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'\nos.system(wb_command)\n\n# Compute Thresholded effect size baseline (information content)\noutdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\nfilename = 'RegionwiseIE_FDRThresholded_InformationEstimate.csv'\nnp.savetxt(outdir + filename, effects_vertex_sig,fmt='%s')\nwb_file = 'RegionwiseIE_FDRThresholded_InformationEstimate.dscalar.nii'\nwb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'\nos.system(wb_command)", "Run FWE Correction using permutation testing", "pt = reload(pt)\n\nfwe_Ts = np.zeros((nParcels,len(ruledims)))\nfwe_Ps = np.zeros((nParcels,len(ruledims)))\nrulecount = 0\nfor ruledim in ruledims:\n t, p = pt.permutationFWE(diff_rhos[ruledim], nullmean=0, permutations=10000, nproc=15)\n# t, p = pt.permutationFWE(corr_rhos[ruledim] - err_rhos[ruledim],\n# nullmean=0, permutations=1000, nproc=15)\n fwe_Ts[:,rulecount] = t\n fwe_Ps[:,rulecount] = p\n \n rulecount += 1\n\n# Compare t-values from permutation function and above\n\nrulecount = 0\nfor ruledim in ruledims:\n if np.sum(df_stats[ruledim]['t']==fwe_Ts[:,rulecount])==360:\n print 'Correct t-values match 
up'\n else:\n print 'Error! Likely a bug in the code'\n rulecount += 1", "Write out significant ROIs", "fwe_Ps2 = (1.0000 - fwe_Ps) # One-tailed test on upper tail\nsig_mat = fwe_Ps2 < 0.0500 # One-sided t-test (Only interested in values greater than 95% interval)\noutdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\nfilename = 'FWE_corrected_pvals_allruledims.csv'\nnp.savetxt(outdir + filename, fwe_Ps, delimiter=',')\n\nsig_t = np.zeros((nParcels,len(ruledims)))\nsig_effect = np.zeros((nParcels,len(ruledims)))\nrulecount = 0\nfor ruledim in ruledims:\n sig_t[:,rulecount] = np.multiply(fwe_Ts[:,rulecount],sig_mat[:,rulecount])\n sig_effect[:,rulecount] = np.multiply(effectsize[ruledim],sig_mat[:,rulecount])\n \n # Read out statistics for manuscript\n # Identify significant regions\n sig_ind = sig_mat[:,rulecount] == True\n nonsig_ind = sig_mat[:,rulecount] == False\n \n print 'Average significant IE for', ruledim, ':', np.mean(effectsize[ruledim][sig_ind])\n print 'Average significant T-stats for', ruledim, ':', np.mean(fwe_Ts[:,rulecount][sig_ind])\n print 'Maximum significant p-value for', ruledim, ':', np.max(fwe_Ps2[:,rulecount][sig_ind])\n print '----'\n print 'Average nonsignificant IE for', ruledim, ':', np.mean(effectsize[ruledim][nonsig_ind])\n print 'Average nonsignificant T-stats for', ruledim, ':', np.mean(fwe_Ts[:,rulecount][nonsig_ind])\n print 'Minimum nonsignificant p-value for', ruledim, ':', np.min(fwe_Ps2[:,rulecount][nonsig_ind])\n print '\\n'\n print '*****************'\n rulecount += 1\n \n", "Map out FWE-corrected statistics/results to surface using workbench", "sig_t_vertex = np.zeros((len(glasser2),len(ruledims)))\neffects_vertex = np.zeros((len(glasser2),len(ruledims)))\neffects_vertex_sig = np.zeros((len(glasser2),len(ruledims)))\ncol = 0\nfor cols in range(sig_t_vertex.shape[1]):\n for roi in range(nParcels):\n parcel_ind = np.where(glasser2==(roi+1))[0]\n sig_t_vertex[parcel_ind,col] = sig_t[roi,col]\n effects_vertex_sig[parcel_ind,col] = sig_effect[roi,col]\n col += 1\n\n# Write file to csv and run wb_command\noutdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\nfilename = 'RegionwiseIE_FWERThresholded_Tstat.csv'\nnp.savetxt(outdir + filename, sig_t_vertex,fmt='%s')\nwb_file = 'RegionwiseIE_FWERThresholded_Tstat.dscalar.nii'\nglasserfilename = '/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii'\nwb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'\nos.system(wb_command)\n\n# # Compute effect size baseline (information content)\n# outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\n# filename = 'RegionwiseIE_InformationEstimate.csv'\n# np.savetxt(outdir + filename, effects_vertex,fmt='%s')\n# wb_file = 'RegionwiseIE_InformationEstimate.dscalar.nii'\n# wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'\n# os.system(wb_command)\n\n# Compute Thresholded effect size baseline (information content)\noutdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'\nfilename = 'RegionwiseIE_FWERThresholded_InformationEstimate.csv'\nnp.savetxt(outdir + filename, effects_vertex_sig,fmt='%s')\nwb_file = 'RegionwiseIE_FWERThresholded_InformationEstimate.dscalar.nii'\nwb_command = 
'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'\nos.system(wb_command)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pastephens/pysal
pysal/contrib/viz/mapping_guide.ipynb
bsd-3-clause
[ "import numpy as np\nimport pysal as ps\nimport random as rdm\nfrom pysal.contrib.viz import mapping as maps\n%matplotlib inline\nfrom pylab import *", "Guide for the mapping module in PySAL\nContributors:\n\nDani Arribas-Bel &lt;daniel.arribas.bel@gmail.com&gt;\nSerge Rey &lt;sjsrey@gmail.com&gt;\n\nThis document describes the main structure, components and usage of the mapping module in PySAL. The is organized around three main layers:\n\nA lower-level layer that reads polygon, line and point shapefiles and returns a Matplotlib collection.\nA medium-level layer that performs some usual transformations on a Matplotlib object (e.g. color code polygons according to a vector of values).\nA higher-level layer intended for end-users for particularly useful cases and style preferences pre-defined (e.g. Create a choropleth).\n\nLower-level component\nThis includes basic functionality to read spatial data from a file (currently only shapefiles supported) and produce rudimentary Matplotlib objects. The main methods are:\n\n\nmap_poly_shape: to read in polygon shapefiles\n\n\nmap_line_shape: to read in line shapefiles\n\n\nmap_point_shape: to read in point shapefiles\n\n\nThese methods all support an option to subset the observations to be plotted (very useful when missing values are present). They can also be overlaid and combined by using the setup_ax function. the resulting object is very basic but also very flexible so, for minds used to matplotlib this should be good news as it allows to modify pretty much any property and attribute.\nExample", "shp_link = ps.examples.get_path('columbus.shp')\nshp = ps.open(shp_link)\nsome = [bool(rdm.getrandbits(1)) for i in ps.open(shp_link)]\n\nfig = figure()\n\nbase = maps.map_poly_shp(shp)\nbase.set_facecolor('none')\nbase.set_linewidth(0.75)\nbase.set_edgecolor('0.8')\nsome = maps.map_poly_shp(shp, which=some)\nsome.set_alpha(0.5)\nsome.set_linewidth(0.)\ncents = np.array([poly.centroid for poly in ps.open(shp_link)])\npts = scatter(cents[:, 0], cents[:, 1])\npts.set_color('red')\n\nax = maps.setup_ax([base, some, pts], [shp.bbox, shp.bbox, shp.bbox])\nfig.add_axes(ax)\nshow()", "Medium-level component\nThis layer comprises functions that perform usual transformations on matplotlib objects, such as color coding objects (points, polygons, etc.) according to a series of values. 
This includes the following methods:\n\n\nbase_choropleth_classless\n\n\nbase_choropleth_unique\n\n\nExample", "net_link = ps.examples.get_path('eberly_net.shp')\nnet = ps.open(net_link)\nvalues = np.array(ps.open(net_link.replace('.shp', '.dbf')).by_col('TNODE'))\n\npts_link = ps.examples.get_path('eberly_net_pts_onnetwork.shp')\npts = ps.open(pts_link)\n\nfig = figure()\n\nnetm = maps.map_line_shp(net)\nnetc = maps.base_choropleth_unique(netm, values)\n\nptsm = maps.map_point_shp(pts)\nptsm = maps.base_choropleth_classif(ptsm, values)\nptsm.set_alpha(0.5)\nptsm.set_linewidth(0.)\n\nax = maps.setup_ax([netc, ptsm], [net.bbox, net.bbox])\nfig.add_axes(ax)\nshow()", "base_choropleth_classif\n\nHigher-level component\nThis currently includes the following end-user functions:\n\nplot_poly_lines: very quick shapefile plotting.", "maps.plot_poly_lines(ps.examples.get_path('columbus.shp'))\n", "plot_choropleth: for quick plotting of several types of choropleths.", "shp_link = ps.examples.get_path('columbus.shp')\nvalues = np.array(ps.open(ps.examples.get_path('columbus.dbf')).by_col('HOVAL'))\n\ntypes = ['classless', 'unique_values', 'quantiles', 'equal_interval', 'fisher_jenks']\nfor typ in types:\n maps.plot_choropleth(shp_link, values, typ, title=typ)", "To-Do list\nGeneral concepts and specific ideas to implement over time, with enough description so they can be brought to life.\n\nSupport for points in medium and higher layer\nLISA cluster maps\n\nCaution note on plotting points\nSupport for points (dots) is still not quite polished. Ideally, one would like to create a PathCollection from scratch so it is analogous to the creation of a PolyCollection or LineCollection. However, for the time being, we are relying on the wrapper plt.scatter, which makes it harder to extract the collection and plug it into a different figure. For that reason, it is recommended that, for the time being, one creates the line and/or polygon map as shown in this notebook and then grabs the output axis and uses ax.scatter to overlay the points.\nNOTE: the PathCollection created by plt.scatter is detailed on line 3142 of _axes.py. Maybe we can take some inspiration from there to create our own PathCollection for points so they live at the same level as polygons." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ellisonbg/ipyleaflet
examples/Video.ipynb
mit
[ "Video overlay\nThis notebook shows how you can overlay a video on a Leaflet map. This video can come off-the-shelf from an URL, but we will also see how to create your own video from NumPy arrays.\nThe following libraries are needed:\n* tqdm\n* rasterio\n* numpy\n* matplotlib\n* ipyleaflet\nThe recommended way is to try to conda install them first, and if they are not found then pip install.", "import ftplib\nimport os\nfrom tqdm import tqdm\nimport rasterio\nfrom rasterio.warp import reproject, Resampling\nfrom affine import Affine\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport subprocess\nfrom base64 import b64encode\nfrom ipyleaflet import Map, VideoOverlay\ntry:\n from StringIO import StringIO\n py3 = False\nexcept ImportError:\n from io import StringIO, BytesIO\n py3 = True\n\ncenter = [25, -130]\nzoom = 4\nm = Map(center=center, zoom=zoom)\nm", "If you happen to find a pre-built video for which you know the geographic bounds, overlaying it is as simple as:", "bounds = [(13, -130), (32, -100)]\nio = VideoOverlay(url='https://www.mapbox.com/bites/00188/patricia_nasa.webm', bounds=bounds)\nm.add_layer(io)\nio.interact(opacity=(0.0,1.0,0.01))", "However if you zoom in and play with the opacity, you will see that the coastline in the video does not overlay perfectly with the Leaflet map. This is because Leaflet uses a projection known as Web Mercator, whereas the video comes from satellite imagery. The view of the Earth from a geostationary satellite is called a disk, which is the part of the Earth that the satellite sees. Usually the disk is reprojected to e.g. WGS84, which is slightly different from Web Mercator. On a relatively small area, like in this video, the difference will be acceptable, but if you want to show some data over the whole map, it can be a problem. In that case you will need to reproject, as we will see.\nHere we download some satellite rainfall estimates (NASA's Global Precipitation Measurement). They come in the WGS84 projection and cover the entire globe between latitudes 60N and 60S. They have a spatial resolution of 0.1° and a temporal resolution of 30 minutes. We first download each individual files for the day 2017/01/01 (48 files).", "os.makedirs('tif', exist_ok=True)\nftp = ftplib.FTP('arthurhou.pps.eosdis.nasa.gov')\npasswd = 'ebwje/cspdibsuAhnbjm/dpn'\nlogin = ''\nfor c in passwd:\n login += chr(ord(c) - 1)\nftp.login(login, login)\nftp.cwd('gpmdata/2017/01/01/gis')\nlst = [i for i in ftp.nlst() if i.startswith('3B-HHR-GIS.MS.MRG.3IMERG.') and i.endswith('.V05B.tif')]\nfor filename in tqdm(lst):\n if not os.path.exists('tif/' + filename):\n ftp.retrbinary(\"RETR \" + filename, open('tif/' + filename, 'wb').write)\nftp.quit()", "When we convert them into images, we will need to apply a color map to the data. In order to scale this color map, we need to extract the maximum value present in the whole data set. Since the maximum value will appear very rarely, the visual rendering will be better if we saturate the images.", "vmax = 0\nfor f in os.listdir('tif'):\n dataset = rasterio.open('tif/' + f)\n data = dataset.read()[0][300:1500]\n data = np.where(data==9999, np.nan, data) / 10 # in mm/h\n vmax = max(vmax, np.nanmax(data))\nvmax *= 0.5 # saturate a little bit", "We reproject our data (originally in WGS84, also known as EPSG:4326) to Web Mercator (also known as EPSG:3857), which is the projection used by Leaflet. 
After applying a color map (here plt.cm.jet), we save each image to a PNG file.", "# At this point if GDAL complains about not being able to open EPSG support file gcs.csv, try in the terminal:\n# export GDAL_DATA=`gdal-config --datadir`\n\nos.makedirs('png', exist_ok=True)\n\nfor f in tqdm(os.listdir('tif')):\n png_name = f[:-3] + 'png'\n if not os.path.exists('png/' + png_name):\n dataset = rasterio.open('tif/' + f)\n with rasterio.Env():\n rows, cols = 1200, 3600\n src_transform = Affine(0.1, 0, -180, 0, -0.1, 60)\n src_crs = {'init': 'EPSG:4326'}\n data = dataset.read()[0][300:1500]\n source = np.where((data==9999) | (~np.isfinite(data)), 0, data) / 10 # in mm/h\n\n dst_crs = {'init': 'EPSG:3857'}\n bounds = [-180, -60, 180, 60]\n dst_transform, width, height = rasterio.warp.calculate_default_transform(src_crs, dst_crs, cols, rows, *bounds)\n dst_shape = height, width\n\n destination = np.zeros(dst_shape)\n\n reproject(\n source,\n destination,\n src_transform=src_transform,\n src_crs=src_crs,\n dst_transform=dst_transform,\n dst_crs=dst_crs,\n resampling=Resampling.nearest)\n\n data_web = destination\n fig, ax = plt.subplots(1, figsize=(36, 12))\n fig.subplots_adjust(left=0, right=1, bottom=0, top=1)\n ax.imshow(data_web, vmin=0, vmax=vmax, interpolation='nearest', cmap=plt.cm.jet)\n ax.axis('tight')\n ax.axis('off')\n plt.savefig('png/' + png_name)\n plt.close()", "We use ffmpeg to create a video from our individual images (ffmpeg can be installed on many systems). This utility needs our files to be named with a sequential number, which is why we rename them. The ffmpeg commands are pretty obscure but they do the job, and we finally get a rain.mp4 video!", "png_files = os.listdir('png')\npng_files.sort()\nfor i, f in enumerate(png_files):\n if f.startswith('3B-HHR-GIS.MS.MRG.3IMERG.'):\n os.rename('png/' + f, 'png/f' + str(i).zfill(2) + '.png')\n\nif not os.path.exists('mp4/rain.mp4'):\n os.makedirs('mp4', exist_ok=True)\n bitrate = '4000k'\n framerate = '12'\n cmd = 'ffmpeg -r $framerate -y -f image2 -pattern_type glob -i \"png/*.png\" -c:v libx264 -preset slow -b:v $bitrate -pass 1 -c:a libfdk_aac -b:a 0k -f mp4 -r $framerate -profile:v high -level 4.2 -pix_fmt yuv420p -movflags +faststart -vf \"scale=trunc(iw/2)*2:trunc(ih/2)*2\" /dev/null && \\\n ffmpeg -r $framerate -f image2 -pattern_type glob -i \"png/*.png\" -c:v libx264 -preset slow -b:v $bitrate -pass 2 -c:a libfdk_aac -b:a 0k -r $framerate -profile:v high -level 4.2 -pix_fmt yuv420p -movflags +faststart -vf \"scale=trunc(iw/2)*2:trunc(ih/2)*2\" mp4/rain.mp4'\n cmd = cmd.replace('$framerate', framerate).replace('$bitrate', bitrate)\n subprocess.check_output(cmd, shell=True)", "The video can be sent to the browser by embedding the data into the URL, et voilà!", "center = [0, -70]\nzoom = 3\nm = Map(center=center, zoom=zoom, interpolation='nearest')\nm\n\nif py3:\n f = BytesIO()\nelse:\n f = StringIO()\nwith open('mp4/rain.mp4', 'rb') as f:\n data = b64encode(f.read())\nif py3:\n data = data.decode('ascii')\nvideourl = 'data:video/mp4;base64,' + data\n\nbounds = [(-60, -180), (60, 180)]\nio = VideoOverlay(url=videourl, bounds=bounds)\nm.add_layer(io)\nio.interact(opacity=(0.0,1.0,0.01))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tclaudioe/Scientific-Computing
SC1/06_conjugate_gradient_method.ipynb
bsd-3-clause
[ "<center>\n <h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>\n <h2> Conjugate Gradient Method </h2>\n <h2> <a href=\"#acknowledgements\"> [S]cientific [C]omputing [T]eam </a> </h2>\n <h2> Version: 1.16</h2>\n</center>\nTable of Contents\n\nIntroduction\nGradient Descent\nConjugate Gradient Method\nLet's Play: Practical Exercises and Profiling\nAcknowledgements", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.linalg import solve_triangular\nfrom mpl_toolkits.mplot3d import Axes3D\n%matplotlib inline\n# pip install memory_profiler\n%load_ext memory_profiler\nnp.random.seed(0)\nfrom ipywidgets import interact, IntSlider\nimport matplotlib as mpl\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['axes.labelsize'] = 20\nmpl.rcParams['xtick.labelsize'] = 14\nmpl.rcParams['ytick.labelsize'] = 14\n\ndef plot_matrices_with_values(ax,M,flag_values):\n N=M.shape[0]\n cmap = plt.get_cmap('GnBu')\n ax.matshow(M, cmap=cmap)\n if flag_values:\n for i in np.arange(0, N):\n for j in np.arange(0, N):\n ax.text(i, j, '{:.2f}'.format(M[i,j]), va='center', ha='center', color='r')", "<div id='intro' />\n\nIntroduction\nWelcome to another edition of our Jupyter Notebooks. Here, we'll teach you how to solve $A\\,x = b$ with $A$ being a symmetric positive-definite matrix, but the following methods have a key difference with the previous ones: these do not depend on a matrix factorization. The two methods that we'll see are called the Gradient Descent and the Conjugate Gradient Method. On the latter, we'll also see the benefits of preconditioning.\n<div id='GDragon' />\n\nGradient Descent\nThis is an iterative method. If you remember the iterative methods in the previous Notebook, to find the next approximate solution $\\mathbf{x}{k+1}$ you'd add a vector to the current approximate solution, $\\mathbf{x}_k$, that is: $\\mathbf{x}{k+1} = \\mathbf{x}k + \\text{vector}$. In this method, $\\text{vector}$ is $\\alpha{k}\\,\\mathbf{r}_k$, where $\\mathbf{r}_k$ is the residue ($\\mathbf{b} - A\\,\\mathbf{x}_k$) and $\\alpha_k = \\cfrac{(\\mathbf{r}_k)^T\\,\\mathbf{r}_k}{(\\mathbf{r}_k)^T\\,A\\,\\mathbf{r}_k}$, starting with some initial guess $\\mathbf{x}_0$. Let's look at the implementation below:", "def gradient_descent(A, b, x0, n_iter=10, tol=1e-10):\n n = A.shape[0]\n #array with solutions\n X = np.full((n_iter, n),np.nan)\n X[0] = x0\n\n for k in range(1, n_iter):\n r = b - np.dot(A, X[k-1])\n if np.linalg.norm(r)<tol: # The algorithm \"converged\"\n X[k:] = X[k-1]\n return X\n break\n alpha = np.dot(r, r)/np.dot(r, np.dot(A, r))\n X[k] = X[k-1] + alpha*r\n\n return X", "Now let's try our algorithm! 
But first, let's borrow a function to generate a random symmetric positive-definite matrix, kindly provided by the previous notebook, and another one to calculate the vectorized euclidean metric.", "\"\"\"\nRandomly generates an nxn symmetric positive-\ndefinite matrix A.\n\"\"\"\ndef generate_spd_matrix(n):\n A = np.random.random((n,n))\n #constructing symmetry\n A += A.T\n #symmetric+diagonally dominant -> symmetric positive-definite\n deltas = 0.1*np.random.random(n)\n row_sum = A.sum(axis=1)-np.diag(A)\n np.fill_diagonal(A, row_sum+deltas)\n return A", "We'll try our algorithm with some matrices of different sizes, and we'll compare it with the solution given by Numpy's solver.", "def show_small_example_GD(n_size=3, n_iter=10):\n np.random.seed(0)\n A = generate_spd_matrix(n_size)\n b = np.ones(n_size)\n x0 = np.zeros(n_size)\n\n X = gradient_descent(A, b, x0, n_iter)\n sol = np.linalg.solve(A, b)\n print('Gradiente descent : ',X[-1])\n print('np solver : ',sol)\n print('norm(difference): \\t',np.linalg.norm(X[-1] - sol)) # difference between gradient_descent's solution and Numpy's solver solution\ninteract(show_small_example_GD,n_size=(3,50,1),n_iter=(5,50,1))", "As we can see, we're getting ok solutions with 15 iterations, even for larger matrices. \nA variant of this method is currently used in training neural networks and in Data Science in general, the main difference is that they call the \\alpha parameter 'learning rate' and keep it constant.\nAnother important reason is that sometimes in Data Science they need to solve a nonlinear system of equations rather than a linear one, the good thing is that to solve nonlinear system of equations we do it by a sequence of linear system of equations!\nNow, we will discuss a younger sibling, the Conjugate Gradient Method, which is the prefered when the associated matrix is symmetric and positive definite.\n<div id='CGM' />\n\nConjugate Gradient Method\nThis method works by succesively eliminating the $n$ orthogonal components of the error, one by one. 
The method arrives at the solution with the following finite loop:", "def conjugate_gradient(A, b, x0, full_output=False, tol=1e-16):\n n = A.shape[0]\n X = np.full((n+1, n),np.nan) # Storing partial solutions x_i\n R = np.full((n+1, n),np.nan) # Storing residues r_i=b-A\\,x_i\n D = np.full((n+1, n),np.nan) # Storing conjugate directions d_i\n alphas = np.full(n,np.nan) # Storing alpha's\n betas = np.full(n,np.nan) # Storing beta's\n X[0] = x0 # initial guess: x_0\n R[0] = b - np.dot(A, x0) # initial residue: r_0=b-A\\,x_0\n D[0] = R[0] # initial direction: d_0\n n_residuals = np.full(n+1,np.nan) # norm of residuals over iteration: ||r_i||_2\n\n n_residuals[0] = np.linalg.norm(R[0]) # initilizing residual: ||r_0||_2\n x_sol=x0 # first approximation of solution\n \n for k in np.arange(n):\n if np.linalg.norm(R[k])<=tol: # The algorithm converged\n if full_output:\n return X[:k+1], D[:k+1], R[:k+1], alphas[:k+1], betas[:k+1], n_residuals[:k+1]\n else:\n return x_sol\n # This is the 'first' version of the algorithm\n alphas[k] = np.dot(D[k], R[k]) / np.dot(D[k], np.dot(A, D[k]))\n X[k+1] = X[k] + alphas[k]*D[k]\n R[k+1] = R[k] - alphas[k]*np.dot(A, D[k])\n n_residuals[k+1] = np.linalg.norm(R[k+1])\n betas[k] = np.dot(D[k],np.dot(A,R[k+1]))/np.dot(D[k],np.dot(A,D[k]))\n D[k+1] = R[k+1] - betas[k]*D[k]\n x_sol=X[k+1]\n \n if full_output:\n return X, D, R, alphas, betas, n_residuals\n else:\n return x_sol\n\n# This function computes the A-inner product \n# between each pair of vectors provided in V. \n# If 'A' is not provided, it becomes the \n# traditional inner product.\ndef compute_A_orthogonality(V,A='identity'):\n m = V.shape[0]\n n = V.shape[1]\n \n if isinstance(A, str):\n A=np.eye(n)\n \n output = np.full((m-1,m-1),np.nan)\n \n for i in range(m-1):\n for j in range(m-1):\n output[i,j]=np.dot(V[i],np.dot(A,V[j]))\n return output\n\ndef show_small_example_CG(n_size=2,flag_image=False,flag_image_values=True):\n np.random.seed(0)\n A = generate_spd_matrix(n_size)\n b = np.ones(n_size)\n x0 = np.zeros(n_size)\n\n X, D, R, alphas, betas, n_residuals = conjugate_gradient(A, b, x0, True)\n \n if flag_image:\n outR=compute_A_orthogonality(R)\n outD=compute_A_orthogonality(D,A)\n M=8\n fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(2*M,M))\n plot_matrices_with_values(ax1,np.log10(np.abs(outR)+1e-16),flag_image_values)\n ax1.set_title(r'$\\log_{10}(|\\mathbf{r}_i^T \\, \\mathbf{r}_j|+10^{-16})$',pad=20)\n plot_matrices_with_values(ax2,np.log10(np.abs(outD)+1e-16),flag_image_values)\n ax2.set_title(r'$\\log_{10}(|\\mathbf{d}_i^T\\,A\\,\\mathbf{d}_j|+10^{-16})$',pad=20)\n plt.sca(ax3)\n plt.semilogy(n_residuals,'.')\n plt.grid(True)\n plt.ylabel(r'$||\\mathbf{r}_i||$')\n plt.xlabel(r'$i$')\n plt.title('n= %d'%n_size)\n plt.sca(ax4)\n plt.plot(alphas,'.',label=r'$\\alpha_i$',markersize=10)\n plt.plot(betas,'.',label=r'$\\beta_i$',markersize=10)\n plt.grid(True)\n plt.legend()\n plt.xlabel(r'$i$')\n plt.show()\n else:\n print('n_residuals:')\n print(n_residuals)\n print('alphas:')\n print(alphas)\n print('betas:')\n print(betas)\n print('R:')\n print(R)\n print('X:')\n print(X)\n print('D:')\n print(D)\ninteract(show_small_example_CG,n_size=(2,50,1),flag_image=False,flag_image_values=True)\n\ndef plot_iterative_solution(A,b,X,R,D,n=0,elev=30,azim=310):\n L=lambda x: np.dot(x,np.dot(A,x))-np.dot(b,x)\n \n fig=plt.figure(figsize=(20,10))\n ax1 = fig.add_subplot(121, projection='3d')\n ax2 = fig.add_subplot(122, projection='3d')\n \n # Plotting the residual vectors\n for v in R[:n+1]:\n # We 
use ax1 for the actual values and ax1 for the normalized values.\n # We normalize it just for plotting purposes, otherwise the last\n # vectors look too tiny.\n ax1.quiver(0, 0, 0, v[0], v[1], v[2],color='blue')\n ax2.quiver(0, 0, 0, v[0]/np.linalg.norm(v), v[1]/np.linalg.norm(v), v[2]/np.linalg.norm(v),color='blue')\n # Plotting the residual vectors\n for v in X[1:n+1]:\n ax1.quiver(0, 0, 0, v[0], v[1], v[2],color='red')\n ax2.quiver(0, 0, 0, v[0]/np.linalg.norm(v), v[1]/np.linalg.norm(v), v[2]/np.linalg.norm(v),color='red')\n # Plotting the direction vectors\n for v in D[:n]:\n ax1.quiver(0, 0, 0, v[0], v[1], v[2],color='green',linewidth=10,alpha=0.5)\n ax2.quiver(0, 0, 0, v[0]/np.linalg.norm(v), v[1]/np.linalg.norm(v), \n v[2]/np.linalg.norm(v),color='green',linewidth=10,alpha=0.5)\n \n # plotting evolution of solution\n v = X[0]\n ax1.quiver(0, 0, 0, v[0], v[1], v[2], color='black', linestyle='dashed')\n ax2.quiver(0, 0, 0, v[0]/np.linalg.norm(v), v[1]/np.linalg.norm(v), v[2]/np.linalg.norm(v),color='black',linestyle='dashed')\n for k in np.arange(1,n+1):\n v = X[k]-X[k-1]\n vp= X[k-1]\n ax1.quiver(vp[0], vp[1], vp[2], v[0], v[1], v[2], color='magenta',linewidth=10,alpha=0.5)\n v = X[k]/np.linalg.norm(X[k])-X[k-1]/np.linalg.norm(X[k-1])\n vp= X[k-1]/np.linalg.norm(X[k-1])\n ax2.quiver(vp[0], vp[1], vp[2], v[0], v[1], v[2],color='magenta',linewidth=10,alpha=0.5)\n \n \n #for v in X[]\n ax1.set_xlim(min(0,np.min(X[:,0]),np.min(R[:,0])),max(0,np.max(X[:,0]),np.max(R[:,0])))\n ax1.set_ylim(min(0,np.min(X[:,1]),np.min(R[:,1])),max(0,np.max(X[:,1]),np.max(R[:,1])))\n ax1.set_zlim(min(0,np.min(X[:,2]),np.min(R[:,2])),max(0,np.max(X[:,2]),np.max(R[:,2])))\n ax2.set_xlim(-1,1)\n ax2.set_ylim(-1,1)\n ax2.set_zlim(-1,1)\n #fig.tight_layout()\n ax1.view_init(elev,azim)\n ax2.view_init(elev,azim)\n plt.title('r-blue, x-red, d-green, x-mag, x0-black')\n plt.show()\n\n# Setting a standard name for the variables\nnp.random.seed(0)\nA = generate_spd_matrix(3)\nb = np.ones(3)\nx0 = np.ones(3)\nX, D, R, alphas, betas, n_residuals = conjugate_gradient(A, b, x0, True)\n# For plotting with widgets\nn_widget = IntSlider(min=0, max=b.shape[0], step=1, value=0)\nelev_widget = IntSlider(min=-180, max=180, step=10, value=-180)\nazim_widget = IntSlider(min=0, max=360, step=10, value=30)\nsolution_evolution = lambda n,elev,azim: plot_iterative_solution(A,b,X,R,D,n,elev,azim)\ninteract(solution_evolution,n=n_widget,elev=elev_widget,azim=azim_widget)", "The science behind this algorithm is in the classnotes and in the textbook (Numerical Analysis, 2nd Edition, Timothy Sauer). 
Now let's try it!\nHere are some questions to think about:\n* What are the advantages and disadvantages of each method: gradient_descent and conjugate_gradient?\n* In which cases can the Conjugate Gradient Method converge in less than $n$ iterations?\n* What will happen if you use the Gradient Descent or Conjugate Gradient Method with non-symmetric, non-positive-definite matrices?\n<div id='LP' />\n\nLet's Play: Practical Exercises and Profiling\nFirst of all, define a function to calculate the progress of the relative error for a given method, that is, input the array of approximate solutions X and the real solution provided by Numpy's solver r_sol and return an array with the relative error for each step.", "def relative_error(X, r_sol):\n n_steps = X.shape[0]\n n_r_sol = np.linalg.norm(r_sol)\n E = np.zeros(n_steps)\n for i in range(n_steps):\n E[i] = np.linalg.norm(X[i] - r_sol) / n_r_sol\n return E", "Trying the two methods with a small non-symmetric, non-positive-definite matrix and plotting the forward error for all the methods.", "def show_output_for_non_symmetric_and_npd(np_seed=0):\n np.random.seed(np_seed)\n n = 10\n A = 10 * np.random.random((n,n))\n b = 10 * np.random.random(n)\n x0 = np.zeros(n)\n\n X1 = gradient_descent(A, b, x0, n)\n X2, D, R, alphas, betas, n_residuals = conjugate_gradient(A, b, x0, True)\n r_sol = np.linalg.solve(A, b)\n\n E1 = relative_error(X1, r_sol)\n E2 = relative_error(X2, r_sol)\n iterations1 = np.linspace(1, n, n)\n iterations2 = np.linspace(1, X2.shape[0], X2.shape[0])\n\n plt.figure(figsize=(10,5))\n plt.xlabel('Iteration')\n plt.ylabel('Relative Error')\n plt.title('Evolution of the Relative Forward Error for each method')\n plt.semilogy(iterations1, E1, 'rd', markersize=8, label='GD') # Red diamonds are for Gradient Descent\n plt.semilogy(iterations2, E2, 'b.', markersize=8, label='CG') # Blue dots are for Conjugate Gradient\n plt.grid(True)\n plt.legend(loc='best')\n plt.show()\ninteract(show_output_for_non_symmetric_and_npd,np_seed=(0,100,1))", "As you can see, if the matrix doesn't meet the requirements for these methods, the results can be quite terrible.\nLet's try again, this time using an appropriate matrix.", "def show_output_for_symmetric_and_pd(np_seed=0,n=100):\n np.random.seed(np_seed)\n A = generate_spd_matrix(n)\n b = np.random.random(n)\n x0 = np.zeros(n)\n\n X1 = gradient_descent(A, b, x0, n)\n X2, D, R, alphas, betas, n_residuals = conjugate_gradient(A, b, x0, True)\n r_sol = np.linalg.solve(A, b)\n\n E1 = relative_error(X1, r_sol)\n E2 = relative_error(X2, r_sol)\n iterations1 = np.linspace(1, n, n)\n iterations2 = np.linspace(1, X2.shape[0], X2.shape[0])\n\n plt.figure(figsize=(10,5))\n plt.xlabel('Iteration')\n plt.ylabel('Relative Error')\n plt.title('Evolution of the Relative Forward Error for each method')\n plt.semilogy(iterations1, E1, 'rd', markersize=8, label='GD') # Red diamonds are for Gradient Descent\n plt.semilogy(iterations2, E2, 'b.', markersize=8, label='CG') # Blue dots are for Conjugate Gradient\n plt.grid(True)\n plt.legend(loc='best')\n plt.xlim([0,40])\n plt.show()\ninteract(show_output_for_symmetric_and_pd,np_seed=(0,100,1),n=(10,1000,10))", "Amazing! We started with a huge relative error and reduced it to practically zero in just under 10 iterations (the algorithms all have 100 iterations but we're showing you the first 40). 
\nWe can clearly see that the Conjugate Gradient Method converges faster than the Gradient Descent method, even for larger matrices.\nWe can see that, reached a certain size for the matrix, the amount of iterations needed to reach a small error remains more or less the same. We encourage you to try other kinds of matrices to see how the algorithms behave, and experiment with the code. Now let's move on to profiling.\nOf course, you win some, you lose some. Accelerating the convergence of the algorithm means you have to spend more of other resources. We'll use the functions %timeit and %memit to see how the algorithms behave.", "A = generate_spd_matrix(100)\nb = np.ones(100)\nx0 = np.random.random(100)\n\n%timeit gradient_descent(A, b, x0, n_iter=100, tol=1e-5)\n%timeit conjugate_gradient(A, b, x0, tol=1e-5)\n\n# Commented because it is taking too long, we need to review this!\n# %memit gradient_descent(A, b, x0, n_iter=100, tol=1e-5)\n# %memit conjugate_gradient(A, b, x0, tol=1e-5)", "We see something interesting here: all algorithms need about the same amount of memory.\nWhat happened with the measure of time? Why is it so big for the algorithm that has the best convergence rate? Besides the end of the loop, we have one other criteria for stopping the algorithm: When the residue r reaches the exact value of zero, we say that the algorithm converged, and stop. However it's very hard to get an error of zero for randomized initial guesses, so this almost never happens, and we can't take advantage of the convergence rate of the algorithms. \nThere's a way we can fix this: instead of using this criteria, make the algorithm stop when a certain tolerance or threshold is reached. That way, when the error gets small enough, we can stop and say that we got a good enough solution.\nYou can try with different matrices, different initial conditions, different sizes, etc. Try some more plotting, profiling, and experimenting. Have fun!\n<div id='acknowledgements' />\n\nAcknowledgements\n\nMaterial created by professor Claudio Torres (ctorres@inf.utfsm.cl) and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. April 2016.\nModified by professor Claudio Torres (ctorres@inf.utfsm.cl). DI UTFSM. April 2019.\nUpdate May 2020 - v1.15 - C.Torres : Fixing formatting issues.\nUpdate June 2020 - v1.16 - C.Torres : Adding 'compute_A_orthogonality' and extending 'show_small_example_CG'." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/hammoz-consortium/cmip6/models/sandbox-3/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: HAMMOZ-CONSORTIUM\nSource ID: SANDBOX-3\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:03\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nnadeau/pybotics
examples/dynamics.ipynb
mit
[ "Robot Dynamics\n\nHow is the motion of a manipulator affected by external forces?\nWhat joints torques are generated from external forces applied to the manipulator?\nThe following example will demonstrate how to calculate the joint torques required to counteract a given TCP wrench.\n\nCreate the Robot Model", "from pybotics.robot import Robot\nfrom pybotics.predefined_models import ur10\n\nrobot = Robot.from_parameters(ur10())", "Define the Forces/Torques Acting on the TCP\n\nAka the \"wrench\"", "forces = [0, 0, 10]\ntorques = [0, 0, 0]\nwrench = [*forces, *torques]", "Calculate Joint Torques\n\nWhat are the joint torques required to counteract this payload?\nThis calculation can be repeated at each discrete pose in a trajectory for trajectory dynamics", "import numpy as np\n\nrobot.joints = np.deg2rad([0, 0, 0, 0, 0, 0])\nj_torques = robot.compute_joint_torques(wrench)\nprint(f'Robot Joints: {robot.joints}')\nprint(f'Joint Torques: {j_torques}')\n\nrobot.joints = np.deg2rad([0, -90, -90, 0, -90, 0])\nj_torques = robot.compute_joint_torques(wrench)\nprint(f'Robot Joints: {robot.joints}')\nprint(f'Joint Torques: {j_torques}')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
mathemage/h2o-3
examples/deeplearning/notebooks/deeplearning_image_reconstruction_and_clustering.ipynb
apache-2.0
[ "Image Space Projection using Autoencoders\n\nIn this example we are going to autoencode the faces of the olivetti dataset and try to reconstruct them back.", "%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport pandas as pd\nimport scipy.io\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image, display\n\nimport h2o\nfrom h2o.estimators.deeplearning import H2OAutoEncoderEstimator\nh2o.init()", "http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html", "!wget -c http://www.cl.cam.ac.uk/Research/DTG/attarchive/pub/data/att_faces.tar.Z\n\n!tar xzvf att_faces.tar.Z;rm att_faces.tar.Z;", "We now need some code to read pgm files.\nThanks to StackOverflow we have some code to leverage:", "import re\n\ndef read_pgm(filename, byteorder='>'):\n \"\"\"Return image data from a raw PGM file as numpy array.\n\n Format specification: http://netpbm.sourceforge.net/doc/pgm.html\n\n \"\"\"\n with open(filename, 'rb') as f:\n buffer = f.read()\n try:\n header, width, height, maxval = re.search(\n b\"(^P5\\s(?:\\s*#.*[\\r\\n])*\"\n b\"(\\d+)\\s(?:\\s*#.*[\\r\\n])*\"\n b\"(\\d+)\\s(?:\\s*#.*[\\r\\n])*\"\n b\"(\\d+)\\s(?:\\s*#.*[\\r\\n]\\s)*)\", buffer).groups()\n except AttributeError:\n raise ValueError(\"Not a raw PGM file: '%s'\" % filename)\n return np.frombuffer(buffer,\n dtype='u1' if int(maxval) < 256 else byteorder+'u2',\n count=int(width)*int(height),\n offset=len(header)\n ).reshape((int(height), int(width)))\n\n\nimage = read_pgm(\"orl_faces/s12/6.pgm\", byteorder='<')\n\nimage.shape\n\nplt.imshow(image, plt.cm.gray)\nplt.show()\n\nimport glob\nimport os\nfrom collections import defaultdict\n\nimages = glob.glob(\"orl_faces/**/*.pgm\")\n\ndata = defaultdict(list)\nimage_data = []\nfor img in images:\n _,label,_ = img.split(os.path.sep)\n imgdata = read_pgm(img, byteorder='<').flatten().tolist()\n data[label].append(imgdata)\n image_data.append(imgdata)", "Let's import it to H2O", "faces = h2o.H2OFrame(image_data)\n\nfaces.shape\n\nfrom h2o.estimators.deeplearning import H2OAutoEncoderEstimator\n\nmodel = H2OAutoEncoderEstimator( \n activation=\"Tanh\", \n hidden=[50], \n l1=1e-4, \n epochs=10\n)\n\nmodel.train(x=faces.names, training_frame=faces)\n\nmodel", "Reconstructing the hidden space\nNow that we have our model trained, we would like to understand better what is the internal representation of this model? What makes a face a .. face? \nWe will provide to the model some gaussian noise and see what is the results.\nWe star by creating some gaussian noise:", "import pandas as pd\n\ngaussian_noise = np.random.randn(10304)\n\nplt.imshow(gaussian_noise.reshape(112, 92), plt.cm.gray);", "Then we import this data inside H2O. We have to first map the columns to the gaussian data.", "gaussian_noise_pre = dict(zip(faces.names,gaussian_noise))\n\ngaussian_noise_hf = h2o.H2OFrame.from_python(gaussian_noise_pre)\n\nresult = model.predict(gaussian_noise_hf)\n\nresult.shape\n\nimg = result.as_data_frame()\n\nimg_data = img.T.values.reshape(112, 92)\n\nplt.imshow(img_data, plt.cm.gray);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
badlands-model/BayesLands
Examples/regridInput.ipynb
gpl-3.0
[ "Regridding input data to higher resolution\nThe initial resolution of the input file is used as the higher resolution that Badlands model can used. If one started with a given resolution and want to work with an higher one, it is required to regrid the input file to match at least the requested resolution.", "import sys\nprint(sys.version)\nprint(sys.executable)\n\n%matplotlib inline\n\n# Import badlands grid generation toolbox\nimport pybadlands_companion.resizeInput as resize", "1. Load python class and set required resolution", "#help(resize.resizeInput.__init__)\n\nnewRes = resize.resizeInput(requestedSpacing = 40)", "2. Regrid DEM file", "#help(newRes.regridDEM)\nnewRes.regridDEM(inDEM='mountain/data/nodes.csv',outDEM='mountain/data/newnodes.csv')", "3. Regrid Rain file", "#help(newRes.regridRain)\nnewRes.regridRain(inRain='data/rain.csv',outRain='newrain.csv')", "4. Regrid Tectonic files\nHere you have the choice between vertical only displacements file and 3D ones.\nIn cases where you have several files you might create a loop to automatically regrid the files!\nVertical only file", "#help(newRes.regridTecto)\nnewRes.regridTecto(inTec='data/disp.csv', outTec='newdisp.csv')", "3D displacements file", "#help(newRes.regridDisp)\nnewRes.regridDisp(inDisp='data/disp.csv', outDisp='newdisp.csv')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jseabold/statsmodels
examples/notebooks/discrete_choice_overview.ipynb
bsd-3-clause
[ "Discrete Choice Models Overview", "import numpy as np\nimport statsmodels.api as sm", "Data\nLoad data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).", "spector_data = sm.datasets.spector.load(as_pandas=False)\nspector_data.exog = sm.add_constant(spector_data.exog, prepend=False)", "Inspect the data:", "print(spector_data.exog[:5,:])\nprint(spector_data.endog[:5])", "Linear Probability Model (OLS)", "lpm_mod = sm.OLS(spector_data.endog, spector_data.exog)\nlpm_res = lpm_mod.fit()\nprint('Parameters: ', lpm_res.params[:-1])", "Logit Model", "logit_mod = sm.Logit(spector_data.endog, spector_data.exog)\nlogit_res = logit_mod.fit(disp=0)\nprint('Parameters: ', logit_res.params)", "Marginal Effects", "margeff = logit_res.get_margeff()\nprint(margeff.summary())", "As in all the discrete data models presented below, we can print a nice summary of results:", "print(logit_res.summary())", "Probit Model", "probit_mod = sm.Probit(spector_data.endog, spector_data.exog)\nprobit_res = probit_mod.fit()\nprobit_margeff = probit_res.get_margeff()\nprint('Parameters: ', probit_res.params)\nprint('Marginal effects: ')\nprint(probit_margeff.summary())", "Multinomial Logit\nLoad data from the American National Election Studies:", "anes_data = sm.datasets.anes96.load(as_pandas=False)\nanes_exog = anes_data.exog\nanes_exog = sm.add_constant(anes_exog, prepend=False)", "Inspect the data:", "print(anes_data.exog[:5,:])\nprint(anes_data.endog[:5])", "Fit MNL model:", "mlogit_mod = sm.MNLogit(anes_data.endog, anes_exog)\nmlogit_res = mlogit_mod.fit()\nprint(mlogit_res.params)", "Poisson\nLoad the Rand data. Note that this example is similar to Cameron and Trivedi's Microeconometrics Table 20.5, but it is slightly different because of minor changes in the data.", "rand_data = sm.datasets.randhie.load(as_pandas=False)\nrand_exog = rand_data.exog.view(float).reshape(len(rand_data.exog), -1)\nrand_exog = sm.add_constant(rand_exog, prepend=False)", "Fit Poisson model:", "poisson_mod = sm.Poisson(rand_data.endog, rand_exog)\npoisson_res = poisson_mod.fit(method=\"newton\")\nprint(poisson_res.summary())", "Negative Binomial\nThe negative binomial model gives slightly different results.", "mod_nbin = sm.NegativeBinomial(rand_data.endog, rand_exog)\nres_nbin = mod_nbin.fit(disp=False)\nprint(res_nbin.summary())", "Alternative solvers\nThe default method for fitting discrete data MLE models is Newton-Raphson. You can use other solvers by using the method argument:", "mlogit_res = mlogit_mod.fit(method='bfgs', maxiter=250)\nprint(mlogit_res.summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
elmaso/tno-ai
aind2-dl-master/IMDB_In_Keras.ipynb
gpl-3.0
[ "Analyzing IMDB Data in Keras", "# Imports\nimport numpy as np\nimport keras\nfrom keras.datasets import imdb\nfrom keras.models import Sequential\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.layers import Dense, Dropout, Activation\nfrom keras.preprocessing.text import Tokenizer\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnp.random.seed(42)", "1. Loading the data\nThis dataset comes preloaded with Keras, so one simple command will get us training and testing data. There is a parameter for how many words we want to look at. We've set it at 1000, but feel free to experiment.", "# Loading the data (it's preloaded in Keras)\n(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)\n\nprint(x_train.shape)\nprint(x_test.shape)", "2. Examining the data\nNotice that the data has been already pre-processed, where all the words have numbers, and the reviews come in as a vector with the words that the review contains. For example, if the word 'the' is the first one in our dictionary, and a review contains the word 'the', then there is a 1 in the corresponding vector.\nThe output comes as a vector of 1's and 0's, where 1 is a positive sentiment for the review, and 0 is negative.", "print(x_train[0])\nprint(y_train[0])", "3. One-hot encoding the output\nHere, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.", "# One-hot encoding the output into vector mode, each of length 1000\ntokenizer = Tokenizer(num_words=1000)\nx_train = tokenizer.sequences_to_matrix(x_train, mode='binary')\nx_test = tokenizer.sequences_to_matrix(x_test, mode='binary')\nprint(x_train[0])", "And we'll also one-hot encode the output.", "# One-hot encoding the output\nnum_classes = 2\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\nprint(y_train.shape)\nprint(y_test.shape)", "4. Building the model architecture\nBuild a model here using sequential. Feel free to experiment with different layers and sizes! Also, experiment adding dropout to reduce overfitting.", "# TODO: Build the model architecture\nmodel = Sequential()\nmodel.add(Dense(512, activation='relu', input_dim=1000))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(num_classes, activation='softmax' ))\nmodel.summary()\n\n\n# TODO: Compile the model using a loss function and an optimizer.\nmodel.compile(loss='categorical_crossentropy', \n optimizer='rmsprop', \n metrics=['accuracy'] )\n", "5. Training the model\nRun the model here. Experiment with different batch_size, and number of epochs!", "# TODO: Run the model. Feel free to experiment with different batch sizes and number of epochs.\ncheckpoint = ModelCheckpoint(filepath='mnist.model.best.hdf5',\n verbose=1, save_best_only=True)\n\nhist = model.fit(x_train, y_train,\n batch_size=32, epochs=10, \n validation_split=0.2, callbacks=[checkpoint],\n verbose=2, shuffle=True)", "6. Evaluating the model\nThis will give you the accuracy of the model, as evaluated on the testing set. Can you get something over 85%?", "model.load_weights('mnist.model.best.hdf5')\nscore = model.evaluate(x_test, y_test, verbose=0)\nscorex = 100*score[1]\nprint(\"Accuracy: %.2f%%\" % scorex )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
shubham0704/ATR-FNN
Data set preparation.ipynb
mit
[ "# creating PIPLELINE\n# __author__ == shubham0704\n\n# dependencies\nimport numpy as np\nfrom PIL import Image\nfrom scipy import ndimage", "Load datasets\nImages will be loaded into train and test data set and will be converted to gray scale", "import glob\nimport os\n\ndef label_getter(file_name):\n \n labels = []\n for file in os.listdir(file_name):\n print(file_name+file)\n inner_files_regex = file_name + file + '/*.jpeg'\n labels.append(glob.glob(inner_files_regex))\n return labels\n\ntrain_folder_labels = label_getter('./TARGETS/TRAIN/')\ntest_folder_labels = label_getter('./TARGETS/TEST/')\nprint(len(train_folder_labels))\nprint(len(test_folder_labels))", "A sample file looks like -", "img = Image.open(train_folder_labels[0][0])\nprint(img.size)\nimage_size = img.size[0]\nimg", "Create pickle of all files\nPickling these files will help to load it on demand\n\nLoad files into memory and create real file data set", "import pickle\n\npixel_depth = 255\n# Note - dividing by pixel depth distrubutes these values around 0. So uniform variance and 0 mean\nprint(image_size)\n# create datasets\ndef load_tank_type(image_labels):\n images = image_labels\n print(len(images))\n dataset = np.ndarray(shape=(len(images), image_size, image_size),\n dtype=np.float32)\n for index,image in enumerate(images):\n image_path = image\n image_data = (ndimage.imread(image_path).astype(float) - \n pixel_depth / 2) / pixel_depth\n #image_data = ndimage.imread(image_path)\n dataset[index,:,:] = image_data\n print('Full dataset tensor:', dataset.shape)\n print('Mean:', np.mean(dataset))\n print('Standard deviation:', np.std(dataset))\n return dataset\n\ndef pickle_files(folder_labels, folder_type, force=False):\n tank_names = ['BMP2','T72','BTR70']\n dataset_names = []\n for index,name in enumerate(tank_names):\n set_filename = folder_type + '_' + name + '.pickle'\n dataset_names.append(set_filename)\n if os.path.exists(set_filename) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping pickling.' % set_filename)\n else:\n print('Pickling %s.' 
% set_filename)\n dataset = load_tank_type(folder_labels[index])\n try:\n with open(set_filename, 'wb') as f:\n pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', set_filename, ':', e)\n return dataset_names\n\ntrain_dataset = pickle_files(train_folder_labels,'TRAIN')\ntest_dataset = pickle_files(test_folder_labels,'TEST')", "Spot check to see if the file still looks good", "import matplotlib.pyplot as plt\n\npickle_file = train_dataset[0]\nwith open(pickle_file, 'rb') as file:\n tank_set = pickle.load(file)\n sample_image = tank_set[5,:,:]\n plt.figure()\n plt.imshow(sample_image)\n plt.show()\n\n# we need to merge datasets for training in order to train well\ndef make_arrays(nb_rows, img_size):\n if nb_rows:\n dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)\n labels = np.ndarray(nb_rows, dtype=np.int32)\n else:\n dataset, labels = None, None\n return dataset, labels\n\ndef merge_datasets(pickle_files, train_size, valid_size=0):\n num_classes = len(pickle_files)\n valid_dataset, valid_labels = make_arrays(valid_size, image_size)\n train_dataset, train_labels = make_arrays(train_size, image_size)\n vsize_per_class = valid_size // num_classes\n tsize_per_class = train_size // num_classes\n \n start_v, start_t = 0, 0\n end_v, end_t = vsize_per_class, tsize_per_class\n end_l = vsize_per_class+tsize_per_class\n for label, pickle_file in enumerate(pickle_files): \n try:\n with open(pickle_file, 'rb') as f:\n tank_set = pickle.load(f)\n # let's shuffle the tank_type to have random validation and training set\n np.random.shuffle(tank_set)\n if valid_dataset is not None:\n valid_tank = tank_set[:vsize_per_class, :, :]\n valid_dataset[start_v:end_v, :, :] = valid_tank\n valid_labels[start_v:end_v] = label\n start_v += vsize_per_class\n end_v += vsize_per_class\n \n train_tank = tank_set[vsize_per_class:end_l, :, :]\n train_dataset[start_t:end_t, :, :] = train_tank\n train_labels[start_t:end_t] = label\n start_t += tsize_per_class\n end_t += tsize_per_class\n except Exception as e:\n print('Unable to process data from', pickle_file, ':', e)\n raise\n \n return valid_dataset, valid_labels, train_dataset, train_labels\n\n# pass test data first and then I will pass train data\n# break test into validation and test data set\nimport math\nvalid_dataset, valid_labels, test_dataset, test_labels = merge_datasets(\n test_dataset,math.floor(0.7*195)*3,\n math.floor(0.3*195)*3)\n_, _, train_dataset, train_labels = merge_datasets(train_dataset,232*3)\n\nprint(len(valid_dataset))\nprint(len(test_dataset))\nprint(len(train_dataset))\nprint(valid_labels[:5])", "As we can see labels are not shuffled so lets shuffle labels accordingly", "def randomize(dataset, labels):\n permutation = np.random.permutation(labels.shape[0])\n shuffled_dataset = dataset[permutation,:,:]\n shuffled_labels = labels[permutation]\n return shuffled_dataset, shuffled_labels\ntrain_dataset, train_labels = randomize(train_dataset, train_labels)\ntest_dataset, test_labels = randomize(test_dataset, test_labels)\nvalid_dataset, valid_labels = randomize(valid_dataset, valid_labels)", "Finally lets save data for further reuse -", "pickle_file = os.path.join(os.getcwd(), 'final_dataset.pickle')\n\ntry:\n f = open(pickle_file, 'wb')\n save = {\n 'train_dataset': train_dataset,\n 'train_labels': train_labels,\n 'valid_dataset': valid_dataset,\n 'valid_labels': valid_labels,\n 'test_dataset': test_dataset,\n 'test_labels': test_labels,\n }\n pickle.dump(save, f, 
pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nstatinfo = os.stat(pickle_file)\nprint('Compressed pickle size:', statinfo.st_size)", "Next step is to use it in a Fuzzy Neural Network" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
keras-team/keras-io
examples/vision/ipynb/autoencoder.ipynb
apache-2.0
[ "Convolutional autoencoder for image denoising\nAuthor: Santiago L. Valdarrama<br>\nDate created: 2021/03/01<br>\nLast modified: 2021/03/01<br>\nDescription: How to train a deep convolutional autoencoder for image denoising.\nIntroduction\nThis example demonstrates how to implement a deep convolutional autoencoder\nfor image denoising, mapping noisy digits images from the MNIST dataset to\nclean digits images. This implementation is based on an original blog post\ntitled Building Autoencoders in Keras\nby François Chollet.\nSetup", "import numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Model\n\n\ndef preprocess(array):\n \"\"\"\n Normalizes the supplied array and reshapes it into the appropriate format.\n \"\"\"\n\n array = array.astype(\"float32\") / 255.0\n array = np.reshape(array, (len(array), 28, 28, 1))\n return array\n\n\ndef noise(array):\n \"\"\"\n Adds random noise to each image in the supplied array.\n \"\"\"\n\n noise_factor = 0.4\n noisy_array = array + noise_factor * np.random.normal(\n loc=0.0, scale=1.0, size=array.shape\n )\n\n return np.clip(noisy_array, 0.0, 1.0)\n\n\ndef display(array1, array2):\n \"\"\"\n Displays ten random images from each one of the supplied arrays.\n \"\"\"\n\n n = 10\n\n indices = np.random.randint(len(array1), size=n)\n images1 = array1[indices, :]\n images2 = array2[indices, :]\n\n plt.figure(figsize=(20, 4))\n for i, (image1, image2) in enumerate(zip(images1, images2)):\n ax = plt.subplot(2, n, i + 1)\n plt.imshow(image1.reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n ax = plt.subplot(2, n, i + 1 + n)\n plt.imshow(image2.reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n plt.show()\n", "Prepare the data", "# Since we only need images from the dataset to encode and decode, we\n# won't use the labels.\n(train_data, _), (test_data, _) = mnist.load_data()\n\n# Normalize and reshape the data\ntrain_data = preprocess(train_data)\ntest_data = preprocess(test_data)\n\n# Create a copy of the data with added noise\nnoisy_train_data = noise(train_data)\nnoisy_test_data = noise(test_data)\n\n# Display the train data and a version of it with added noise\ndisplay(train_data, noisy_train_data)", "Build the autoencoder\nWe are going to use the Functional API to build our convolutional autoencoder.", "input = layers.Input(shape=(28, 28, 1))\n\n# Encoder\nx = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(input)\nx = layers.MaxPooling2D((2, 2), padding=\"same\")(x)\nx = layers.Conv2D(32, (3, 3), activation=\"relu\", padding=\"same\")(x)\nx = layers.MaxPooling2D((2, 2), padding=\"same\")(x)\n\n# Decoder\nx = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x)\nx = layers.Conv2DTranspose(32, (3, 3), strides=2, activation=\"relu\", padding=\"same\")(x)\nx = layers.Conv2D(1, (3, 3), activation=\"sigmoid\", padding=\"same\")(x)\n\n# Autoencoder\nautoencoder = Model(input, x)\nautoencoder.compile(optimizer=\"adam\", loss=\"binary_crossentropy\")\nautoencoder.summary()", "Now we can train our autoencoder using train_data as both our input data\nand target. 
Notice we are setting up the validation data using the same\nformat.", "autoencoder.fit(\n x=train_data,\n y=train_data,\n epochs=50,\n batch_size=128,\n shuffle=True,\n validation_data=(test_data, test_data),\n)", "Let's predict on our test dataset and display the original image together with\nthe prediction from our autoencoder.\nNotice how the predictions are pretty close to the original images, although\nnot quite the same.", "predictions = autoencoder.predict(test_data)\ndisplay(test_data, predictions)", "Now that we know that our autoencoder works, let's retrain it using the noisy\ndata as our input and the clean data as our target. We want our autoencoder to\nlearn how to denoise the images.", "autoencoder.fit(\n x=noisy_train_data,\n y=train_data,\n epochs=100,\n batch_size=128,\n shuffle=True,\n validation_data=(noisy_test_data, test_data),\n)", "Let's now predict on the noisy data and display the results of our autoencoder.\nNotice how the autoencoder does an amazing job at removing the noise from the\ninput images.", "predictions = autoencoder.predict(noisy_test_data)\ndisplay(noisy_test_data, predictions)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.15/_downloads/plot_sensor_permutation_test.ipynb
bsd-3-clause
[ "%matplotlib inline", "Permutation T-test on sensor data\nOne tests if the signal significantly deviates from 0\nduring a fixed time window of interest. Here computation\nis performed on MNE sample dataset between 40 and 60 ms.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne import io\nfrom mne.stats import permutation_t_test\nfrom mne.datasets import sample\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = 1\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# pick MEG Gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n exclude='bads')\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))\ndata = epochs.get_data()\ntimes = epochs.times\n\ntemporal_mask = np.logical_and(0.04 <= times, times <= 0.06)\ndata = np.mean(data[:, :, temporal_mask], axis=2)\n\nn_permutations = 50000\nT0, p_values, H0 = permutation_t_test(data, n_permutations, n_jobs=1)\n\nsignificant_sensors = picks[p_values <= 0.05]\nsignificant_sensors_names = [raw.ch_names[k] for k in significant_sensors]\n\nprint(\"Number of significant sensors : %d\" % len(significant_sensors))\nprint(\"Sensors names : %s\" % significant_sensors_names)", "View location of significantly active sensors", "evoked = mne.EvokedArray(-np.log10(p_values)[:, np.newaxis],\n epochs.info, tmin=0.)\n\n# Extract mask and indices of active sensors in layout\nstats_picks = mne.pick_channels(evoked.ch_names, significant_sensors_names)\nmask = p_values[:, np.newaxis] <= 0.05\n\nevoked.plot_topomap(ch_type='grad', times=[0], scalings=1,\n time_format=None, cmap='Reds', vmin=0., vmax=np.max,\n units='-log10(p)', cbar_fmt='-%0.1f', mask=mask,\n size=3, show_names=lambda x: x[4:] + ' ' * 20)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mbakker7/timml
notebooks/BuildingPit.ipynb
mit
[ "BuildingPit Element", "import numpy as np\nimport matplotlib.pyplot as plt\n\n# import sys\n# sys.path.insert(1, \"..\")\n\nimport timml", "Parameters\nDefine some parameters", "kh = 2. # m/day\nf_ani = 0.05 # anisotropy factor\nkv = f_ani*kh\nctop = 800. # resistance top leaky layer in days\n\nztop = 0. # surface elevation\nz_well = -13. # end depth of the wellscreen\nz_dw = -15. # bottom elevation of sheetpile wall\nz_extra = z_dw - 15. # extra layer\nzbot = -60. # bottom elevation of the model\n\nl = 40. # length building pit in m\nb = 30. # width building pit in m\n\nh_bem = -6.21 # m\noffset = 5. # distance groundwater extraction element from sheetpiles in m\n\nxy = [(-l/2, -b/2), (l/2, -b/2), (l/2, b/2), (-l/2, b/2), (-l/2, -b/2)]\n\nfor (x, y) in xy:\n p2, = plt.plot(x, y, \"o\", label=\"building pit pts\")\nplt.axis(\"equal\");\nplt.show()", "Model\nSet up a model", "z = np.array([ztop+1, ztop, z_dw, z_dw, z_extra, z_extra, zbot])\ndz = z[1::2] - z[2::2]\ndz\n\nkh_arr = kh * np.ones(dz.shape)\n\nc = np.r_[np.array([ctop]), dz[:-1]/(2*kv) + dz[1:]/(2*kv)]\nc", "Build model, solve, and calculate total discharge and distance to the 5 cm drawdown contour.", "ml = timml.ModelMaq(kaq=kh_arr, z=z, c=c, topboundary=\"semi\", hstar=0.0, f2py=False)\n\nlayers = np.arange(np.sum(z_dw <= ml.aq.zaqbot))\nlast_lay_dw = layers[-1]\n\ninhom = timml.BuildingPit(ml, xy, kaq=kh_arr, z=z[1:], topboundary=\"conf\", \n c=c[1:], order=4, ndeg=3, layers=layers)\n\ntimml.HeadLineSink(ml, x1=-l/2+offset, y1=b/2-offset, x2=l/2-offset, y2=b/2-offset, hls=h_bem, \n layers=np.arange(last_lay_dw+1))\ntimml.HeadLineSink(ml, x1=-l/2+offset, y1=0, x2=l/2-offset, y2=0, hls=h_bem, \n layers=np.arange(last_lay_dw+1))\ntimml.HeadLineSink(ml, x1=-l/2+offset, y1=-b/2+offset, x2=l/2-offset, y2=-b/2+offset, hls=h_bem, \n layers=np.arange(last_lay_dw+1))\n\n# ml.solve_mp(nproc=2)\nml.solve()\n\nQtot = 0.\n\nfor e in ml.elementlist:\n if e.name == \"HeadLineSink\":\n Qtot += e.discharge()\n\nprint(\"Debiet =\", np.round(Qtot.sum(), 2), \"m3/dag\")\n\ny = np.linspace(-b/2-25, b/2+1100, 201)\nhl = ml.headalongline(np.zeros(201), y, layers=[0])\ny_5cm = np.interp(-0.05, ml.headalongline(np.zeros(201), y, layers=0).squeeze(), y, right=np.nan)\nprint(\"Distance to 5 cm drawdown contour =\", np.round(y_5cm, 2), \"m\")\n\n# Q_arr[mi] = Qtot.sum()\n# y5cm_arr[mi] = y_5cm", "Plot an overview of the model", "ml.plot()", "Visualizations", "x = np.linspace(-l/2-25, l/2+1100, 201)\nhl = ml.headalongline(x, np.zeros(201), layers=[last_lay_dw, last_lay_dw+1])\n\nfig, ax = plt.subplots(1, 1, figsize=(16, 5))\n\nax.plot(x, hl[0].squeeze(), label=\"head layer {}\".format(last_lay_dw))\nax.plot(x, hl[1].squeeze(), label=\"head layer {}\".format(last_lay_dw+1))\n\nax.axhline(-0.05, color=\"r\", linestyle=\"dashed\", lw=0.75, label=\"-0.05 m\")\nax.axhline(-0.5, color=\"k\", linestyle=\"dashed\", lw=0.75, label=\"-0.5 m\")\nax.set_xlabel(\"x (m)\")\nax.set_ylabel(\"head (m)\")\nax.legend(loc=\"best\")\nax.grid(b=True)\n\nxoffset = 15\nzoffset = 15\nx1, x2, y1, y2 = [-l/2-xoffset, -l/2+xoffset, 0, 0]\nnudge = 1e-6\nn = 301\n\n# plot head contour cross-sections\nh = ml.headalongline(np.linspace(x1 + nudge, x2 - nudge, n),\n np.linspace(y1 + nudge, y2 - nudge, n))\nL = np.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)\nxg = np.linspace(0, L, n)+x1\n\nzg = 0.5 * (ml.aq.zaqbot + ml.aq.zaqtop)\nzg = np.hstack((ml.aq.zaqtop[0], zg, ml.aq.zaqbot[-1]))\nh = np.vstack((h[0], h, h[-1]))\n\nlevels = np.linspace(h_bem, -0.0, 51)\n\nfig, ax = plt.subplots(1, 1, 
figsize=(16, 8))\nml.plot(win=[x1, x2, y1, y2], orientation=\"ver\", newfig=False)\n# ml.vcontoursf1D(x1, x2, n, levels=101, newfig=False, ax=ax, color=\"r\")\n\ncf = ax.contourf(xg, zg, h, levels)\ncs = ax.contour(xg, zg, h, levels, colors=\"k\", linewidths=0.5)\n# cs2 = ax.contour(xg, zg2, h2, levels, colors=\"r\", linewidths=0.5, linestyles=\"dashed\")\n\n# plt.clabel(cs, fmt=\"%.2f\")\n# plt.clabel(cs2, fmt=\"%.2f\")\n\nax.set_ylim(z_dw-zoffset, z_dw+zoffset)\nax.set_ylabel(\"diepte (m NAP)\");\nax.set_xlabel(\"m\")\nax.set_aspect(\"equal\")\n\nplt.colorbar(cf, ax=ax)\nplt.show()", "Model 2: Add more layers\nAdd more layers to the model to get a more accurate solution of the flow towards the building pit.", "n = 11 # aantal laagjes boven en onder damwand\n\ndz_i_top = (z_well-z_dw)/np.sum(np.arange(n+1))\ndz_i_bot = (z_dw-z_extra)/np.sum(np.arange(2*n+1))\n\nz_layers_top = np.cumsum(np.arange(n)*dz_i_top)\nz_layers_bot = np.cumsum(np.arange(2*n)*dz_i_bot)\n\nzgr = np.r_[z_dw + z_layers_top[::-1], (z_dw-z_layers_bot)[1:]]\n\nz4 = np.r_[np.array([ztop+1, ztop, z_well, z_well]), np.repeat(zgr, 2, 0), z_extra, z_extra, zbot]\n# z4 = np.r_[np.array([ztop+1, ztop, z_well]), np.repeat(zgr, 2, 0)[1:-1]]\n\ndz4 = z4[1:-1:2] - z4[2::2]\n\nkh_arr = kh * np.ones(dz4.shape)\n\nc = np.r_[np.array([ctop]), dz4[:-1]/(2*kv) + dz4[1:]/(2*kv)]\n\nkh_arr2 = kh_arr.copy()\nkh_arr2[0] = 1e-5", "Build model, solve, and calculate total discharge and distance to the 5 cm drawdown contour.", "ml = timml.ModelMaq(kaq=kh_arr, z=z4, c=c, topboundary=\"semi\", hstar=0.0)\n\nlayers = np.arange(np.sum(z_dw <= ml.aq.zaqbot))\nlast_lay_dw = layers[-1]\ninhom = timml.BuildingPit(ml, xy, kaq=kh_arr2, z=z4[1:], topboundary=\"conf\", \n c=c[1:], order=4, ndeg=3, layers=layers)\n\n# wlayers = np.arange(np.sum(z_well <= ml.aq.zaqbot))\nwlayers = np.arange(np.sum(-14 <= ml.aq.zaqbot))\nwlayers=wlayers[1:]\n\ntimml.HeadLineSink(ml, x1=-l/2+offset, y1=b/2-offset, x2=l/2-offset, y2=b/2-offset, hls=h_bem, \n layers=wlayers)\ntimml.HeadLineSink(ml, x1=-l/2+offset, y1=0, x2=l/2-offset, y2=0, hls=h_bem, \n layers=wlayers, order=5)\ntimml.HeadLineSink(ml, x1=-l/2+offset, y1=-b/2+offset, x2=l/2-offset, y2=-b/2+offset, hls=h_bem, \n layers=wlayers);\n\n# ml.solve_mp(nproc=2)\nml.solve()\n\nQtot = 0.\n\nfor e in ml.elementlist:\n if e.name == \"HeadLineSink\":\n Qtot += e.discharge()\n\nprint(\"Debiet =\", np.round(Qtot.sum(), 2), \"m3/dag\")\n\ny = np.linspace(-b/2-25, b/2+1100, 201)\nhl = ml.headalongline(np.zeros(201), y, layers=[0])\ny_5cm = np.interp(-0.05, ml.headalongline(np.zeros(201), y, layers=0).squeeze(), y, right=np.nan)\nprint(\"Distance to 5 cm drawdown contour =\", np.round(y_5cm, 2), \"m\")\n\nlast_lay_dw = layers[-1]\n\nx = np.linspace(-l/2-25, l/2+1, 201)\n# x = np.linspace(-l/2-25, l/2+1100, 201)\nhl = ml.headalongline(x, np.zeros(201), layers=[0, last_lay_dw, last_lay_dw+1])\n\nfig, ax = plt.subplots(1, 1, figsize=(16, 5))\n\nax.plot(x, hl[0].squeeze(), label=\"head layer 0\")\nax.plot(x, hl[1].squeeze(), label=\"head layer {}\".format(last_lay_dw))\nax.plot(x, hl[2].squeeze(), label=\"head layer {}\".format(last_lay_dw+1))\n\nax.axhline(-0.05, color=\"r\", linestyle=\"dashed\", lw=0.75, label=\"-0.05 m\")\nax.axhline(-0.5, color=\"k\", linestyle=\"dashed\", lw=0.75, label=\"-0.5 m\")\nax.set_xlabel(\"x (m)\")\nax.set_ylabel(\"head (m)\")\nax.legend(loc=\"best\")\nax.grid(b=True)\n\nxoffset = 50\nzoffset = 15\nx1, x2, y1, y2 = [-l/2-xoffset, -l/2+xoffset, 0, 0]\nnudge = 1e-6\nn = 301\n\n# plot head contour 
cross-sections\nh = ml.headalongline(np.linspace(x1 + nudge, x2 - nudge, n),\n np.linspace(y1 + nudge, y2 - nudge, n))\nL = np.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)\nxg = np.linspace(0, L, n) + x1\n\nzg = 0.5 * (ml.aq.zaqbot + ml.aq.zaqtop)\nzg = np.hstack((ml.aq.zaqtop[0], zg, ml.aq.zaqbot[-1]))\nh = np.vstack((h[0], h, h[-1]))\n\nlevels = np.linspace(h_bem-.1, -0.0, 51)\n\nfig, ax = plt.subplots(1, 1, figsize=(16, 8))\nml.plot(win=[x1, x2, y1, y2], orientation=\"ver\", newfig=False)\n# ml.vcontoursf1D(x1, x2, n, levels=101, newfig=False, ax=ax, color=\"r\")\n\ncf = ax.contourf(xg, zg, h, levels)\ncs = ax.contour(xg, zg, h, levels, colors=\"k\", linewidths=0.5)\n# cs2 = ax.contour(xg, zg2, h2, levels, colors=\"r\", linewidths=0.5, linestyles=\"dashed\")\n\n# plt.clabel(cs, fmt=\"%.2f\")\n# plt.clabel(cs2, fmt=\"%.2f\")\n\nax.set_ylim(z_dw-zoffset, z_dw+zoffset)\nax.set_ylabel(\"depth (m NAP)\");\nax.set_xlabel(\"m\");\n# ax.set_aspect(\"equal\")\n\ncb = plt.colorbar(cf, ax=ax)\ncb.ax.set_ylabel(\"stijghoogte (m)\")\n\n# ax.set_ylim(-20, -5)\n# ax.set_xlim(-70, -57)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
merryjman/astronomy
sample.ipynb
gpl-3.0
[ "Star catalogue analysis\nThanks to UCF Physics undergrad Tyler Townsend for contributing to the development of this notebook.", "# Import modules that contain functions we need\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt", "Getting the data", "# Read in data that will be used for the calculations.\n# Using pandas read_csv method, we can create a data frame\ndata = pd.read_csv(\"https://raw.githubusercontent.com/merryjman/astronomy/master/stars.csv\")\ndatatwo = pd.read_csv(\"https://raw.githubusercontent.com/astronexus/HYG-Database/master/hygdata_v3.csv\")\n\n# We wish to look at the first 12 rows of our data set\n\ndata.head(12)\n", "Star map", "fig = plt.figure(figsize=(15, 4))\nplt.scatter(data.ra,data.dec, s=0.01)\nplt.xlim(24, 0)\nplt.title(\"All the Stars in the Catalogue\")\nplt.xlabel('right ascension (hours)')\nplt.ylabel('declination (degrees)')", "Let's Graph a Constellation!", "# These are the abbreviations for all the constellations\ndatatwo.sort_values('con').con.unique()\n\n# This shows just one constellation.\ndatatwo_con = datatwo.query('con == \"UMa\"')\n\n#Define a variable called \"name\" so I don't have to keep renaming the plot title!\nname = \"Ursa Major\"\n\n# This plots where the brightest 15 stars are in the sky\ndatatwo_con = datatwo_con.sort_values('mag').head(15)\nplt.scatter(datatwo_con.ra,datatwo_con.dec)\nplt.gca().invert_xaxis()\n\n# I graphed first without the line below, to see what it looks like, then\n# I added the plt.xlim(25,20) to make it look nicer.\n\nplt.xlim(15,8)\nplt.ylim(30,70)\nplt.title('%s In the Sky'%(name))\nplt.xlabel('right ascension (hours)')\nplt.ylabel('declination (degrees)')", "Let's Go Back in Time!", "# What did this constellation look like 50,000 years ago??\nplt.scatter(datatwo_con.ra-datatwo_con.pmra/1000/3600/15*50000,datatwo_con.dec-datatwo_con.pmdec/1000/3600*50000)\nplt.xlim(15,8)\nplt.ylim(30,70)\n\nplt.title('%s Fifty Thousand Years Ago!'%(name))\nplt.xlabel('right ascension (hours)')\nplt.ylabel('declination (degrees)')", "Let's Go Into the Future!", "# Now, let's try looking at what this same constellation will look like in 50,000 years!\nplt.scatter(datatwo_con.ra+datatwo_con.pmra/1000/3600/15*50000,datatwo_con.dec+datatwo_con.pmdec/1000/3600*50000)\n\nplt.xlim(15,8)\nplt.ylim(30,70)\nplt.title('%s Fifty Thousand Years From Now!'%(name))\nplt.xlabel('right ascension (hours)')\nplt.ylabel('declination (degrees)')\n", "Now you try one of your own!", "# Make a Hertzsprung-Russell Diagram!", "References\n\nThe data came from The Astronomy Nexus and their colletion of the Hipparcos, Yale Bright Star, and Gliese catalogues (huge zip file here).\nReversed H-R diagram from The Electric Universe\nMany thanks to Adam Lamee and his colleagues at codingink12.org for basically making this whole thing." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chinapnr/python_study
Python 基础课程/draft/python_basic_2/Python - 面向对象编程 01.ipynb
gpl-3.0
[ "Python - 面向对象编程 01\n\nPython Senior, Lesson 10, v1.0.0, 2016.11 by David.Yi\nv1.1, 2020.5.4, 6.13 edit by David Yi\n\n本次内容要点\n\n类与实例\n类的属性\n类的方法\n创建和使用子类\n构造器方法和解构器方法\n调用父类的 super() 方法\n\n面向对象编程\n类与实例\n类,是一个定义;实例是真正的对象的实现,创建一个实例的过程称作实例化。\n所有的类都继承自一个叫做 object 的类。继承的定义之后再说。\n类的一些操作方式和函数很像,在没有面向对象编程方式的时候,就是面向过程(函数)的开发方式。\n类可以很复杂,也可以很简单,取决于应用的需要。面向对象的编程方式,是现代流行的开发方式,博大精深,需要慢慢理解体会。一开始有些不太清楚,也没有关系。\n类的属性\n类可以理解为一种称之为命名空间( namespace),在这个类之下,数据都是属于这个类的实例的,我们称为属性,用实例名字+点+属性名字。这样的类比较简单,在 c 语言中称为结构体,在 pascal 中为记录类型,python 的数据结构比较简单。", "# 申明一个 class MyData\nclass MyData(object):\n pass\n\n# 实例化 MyData, 实例的名字叫做 obj_math\nobj_math = MyData()\nobj_math.x = 4\nprint(obj_math.x)", "类的方法\n光是把类作为命名空间太简单了,可以给类添加功能,称为方法。\n方法定义在类中,使用在实例中。可以理解为实例是真正做事情的代码,所以在实例中调用方法完成某个功能是合理的。", "class MyData(object):\n \n # 定义一个 SayHello 的方法,self 可以理解为必须传递的参数\n def SayHello(self):\n print('Hello!')\n \n# 实例化\nobj_math = MyData()\n\n# 调用方法\nobj_math.SayHello()\n\n# 类的方法的直接调用,其实还是实例化了\n\nMyData().SayHello()\n\n# 在上面基础上,复杂一点的例子\n\nclass MyData(object):\n \n # 初始化方法,双下划线前后\n # 实例化的时候,需要传递 self 之后的参数\n def __init__(self, x, y):\n self.x = x\n self.y = y\n \n # 定义一个 SayHello 的方法,self 可以理解为必须传递的参数\n def SayHello(self):\n print('Hello!')\n \n def Add(self):\n print(self.x + self.y)\n \n# 实例化\nobj_math = MyData(3,4)\n\n# 调用方法\nobj_math.SayHello()\n\nobj_math.x = 5\n\nobj_math.Add()\n\no1 = MyData(1,3)\no1.Add()\n\n# 再复杂一点,创建多个实例\n\nclass MyData(object):\n \n # 初始化方法,双下划线前后\n # 实例化的时候,需要传递 self 之后的参数\n def __init__(self, x, y):\n self.x = x\n self.y = y\n \n # 定义一个 SayHello 的方法,self 可以理解为必须传递的参数\n def SayHello(self):\n print('Hello!')\n \n def Add(self):\n print(self.x + self.y)\n \n# 实例化\nobj_math = MyData(3,4)\n\n# 调用方法\nobj_math.SayHello()\nobj_math.Add()\n\n# 再创建一个实例\nobj_math2 = MyData(5,6)\nobj_math2.Add()\n\n# 我们建立一个有趣的简单的模仿游戏的类来说明面向对象编程的概念\n# v1.0.0\n\n# NPC 类\nclass NPC(object):\n \n # 初始化 NPC 的属性\n def __init__(self, name):\n self.name = name\n self.weapon = 'gun'\n self.blood = 1000\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n print('name:', self.name)\n print('weapon:', self.weapon)\n print('blood:', self.blood)\n \n # 定义方法 - 通用攻击\n def fight_common(self):\n print('Fight Common')\n \nn1 = NPC('AA1')\nn1.show_properties()\nn1.fight_common()", "这个 NPC 的类,在初始化这里定义了 NPC 拥有的三个属性,name、weapon、blood,其中 name 需要创建实例的时候设置。\nNPC 类有两个方法,一个是 show_properties(),显示 NPC 的属性;另外一个是 fight_common(),我们称之为普通攻击。\n创建和使用子类\n通过 class B(A),即表示从 A 开始创建一个子类,我们称之为继承。\n面向对象的编程方式的有点已经可以体现,我们可以通过继承来构造一个对象体系,比如这里现在 NPC 这个类中,定义了游戏中人物的最基本的一些属性和方法,可以理解不光 NPC 是战士,还是巫师,对会有这些属性和方法;然后我们把各自独特的属性和方法定义在从 NPC 中继承的各个子类中。我们先来定义一个战士 Soldier 类。", "# 战士 Soldier 类,继承自 NPC class\nclass Soldier(NPC):\n # 暂时什么也不干\n pass\n \n# 创建一个 Soldier 类, 作为 NPC 的子类\nn1 = Soldier('AA2')\n\n# 调用方法,因为 Soldier 中此刻并没有任何实际的方法等,所以实际上自动调用了父类的方法\nn1.show_properties()\nn1.fight_common()", "在子类中,可以覆盖父类的方法。\n比如 __init__() ,我们需要在 Soldier 类的初始化中加入一个只有 soldier 才有的级别 level 属性。\n所以,我们先调用父类的__init__() 方法,再写 Soldier 类需要的代码。\n而 show_properties()方法,我们先用简单的办法,完全覆盖,也就是使用这个新的show_properties()方法,来显示 NPC 标准的三个属性以及 Soldier 的一个独立属性。", "# 战士 Soldier 类\nclass Soldier(NPC):\n \n # 建立 soldier 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.soldier_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n print('name:', self.name)\n print('weapon:', self.weapon)\n print('blood:', self.blood)\n print('soldier_level:', self.soldier_level)\n\n# 创建一个 Soldier 类, 作为 NPC 的子类\nn1 = Soldier('AA2')\nn1.show_properties()\nn1.fight_common() ", "再来看看 
show_properties() 这个方法,总感觉这样有点笨重,因为不管是否是面向对象的编程方式,都不应该有太多重复的代码,现在 NPC 类中也有 show_properties() 方法,用来显示标准的三个属性,而 Soldier 类中同样名字的方法显示四个属性,如果 NPC 类中增加了属性,那么两边类中的这个方法都要修改。\n我们来看看有没有更好的办法,可以用 super(需要知道父类的类,self) 的方法来访问父类的方法,这样代码就简洁优雅了。", "# 战士 Soldier 类\nclass Soldier(NPC):\n \n # 建立 soldier 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.soldier_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Soldier, self).show_properties()\n print('soldier_level', self.soldier_level)\n\n# 创建一个 Soldier 类, 作为 NPC 的子类\nn1 = Soldier('AA2')\nn1.show_properties()\nn1.fight_common() ", "init() 构造器方法和 del() 解构器方法\n可以理解构造就是创建,解构就是销毁。每一个对象的实例都是有声明周期的,从构造到解构。\n类别调用的时候,对象被创建的时候,python 先检查是否实现了 init() 方法,如果没有,则什么也不干。如果有 init()方法,则执行。\npython 作为高级语言,具有垃圾对象回收机制,当 python 发现对于该实例对象的引用都被清除掉后,会执行 del() 方法。通常不需要自己去实现这个 del() 方法,python 会做好这一切。", "# 先整理一下上面的代码\n# v1.0.1\n\n# NPC 类\nclass NPC(object):\n \n # 初始化 NPC 的属性\n def __init__(self, name):\n self.name = name\n self.weapon = 'gun'\n self.blood = 1000\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n print('name:', self.name)\n print('weapon:', self.weapon)\n print('blood:', self.blood)\n \n # 定义方法 - 通用攻击\n def fight_common(self):\n print('Fight Common')\n \n# 战士 Soldier 类\nclass Soldier(NPC):\n \n # 建立 soldier 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.soldier_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Soldier, self).show_properties()\n print('soldier_level', self.soldier_level)\n\n# 创建两个 soldier\n\nn1 = Soldier('AA1')\nn1.show_properties()\nn1.fight_common() \n\nprint()\n\nn2 = Soldier('AA2')\nn2.show_properties()\nn2.fight_common() \n\n# 连续创建多个 soldier 的实例\n# 并存储在 list 中\n\ns = []\n\nfor i in range(3):\n n = Soldier('AA' + str(i))\n n.show_properties()\n n.fight_common() \n s.append(n)\n\n# 看一下存储了对象的列表\n\nprint(s)\n\n# 可以和一般访问列表一样访问列表中的对象\n\nfor i in s:\n print(i.name)\n \nprint(len(s))\n\n# 可以删除一个实例\ns.pop(1)\n\n# 显示列表中的实例\nfor i in s:\n print(i.name)\n \nprint(len(s))\n\n# 再增加一个巫师 Wizard 的类\n\n# 巫师 Wizard 类\nclass Wizard(NPC):\n \n # 建立 Wizard 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.wizard_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Wizard, self).show_properties()\n print('wizard_level', self.wizard_level)\n \n # 定义一个巫师专用的战斗方法\n def wizard_fight_magic(self):\n print('Wizard Magic!')\n \n# 创建一个 wizard\n\nc1 = Wizard('CC1')\nc1.show_properties()\nc1.fight_common() \nc1.wizard_fight_magic()\n\n# 创建复杂的 NPC,3个 wizards,3个 soldiers\n\n# 创建多个 soldier 的实例\ns = []\n\nfor i in range(3):\n n = Soldier('AA' + str(i))\n n.show_properties()\n s.append(n)\n \nfor i in range(3):\n n = Wizard('CC' + str(i))\n n.show_properties()\n s.append(n)\n\nfor i in s:\n print(i.name)\n print('--')\n\n# 显示类的方法\n\nprint(dir(Soldier))\n\n# 显示类的方法\n\nprint(dir(Wizard))\n\n# 当前版本的代码\n# v1.0.2\n\n# NPC 类\nclass NPC(object):\n \n # 初始化 NPC 的属性\n def __init__(self, name):\n self.name = name\n self.weapon = 'gun'\n self.blood = 1000\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n print('name:', self.name)\n print('weapon:', self.weapon)\n print('blood:', self.blood)\n \n # 定义方法 - 通用攻击\n def fight_common(self):\n print('Fight Common')\n \n# 战士 Soldier 类\nclass Soldier(NPC):\n \n # 建立 soldier 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n 
NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.soldier_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Soldier, self).show_properties()\n print('soldier_level', self.soldier_level)\n \n# 巫师 Wizard 类\nclass Wizard(NPC):\n \n # 建立 Wizard 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.wizard_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Wizard, self).show_properties()\n print('wizard_level', self.wizard_level)\n \n # 定义一个巫师专用的战斗方法\n def fight_magic(self):\n print('Wizard Magic!')\n\n# 在 NPC 的 __init__() 加入显示创建的是什么角色\n# v1.0.3\n\n# NPC 类\nclass NPC(object):\n \n # 初始化 NPC 的属性\n def __init__(self, name):\n self.name = name\n self.weapon = 'gun'\n self.blood = 1000\n # 先简单的显示\n print('')\n print('NPC created!')\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n print('name:', self.name)\n print('weapon:', self.weapon)\n print('blood:', self.blood)\n \n # 定义方法 - 通用攻击\n def fight_common(self):\n print('Fight Common')\n \n# 战士 Soldier 类\nclass Soldier(NPC):\n \n # 建立 soldier 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.soldier_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Soldier, self).show_properties()\n print('soldier_level', self.soldier_level)\n \n# 巫师 Wizard 类\nclass Wizard(NPC):\n \n # 建立 Wizard 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.wizard_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Wizard, self).show_properties()\n print('wizard_level', self.wizard_level)\n \n # 定义一个巫师专用的战斗方法\n def fight_magic(self):\n print('Wizard Magic!')\n\ns = []\n\nfor i in range(2):\n n = Soldier('AA' + str(i))\n n.show_properties()\n s.append(n)\n \nfor i in range(2):\n n = Wizard('CC' + str(i))\n n.show_properties()\n s.append(n)\n\n# 但是在 NPC 这个父类中没有显示出具体的子类名称\n# 所以我们用下面的方法来显示子类的名称\n# type(self).__name__ 来访问类的名称\n# v1.0.4\n\n# NPC 类\nclass NPC(object):\n \n # 初始化 NPC 的属性\n def __init__(self, name):\n self.name = name\n self.weapon = 'gun'\n self.blood = 1000\n print('')\n print(type(self).__name__, 'NPC created!')\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n print('name:', self.name)\n print('weapon:', self.weapon)\n print('blood:', self.blood)\n \n # 定义方法 - 通用攻击\n def fight_common(self):\n print('Fight Common')\n \n# 战士 Soldier 类\nclass Soldier(NPC):\n \n # 建立 soldier 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.soldier_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Soldier, self).show_properties()\n print('soldier_level', self.soldier_level)\n \n# 巫师 Wizard 类\nclass Wizard(NPC):\n \n # 建立 Wizard 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.wizard_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Wizard, self).show_properties()\n print('wizard_level', self.wizard_level)\n \n # 定义一个巫师专用的战斗方法\n def fight_magic(self):\n print('Wizard Magic!')\n\ns = []\n\nfor i in range(2):\n n = Soldier('AA' + str(i))\n n.show_properties()\n s.append(n)\n \nfor i in range(2):\n n = Wizard('CC' + str(i))\n n.show_properties()\n s.append(n)", "面向对象程序设计(英语:Object-oriented 
programming,缩写:OOP)是种具有对象概念的程序编程范型,同时也是一种程序开发的方法。它可能包含数据、属性、代码与方法。对象则指的是类的实例。它将对象作为程序的基本单元,将程序和数据封装其中,以提高软件的重用性、灵活性和扩展性,对象里的程序可以访问及经常修改对象相关连的数据。在面向对象程序编程里,计算机程序会被设计成彼此相关的对象。\n面向对象程序设计可以看作一种在程序中包含各种独立而又互相调用的对象的思想,这与传统的思想刚好相反:传统的程序设计主张将程序看作一系列函数的集合,或者直接就是一系列对电脑下达的指令。面向对象程序设计中的每一个对象都应该能够接受数据、处理数据并将数据传达给其它对象,因此它们都可以被看作一个小型的“机器”,即对象。目前已经被证实的是,面向对象程序设计推广了程序的灵活性和可维护性,并且在大型项目设计中广为应用。此外,支持者声称面向对象程序设计要比以往的做法更加便于学习,因为它能够让人们更简单地设计并维护程序,使得程序更加便于分析、设计、理解。反对者在某些领域对此予以否认。", "# 继续优化,根据 NPC 类型来设定 blood 和 weapon\n# 将代码尽量集中,是有好处的\n# v1.0.5\n\n# NPC 类\nclass NPC(object):\n \n # 初始化 NPC 的属性\n def __init__(self, name):\n self.name = name\n \n self.npc_type = type(self).__name__\n \n print('')\n print(self.npc_type, 'NPC created!')\n \n if self.npc_type == 'Soldier':\n self.weapon = 'sword'\n self.blood = 1000\n \n if self.npc_type == 'Wizard':\n self.weapon = 'staff'\n self.blood = 2000\n \n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n print('name:', self.name)\n print('weapon:', self.weapon)\n print('blood:', self.blood)\n \n # 定义方法 - 通用攻击\n def fight_common(self):\n print('Fight Common')\n \n# 战士 Soldier 类\nclass Soldier(NPC):\n \n # 建立 soldier 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.soldier_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Soldier, self).show_properties()\n print('soldier_level', self.soldier_level)\n \n# 巫师 Wizard 类\nclass Wizard(NPC):\n \n # 建立 Wizard 的初始化\n def __init__(self, name):\n # 调用 父类 NPC 的初始化方法\n NPC.__init__(self, name)\n \n # soldier 自己的初始化\n self.wizard_level = 1\n \n # 定义方法 - 显示 NPC 属性\n def show_properties(self):\n # 通过 super 来调用父类方法\n super(Wizard, self).show_properties()\n print('wizard_level', self.wizard_level)\n \n # 定义一个巫师专用的战斗方法\n def fight_magic(self):\n print('Wizard Magic!')\n\ns = []\n\nfor i in range(2):\n n = Soldier('AA' + str(i))\n n.show_properties()\n s.append(n)\n \nfor i in range(2):\n n = Wizard('CC' + str(i))\n n.show_properties()\n s.append(n)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
d00d/quantNotebooks
Notebooks/quantopian_research_public/notebooks/lectures/Regression_Model_Instability/notebook.ipynb
unlicense
[ "Regression Analysis\nBy Evgenia \"Jenny\" Nitishinskaya and Delaney Granizo-Mackenzie. Algorithms by David Edwards.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nRegression analysis allows us to estimate coefficients in a function which approximately relates multiple data sets. We hypothesize a specific form for this function and then find coefficients which fit the data well, working under the assumption that deviations from the model can be considered noise.\nWhen building such a model, we accept that it cannot perfectly predict the dependent variable. Here we would like to evaluate the accuracy of the model not by how well it explains the dependent variable, but by how <i>stable</i> it is (that is, how stable the regression coefficients are) with respect to our sample data. After all, if a model is truly a good fit, it should be similar, say, for two random halves of our data set that we model individually. Otherwise, we cannot assume that the model isn't simply an artifact of the particular sample of data we happened to choose, or that it will be predictive of new data points.\nWe'll be using linear regressions here for illustration purposes, but the same considerations apply for all regression models. Below we define a wrapper function for the linear regression from statsmodels so we can use it later.", "import numpy as np\nimport pandas as pd\nfrom statsmodels import regression, stats\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport scipy as sp\n\ndef linreg(X,Y):\n # Running the linear regression\n x = sm.add_constant(X) # Add a row of 1's so that our model has a constant term\n model = regression.linear_model.OLS(Y, x).fit()\n return model.params[0], model.params[1] # Return the coefficients of the linear model", "Biased noise\nThe particular sample we choose for the data affects the model generated, and unevenly distributed noise can lead to an inaccurate model. Below we're drawing from a normal distribution, but because we do not have very many data points, we get a significant downward bias. If we took more measurements, both of the regression coefficients would move toward zero.", "# Draw observations from normal distribution\nnp.random.seed(107) # Fix seed for random number generation\nrand = np.random.randn(20)\n\n# Conduct linear regression on the ordered list of observations\nxs = np.arange(20)\na, b = linreg(xs, rand)\nprint 'Slope:', b, 'Intercept:', a\n\n# Plot the raw data and the regression line\nplt.scatter(xs, rand, alpha=0.7)\nY_hat = xs * b + a\nplt.plot(xs, Y_hat, 'r', alpha=0.9);\n\nimport seaborn\n\nseaborn.regplot(xs, rand)\n\n# Draw more observations\nrand2 = np.random.randn(100)\n\n# Conduct linear regression on the ordered list of observations\nxs2 = np.arange(100)\na2, b2 = linreg(xs2, rand2)\nprint 'Slope:', b2, 'Intercept:', a2\n\n# Plot the raw data and the regression line\nplt.scatter(xs2, rand2, alpha=0.7)\nY_hat2 = xs2 * b2 + a2\nplt.plot(xs2, Y_hat2, 'r', alpha=0.9);", "Regression analysis is very sensitive to outliers. Sometimes these outliers contain information, in which case we want to take them into account; however, in cases like the above, they can simply be random noise. 
Although we often have many more data points than in the example above, we could have (for example) fluctuations on the order of weeks or months, which then significantly change the regression coefficients.\nRegime changes\nA regime change (or structural break) is when something changes in the process generating the data, causing future samples to follow a different distribution. Below, we can see that there is a regime change at the end of 2007, and splitting the data there results in a much better fit (in red) than a regression on the whole data set (yellow). In this case our regression model will not be predictive of future data points since the underlying system is no longer the same as in the sample. In fact, the regression analysis assumes that the errors are uncorrelated and have constant variance, which is often not the case if there is a regime change.", "start = '2003-01-01'\nend = '2009-02-01'\npricing = get_pricing('SPY', fields='price', start_date=start, end_date=end)\n\n# Manually set the point where we think a structural break occurs\nbreakpoint = 1200\nxs = np.arange(len(pricing))\nxs2 = np.arange(breakpoint)\nxs3 = np.arange(len(pricing) - breakpoint)\n\n# Perform linear regressions on the full data set, the data up to the breakpoint, and the data after\na, b = linreg(xs, pricing)\na2, b2 = linreg(xs2, pricing[:breakpoint])\na3, b3 = linreg(xs3, pricing[breakpoint:])\n\nY_hat = pd.Series(xs * b + a, index=pricing.index)\nY_hat2 = pd.Series(xs2 * b2 + a2, index=pricing.index[:breakpoint])\nY_hat3 = pd.Series(xs3 * b3 + a3, index=pricing.index[breakpoint:])\n\n# Plot the raw data\npricing.plot()\nY_hat.plot(color='y')\nY_hat2.plot(color='r')\nY_hat3.plot(color='r')\nplt.title('SPY Price')\nplt.ylabel('Price');", "Of course, the more pieces we break our data set into, the more precisely we can fit to it. It's important to avoid fitting to noise, which will always fluctuate and is not predictive. We can test for the existence of a structural break, either at a particular point we have identified or in general. Below we use a test from statsmodels which computes the probability of observing the data if there were no breakpoint.", "stats.diagnostic.breaks_cusumolsresid(\n regression.linear_model.OLS(pricing, sm.add_constant(xs)).fit().resid)[1]", "Multicollinearity\nAbove we were only considering regressions of one dependent variable against one independent one. However, we can also have multiple independent variables. This leads to instability if the independent variables are highly correlated.\nImagine we are using two independent variables, $X_1$ and $X_2$, which are very highly correlated. Then the coefficients may shift drastically if we add a new observation that is slightly better explained by one of the two than by the other. In the extreme case, if $X_1 = X_2$, then the choice of coefficients will depend on the particular linear regression algorithm.\nBelow, we run a multiple linear regression in which the independent variables are highly correlated. If we take our sample period to be 2013-01-01 to 2015-01-01, then the coefficients are approximately .25 and .1. 
But if we extend the period to 2015-06-01, the coefficients become approximately .18 and .20, respectively.", "# Get pricing data for two benchmarks (stock indices) and a stock\nstart = '2013-01-01'\nend = '2015-01-01'\nb1 = get_pricing('SPY', fields='price', start_date=start, end_date=end)\nb2 = get_pricing('MDY', fields='price', start_date=start, end_date=end)\nasset = get_pricing('V', fields='price', start_date=start, end_date=end)\n\nmlr = regression.linear_model.OLS(asset, sm.add_constant(np.column_stack((b1, b2)))).fit()\nprediction = mlr.params[0] + mlr.params[1]*b1 + mlr.params[2]*b2\nprint 'Constant:', mlr.params[0], 'MLR beta to S&P 500:', mlr.params[1], ' MLR beta to MDY', mlr.params[2]\n\n# Plot the asset pricing data and the regression model prediction, just for fun\nasset.plot()\nprediction.plot();\nplt.ylabel('Price')\nplt.legend(['Asset', 'Linear Regression Prediction']);\n\n# Get pricing data for two benchmarks (stock indices) and a stock\nstart = '2013-01-01'\nend = '2015-06-01'\nb1 = get_pricing('SPY', fields='price', start_date=start, end_date=end)\nb2 = get_pricing('MDY', fields='price', start_date=start, end_date=end)\nasset = get_pricing('V', fields='price', start_date=start, end_date=end)\n\nmlr = regression.linear_model.OLS(asset, sm.add_constant(np.column_stack((b1, b2)))).fit()\nprediction = mlr.params[0] + mlr.params[1]*b1 + mlr.params[2]*b2\nprint 'Constant:', mlr.params[0], 'MLR beta to S&P 500:', mlr.params[1], ' MLR beta to MDY', mlr.params[2]\n\n# Plot the asset pricing data and the regression model prediction, just for fun\nasset.plot()\nprediction.plot();\nplt.ylabel('Price')\nplt.legend(['Asset', 'Linear Regression Prediction']);", "We can check that our independent variables are correlated by computing their correlation coefficient. This number always lies between -1 and 1, and a value of 1 means that the two variables are perfectly correlated.", "# Compute Pearson correlation coefficient\nsp.stats.pearsonr(b1,b2)[0] # Second return value is p-value", "This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AtmaMani/pyChakras
udemy_ml_bootcamp/Python-for-Data-Analysis/Pandas/Pandas Exercises/Ecommerce Purchases Exercise .ipynb
mit
[ "<a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>\n\nEcommerce Purchases Exercise\nIn this Exercise you will be given some Fake Data about some purchases done through Amazon! Just go ahead and follow the directions and try your best to answer the questions and complete the tasks. Feel free to reference the solutions. Most of the tasks can be solved in different ways. For the most part, the questions get progressively harder.\nPlease excuse anything that doesn't make \"Real-World\" sense in the dataframe, all the data is fake and made-up.\nAlso note that all of these questions can be answered with one line of code.\n\n Import pandas and read in the Ecommerce Purchases csv file and set it to a DataFrame called ecom.", "import pandas as pd\necom = pd.read_csv('Ecommerce Purchases')", "Check the head of the DataFrame.", "ecom.head(5)", "How many rows and columns are there?", "ecom.shape\n\necom.describe()", "What is the average Purchase Price?", "ecom['Purchase Price'].mean()", "What were the highest and lowest purchase prices?", "ecom['Purchase Price'].max()\n\necom['Purchase Price'].min()", "How many people have English 'en' as their Language of choice on the website?", "len(ecom[ecom['Language']=='en'])", "How many people have the job title of \"Lawyer\" ?", "len(ecom[ecom['Job']=='Lawyer'])", "How many people made the purchase during the AM and how many people made the purchase during PM ? \n(Hint: Check out value_counts() )", "ecom['AM or PM'].value_counts()", "What are the 5 most common Job Titles?", "ecom['Job'].value_counts().head(5)", "Someone made a purchase that came from Lot: \"90 WT\" , what was the Purchase Price for this transaction?", "ecom[ecom['Lot']=='90 WT']['Purchase Price']", "What is the email of the person with the following Credit Card Number: 4926535242672853", "ecom[ecom['Credit Card']==4926535242672853]['Email']", "How many people have American Express as their Credit Card Provider and made a purchase above $95 ?", "len(ecom[(ecom['CC Provider']=='American Express')&(ecom['Purchase Price']>95)])", "Hard: How many people have a credit card that expires in 2025?", "def expires_in_2025(exp_date):\n if exp_date.split('/')[1]=='25':\n return True\n else:\n return False\n\nsum(ecom['CC Exp Date'].apply(expires_in_2025))", "Hard: What are the top 5 most popular email providers/hosts (e.g. gmail.com, yahoo.com, etc...)", "ecom['Email'].apply(lambda x : x.split('@')[-1]).value_counts().head(5)", "Great Job!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
evanmiltenburg/python-for-text-analysis
Chapters/Chapter 08 - Comparison of lists and sets.ipynb
apache-2.0
[ "Chapter 8 - Comparison of lists and sets\nYou've been introduced to two containers in this topic: lists and sets. However, a question we often get is when to use a list and when a set. The goal of this chapter is to help you answer that question.\nAt the end of this chapter, you will be able to:\n* decide when to use a list and when to use a set\nIf you have questions about this chapter, please contact us (cltl.python.course@gmail.com).\n1. Properties of sets and lists\nSets: unordered collection of unique elements\nLists: ordered collection of elements\nComparison lists vs sets\n| property | set | list | \n|--------- |---------|---|\n| can contain duplicates | no | yes | \n| ordered | no | yes |\n| finding element(s) | relatively quick | relatively slow | \n| can contain | immutable objects | all objects |\n1.1 Duplication of elements\n\nlist: yes\nset: no\n\nAs shown below, lists allow duplicates (e.g. the integer 1 in the example below), sets do not.", "list1 = [1, 2, 1, 3, 4, 1]\nset1 = {1, 2, 3, 4}\nset2 = {1, 2, 1, 3, 4, 1}\n\nprint('list1', list1)\nprint('set1', set1)\nprint('set2', set2)\nprint('set1 is the same as set2:', set1 == set2)", "Tip\nYou can create a set from a list. Attention: duplicates will be removed.", "a_list = [1,2,3,4, 4]\n\na_set = set(a_list)\n\nprint(a_list)\nprint(a_set)", "1.2 Order (with respect to how elements are added to it)\n\nlist: yes\nset: no\n\nThe order in which you add elements to a list matters. Please look at the following example:", "a_list = []\na_list.append(2)\na_list.append(1)\nprint(a_list)", "However, this information is not kept in sets:", "a_set = set()\na_set.add(2)\na_set.add(1)\nprint(a_set)", "Is it possible to understand the order of items in a set? Yes, but we will not cover it here since it is not important for the tasks we cover.\nWhat is then the take home message about order? The answer is: you have it for lists, but not for sets.\nIf you want to learn more about this, look up the data structure called hash table (https://en.wikipedia.org/wiki/Hash_table) \n1.3 Finding element(s)\nIt's usually quicker to check if an element is in a set than to check if it is in a list.\nHence, this will be usally relatively slow:", "list1 = [1,2,3,4]\nprint(1 in list1)", "And this will usually be relatively quick:", "set1 = {1,2,3,4}\nprint(1 in set1)", "Is it possible to understand the speed of finding elements of items in sets and lists? Yes, but we will not cover it here since it is not important for the tasks we cover.\nWhat is then the take home message about speed? The answer is: it's probably quicker to use sets.\n1.4 Mutability of elements in can contain\nsets can only contain immutable objects.\nThis works:", "a_set = set()\na_set.add(1)\nprint(a_set)", "This does not", "a_set.add([1])", "lists can contain any Python object.\nThis works:", "a_list = []\na_list.append(1)\nprint(a_list)", "This as well", "a_list = []\na_list.append([1])\nprint(a_list)", "2. When to choose what?\nLists if you need:\n1. duplicates\n2. the order in which items are added\n3. mutable objects\nAll other scenarios -> sets\nExercises\nExercise 1:\nWhich container can contain duplicates?\nExercise 2:\nWhich container is the faster choice when checking whether it contains an element? \nExercise 3:\nYou want to collect and count all the people taking this class. You can only use their first names. Do you chose a list or a set?\nExercise 4:\nCan you think of a use case for a set and a list (perhaps you think of text analysis)?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
matmodlab/matmodlab2
notebooks/Hyperfit.ipynb
bsd-3-clause
[ "Hyperelastic Model Fitting", "%load_ext autoreload\n%autoreload 2\nfrom numpy import *\nimport numpy as np\nfrom bokeh.plotting import *\nfrom pandas import read_excel\nfrom matmodlab2.fitting.hyperopt import *\noutput_notebook()", "Experimental Data", "# uniaxial data\nudf = read_excel('Treloar_hyperelastic_data.xlsx', sheetname='Uniaxial')\nud = udf.as_matrix(columns=('Engineering Strain', 'Engineering Stress (MPa)'))\n\n# Biaxial data\nbdf = read_excel('Treloar_hyperelastic_data.xlsx', sheetname='Biaxial')\nbd = bdf.as_matrix(columns=('Engineering Strain', 'Engineering Stress (MPa)'))\n\n# Pure shear data\nsdf = read_excel('Treloar_hyperelastic_data.xlsx', sheetname='Pure Shear')\nsd = sdf.as_matrix(columns=('Engineering Strain', 'Engineering Stress (MPa)'))", "Uniaxial Data\nFind the optimal fit to the uniaxial stress data with a hyperelastic polynomial model. The symbolic constant UNIAXIAL_DATA instructs hyperopt to interpret the input data as coming from a uniaxial stress experiment.", "uf = hyperopt(UNIAXIAL_DATA, ud[:,0], ud[:,1])\nprint(uf.summary())", "At this point, the optimal parameters have been determined and are accessible with the popt attribute:", "uf.popt", "The optimal parameters are also available as a dictionary via the todict method:", "uf.todict()", "The error in the fit:", "uf.error", "Plots are generated with the bp_plot method", "show(uf.bp_plot())", "Biaxial Data\nBiaxial data is fit in a similar manner:", "bf = hyperopt(BIAXIAL_DATA, bd[:,0], bd[:,1])\nprint(bf.summary())\nshow(bf.bp_plot())", "Shear Data\nLastly, the shear data is fit", "sf = hyperopt(SHEAR_DATA, sd[:,0], sd[:,1])\nprint(sf.summary())\nshow(sf.bp_plot())", "Comparison of Fits\nExamine the error in the shear fit using parameters from the uniaxial fit", "y1 = sf.eval(overlay=uf)\ny2 = sf.eval()\nerr = sqrt(mean((y1-y2)**2)) / average(abs(y2))\nprint(err)\nshow(sf.bp_plot(overlay=[bf, uf]))\nshow(uf.bp_plot(overlay=[sf]))\nshow(bf.bp_plot(overlay=[uf, sf]))", "hyperopt2\nhyperopt2 attempts to find the model that fits all given data the best.", "f = hyperopt2(SHEAR_DATA, sd[:,0], sd[:,1], \n UNIAXIAL_DATA, ud[:,0], ud[:,1],\n BIAXIAL_DATA, bd[:,0], bd[:,1])\n\nprint(f.summary())\n\nf.error2\n\np = f.bp_plot(strain=linspace(0,6.5), points=False)\np.circle(sd[:,0], sd[:,1], color='black', legend='Shear data')\np.circle(bd[:,0], bd[:,1], color='red', legend='Biaxial data')\np.circle(ud[:,0], ud[:,1], color='green', legend='Uniaxial data')\nshow(p)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
phoebe-project/phoebe2-docs
2.1/tutorials/detach.ipynb
gpl-3.0
[ "Advanced: Detaching from Run Compute\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).", "!pip install -I \"phoebe>=2.1,<2.2\"", "As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.", "%matplotlib inline\n\nimport phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()", "Now we'll add datasets", "b.add_dataset('lc', times=np.linspace(0,20,501))", "Run Compute\nHere we just pass detach=True to any run_compute call. We'll immediately be returned to the prompt instead of waiting for the results to complete.", "b.run_compute(detach=True, model='mymodel')", "If we then try to access the model, we see that there is instead a single parameter that is a placeholder - this parameter stores information on how to check the progress of the run_compute job and how to load the resulting model once it's complete", "b['mymodel']", "Re-attaching to a Job\nWe can check on the job's status", "b['mymodel'].status", "If we want, we can even save the Bundle and load it later to retrieve the results. In this case where the job is being run in a different Python thread but on the same machine, you cannot, however, exit Python or restart your machine. \nWhen detaching and running on a server (coming soon), you will then be able to exit your Python session or even open the Bundle on a different machine.", "b.save('test_detach.bundle')\n\nb = phoebe.Bundle.open('test_detach.bundle')\n\nb['mymodel'].status", "And at any point we can choose to \"re-attach\". If the job isn't yet complete, we'll be in a wait loop until it is. Once the job is complete, the new model will be loaded and accessible.", "b['mymodel'].attach()\n\nb['mymodel']\n\naxs, artists = b['mymodel'].plot()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/pcmdi/cmip6/models/sandbox-1/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: PCMDI\nSource ID: SANDBOX-1\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:36\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-1', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. 
Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. 
Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. 
Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. 
Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. 
Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. 
Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kgrodzicki/machine-learning-specialization
course-1-machine-learning-foundations/notebooks/week2/Predicting house prices.ipynb
mit
[ "Fire up graphlab create", "import graphlab", "Load some house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.", "sales = graphlab.SFrame('home_data.gl/')\n\nsales", "Exploring the data for housing sales\nThe house price is correlated with the number of square feet of living space.", "graphlab.canvas.set_target('ipynb')\nsales.show(view=\"Scatter Plot\", x=\"sqft_living\", y=\"price\")", "Create a simple regression model of sqft_living to price\nSplit data into training and testing.\nWe use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).", "train_data,test_data = sales.random_split(.8,seed=0)", "Build the regression model using only sqft_living as a feature", "sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'],validation_set=None)", "Evaluate the simple model", "print test_data['price'].mean()\n\nprint sqft_model.evaluate(test_data)", "RMSE of about \\$255,170!\nLet's show what our predictions look like\nMatplotlib is a Python plotting library that is also useful for plotting. You can install it with:\n'pip install matplotlib'", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(test_data['sqft_living'],test_data['price'],'.',\n test_data['sqft_living'],sqft_model.predict(test_data),'-')", "Above: blue dots are original data, green line is the prediction from the simple regression.\nBelow: we can view the learned regression coefficients.", "sqft_model.get('coefficients')", "Explore other features in the data\nTo build a more elaborate model, we will explore using more features.", "my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']\n\nsales[my_features].show()\n\nsales.show(view='BoxWhisker Plot', x='zipcode', y='price')", "Pull the bar at the bottom to view more of the data. \n98039 is the most expensive zip code.\nBuild a regression model with more features", "my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features,validation_set=None)\n\nprint my_features", "Comparing the results of the simple model with adding more features", "print sqft_model.evaluate(test_data)\nmy_features_evaluated = my_features_model.evaluate(test_data)\nprint my_features_evaluated", "The RMSE goes down from \\$255,170 to \\$179,508 with more features.\nApply learned models to predict prices of 3 houses\nThe first house we will use is considered an \"average\" house in Seattle.", "house1 = sales[sales['id']=='5309101200']\n\nhouse1", "<img src=\"http://info.kingcounty.gov/Assessor/eRealProperty/MediaHandler.aspx?Media=2916871\">", "print house1['price']\n\nprint sqft_model.predict(house1)\n\nprint my_features_model.predict(house1)", "In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.\nPrediction for a second, fancier house\nWe will now examine the predictions for a fancier house.", "house2 = sales[sales['id']=='1925069082']\n\nhouse2", "<img src=\"https://ssl.cdn-redfin.com/photo/1/bigphoto/302/734302_0.jpg\">", "print sqft_model.predict(house2)\n\nprint my_features_model.predict(house2)", "In this case, the model with more features provides a better prediction. 
This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house. \nLast house, super fancy\nOur last house is a very large one owned by a famous Seattleite.", "bill_gates = {'bedrooms':[8], \n 'bathrooms':[25], \n 'sqft_living':[50000], \n 'sqft_lot':[225000],\n 'floors':[4], \n 'zipcode':['98039'], \n 'condition':[10], \n 'grade':[10],\n 'waterfront':[1],\n 'view':[4],\n 'sqft_above':[37500],\n 'sqft_basement':[12500],\n 'yr_built':[1994],\n 'yr_renovated':[2010],\n 'lat':[47.627606],\n 'long':[-122.242054],\n 'sqft_living15':[5000],\n 'sqft_lot15':[40000]}", "<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Bill_gates%27_house.jpg/2560px-Bill_gates%27_house.jpg\">", "print my_features_model.predict(graphlab.SFrame(bill_gates))", "The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)", "# Mean price for zipcode 98039\n\nprint sales[sales['zipcode']=='98039']['price'].mean()\n\nin_range = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] < 4000)]\n\nprint len(sales)\nprint len(in_range)\n\nadvanced_features = [\n'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',\n'condition', # condition of house\n'grade', # measure of quality of construction\n'waterfront', # waterfront property\n'view', # type of view\n'sqft_above', # square feet above ground\n'sqft_basement', # square feet in basement\n'yr_built', # the year built\n'yr_renovated', # the year renovated\n'lat', 'long', # the lat-long of the parcel\n'sqft_living15', # average sq.ft. of 15 nearest neighbors\n'sqft_lot15', # average lot size of 15 nearest neighbors \n]\n\nadvanced_features_model = graphlab.linear_regression.create(train_data,target='price',features=advanced_features,validation_set=None)\n\nadvanced_features_evaluated = advanced_features_model.evaluate(test_data)\n\nmy_features_rmse = my_features_evaluated['rmse']\nadvanced_features_rmse = advanced_features_evaluated['rmse']\nprint (my_features_rmse - advanced_features_rmse)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/workshops
extras/archive/05_custom_estimators.ipynb
apache-2.0
[ "Custom Estimators\nIn this notebook we'll write an Custom Estimator (using a model function we specifiy). On the way, we'll use tf.layers to write our model. In the next notebook, we'll use tf.layers to write a Custom Estimator for a Convolutional Neural Network.", "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport math\nimport numpy as np\n\nimport tensorflow as tf", "Import the dataset. Here, we'll need to convert the labels to a one-hot encoding, and we'll reshape the MNIST images to (784,).", "# We'll use Keras (included with TensorFlow) to import the data\n# I figured I'd do all the preprocessing and reshaping here, \n# rather than in the model.\n(x_train, y_train), (x_test, y_test) = tf.contrib.keras.datasets.mnist.load_data()\n\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\n\ny_train = y_train.astype('int32')\ny_test = y_test.astype('int32')\n\n# Normalize the color values to 0-1\n# (as imported, they're 0-255)\nx_train /= 255\nx_test /= 255\n\n# Flatten 28x28 images to (784,)\nx_train = x_train.reshape(x_train.shape[0], 784)\nx_test = x_test.reshape(x_test.shape[0], 784)\n\n# Convert to one-hot.\ny_train = tf.contrib.keras.utils.to_categorical(y_train, num_classes=10)\ny_test = tf.contrib.keras.utils.to_categorical(y_test, num_classes=10)\n\nprint(x_train.shape[0], 'train samples')\nprint(x_test.shape[0], 'test samples')", "When using Estimators, we do not manage the TensorFlow session directly. Instead, we skip straight to defining our hyperparameters.", "# Number of neurons in each hidden layer\nHIDDEN1_SIZE = 500\nHIDDEN2_SIZE = 250", "To write a Custom Estimator we'll specify our own model function. Here, we'll use tf.layers to replicate the model from the third notebook.", "def model_fn(features, labels, mode):\n \n # First we'll create 2 fully-connected layers, with ReLU activations.\n # Notice we're retrieving the 'x' feature (we'll provide this in the input function\n # in a moment).\n fc1 = tf.layers.dense(features['x'], HIDDEN1_SIZE, activation=tf.nn.relu, name=\"fc1\")\n fc2 = tf.layers.dense(fc1, HIDDEN2_SIZE, activation=tf.nn.relu, name=\"fc2\")\n \n # Add dropout operation; 0.9 probability that a neuron will be kept\n dropout = tf.layers.dropout(\n inputs=fc2, rate=0.1, training = mode == tf.estimator.ModeKeys.TRAIN, name=\"dropout\")\n\n # Finally, we'll calculate logits. This will be\n # the input to our Softmax function. 
Notice we \n # don't apply an activation at this layer.\n # If you've commented out the dropout layer,\n # switch the input here to 'fc2'.\n logits = tf.layers.dense(dropout, units=10, name=\"logits\")\n \n # Generate Predictions\n classes = tf.argmax(logits, axis=1)\n predictions = {\n 'classes': classes,\n 'probabilities': tf.nn.softmax(logits, name='softmax_tensor')\n }\n \n if mode == tf.estimator.ModeKeys.PREDICT:\n # Return an EstimatorSpec for prediction\n return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)\n \n # Compute the loss, per usual.\n loss = tf.losses.softmax_cross_entropy(\n onehot_labels=labels, logits=logits)\n \n if mode == tf.estimator.ModeKeys.TRAIN:\n \n # Configure the Training Op\n train_op = tf.contrib.layers.optimize_loss(\n loss=loss,\n global_step=tf.train.get_global_step(),\n learning_rate=1e-3,\n optimizer='Adam')\n\n # Return an EstimatorSpec for training\n return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,\n loss=loss, train_op=train_op) \n\n assert mode == tf.estimator.ModeKeys.EVAL\n \n # Configure the accuracy metric for evaluation\n metrics = {'accuracy': tf.metrics.accuracy(classes, tf.argmax(labels, axis=1))}\n \n return tf.estimator.EstimatorSpec(mode=mode, \n predictions=predictions, \n loss=loss,\n eval_metric_ops=metrics)", "Input functions, as before.", "train_input = tf.estimator.inputs.numpy_input_fn(\n {'x': x_train},\n y_train, \n num_epochs=None, # repeat forever\n shuffle=True # \n)\n\ntest_input = tf.estimator.inputs.numpy_input_fn(\n {'x': x_test},\n y_test,\n num_epochs=1, # loop through the dataset once\n shuffle=False # don't shuffle the test data\n)\n\n# At this point, our Estimator will work just like a canned one.\nestimator = tf.estimator.Estimator(model_fn=model_fn)\n\n# Train the estimator using our input function.\nestimator.train(input_fn=train_input, steps=2000)\n\n# Evaluate the estimator using our input function.\n# We should see our accuracy metric below\nevaluation = estimator.evaluate(input_fn=test_input)\nprint(evaluation)\n\nMAX_TO_PRINT = 5\n\n# This returns a generator object\npredictions = estimator.predict(input_fn=test_input)\ni = 0\nfor p in predictions:\n true_label = np.argmax(y_test[i])\n predicted_label = p['classes']\n print(\"Example %d. True: %d, Predicted: %s\" % (i, true_label, predicted_label))\n i += 1\n if i == MAX_TO_PRINT: break" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mimoralea/applied-reinforcement-learning
notebooks/04-q-learning.ipynb
mit
[ "Model-Free Reinforcement Learning\nRemember how in last Notebook we felt like cheating by using directions calculated from the map of the environment?? Well, model-free reinforcement learning deals with that. Model-free refers to the fact that algorithms unders this category do not need a model of the environment, also known as MDP, to calculate optimal policies.\nIn this notebook, we will look at what is perhaps the most popular model-free reinforcement learning algorithm, q-learning. Q-learning run without needing a map of the environment, it works by balancing the need for exploration with the need for exploiting previously explored knowledge. Let's take a look.", "import matplotlib.pyplot as plt\n\nimport numpy as np\nimport pandas as pd\nimport tempfile\nimport pprint\nimport math\nimport json\nimport sys\nimport gym\n\nfrom gym import wrappers\nfrom subprocess import check_output\nfrom IPython.display import HTML", "Q-Learning\nThe function below, action_selection is an important aspect of reinforcement learning algorithms. The fact is, when you have possibly conflicting needs, explore vs exploit, you enter into a difficult situation, dilemma. The Exploration vs Exploitation Dilemma is at the core of reinforcement learning and it is good for you to think about it for a little while. How much do you need to explore an environment before you exploit it?\nIn the function below we use one of the many alternatives which is we explore a lot at the begining and decay the amount of exploration as we increase the number of episodes. Let's take a look at what the function looks like:", "def action_selection(state, Q, episode, n_episodes):\n epsilon = max(0, episode/n_episodes*2)\n if np.random.random() < epsilon:\n action = np.random.randint(len(Q[0]))\n else:\n action = np.argmax(Q[state])\n return action, epsilon\n\nQ = [[0]]\nn_episodes = 10000\nepsilons = []\nfor episode in range(n_episodes//2, -n_episodes//2, -1):\n _, epsilon = action_selection(0, Q, episode, n_episodes)\n epsilons.append(epsilon)\nplt.plot(np.arange(len(epsilons)), epsilons, '.')\nplt.ylabel('Probability')\nplt.xlabel('Episode')", "See that? So, at episode 0 we have 100% change of acting randomly, all the way down to 0 when we stop exploring and instead always select the action that we think would maximizing the discounted future rewards. \nAgain, this is a way of doing this, there are many and you surely should be thinking about better ways of doing so.\nNext, let me show you what Q-Learning looks like:", "def q_learning(env, alpha = 0.9, gamma = 0.9):\n nS = env.env.observation_space.n\n nA = env.env.action_space.n\n \n Q = np.random.random((nS, nA)) * 2.0\n n_episodes = 10000\n \n for episode in range(n_episodes//2, -n_episodes//2, -1):\n state = env.reset()\n done = False\n while not done:\n action, _ = action_selection(state, Q, episode, n_episodes)\n nstate, reward, done, info = env.step(action)\n Q[state][action] += alpha * (reward + gamma * Q[nstate].max() * (not done) - Q[state][action])\n state = nstate\n return Q", "Nice, right? You just pass it an environment, nS and nA are the number of states and actions respectively. \nQ is a table of states as rows and actions as columns that will hold the expected reward the agent expects to get for taking action 'a' on state 's'. You can see how we initialize Q(s,a)'s to a random value, but also we multiply that by 2. You may ask, why is this? 
This is called \"Optimism in the face of uncertainty\" and it is a common reinforcement learning technique for encouraging agents to explore. Think about it on an intuitive level. If you think positively most of the time, if you receive a low balling job offer, you are going to pass on it and potentially get a better offer later. Worst case, you don't find any better offer and after 'adjusting' your estimates you will think an offer like the \"low balling\" one you got wasn't that bad after all. The same applies to reinforcement learning agent, cool right?\nThen, I go on a loop for n_episodes using the action_selection function as described above. Don't pay too much attention to the range start and end, that is just the way I get the exploration strategy the way I showed. You should not like it, I don't like it. You will have a chance to make it better.\nFor now, let's unleash this agent and see how it does!!!", "mdir = tempfile.mkdtemp()\nenv = gym.make('FrozenLake-v0')\nenv = wrappers.Monitor(env, mdir, force=True)\n\nQ = q_learning(env)", "Let's look at a couple of the episodes in more detail.", "videos = np.array(env.videos)\nn_videos = 5\n\nidxs = np.linspace(0, len(videos) - 1, n_videos).astype(int)\nvideos = videos[idxs,:]\n\nurls = []\nfor i in range(n_videos):\n out = check_output([\"asciinema\", \"upload\", videos[i][0]])\n out = out.decode(\"utf-8\").replace('\\n', '').replace('\\r', '')\n urls.append([out])\nvideos = np.concatenate((videos, urls), axis=1)\n\nstrm = ''\nfor video_path, meta_path, url in videos:\n\n with open(meta_path) as data_file: \n meta = json.load(data_file)\n castid = url.split('/')[-1]\n html_tag = \"\"\"\n <h2>{0}\n <script type=\"text/javascript\" \n src=\"https://asciinema.org/a/{1}.js\" \n id=\"asciicast-{1}\" \n async data-autoplay=\"true\" data-size=\"big\">\n </script>\n \"\"\"\n strm += html_tag.format('Episode ' + str(meta['episode_id']),\n castid)\nHTML(data=strm)", "Nice!!!\nYou can see the progress of this agent. From total caos completely sinking into holes, to sliding into the goal fairly consistently.\nLet's inspect the Values and Policies.", "V = np.max(Q, axis=1)\nV\n\npi = np.argmax(Q, axis=1)\npi", "Fair enough, let's close this environment and you will have a chance to submit to your OpenAI account. After that, you will have a chance to modify the action_selection to try something different.", "env.close()\n\ngym.upload(mdir, api_key='<YOUR OPENAI API KEY>')", "Your turn\nMaybe you want to try an exponential decay?? 
(http://www.miniwebtool.com/exponential-decay-calculator/)\nP(t) = P0e-rt\nwhere: \n* P(t) = the amount of some quantity at time t \n* P0 = initial amount at time t = 0 \n* r = the decay rate \n* t = time (number of periods)", "def action_selection(state, Q, episode, n_episodes, decay=0.0006, initial=1.00):\n \"\"\" YOU WRITE THIS METHOD \"\"\"\n return action, epsilon", "Use the following code to test your new exploration strategy:", "Q = [[0]]\nn_episodes = 10000\nepsilons = []\nfor episode in range(n_episodes):\n _, epsilon = action_selection(0, Q, episode, n_episodes)\n epsilons.append(epsilon)\nplt.plot(np.arange(len(epsilons)), epsilons, '.')\nplt.ylabel('Probability')\nplt.xlabel('Episode')", "Let's redefine the q_learning function we had above and run it against the environment again.", "def q_learning(env, alpha = 0.9, gamma = 0.9):\n nS = env.env.observation_space.n\n nA = env.env.action_space.n\n \n Q = np.random.random((nS, nA)) * 2.0\n n_episodes = 10000\n \n for episode in range(n_episodes):\n state = env.reset()\n done = False\n while not done:\n action, _ = action_selection(state, Q, episode, n_episodes)\n nstate, reward, done, info = env.step(action)\n Q[state][action] += alpha * (reward + gamma * Q[nstate].max() * (not done) - Q[state][action])\n state = nstate\n return Q\n\nmdir = tempfile.mkdtemp()\nenv = gym.make('FrozenLake-v0')\nenv = wrappers.Monitor(env, mdir, force=True)\n\nQ = q_learning(env)", "Curious to see how the new agent did?? Let's check it out!", "videos = np.array(env.videos)\nn_videos = 5\n\nidxs = np.linspace(0, len(videos) - 1, n_videos).astype(int)\nvideos = videos[idxs,:]\n\nurls = []\nfor i in range(n_videos):\n out = check_output([\"asciinema\", \"upload\", videos[i][0]])\n out = out.decode(\"utf-8\").replace('\\n', '').replace('\\r', '')\n urls.append([out])\nvideos = np.concatenate((videos, urls), axis=1)\n\nstrm = ''\nfor video_path, meta_path, url in videos:\n\n with open(meta_path) as data_file: \n meta = json.load(data_file)\n castid = url.split('/')[-1]\n html_tag = \"\"\"\n <h2>{0}\n <script type=\"text/javascript\" \n src=\"https://asciinema.org/a/{1}.js\" \n id=\"asciicast-{1}\" \n async data-autoplay=\"true\" data-size=\"big\">\n </script>\n \"\"\"\n strm += html_tag.format('Episode ' + str(meta['episode_id']),\n castid)\nHTML(data=strm)", "Did it do good??? This isn't an easy thing, take your time. Be sure to look into the Notebook solution if you want an idea.\nFor now, let's take a look at the value function and policy the agent came up with.", "V = np.max(Q, axis=1)\nV\n\npi = np.argmax(Q, axis=1)\npi", "Good??? Nice!\nLet's wrap-up!", "env.close()\n\ngym.upload(mdir, api_key='<YOUR OPENAI API KEY>')", "So, this notebook shows you how agents do when they don't have a definition of the environment. They will be interacting with it, just like you and I would. \nNow, we are one step closer, but you probably are wondering, if this is 'model-free' reinforcement learning, is 'model-based' reinforcement learning the algorithms we learned before? Well, not really. Model-based reinforcement learning algorithms use of the experience, perhaps in addition to what model-free algorithms do, to come up models of the environment. This helps for many things, the one worth highlighting are, algorithms can require less computation, and more importantly less exploration. This is vital when experience is expensive to collect. Think a robot learning to walk. 
What's the price of a robot collapsing onto the floor?\nAdditionally, you should have a little thing bothering you. Isn't it disappointing to be dealing with discrete states and actions?? Who are we kidding? A robot doesn't know to go to state 2?!!? \nSo, yeah, we have been working with discrete states and actions. That's just not the way the world works. Let's step it up a bit. In the following lessons we'll discuss what to do when states, and later actions, are continuous and perhaps too large to even store in a table the way Q does in q-learning. You ready? Let's go." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jmunar/pymc3-kalman
notebooks/02_Dimensionality.ipynb
apache-2.0
[ "Dimensionality of the inputs to the filter\nOne of the main strengths of PyMC3 is its dependence on Theano. Theano allows to compute arithmetic operations on arbitrary tensors. This might not sound very impressive, but in the process:\n\nIt can apply the chain rule to calculate the gradient of a scalar function on the unknown parameters\nElementwise operations on tensors can be extended to any number of dimensions\nSmart optimizations on expressions are applied before compiling, reducing the computing time\n\nHere, we will apply the Kalman filter to scalar observations and/or scalar state spaces. This will result in a noticeable speed improvement with respect to the general vector-vector case.\nWe will use the same example as in the previous notebook:", "import numpy as np\nimport theano\nimport theano.tensor as tt\nimport kalman\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\n\n%matplotlib inline\n\n# True values\nT = 500 # Time steps\nsigma2_eps0 = 3 # Variance of the observation noise\nsigma2_eta0 = 10 # Variance in the update of the mean\n\n# Simulate data\nnp.random.seed(12345)\neps = np.random.normal(scale=sigma2_eps0**0.5, size=T)\neta = np.random.normal(scale=sigma2_eta0**0.5, size=T)\nmu = np.cumsum(eta)\ny = mu + eps\n\n# Plot the time series\nfig, ax = plt.subplots(figsize=(13,2))\nax.fill_between(np.arange(T), 0, y, facecolor=(0.7,0.7,1), edgecolor=(0,0,1))\nax.set(xlabel='$T$', title='Simulated series');", "Vectorial observation + vectorial state", "# Measurement equation\nZ, d, H = tt.dmatrix(name='Z'), tt.dvector(name='d'), tt.dmatrix(name='H')\n# Transition equation\nT, c, R, Q = tt.dmatrix(name='T'), tt.dvector(name='c'), \\\n tt.dmatrix(name='R'), tt.dmatrix(name='Q')\n# Tensors for the initial state mean and uncertainty\na0, P0 = tt.dvector(name='a0'), tt.dmatrix(name='P0')\n\n# Values for the actual calculation\nargs = dict(Z = np.array([[1.]]), d = np.array([0.]), H = np.array([[3.]]),\n T = np.array([[1.]]), c = np.array([0.]), R = np.array([[1.]]),\n Q = np.array([[10.]]),\n a0 = np.array([0.]), P0 = np.array([[1e6]]))\n\n# Create function to calculate log-likelihood\nkalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)\n(_,_,lliks),_ = kalmanTheano.filter(y[:,None])\nf = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks[1:].sum())\n\n# Evaluate\n%timeit f(**args)\n\nprint('Log-likelihood:', f(**args))", "Scalar observation + vectorial state", "# Measurement equation\nZ, d, H = tt.dvector(name='Z'), tt.dscalar(name='d'), tt.dscalar(name='H')\n# Transition equation\nT, c, R, Q = tt.dmatrix(name='T'), tt.dvector(name='c'), \\\n tt.dmatrix(name='R'), tt.dmatrix(name='Q')\n# Tensors for the initial state mean and uncertainty\na0, P0 = tt.dvector(name='a0'), tt.dmatrix(name='P0')\n\n# Values for the actual calculation\nargs = dict(Z = np.array([1.]), d = np.array(0.), H = np.array(3.),\n T = np.array([[1.]]), c = np.array([0.]), R = np.array([[1.]]),\n Q = np.array([[10.]]),\n a0 = np.array([0.]), P0 = np.array([[1e6]]))\n\n# Create function to calculate log-likelihood\nkalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)\n(_,_,lliks),_ = kalmanTheano.filter(y)\nf = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks[1:].sum())\n\n# Evaluate\n%timeit f(**args)\n\nprint('Log-likelihood:', f(**args))", "Scalar observation + scalar state", "# Measurement equation\nZ, d, H = tt.dscalar(name='Z'), tt.dscalar(name='d'), tt.dscalar(name='H')\n# Transition equation\nT, c, R, Q = tt.dscalar(name='T'), 
tt.dscalar(name='c'), \\\n tt.dscalar(name='R'), tt.dscalar(name='Q')\n# Tensors for the initial state mean and uncertainty\na0, P0 = tt.dscalar(name='a0'), tt.dscalar(name='P0')\n\n# Values for the actual calculation\nargs = dict(Z = np.array(1.), d = np.array(0.), H = np.array(3.),\n T = np.array(1.), c = np.array(0.), R = np.array(1.),\n Q = np.array(10.),\n a0 = np.array(0.), P0 = np.array(1e6))\n\n# Create function to calculate log-likelihood\nkalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)\n(_,_,lliks),_ = kalmanTheano.filter(y)\nf = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks[1:].sum())\n\n# Evaluate\n%timeit f(**args)\n\nprint('Log-likelihood:', f(**args))", "The improvement in this case is clear. By profiling the operation, it becomes apparent that, for scalar inputs, the algebraic operations do not use BLAS routines, but just normal products." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mathnathan/notebooks
dissertation/.ipynb_checkpoints/GNN - 1D GMM Example Convergence-checkpoint.ipynb
mit
[ "import matplotlib.pyplot as plt\n%matplotlib inline\nfrom ipywidgets import interactive\nfrom scipy.stats import norm\nimport numpy as np\nimport matplotlib.patches as patches\nfrom OldBrain import Neuron, Net, GMM\nimport matplotlib\nmatplotlib.rcParams.update({'font.size': 22})", "This entire theory is built on the idea that everything is normalized as input into the brain. i.e. all values are between 0 and 1. This is necessary because the learning rule has an adaptive learning rate that is $\\sigma^4$. If everything is normalized, the probability of $\\sigma^2$ being greater than 1 is very low", "#p = GMM([0.1,0.3,0.6], np.array([[0.2,.01],[0.5,0.01],[0.8,0.01]]))\np = GMM([0.4,0.6], np.array([[0.2,0.05],[0.65,.015]]))\n\nnum_samples = 1000\nbeg = 0.0\nend = 1.0\nt = np.linspace(beg,end,num_samples)\nnum_neurons = len(p.pis)\ncolors = [np.random.rand(num_neurons,) for i in range(num_neurons)]\np_y = p(t)\np_max = p_y.max()\n\nnp.random.seed(12)\n\nnum_neurons = 3\nnetwork = Net(1,1,num_neurons, bias=0.0002, decay=[0.03,0.05,0.03], kernels=[[1,1]], locs=[[0,0]], sleep_cycle=2000)\n#print('nework.sleep_cycle = ', network.sleep_cycle)\n\nsamples, labels = p.sample(10000)\n#samples = (samples-samples.min())/samples.max()\nys = []\nlbls = []\ncolors = [np.random.rand(3,) for i in range(num_neurons)]\ndef f(i=0):\n #print('network.num_calls = ', network.num_calls)\n x = np.array(samples[i])\n l = labels[i]\n y = network(x.reshape(1,1,1))\n #y is np.array([q1(x), q2(x), ...])\n\n ys.append(y)\n c = 'b' if l else 'g'\n lbls.append(c)\n\n fig, ax = plt.subplots(figsize=(15,5))\n ax.plot(t, p_y/p_max, c='r', lw=3, label='$p(x)$')\n ax.plot([x,x],[0,p_max],label=\"$x\\sim p(x)$\", lw=4)\n #print('int of p = ', p(t).sum()/1000)\n\n #for neurons in network.neurons.values():\n #for i,n in enumerate(neurons):\n #print('n = ', n)\n #print(\"q%i.bias = \" %(i), n.bias)\n #print('t.shape = ', t.shape)\n #print('t = ', t)\n y = network(t.reshape(num_samples,1,1),update=0)\n \n for j,yi in enumerate(y):\n yj_max = y[j].max()\n ax.plot(t, y[j]/yj_max, c=colors[j], lw=3, label=\"$q(x)$\")\n #ax.plot(t, y[j], c=colors[j], lw=3, label=\"$q_%i(x)$\"%(j))\n\n #print('q_out.bias = ', q_out.neurons[(0,0.5)][0].bias)\n #ax[0].plot(t, q3.pi*q3(t,0), c='k', lw=3, label='$q3(x)$')\n #ax.legend()\n ax.set_ylim(0.,1.5)\n ax.set_xlim(beg,end)\n\n #fig2, ax2 = plt.subplots()\n #print('q_out.weights = ', q_out.weights)\n #print('q_out.bias = ', q_out.bias)\n #circle = plt.Circle(q_out.neurons[(0,0.5)][0].weights, np.sqrt(q_out.neurons[(0,0.5)][0].bias), fill=0)\n #ax2.set_ylim(-0.2,1.5)\n #ax2.set_xlim(-0.2,1.5)\n #ax2.add_artist(circle)\n #ysa = np.asarray(ys)\n #ax2.scatter(ysa[:,0],ysa[:,1],s=12,c=lbls)\n plt.savefig('figs/fig%03i.png'%(i))\n plt.show()\n \n\ninteractive_plot = interactive(f, i=(0, 9999))\noutput = interactive_plot.children[-1]\noutput.layout.height = '450px'\ninteractive_plot\n\n[n.weights for n in list(network.neurons.items())[0][1]]\n\n[np.sqrt(n.bias) for n in list(network.neurons.items())[0][1]]\n\n[n.pi for n in list(network.neurons.items())[0][1]]", "I can assume $q(x)$ has two forms\n$$q(x) = \\frac{1}{\\sqrt{2 \\pi \\sigma^2}}exp{-\\frac{(x-\\mu)^2}{2\\sigma^2}}$$\nor \n$$q(x) = exp{-\\frac{(x-\\mu)^2}{\\sigma^2}}$$\nWhen I assume the second form and remove the extra $\\sigma$ term from the learning equations it no longer converges smoothly. However, if I add an 'astrocyte' to normalize all of them periodically by averaging over the output it works again. 
Perhaps astrocytes 'normalizing' the neurons is the biological mechanism for keeping the output roughly normal.", "def s(x):\n return (1/(1+np.exp(-10*(x-0.25))))\n\nx = np.linspace(0,1,100)\nplt.plot(x,s(x))\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
boada/planckClusters
analysis_ir/notebooks/05. Make real models.ipynb
mit
[ "%matplotlib inline\n#%matplotlib widget\nfrom astropy.cosmology import LambdaCDM\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom astropy import constants as const\nimport astropy.units as u\nfrom scipy.integrate import quad\nimport ezgal # BC03 model maker\nimport os", "Setup Cosmology", "cosmo = LambdaCDM(H0=70, Om0=0.3, Ode0=0.7, Tcmb0=2.725)", "Create Stellar Population", "# check to make sure we have defined the bpz filter path\nif not os.getenv('EZGAL_FILTERS'):\n os.environ['EZGAL_FILTERS'] = (f'{os.environ[\"HOME\"]}/Projects/planckClusters/MOSAICpipe/bpz-1.99.3/FILTER/')\n\nmodel = ezgal.model('bc03_ssp_z_0.02_salp.model')\nmodel = model.make_exponential(1)\nmodel.set_cosmology(Om=cosmo.Om0, Ol=cosmo.Ode0, h=cosmo.h, w=cosmo.w(0))\n \nmodel.add_filter('g_MOSAICII.res', name='g')\nmodel.add_filter('r_MOSAICII.res', name='r')\nmodel.add_filter('i_MOSAICII.res', name='i')\nmodel.add_filter('z_MOSAICII.res', name='z')\nmodel.add_filter('K_KittPeak.res', name='K')\n\n# Blanton 2003 Normalization\nMr_star = -20.44 + 5 * np.log10(cosmo.h) # abs mag.\n# set the normalization\nmodel.set_normalization('sloan_r', 0.1, Mr_star, vega=False) ", "Calculate a few things to get going.", "# desired formation redshift\nzf = 6.0\n# fetch an array of redshifts out to given formation redshift\nzs = model.get_zs(zf)\n \n# Calculate some cosmological stuff\nDM = cosmo.distmod(zs)\ndlum = cosmo.luminosity_distance(zs)", "Define the functions that we'll need\nNeed to compute the cluster volume...\n$M_{vir} = 4/3 \\pi r^3_{vir} \\rho_c(r<r_{vir}) = 4/3 \\pi r^3_{vir} \\Delta_c \\rho_c$\nif we let $\\Delta_c = 200$ then \n$M_{200} = 4/3 \\pi r^3_{200} 200 \\rho_c$ with $\\rho_c = \\frac{3H(z)^2}{8\\pi G}$\nor just $M_{200} = V_{200}200\\rho_c$. So we'll make a function to calculate $\\rho_c$. 
And we'll make use of the astropy units package to do all the unit analysis for us.\nDon't forget that $H(z) = H_0E(z)$ \nWe also need to integrate the Schechter luminosity functions..\nThe Schechter Function:\nFor Luminosity:\n$\\Phi(L) = \\phi^\\star \\frac{L}{L_\\star}^\\alpha e^{-\\frac{L}{L_\\star}}$\nFor Magnitudes:\n$\\Phi(M) = \\phi^\\star\\frac{2}{5}log(10) (10^{\\frac{2}{5}(M_\\star - M)})^{\\alpha+1} e^{-10^{\\frac{2}{5}(M_\\star - M)}}$", "def rho_crit(z, cosmo):\n # convert G into better units:\n G = const.G.to(u.km**2 * u.Mpc/(u.M_sun * u.s**2))\n return 3 / (8 * np.pi * G) * cosmo.H0**2 * cosmo.efunc(z)**2 # Mpc^3\n\ndef schechterL(luminosity, phiStar, alpha, LStar): \n \"\"\"Schechter luminosity function.\"\"\" \n LOverLStar = (luminosity/LStar) \n return (phiStar/LStar) * LOverLStar**alpha * np.exp(- LOverLStar) \n\ndef schechterM(magnitude, phiStar, alpha, MStar): \n \"\"\"Schechter luminosity function by magnitudes.\"\"\" \n MStarMinM = 0.4 * (MStar - magnitude)\n return (0.4 * np.log(10) * phiStar * 10.0**(MStarMinM * (alpha + 1.)) * np.exp(-10.**MStarMinM))", "Mass limits from PSZ2", "from astropy.table import Table\nfrom scipy.interpolate import interp1d\n\nz1 = 0\nz2 = 2\ndz = 0.025\n\n# build the mass array\nzarr = np.arange(z1, z2 + dz, dz)\n\nps2 = Table.read('../../catalogs/PSZ2v1.fits')\ndf2 = ps2.to_pandas()\ndata = df2[['REDSHIFT', 'MSZ']]\ndata['REDSHIFT'].replace(-1, np.nan, inplace=True)\n\n# redshift bins\nzbins = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 3]\n\nnMasses = 100\nbig_mass = []\nfor j in range(nMasses):\n mass = np.ones_like(zarr) * 1e14\n for i in range(len(zbins) - 1):\n mask = (zbins[i] <= zarr) & (zarr < zbins[i + 1])\n mass[mask] *= float(data.loc[(zbins[i] <= data['REDSHIFT']) & (data['REDSHIFT'] < zbins[i + 1]), 'MSZ'].sample()) * cosmo.h\n big_mass.append(mass)\n\nmass = np.vstack(big_mass)\nmass_func = interp1d(zarr, np.median(mass, axis=0))", "Start Calculating things", "# So now we are going to calculate the volumes as a function of z\n\n#M200 = mass_func(zarr) * u.solMass\n\nM200 = 1e15 * u.solMass\nV200 = M200/ (200 * rho_crit(zs, cosmo))\n\n# Calculate the M_star values\nMstar = model.get_absolute_mags(zf, filters='i', zs=zs)\n\n# calculate the abs mag of our limiting magnitude as a function of z\nmlim = 23.5\n#Mlim = Mstar - 2.5 * np.log10(0.4)\nMlim = mlim - cosmo.distmod(zs).value - model.get_kcorrects(zf, filters='i', zs=zs)\n\n# Here are the Schechter function stuff from Liu et al.\nphi_star = 3.6 * cosmo.efunc(zs)**2\nalpha = -1.05 * (1 + zs)**(-2/3)\nfr = 0.8*(1 + zs)**(-1/2)\n\n#alpha = np.ones_like(alpha) * -1\n#Mpiv = 6e14 * u.solMass\n#zpiv = 0.6\n\n#alpha = -0.96 * (M200 / Mpiv)**0.01 * ((1 + zs)/ (1 + zpiv))**-0.94\n#phi_star = 1.68 * (M200 / Mpiv)**0.09 * ((1 + zs)/ (1 + zpiv))**0.09 * cosmo.efunc(zs)**2\n#fr = 0.62 * (M200 / Mpiv)**0.08 * ((1 + zs)/ (1 + zpiv))** -0.80\n\n\nLF = []\nfor phi, a, M_star, M_lim in zip(phi_star, alpha, Mstar, Mlim):\n if M_lim < M_star - 2.5 * np.log10(0.4):\n Mlimit = M_lim\n else:\n Mlimit = M_star - 2.5 * np.log10(0.4)\n y, err = quad(schechterM, -30, Mlimit, args=(phi, a, M_star))\n #print(M_star - M_lim, y)\n LF.append(y)\n\nplt.figure()\nplt.plot(zs, (LF * V200.value + 1) * fr)\nax = plt.gca()\nax.set_yticks(np.arange(0, 75, 10))\nplt.xlim(0.1, 5)\nplt.ylim(0, 80)\nplt.xlabel('redshift')\nplt.ylabel('N (r < r200)')\nplt.grid()\n\n# calculate the abs mag of our limiting magnitude as a function of z\nmlim = 23.5\n#Mlim = model.get_absolute_mags(zf, filters='i', zs=zs) - 2.5 * 
np.log10(0.4)\nMlim = mlim - cosmo.distmod(zs).value - model.get_kcorrects(zf, filters='i', zs=zs)\nplt.figure()\nplt.plot(zs, model.get_absolute_mags(zf, filters='i', zs=zs), label='Lstar')\nplt.plot(zs, Mlim, label='Mlimit')\nplt.plot(zs, model.get_absolute_mags(zf, filters='i', zs=zs) - 2.5 * np.log10(0.4), label='0.4Lstar')\nplt.grid()\nplt.xlabel('redshift')\nplt.ylabel('abs Mag')\nplt.legend()\n\nMlim\n\nMstar - 2.5 * np.log10(0.4) # 0.4L* magnitudes\n\nnp.array(LF) # LF integration output\n\nalpha\n\nphi_star\n\nfr # red fraction\n\nzs # redshift array\n\nV200.value # cluster volume\n\n200 * rho_crit(zs, cosmo)\n\nplt.plot(zs, (V200/(4/3 * np.pi))**(1/3))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Taekyoon/Pytorch_Seq2Seq_Tutorial
Pytorch_Seq2Seq_Practice.ipynb
mit
[ "Pytorch Seq2Seq Machine Translator Practice\n이번 튜토리얼에서는 Sequence to Sequence 모델의 핵심인 RNN Encoder Decoder과 Attention 모델을 이해하고, 이를 활용하여 Machine Translator를 구현해보겠습니다.\nMachine Traslator에 핵심인 Sequence to Sequence 모델은 아래의 그림과 같이 구성되어 있습니다.\n\n모델의 역할은 다음과 같습니다.\n번역을 하고자 하는 데이터를 RNN Encoder에 입력하여 encoder context 정보를 얻습니다. Encoder context를 활용하여 RNN Decoder를 통해 보이고자 하는 번역 데이터를 학습하여 모델을 만듭니다. 학습된 모델은 Encoder 데이터만 입력을 하여 Decoder에서 번역된 내용을 보이게 됩니다.\nSetting Sequence Length\nEncoder와 Decoder에 입력할 최대 Sequence 길이에 대해 설정합니다. 빠른 학습을 위해서 최대길이는 10으로 지정하였습니다.", "MAX_LENGTH = 10", "Load Europal dataset\nEuropal 영-불 데이터셋을 불러옵니다.", "from data_util import prepare_data\n\ninput_lang, output_lang, train_pairs, test_pairs = prepare_data('lang1', 'lang2', MAX_LENGTH, 2, True)", "Implement Encoder Decoder Model\nEncoder와 Decoder를 구현해봅니다. \n구현하고자 하는 Encoder는 다음과 같은 구조로 구성되어 있습니다.\n\n위에 그래프를 보면 input vector에 대해서 embedding을 하고 hidden vector와 GRU function을 통해 한 feed forward step을 하게 됩니다. 마지막으로, GRU를 통해서 output과 hidden vector를 각각 얻게 됩니다.\n다음, 구현하고자 하는 Decoder의 구조는 아래와 같습니다.\n\nDecoder구현은 Encoder보다 복잡합니다. Decoder를 보다 잘 이해하기 위해서 그래프 구조 설명 이전에 Decoder에 중요한 부분 중 하나인 Attention에 대한 설명을 하겠습니다.\n\nAttention은 예측을 하고자 할 때 Input data에 대해 어디에 집중을 해야할 지 Encoder context에 가중치를 주는 역할을 합니다. 여기 Translator 모델에서는 매 스텝마다 들어오는 Decoder Input과 Hidden vector를 통해 Encoder context에 대한 가중치를 부여하여 Input에 대한 Output을 예측할 수 있도록 합니다.\n모델의 전체과정 중 Attention 부분은 다음과 같습니다.\nInput에 들어온 데이터는 embedding layer을 통해 이전 스텝의 hidden_vector와 결합을 합니다. 이후 softmax function을 거쳐 attn linear function을 두어 encoder_outputs와 matrix multiplication을 할 수 있도록 해줍니다.\nAttention이 적용된 context vector는 input vector와 결합이 되어 hidden vector와 같이 GRU function에 들어갑니다. GRU에서 나온 output은 softmax를 처리하여 return 처리를 합니다.\n이제 위 내용을 바탕으로 model을 구현해 보겠습니다.\nmodels.py에 NotImplementedError라 표시된 영역에 구현해보겠습니다.\n각 구현에 대한 순서는 다음과 같습니다.\n\nEncoder 모델 __init__에 embedding과 gru 함수를 구현합니다.\nEncoder 모델 forward 부분을 구현합니다. 방법은 아래와 같습니다.\nEmbedding function을 통해 word embedding layer를 구현합니다.\nGRU function을 이용하여 multi layer RNN을 구현합니다.\n\n\nDecoder 모델 forward 부분을 구현합니다. 방법은 아래와 같습니다.\nEmbedding functiondㅡㄹ 통해 word embedding layer를 구현합니다.\nAttention Module을 구현합니다. 
(구현에 관한 내용은 위 그래프 Image를 참조하여 구현합니다.)\nGRU function을 이용하여 multi layer RNN을 구현합니다.\nFully Connected Layer을 구현하고 Softmax를 통해 output data를 보입니다.", "from models import EncoderRNN, AttnDecoderRNN\n\nhidden_size = 256\nencoder1 = EncoderRNN(input_lang.n_words, hidden_size)\nattn_decoder1 = AttnDecoderRNN(hidden_size, output_lang.n_words,\n MAX_LENGTH, dropout_p=0.1)", "Implement Training Module\nTraining Module 중 Teacher forcing 부분에 대해서 구현을 하고 criterion과 optimizer에 대해서 설정을 해봅니다.\ntrain.py에 NotImplementedError라 표시된 영역에 구현해보겠습니다.\n\n\nTeacher forcing 부분을 구현합니다.\n\ndecoder로 부터 output vector를 받습니다.\ncriterion을 활용하여 loss값을 축적합니다.\nground truth 값을 decoder_input에 입력합니다.\n\n\n\nWithout Teacher forcing 부분을 구현합니다.\n\ndecoder로 부터 ouput vector를 받습니다.\noutput vector로 부터 argmax값을 받습니다.\ndecoder로 부터 받은 예측값을 decoder_input에 입력합니다.\ncriterion을 활용하여 loss값을 축적합니다.\nEOS_token이 있을 시 break를 하도록 조건문을 둡니다.", "from train import train_iters\n\nplot_losses = train_iters(encoder1, attn_decoder1, input_lang, \n output_lang, train_pairs[:70], 1000, MAX_LENGTH)", "Evaluate and predict model\n구현한 모델의 training loss값들을 그래프로 확인하고, 번역성능을 확인해보도록 합니다.", "import matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nimport numpy as np\n%matplotlib inline\n\ndef showPlot(points):\n plt.figure()\n fig, ax = plt.subplots()\n # this locator puts ticks at regular intervals\n loc = ticker.MultipleLocator(base=0.2)\n ax.yaxis.set_major_locator(loc)\n plt.plot(points)\n\nshowPlot(plot_losses)\n\nfrom predict import ModelPredictor\n\npredictor = ModelPredictor(encoder1, attn_decoder1, input_lang, output_lang, MAX_LENGTH)\npredictor.evaluate_randomly(train_pairs[:10])\npredictor.predict_sentence(\"je comprends il est essentiel .\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dynaryu/rmtk
rmtk/vulnerability/derivation_fragility/R_mu_T_dispersion/SPO2IDA/spo2ida.ipynb
agpl-3.0
[ "SPO2IDA\nThis methodology uses the SPO2IDA tool described in Vamvatsikos and Cornell (2006) to convert static pushover curves into $16\\%$, $50\\%$, and $84\\%$ IDA curves. The SPO2IDA tool is based on empirical relationships obtained from a large database of incremental dynamic analysis results. This procedure is applicable to any kind of multi-linear capacity curve and it is suitable for single-building fragility curve estimation. Individual fragility curves can later be combined into a single fragility curve that considers the inter-building uncertainty. The figure below illustrates the IDA curves estimated using this methodology for a given capacity curve.\n<img src=\"../../../../../figures/spo2ida.jpg\" width=\"500\" align=\"middle\">\nNote: To run the code in a cell:\n\nClick on the cell to select it.\nPress SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.", "from rmtk.vulnerability.derivation_fragility.R_mu_T_dispersion.SPO2IDA import SPO2IDA_procedure \nfrom rmtk.vulnerability.common import utils\n%matplotlib inline ", "Load capacity curves\nIn order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual. In case multiple capacity curves are input, a spectral shape also needs to be defined.\n\nPlease provide the location of the file containing the capacity curves using the parameter capacity_curves_file.\nPlease also provide a spectral shape using the parameter input_spectrum if multiple capacity curves are used.", "capacity_curves_file = \"../../../../../../rmtk_data/capacity_curves_Vb-dfloor.csv\"\ninput_spectrum = \"../../../../../../rmtk_data/FEMAP965spectrum.txt\"\n\ncapacity_curves = utils.read_capacity_curves(capacity_curves_file)\nSa_ratios = utils.get_spectral_ratios(capacity_curves, input_spectrum)\nutils.plot_capacity_curves(capacity_curves)", "Idealise pushover curves\nIn order to use this methodology the pushover curves need to be idealised. Please choose an idealised shape using the parameter idealised_type. The valid options for this methodology are \"bilinear\" and \"quadrilinear\". Idealised curves can also be directly provided as input by setting the field Idealised to TRUE in the input file defining the capacity curves.", "idealised_type = \"quadrilinear\"\n\nidealised_capacity = utils.idealisation(idealised_type, capacity_curves)\nutils.plot_idealised_capacity(idealised_capacity, capacity_curves, idealised_type)", "Load damage state thresholds\nPlease provide the path to your damage model file using the parameter damage_model_file in the cell below. Currently only interstorey drift damage model type is supported.", "damage_model_file = \"../../../../../../rmtk_data/damage_model_ISD.csv\"\n\ndamage_model = utils.read_damage_model(damage_model_file)", "Calculate fragility functions\nThe damage threshold dispersion is calculated and integrated with the record-to-record dispersion through Monte Carlo simulations. 
Please enter the number of Monte Carlo samples to be performed using the parameter montecarlo_samples in the cell below.", "montecarlo_samples = 50\n\nfragility_model = SPO2IDA_procedure.calculate_fragility(capacity_curves, idealised_capacity, damage_model, montecarlo_samples, Sa_ratios, 1)", "Plot fragility functions\nThe following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:\n* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions", "minIML, maxIML = 0.01, 2\n\nutils.plot_fragility_model(fragility_model, minIML, maxIML)\n\nprint fragility_model", "Save fragility functions\nThe derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.\n2. minIML and maxIML: These parameters define the bounds of applicability of the functions.\n3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".", "taxonomy = \"RC\"\nminIML, maxIML = 0.01, 2.00\noutput_type = \"csv\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)", "Obtain vulnerability function\nA vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level. \nThe following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:\n1. cons_model_file: This parameter specifies the path of the consequence model file.\n2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios is required to be calculated.\n3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are \"lognormal\", \"beta\", and \"PMF\".", "cons_model_file = \"../../../../../../rmtk_data/cons_model.csv\"\nimls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, \n 0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]\ndistribution_type = \"lognormal\"\n\ncons_model = utils.read_consequence_model(cons_model_file)\nvulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model, \n imls, distribution_type)", "Plot vulnerability function", "utils.plot_vulnerability_model(vulnerability_model)", "Save vulnerability function\nThe derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:\n1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.\n2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are \"csv\" and \"nrml\".", "taxonomy = \"RC\"\noutput_type = \"nrml\"\noutput_path = \"../../../../../../rmtk_data/output/\"\n\nutils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cranmer/look-elsewhere-2d
examples_from_paper.ipynb
mit
[ "Look Elsewhere Effect in 2-d\nKyle Cranmer, Nov 19, 2015\nBased on\nEstimating the significance of a signal in a multi-dimensional search by Ofer Vitells and Eilam Gross http://arxiv.org/pdf/1105.4355v1.pdf\nThis is for the special case of a likelihood function of the form \n$L(\\mu, \\nu_1, \\nu_2)$ where $\\mu$ is a single parameter of interest and\n$\\nu_1,\\nu_2$ are two nuisance parameters that are not identified under the null.\nFor example, $\\mu$ is the signal strength of a new particle and $\\nu_1,\\nu_2$ are the\nunknown mass and width of the new particle. Under the null hypothesis, those parameters \ndon't mean anything... aka they \"are not identified under the null\" in the statistics jargon.\nThis introduces a 2-d look elsewhere effect.\nThe LEE correction in this case is based on \n\\begin{equation}\nE[ \\phi(A_u) ] = P(\\chi^2_1 > u) + e^{-u/2} (N_1 + \\sqrt{u} N_2) \\,\n\\end{equation}\nwhere \n * $A_u$ is the 'excursion set above level $u$ (eg. the set of parameter points in $(\\nu_1,\\nu_2)$ that have a -2 log-likelihood ratio greater than $u$ )\n * $\\phi(A_u)$ is the Euler characteristic of the excursion set\n * $E[ \\phi(A_u) ]$ is the expectation of the Euler characteristic of those excursion sets under the null\n * $P(\\chi^2_1 > u)$ is the standard chi-square probability \n * and $N_1$ and $N_2$ are two coefficients that characterize the chi-square random field.\nstructure of the notebook\nThe notebook is broken into two parts.\n * calculation of $N_1$ and $N_2$ based on $E[ \\phi(A_u) ]$ at two different levels $u_1$ and $u_2$\n * calculation of LEE-corrected 'global p-value' given $N_1,N_2$", "%pylab inline --no-import-all\n\n\nfrom lee2d import *", "Test numerical solution to $N_1, N_2$ from the example in the paper\nUsage: calculate n1,n2 based on expected value of Euler characteristic (calculated from toy Monte Carlo) at two different levels u1, u2. 
For example: \n * $u_1=0$ with $E[ \\phi(A_{u=u_1})]=33.5 $ \n * $u_2=1$ with $E[ \\phi(A_{u=u_2})]=94.6 $\nwould lead to a call like this", "# An example from the paper\nn1, n2 = get_coefficients(u1=0., u2=1., exp_phi_1=33.5, exp_phi_2=94.6)\nprint n1, n2\n\n# reproduce Fig 5 from paper (the markers are read by eye)\nu = np.linspace(5,35,100)\nglobal_p = global_pvalue(u,n1,n2)\nplt.plot(u, global_p)\nplt.scatter(35,2.E-5) #from Fig5\nplt.scatter(30,2.E-4) #from Fig5\nplt.scatter(25,2.5E-3) #from Fig5\nplt.scatter(20,2.5E-2) #from Fig5\nplt.scatter(15,.3) #from Fig5\nplt.xlabel('u')\nplt.ylabel('P(max q > u)')\nplt.semilogy()", "Check Euler characteristic from Fig 3 example in the paper", "#create Fig 3 of http://arxiv.org/pdf/1105.4355v1.pdf\na = np.zeros((7,7))\na[1,2]=a[1,3]=a[2,1]=a[2,2]=a[2,3]=a[2,4]=1\na[3,1]=a[3,2]=a[3,3]=a[3,4]=a[3,5]=1\na[4,1]=a[4,2]=a[4,3]=a[4,4]=1\na[5,3]=1\na[6,0]=a[6,1]=1\na=a.T\nplt.imshow(a,cmap='gray',interpolation='none')\n\n#should be 2\ncalculate_euler_characteristic(a) ", "Try a big matrix", "#Fully filled, should be 1\nrandMatrix = np.zeros((100,100))+1\ncalculate_euler_characteristic(randMatrix)\n\n# split in half vertically, should be 2\nrandMatrix[50,:]=0\nplt.imshow(randMatrix,cmap='gray')\ncalculate_euler_characteristic(randMatrix)\n\n#split in half horizontally twice, should be 6\nrandMatrix[:,25]=0\nrandMatrix[:,75]=0\nplt.imshow(randMatrix,cmap='gray')\ncalculate_euler_characteristic(randMatrix)\n\n#remove a hole from middle of one, should be 5\nrandMatrix[25:30,50:53]=0\nplt.imshow(randMatrix,cmap='gray')\ncalculate_euler_characteristic(randMatrix)\n\n#remove a single pixel hole\nrandMatrix[75,50]=0\nplt.imshow(randMatrix,cmap='gray')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/nasa-giss/cmip6/models/giss-e2-1g/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: NASA-GISS\nSource ID: GISS-E2-1G\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:20\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1g', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. 
Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. 
Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. 
Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintenance respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
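The ES-DOC cells above all follow a single fill-in pattern: a fixed DOC.set_id(...) call that names the CMIP6 land property, followed by DOC.set_value(...) calls that replace the TODO marker with the model description. As a minimal sketch of what one completed entry might look like (the property and the chosen value below are illustrative only, and DOC is assumed to be the ES-DOC document object already initialised earlier in the notebook):

```python
# Illustrative sketch only: DOC is assumed to be the ES-DOC document object
# created earlier in the notebook; the id and the value are example choices.

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')

# PROPERTY VALUE: one of the valid choices listed for this ENUM property
DOC.set_value("prognostic")
```

For multi-valued (cardinality 1.N) properties, the "PROPERTY VALUE(S)" comments in the cells above suggest that DOC.set_value(...) is simply repeated once per applicable choice.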
MingChen0919/learning-apache-spark
notebooks/02-data-manipulation/2.7.2-dot-column-expression.ipynb
mit
[ "# create entry points to spark\ntry:\n sc.stop()\nexcept:\n pass\nfrom pyspark import SparkContext, SparkConf\nfrom pyspark.sql import SparkSession\nsc=SparkContext()\nspark = SparkSession(sparkContext=sc)", "Example data", "mtcars = spark.read.csv('../../../data/mtcars.csv', inferSchema=True, header=True)\nmtcars = mtcars.withColumnRenamed('_c0', 'model')\nmtcars.show(5)", "Dot (.) column expression\nCreate a column expression that will return the original column values.", "mpg_col_exp = mtcars.mpg\nmpg_col_exp\n\nmtcars.select(mpg_col_exp).show(5)" ]
[ "code", "markdown", "code", "markdown", "code" ]
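As a side note on the dot column expression shown in the notebook above: bracket indexing builds the same Column object and is convenient when the column name is held in a variable. A small sketch, assuming the mtcars DataFrame from those cells already exists:

```python
# Assumes the mtcars DataFrame created in the cells above is available.
col_name = 'mpg'
mpg_col = mtcars[col_name]  # bracket notation returns the same Column expression

# Produces the same output as mtcars.select(mtcars.mpg).show(5)
mtcars.select(mpg_col).show(5)
```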
leriomaggio/python-in-a-notebook
03 List and Tuples and Sets.ipynb
mit
[ "Lists Tuples and Sets\nIn this notebook, you will learn to store more than one value in a single variable. \nThis by itself is one of the most powerful ideas in programming, and it introduces a number of other central concepts such as loops. \nIf this section ends up making sense to you, you will be able to start writing some interesting programs, and you can be more confident that you will be able to develop overall competence as a programmer.\n<a name=\"top\"></a> Contents\n\nLists\nIntroducing Lists\nExample\nNaming and defining a list\nAccessing one item in a list\nExercises\n\n\nLists and Looping\nAccessing all elements in a list\nEnumerating a list\nExercises\n\n\nCommon List Operations\nModifying elements in a list\nFinding an element in a list\nTesting whether an element is in a list\nAdding items to a list\nCreating an empty list\nSorting a list\nFinding the length of a list\nExercises\n\n\nRemoving Items from a List\nRemoving items by position\nRemoving items by value\nPopping items\nExercises\n\n\nWant to see what functions are?\nSlicing a List\nCopying a list\nExercises\n\n\nNumerical Lists\nThe range() function\nThe min(), max(), sum() functions\nExercises\n\n\nList Comprehensions\nNumerical comprehensions\nNon-numerical comprehensions\nExercises\n\n\nStrings as Lists\nStrings as a list of characters\nSlicing strings\nFinding substrings\nReplacing substrings\nCounting substrings\nSplitting strings\nOther string methods\nExercises\nChallenges\n\n\nTuples\nDefining tuples, and accessing elements\nUsing tuples to make strings\nExercises\n\n\nSets\nBasic Operations on Sets\nExercises\n\n\nOverall Challenges\n\n\n\n<a name='lists'></a>Lists\n<a name='introducing'></a>Introducing Lists\n<a name='example'></a>Example\nA list is a collection of items that is stored in a variable. The items should be related in some way, but there are no restrictions on what can be stored in a list. Here is a simple example of a list, and how we can quickly access each item in the list.", "students = ['bernice', 'aaron', 'cody']\n\nfor student in students:\n print(\"Hello, \" + student.title() + \"!\")", "<a name='naming'></a>Naming and defining a list\nSince lists are collections of objects, it is good practice to give them a plural name. If each item in your list is a car, call the list 'cars'. If each item is a dog, call your list 'dogs'. This gives you a straightforward way to refer to the entire list ('dogs'), and to a single item in the list ('dog').\nIn Python, square brackets designate a list. To define a list, you give the name of the list, the equals sign, and the values you want to include in your list within square brackets.", "dogs = ['border collie', \n 'australian cattle dog', \n 'labrador retriever']", "<a name='accessing_one_item'></a>Accessing one item in a list\nItems in a list are identified by their position in the list, starting with zero. This will almost certainly trip you up at some point. Programmers even joke about how often we all make \"off-by-one\" errors, so don't feel bad when you make this kind of error.\nTo access the first element in a list, you give the name of the list, followed by a zero in parentheses.", "dogs = ['border collie', \n 'australian cattle dog', \n 'labrador retriever']\n\ndog = dogs[0]\nprint(dog.title())", "The number in parentheses is called the index of the item. \nBecause lists start at zero, the index of an item is always one less than its position in the list. 
\nBecause of that, Python is said to be a zero-indexed \nlanguage (as many others, like C, or Java)\nSo to get the second item in the list, we need to use an index of 1, and so on..", "dog = dogs[1]\nprint(dog.title())", "Accessing the last items in a list\nYou can probably see that to get the last item in this list, we would use an index of 2. This works, but it would only work because our list has exactly three items. To get the last item in a list, no matter how long the list is, you can use an index of -1.", "dog = dogs[-1]\nprint(dog.title())", "This syntax also works for the second to last item, the third to last, and so forth.", "dog = dogs[-2]\nprint(dog.title())", "You can't use a negative number larger than the length of the list, however.", "dog = dogs[-4]\nprint(dog.title())", "top\n<a name='exercises_list_introduction'></a>Exercises\nFirst List\n\nStore the values 'python', 'c', and 'java' in a list. Print each of these values out, using their position in the list.\n\nFirst Neat List\n\nStore the values 'python', 'c', and 'java' in a list. Print a statement about each of these values, using their position in the list.\nYour statement could simply be, 'A nice programming language is value.'\n\nYour First List\n\nThink of something you can store in a list. Make a list with three or four items, and then print a message that includes at least one item from your list. Your sentence could be as simple as, \"One item in my list is a ____.\"", "# Ex 3.1 : First List\n\n# put your code here\n\n# Ex 3.2 : First Neat List\n\n# put your code here\n\n# Ex 3.3 : Your First List\n\n# put your code here", "top\n<a name='looping'></a>Lists and Looping\n<a name='accessing_all_elements'></a>Accessing all elements in a list\nThis is one of the most important concepts related to lists. You can have a list with a million items in it, and in three lines of code you can write a sentence for each of those million items. If you want to understand lists, and become a competent programmer, make sure you take the time to understand this section.\nWe use a loop to access all the elements in a list. A loop is a block of code that repeats itself until it runs out of items to work with, or until a certain condition is met. In this case, our loop will run once for every item in our list. With a list that is three items long, our loop will run three times.\nLet's take a look at how we access all the items in a list, and then try to understand how it works.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nfor dog in dogs:\n print(dog)", "We have already seen how to create a list, so we are really just trying to understand how the last two lines work. These last two lines make up a loop, and the language here can help us see what is happening:\nfor dog in dogs:\n\n\nThe keyword \"for\" tells Python to get ready to use a loop.\nThe variable \"dog\", with no \"s\" on it, is a temporary placeholder variable. This is the variable that Python will place each item in the list into, one at a time.\nThe first time through the loop, the value of \"dog\" will be 'border collie'.\nThe second time through the loop, the value of \"dog\" will be 'australian cattle dog'.\nThe third time through, \"dog\" will be 'labrador retriever'.\nAfter this, there are no more items in the list, and the loop will end.\n\nDoing more with each item\nWe can do whatever we want with the value of \"dog\" inside the loop. 
In this case, we just print the name of the dog.\nprint(dog)\n\nWe are not limited to just printing the word dog. We can do whatever we want with this value, and this action will be carried out for every item in the list.\nLet's say something about each dog in our list.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nfor dog in dogs:\n print('I like ' + dog + 's.')", "Inside and outside the loop\nPython uses indentation to decide what is inside the loop and what is outside the loop. Code that is inside the loop will be run for every item in the list. Code that is not indented, which comes after the loop, will be run once just like regular code.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nfor dog in dogs:\n print('I like ' + dog + 's.')\n print('No, I really really like ' + dog +'s!\\n')\n \nprint(\"\\nThat's just how I feel about dogs.\")", "Notice that the last line only runs once, after the loop is completed. Also notice the use of newlines (\"\\n\") to make the output easier to read.\ntop\n<a name='enumerating_list'></a>Enumerating a list\nWhen you are looping through a list, you may want to know the index of the current item. You could always use the list.index(value) syntax, but there is a simpler way. The enumerate() function tracks the index of each item for you, as it loops through the list:", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nprint(\"Results for the dog show are as follows:\\n\")\nfor index, dog in enumerate(dogs):\n place = str(index)\n print(\"Place: \" + place + \" Dog: \" + dog.title())", "To enumerate a list, you need to add an index variable to hold the current index. So instead of\nfor dog in dogs:\n\nYou have\nfor index, dog in enumerate(dogs)\n\nThe value in the variable index is always an integer. If you want to print it in a string, you have to turn the integer into a string:\nstr(index)\n\nThe index always starts at 0, so in this example the value of place should actually be the current index, plus one:", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nprint(\"Results for the dog show are as follows:\\n\")\nfor index, dog in enumerate(dogs):\n place = str(index + 1)\n print(\"Place: \" + place + \" Dog: \" + dog.title())", "A common looping error\nOne common looping error occurs when instead of using the single variable dog inside the loop, we accidentally use the variable that holds the entire list:", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nfor dog in dogs:\n print(dogs)", "In this example, instead of printing each dog in the list, we print the entire list every time we go through the loop. Python puts each individual item in the list into the variable dog, but we never use that variable. 
Sometimes you will just get an error if you try to do this:", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nfor dog in dogs:\n print('I like ' + dogs + 's.')", "The FOR (iteration) loop\nThe for loop statement is the most widely used iteration mechanism in Python.\n\n\nAlmost every structure in Python can be iterated (element by element) by a for loop\n\na list, a tuple, a dictionary, $\\ldots$ (more details will follow)\n\n\n\nIn Python, also while loops are permitted, but for is the one you would see (and use) most of the time!\n\n\nFOR Special keywords\nPython allows two keywords to be used within a for loop: break and continue.\nThe two keywords have two different meanings:\n\nBreak used to immediately break the loop and exit!\nContinue used to skip to the next iteration step!\n\nNOTE: The two keywords are permitted with while loops as well!\nExamples\n<a name='exercises_list_loop'></a>Exercises\nFirst List - Loop\n\nRepeat First List, but this time use a loop to print out each value in the list.\n\nFirst Neat List - Loop\n\nRepeat First Neat List, but this time use a loop to print out your statements. Make sure you are writing the same sentence for all values in your list. Loops are not effective when you are trying to generate different output for each value in your list.\n\nYour First List - Loop\n\nRepeat Your First List, but this time use a loop to print out your message for each item in your list. Again, if you came up with different messages for each value in your list, decide on one message to repeat for each value in your list.", "# Ex 3.4 : First List - Loop\n\n# put your code here\n\n# Ex 3.5 : First Neat List - Loop\n\n# put your code here\n\n# Ex 3.6 : Your First List - Loop\n\n# put your code here", "top\n<a name='common_operations'></a>Common List Operations\n<a name='modifying_elements'></a>Modifying elements in a list\nYou can change the value of any element in a list if you know the position of that item.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\ndogs[0] = 'australian shepherd'\nprint(dogs)", "<a name='finding_elements'></a>Finding an element in a list\nIf you want to find out the position of an element in a list, you can use the index() function.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nprint(dogs.index('australian cattle dog'))", "This method raises a ValueError if the requested item is not in the list.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nprint(dogs.index('poodle'))", "<a name='testing_elements'></a>Testing whether an item is in a list\nYou can test whether an item is in a list using the \"in\" keyword. This will become more useful after learning how to use if-else statements.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n\nprint('australian cattle dog' in dogs)\nprint('poodle' in dogs)", "<a name='adding_items'></a>Adding items to a list\nAppending items to the end of a list\nWe can add an item to a list using the append() method. This method adds the new item to the end of the list.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\ndogs.append('poodle')\n\nfor dog in dogs:\n print(dog.title() + \"s are cool.\")", "Inserting items into a list\nWe can also insert items anywhere we want in a list, using the insert() function. We specify the position we want the item to have, and everything from that point on is shifted one position to the right. 
In other words, the index of every item after the new item is increased by one.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\ndogs.insert(1, 'poodle')\n\nprint(dogs)", "Note that you have to give the position of the new item first, and then the value of the new item. If you do it in the reverse order, you will get an error.\n<a name='empty_list'></a>Creating an empty list\nNow that we know how to add items to a list after it is created, we can use lists more dynamically. We are no longer stuck defining our entire list at once.\nA common approach with lists is to define an empty list, and then let your program add items to the list as necessary. This approach works, for example, when starting to build an interactive web site. Your list of users might start out empty, and then as people register for the site it will grow. This is a simplified approach to how web sites actually work, but the idea is realistic.\nHere is a brief example of how to start with an empty list, start to fill it up, and work with the items in the list. The only new thing here is the way we define an empty list, which is just an empty set of square brackets.", "# Create an empty list to hold our users.\nusernames = []\n\n# Add some users.\nusernames.append('bernice')\nusernames.append('cody')\nusernames.append('aaron')\n\n# Greet all of our users.\nfor username in usernames:\n print(\"Welcome, \" + username.title() + '!')", "If we don't change the order in our list, we can use the list to figure out who our oldest and newest users are.", "# Create an empty list to hold our users.\nusernames = []\n\n# Add some users.\nusernames.append('bernice')\nusernames.append('cody')\nusernames.append('aaron')\n\n# Greet all of our users.\nfor username in usernames:\n print(\"Welcome, \" + username.title() + '!')\n\n# Recognize our first user, and welcome our newest user.\nprint(\"\\nThank you for being our very first user, \" + usernames[0].title() + '!')\nprint(\"And a warm welcome to our newest user, \" + usernames[-1].title() + '!')", "Note that the code welcoming our newest user will always work, because we have used the index -1. If we had used the index 2 we would always get the third user, even as our list of users grows and grows.\n<a name='sorting_list'></a>Sorting a List\nWe can sort a list alphabetically, in either order.", "students = ['bernice', 'aaron', 'cody']\n\n# Put students in alphabetical order.\nstudents.sort()\n\n# Display the list in its current order.\nprint(\"Our students are currently in alphabetical order.\")\nfor student in students:\n print(student.title())\n\n#Put students in reverse alphabetical order.\nstudents.sort(reverse=True)\n\n# Display the list in its current order.\nprint(\"\\nOur students are now in reverse alphabetical order.\")\nfor student in students:\n print(student.title())", "sorted() vs. sort()\nWhenever you consider sorting a list, keep in mind that you can not recover the original order. If you want to display a list in sorted order, but preserve the original order, you can use the sorted() function. 
The sorted() function also accepts the optional reverse=True argument.", "students = ['bernice', 'aaron', 'cody']\n\n# Display students in alphabetical order, but keep the original order.\nprint(\"Here is the list in alphabetical order:\")\nfor student in sorted(students):\n print(student.title())\n\n# Display students in reverse alphabetical order, but keep the original order.\nprint(\"\\nHere is the list in reverse alphabetical order:\")\nfor student in sorted(students, reverse=True):\n print(student.title())\n\nprint(\"\\nHere is the list in its original order:\")\n# Show that the list is still in its original order.\nfor student in students:\n print(student.title())", "Reversing a list\nWe have seen three possible orders for a list:\n- The original order in which the list was created\n- Alphabetical order\n- Reverse alphabetical order\nThere is one more order we can use, and that is the reverse of the original order of the list. The reverse() function gives us this order.", "students = ['bernice', 'aaron', 'cody']\nstudents.reverse()\n\nprint(students)", "Note that reverse is permanent, although you could follow up with another call to reverse() and get back the original order of the list.\nSorting a numerical list\nAll of the sorting functions work for numerical lists as well.", "numbers = [1, 3, 4, 2]\n\n# sort() puts numbers in increasing order.\nnumbers.sort()\nprint(numbers)\n\n# sort(reverse=True) puts numbers in decreasing order.\nnumbers.sort(reverse=True)\nprint(numbers)\n\n\nnumbers = [1, 3, 4, 2]\n\n# sorted() preserves the original order of the list:\nprint(sorted(numbers))\nprint(numbers)\n\nnumbers = [1, 3, 4, 2]\n\n# The reverse() function also works for numerical lists.\nnumbers.reverse()\nprint(numbers)", "<a name='length'></a>Finding the length of a list\nYou can find the length of a list using the len() function.", "usernames = ['bernice', 'cody', 'aaron']\nuser_count = len(usernames)\n\nprint(user_count)", "There are many situations where you might want to know how many items in a list. If you have a list that stores your users, you can find the length of your list at any time, and know how many users you have.", "# Create an empty list to hold our users.\nusernames = []\n\n# Add some users, and report on how many users we have.\nusernames.append('bernice')\nuser_count = len(usernames)\n\nprint(\"We have \" + str(user_count) + \" user!\")\n\nusernames.append('cody')\nusernames.append('aaron')\nuser_count = len(usernames)\n\nprint(\"We have \" + str(user_count) + \" users!\")", "On a technical note, the len() function returns an integer, which can't be printed directly with strings. 
We use the str() function to turn the integer into a string so that it prints nicely:", "usernames = ['bernice', 'cody', 'aaron']\nuser_count = len(usernames)\n\nprint(\"This will cause an error: \" + user_count)\n\nusernames = ['bernice', 'cody', 'aaron']\nuser_count = len(usernames)\n\nprint(\"This will work: \" + str(user_count))", "<a name='exercises_common_operations'></a>Exercises\nWorking List\n\nMake a list that includes four careers, such as 'programmer' and 'truck driver'.\nUse the list.index() function to find the index of one career in your list.\nUse the in function to show that this career is in your list.\nUse the append() function to add a new career to your list.\nUse the insert() function to add a new career at the beginning of the list.\nUse a loop to show all the careers in your list.\n\nStarting From Empty\n\nCreate the list you ended up with in Working List, but this time start your file with an empty list and fill it up using append() statements.\nPrint a statement that tells us what the first career you thought of was.\nPrint a statement that tells us what the last career you thought of was.\n\nOrdered Working List\n\nStart with the list you created in Working List.\nYou are going to print out the list in a number of different orders.\nEach time you print the list, use a for loop rather than printing the raw list.\nPrint a message each time telling us what order we should see the list in.\nPrint the list in its original order.\nPrint the list in alphabetical order.\nPrint the list in its original order.\nPrint the list in reverse alphabetical order.\nPrint the list in its original order.\nPrint the list in the reverse order from what it started.\nPrint the list in its original order.\nPermanently sort the list in alphabetical order, and then print it out.\nPermanently sort the list in reverse alphabetical order, and then print it out.\n\n\n\nOrdered Numbers\n\nMake a list of 5 numbers, in a random order.\nYou are going to print out the list in a number of different orders.\nEach time you print the list, use a for loop rather than printing the raw list.\nPrint a message each time telling us what order we should see the list in.\nPrint the numbers in the original order.\nPrint the numbers in increasing order.\nPrint the numbers in the original order.\nPrint the numbers in decreasing order.\nPrint the numbers in their original order.\nPrint the numbers in the reverse order from how they started.\nPrint the numbers in the original order.\nPermanently sort the numbers in increasing order, and then print them out.\nPermanently sort the numbers in decreasing order, and then print them out.\n\n\n\nList Lengths\n\nCopy two or three of the lists you made from the previous exercises, or make up two or three new lists.\nPrint out a series of statements that tell us how long each list is.", "# Ex 3.7 : Working List\n\n# put your code here\n\n# Ex 3.8 : Starting From Empty\n\n# put your code here\n\n# Ex 3.9 : Ordered Working List\n\n# put your code here\n\n# Ex 3.10 : Ordered Numbers\n\n# put your code here\n\n# Ex 3.11 : List Lengths\n\n# put your code here", "top\n<a name='removing_items'></a>Removing Items from a List\nHopefully you can see by now that lists are a dynamic structure. We can define an empty list and then fill it up as information comes into our program. To become really dynamic, we need some ways to remove items from a list when we no longer need them. 
You can remove items from a list through their position, or through their value.\n<a name='removing_by_position'></a>Removing items by position\nIf you know the position of an item in a list, you can remove that item using the del command. To use this approach, give the command del and the name of your list, with the index of the item you want to move in square brackets:", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n# Remove the first dog from the list.\ndel dogs[0]\n\nprint(dogs)", "<a name='removing_by_value'></a>Removing items by value\nYou can also remove an item from a list if you know its value. To do this, we use the remove() function. Give the name of the list, followed by the word remove with the value of the item you want to remove in parentheses. Python looks through your list, finds the first item with this value, and removes it.", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\n# Remove australian cattle dog from the list.\ndogs.remove('australian cattle dog')\n\nprint(dogs)", "Be careful to note, however, that only the first item with this value is removed. If you have multiple items with the same value, you will have some items with this value left in your list.", "letters = ['a', 'b', 'c', 'a', 'b', 'c']\n# Remove the letter a from the list.\nletters.remove('a')\n\nprint(letters)", "<a name='popping'></a>Popping items from a list\nThere is a cool concept in programming called \"popping\" items from a collection. Every programming language has some sort of data structure similar to Python's lists. All of these structures can be used as queues, and there are various ways of processing the items in a queue.\nOne simple approach is to start with an empty list, and then add items to that list. When you want to work with the items in the list, you always take the last item from the list, do something with it, and then remove that item. The pop() function makes this easy. It removes the last item from the list, and gives it to us so we can work with it. \nThis is easier to show with an example", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\nlast_dog = dogs.pop()\n\nprint(last_dog)\nprint(dogs)", "This is an example of a first-in, last-out approach. The first item in the list would be the last item processed if you kept using this approach. We will see a full implementation of this approach later on, when we learn about while loops.\nYou can actually pop any item you want from a list, by giving the index of the item you want to pop. 
So we could do a first-in, first-out approach by popping the first item in the list:", "dogs = ['border collie', 'australian cattle dog', 'labrador retriever']\nfirst_dog = dogs.pop(0)\n\nprint(first_dog)\nprint(dogs)", "<a name='exercises_removing_items'></a>Exercises\nFamous People\n\nMake a list that includes the names of four famous people.\nRemove each person from the list, one at a time, using each of the four methods we have just seen:\nPop the last item from the list, and pop any item except the last item.\nRemove one item by its position, and one item by its value.\n\n\nPrint out a message that there are no famous people left in your list, and print your list to prove that it is empty.", "# Ex 3.12 : Famous People\nfpeople = ['david bowie', 'robert plant', 'obama', 'taylor swift']\n#Remove each person from the list, one at a time, using each of the four methods we have just seen\nfpeople.remove('taylor swift')\nprint(fpeople)\ndel fpeople[2]\nprint(fpeople)\nbowie=fpeople.pop(0)\nprint(bowie,fpeople)\nlast=fpeople.pop()\nprint('there are no more famous people in the list')\nprint(fpeople)\n# put your code here\n\n#Pop the last item from the list\nfpeople = ['david bowie', 'robert plant', 'obama', 'taylor swift']\nfpeople.pop()\nprint(fpeople)\n# and pop any item except the last item.\nfpeople = ['david bowie', 'robert plant', 'obama', 'taylor swift']\nfor _ in range(0,len(fpeople)-1):\n fpeople.pop(0)\nprint(fpeople)\n\nfpeople = ['david bowie', 'robert plant', 'obama', 'taylor swift']\nfpeople.remove('obama')\ndel fpeople[2]\nprint(fpeople)\n", "top\n<a name='functions'></a>Want to see what functions are?\nAt this point, you might have noticed we have a fair bit of repetitive code in some of our examples. This repetition will disappear once we learn how to use functions. If this repetition is bothering you already, you might want to go look at Introducing Functions before you do any more exercises in this section. \n<a name='slicing'></a>Slicing a List\nSince a list is a collection of items, we should be able to get any subset of those items. For example, if we want to get just the first three items from the list, we should be able to do so easily. The same should be true for any three items in the middle of the list, or the last three items, or any x items from anywhere in the list. These subsets of a list are called slices.\nTo get a subset of a list, we give the position of the first item we want, and the position of the first item we do not want to include in the subset. So the slice list[0:3] will return a list containing items 0, 1, and 2, but not item 3. 
\nHere is how you get a batch containing the first three items.", "usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']\n\n# Grab the first three users in the list.\nfirst_batch = usernames[0:3]\n\nfor user in first_batch:\n print(user.title())", "If you want to grab everything up to a certain position in the list, you can also leave the first index blank:", "usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']\n\n# Grab the first three users in the list.\nfirst_batch = usernames[:3]\n\nfor user in first_batch:\n print(user.title())", "When we grab a slice from a list, the original list is not affected:", "usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']\n\n# Grab the first three users in the list.\nfirst_batch = usernames[0:3]\n\n# The original list is unaffected.\nfor user in usernames:\n print(user.title())", "We can get any segment of a list we want, using the slice method:", "usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']\n\n# Grab a batch from the middle of the list.\nmiddle_batch = usernames[1:4]\n\nfor user in middle_batch:\n print(user.title())", "To get all items from one position in the list to the end of the list, we can leave off the second index:", "usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']\n\n# Grab all users from the third to the end.\nend_batch = usernames[2:]\n\nfor user in end_batch:\n print(user.title())", "<a name='copying'></a>Copying a list\nYou can use the slice notation to make a copy of a list, by leaving out both the starting and the ending index. This causes the slice to consist of everything from the first item to the last, which is the entire list.", "usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']\n\n# Make a copy of the list.\ncopied_usernames = usernames[:]\nprint(\"The full copied list:\\n\\t\", copied_usernames)\n\n# Remove the first two users from the copied list.\ndel copied_usernames[0]\ndel copied_usernames[0]\nprint(\"\\nTwo users removed from copied list:\\n\\t\", copied_usernames)\n\n# The original list is unaffected.\nprint(\"\\nThe original list:\\n\\t\", usernames)", "<a name='exercises_slicing'></a>Exercises\nAlphabet Slices\n\nStore the first ten letters of the alphabet in a list.\nUse a slice to print out the first three letters of the alphabet.\nUse a slice to print out any three letters from the middle of your list.\nUse a slice to print out the letters from any point in the middle of your list, to the end.\n\nProtected List\n\nYour goal in this exercise is to prove that copying a list protects the original list.\nMake a list with three people's names in it.\nUse a slice to make a copy of the entire list.\nAdd at least two new names to the new copy of the list.\nMake a loop that prints out all of the names in the original list, along with a message that this is the original list.\nMake a loop that prints out all of the names in the copied list, along with a message that this is the copied list.", "from string import ascii_lowercase\nprint(ascii_lowercase)\ntenletters=ascii_lowercase[0:10]\nprint(tenletters)\n\n# Ex 3.13 : Alphabet Slices\n#Store the first ten letters of the alphabet in a list.\nalphabet=tenletters[:]\n#Use a slice to print out the first three letters of the alphabet.\nprint(alphabet[:3])\n#Use a slice to print out any three letters from the middle of your list.\nprint(alphabet[6:9])\n#Use a slice to print out the letters from any point in the middle of your list, to the end.\nprint(alphabet[6:])\n# put your code here\n\n# Ex 3.14 : Protected List\n#Your goal in this 
exercise is to prove that copying a list protects the original list.\n#Make a list with three people's names in it.\nnames=['alice','anna','ada']\n#Use a slice to make a copy of the entire list.\ncopied_names=names[:]\n#Add at least two new names to the new copy of the list.\ncopied_names.append('agata')\ncopied_names.append('aurora')\n#Make a loop that prints out all of the names in the original list, along with a message that this is the original list.\nprint('This is the original list:')\nfor name in names:\n print(name.title())\n#Make a loop that prints out all of the names in the copied list, along with a message that this is the copied list.\nprint('This is the copy: ')\nfor cname in copied_names:\n print(cname.title())\nprint(copied_names)\n\n#title the names in the original list\nprint (names)\ncopied_names = [i.title() for i in copied_names]\nprint(copied_names)", "top\n<a name='numerical_lists'></a>Numerical Lists\nThere is nothing special about lists of numbers, but there are some functions you can use to make working with numerical lists more efficient. Let's make a list of the first ten numbers, and start working with it to see how we can use numbers in a list.", "# Print out the first ten numbers.\nnumbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n\nfor number in numbers:\n print(number)", "<a name='range_function'></a>The range() function\nThis works, but it is not very efficient if we want to work with a large set of numbers. The range() function helps us generate long lists of numbers. Here are two ways to do the same thing, using the range function.", "# Print the first ten numbers.\nfor number in range(1,11):\n print(number)", "The range function takes in a starting number, and an end number. You get all integers, up to but not including the end number. You can also add a step value, which tells the range function how big of a step to take between numbers:", "# Print the first ten odd numbers.\nfor number in range(1,21,2):\n print(number)", "If we want to store these numbers in a list, we can use the list() function. This function takes in a range, and turns it into a list:", "# Create a list of the first ten numbers.\nnumbers = list(range(1,11))\nprint(numbers)", "This is incredibly powerful; we can now create a list of the first million numbers, just as easily as we made a list of the first ten numbers. It doesn't really make sense to print the million numbers here, but we can show that the list really does have one million items in it, and we can print the last ten items to show that the list is correct.", "# Store the first million numbers in a list.\nnumbers = list(range(1,1000001))\n\n# Show the length of the list:\nprint(\"The list 'numbers' has \" + str(len(numbers)) + \" numbers in it.\")\n\n# Show the last ten numbers:\nprint(\"\\nThe last ten numbers in the list are:\")\nfor number in numbers[-10:]:\n print(number)", "There are two things here that might be a little unclear. The expression\nstr(len(numbers))\n\ntakes the length of the numbers list, and turns it into a string that can be printed.\nThe expression \nnumbers[-10:]\n\ngives us a slice of the list. The index -1 is the last item in the list, and the index -10 is the item ten places from the end of the list. So the slice numbers[-10:] gives us everything from that item to the end of the list.\n<a name='min_max_sum'></a>The min(), max(), and sum() functions\nThere are three functions you can easily use with numerical lists. 
As you might expect, the min() function returns the smallest number in the list, the max() function returns the largest number in the list, and the sum() function returns the total of all numbers in the list.", "ages = [23, 16, 14, 28, 19, 11, 38]\n\nyoungest = min(ages)\noldest = max(ages)\ntotal_years = sum(ages)\n\nprint(\"Our youngest reader is \" + str(youngest) + \" years old.\")\nprint(\"Our oldest reader is \" + str(oldest) + \" years old.\")\nprint(\"Together, we have \" + str(total_years) + \n \" years worth of life experience.\")", "<a name='exercises_numerical'></a>Exercises\nFirst Twenty\n\nUse the range() function to store the first twenty numbers (1-20) in a list, and print them out.\n\nLarger Sets\n\nTake the first_twenty.py program you just wrote. Change your end number to a much larger number. How long does it take your computer to print out the first million numbers? (Most people will never see a million numbers scroll before their eyes. You can now see this!)\n\nFive Wallets\n\nImagine five wallets with different amounts of cash in them. Store these five values in a list, and print out the following sentences:\n\"The fattest wallet has $ value in it.\"\n\"The skinniest wallet has $ value in it.\"\n\"All together, these wallets have $ value in them.\"", "# Ex 3.15 : First Twenty\ntwenties=list(range(1,21))\nfor n in twenties:\n print(n)\n# put your code here\n\n# Ex 3.16 : Larger Sets\nmillions=list(range(0,int(1e6)))\nlen(millions)\n\n# put your code here\n\n# Ex 3.17 : Five Wallets\n#Imagine five wallets with different amounts of cash in them. Store these five values in a list, \n#and print out the following sentences:\n#\"The fattest wallet has $ value in it.\" - \"The skinniest wallet has $ value in it.\"\n#\"All together, these wallets have $ value in them.\"\n\nfrom random import randint\n\nwallets = [ [randint(1,100) for _ in range(randint(2,10))] for _ in range(5) ]\nprint(wallets)\namounts = [ sum(wallet) for wallet in wallets ]\nprint(amounts)\nprint('The fattest wallet has {} in it'.format(max(amounts)))\nprint('All together, these wallets have {} value in them'.format(sum(amounts)))\nprint('The thinnest wallet has {} in it'.format(min(amounts)))", "top\n<a name='comprehensions'></a>List Comprehensions\nI thought carefully before including this section. If you are brand new to programming, list comprehensions may look confusing at first. They are a shorthand way of creating and working with lists. It is good to be aware of list comprehensions, because you will see them in other people's code, and they are really useful when you understand how to use them. That said, if they don't make sense to you yet, don't worry about using them right away. When you have worked with enough lists, you will want to use comprehensions. For now, it is good enough to know they exist, and to recognize them when you see them. If you like them, go ahead and start trying to use them now.\n<a name='comprehensions_numerical'></a>Numerical Comprehensions\nLet's consider how we might make a list of the first ten square numbers. We could do it like this:", "# Store the first ten square numbers in a list.\n# Make an empty list that will hold our square numbers.\nsquares = []\n\n# Go through the first ten numbers, square them, and add them to our list.\nfor number in range(1,11):\n new_square = number**2\n squares.append(new_square)\n \n# Show that our list is correct.\nfor square in squares:\n print(square)", "This should make sense at this point. 
If it doesn't, go over the code with these thoughts in mind:\n- We make an empty list called squares that will hold the values we are interested in.\n- Using the range() function, we start a loop that will go through the numbers 1-10.\n- Each time we pass through the loop, we find the square of the current number by raising it to the second power.\n- We add this new value to our list squares.\n- We go through our newly-defined list and print out each square.\nNow let's make this code more efficient. We don't really need to store the new square in its own variable new_square; we can just add it directly to the list of squares. The line\nnew_square = number**2\n\nis taken out, and the next line takes care of the squaring:", "# Store the first ten square numbers in a list.\n# Make an empty list that will hold our square numbers.\nsquares = []\n\n# Go through the first ten numbers, square them, and add them to our list.\nfor number in range(1,11):\n squares.append(number**2)\n \n# Show that our list is correct.\nfor square in squares:\n print(square)", "List comprehensions allow us to collapse the first three lines of code into one line. Here's what it looks like:", "# Store the first ten square numbers in a list.\nsquares = [number**2 for number in range(1,11)]\n\n# Show that our list is correct.\nfor square in squares:\n print(square)", "It should be pretty clear that this code is more efficient than our previous approach, but it may not be clear what is happening. Let's take a look at everything that is happening in that first line:\nWe define a list called squares.\nLook at the second part of what's in square brackets:\nfor number in range(1,11)\n\nThis sets up a loop that goes through the numbers 1-10, storing each value in the variable number. Now we can see what happens to each number in the loop:\nnumber**2\n\nEach number is raised to the second power, and this is the value that is stored in the list we defined. We might read this line in the following way:\nsquares = [raise number to the second power, for each number in the range 1-10]\nAnother example\nIt is probably helpful to see a few more examples of how comprehensions can be used. Let's try to make the first ten even numbers, the longer way:", "# Make an empty list that will hold the even numbers.\nevens = []\n\n# Loop through the numbers 1-10, double each one, and add it to our list.\nfor number in range(1,11):\n evens.append(number*2)\n \n# Show that our list is correct:\nfor even in evens:\n print(even)", "Here's how we might think of doing the same thing, using a list comprehension:\nevens = [multiply each number by 2, for each number in the range 1-10]\nHere is the same line in code:", "# Make a list of the first ten even numbers.\nevens = [number*2 for number in range(1,11)]\n\nfor even in evens:\n print(even)", "<a name='comprehensions_non_numerical'></a>Non-numerical comprehensions\nWe can use comprehensions with non-numerical lists as well. In this case, we will create an initial list, and then use a comprehension to make a second list from the first one. 
Here is a simple example, without using comprehensions:", "# Consider some students.\nstudents = ['bernice', 'aaron', 'cody']\n\n# Let's turn them into great students.\ngreat_students = []\nfor student in students:\n great_students.append(student.title() + \" the great!\")\n\n# Let's greet each great student.\nfor great_student in great_students:\n print(\"Hello, \" + great_student)", "To use a comprehension in this code, we want to write something like this:\ngreat_students = [add 'the great' to each student, for each student in the list of students]\nHere's what it looks like:", "# Consider some students.\nstudents = ['bernice', 'aaron', 'cody']\n\n# Let's turn them into great students.\ngreat_students = [student.title() + \" the great!\" for student in students]\n\n# Let's greet each great student.\nfor great_student in great_students:\n print(\"Hello, \" + great_student)", "<a name='exercises_comprehensions'></a>Exercises\nIf these examples are making sense, go ahead and try to do the following exercises using comprehensions. If not, try the exercises without comprehensions. You may figure out how to use comprehensions after you have solved each exercise the longer way.\nMultiples of Ten\n\nMake a list of the first ten multiples of ten (10, 20, 30... 90, 100). There are a number of ways to do this, but try to do it using a list comprehension. Print out your list.\n\nCubes\n\nWe saw how to make a list of the first ten squares. Make a list of the first ten cubes (1, 8, 27... 1000) using a list comprehension, and print them out.\n\nAwesomeness\n\nStore five names in a list. Make a second list that adds the phrase \"is awesome!\" to each name, using a list comprehension. Print out the awesome version of the names.\n\nWorking Backwards\n\n\nWrite out the following code without using a list comprehension:\nplus_thirteen = [number + 13 for number in range(1,11)]", "# Ex 3.18 : Multiples of Ten\n\n# put your code here\n\n# Ex 3.19 : Cubes\n\n# put your code here\n\n# Ex 3.20 : Awesomeness\n\n# put your code here\n\n# Ex 3.21 : Working Backwards\n\n# put your code here", "top\n<a name='strings_as_lists'></a>Strings as Lists\nNow that you have some familiarity with lists, we can take a second look at strings. A string is really a list of characters, so many of the concepts from working with lists behave the same with strings.\n<a name='list_of_characters'></a>Strings as a list of characters\nWe can loop through a string using a for loop, just like we loop through a list:", "message = \"Hello!\"\n\nfor letter in message:\n print(letter)", "We can create a list from a string. The list will have one element for each character in the string:", "message = \"Hello world!\"\n\nmessage_list = list(message)\nprint(message_list)", "<a name='slicing_strings'></a>Slicing strings\nWe can access any character in a string by its position, just as we access individual items in a list:", "message = \"Hello World!\"\nfirst_char = message[0]\nlast_char = message[-1]\n\nprint(first_char, last_char)", "We can extend this to take slices of a string:", "message = \"Hello World!\"\nfirst_three = message[:3]\nlast_three = message[-3:]\n\nprint(first_three, last_three)", "<a name='finding_substrings'></a>Finding substrings\nNow that you have seen what indexes mean for strings, we can search for substrings. 
A substring is a series of characters that appears in a string.\nYou can use the in keyword to find out whether a particular substring appears in a string:", "message = \"I like cats and dogs.\"\ndog_present = 'dog' in message\nprint(dog_present)", "If you want to know where a substring appears in a string, you can use the find() method. The find() method tells you the index at which the substring begins.", "message = \"I like cats and dogs.\"\ndog_index = message.find('dog')\nprint(dog_index)", "Note, however, that this function only returns the index of the first appearance of the substring you are looking for. If the substring appears more than once, you will miss the other substrings.", "message = \"I like cats and dogs, but I'd much rather own a dog.\"\ndog_index = message.find('dog')\nprint(dog_index)", "If you want to find the last appearance of a substring, you can use the rfind() function:", "message = \"I like cats and dogs, but I'd much rather own a dog.\"\nlast_dog_index = message.rfind('dog')\nprint(last_dog_index)", "<a name='replacing_substrings'></a>Replacing substrings\nYou can use the replace() function to replace any substring with another substring. To use the replace() function, give the substring you want to replace, and then the substring you want to replace it with. You also need to store the new string, either in the same string variable or in a new variable.", "message = \"I like cats and dogs, but I'd much rather own a dog.\"\nmessage = message.replace('dog', 'snake')\nprint(message)", "<a name='counting_substrings'></a>Counting substrings\nIf you want to know how many times a substring appears within a string, you can use the count() method.", "message = \"I like cats and dogs, but I'd much rather own a dog.\"\nnumber_dogs = message.count('dog')\nprint(number_dogs)", "<a name='splitting_strings'></a>Splitting strings\nStrings can be split into a set of substrings when they are separated by a repeated character. If a string consists of a simple sentence, the string can be split based on spaces. The split() function returns a list of substrings. The split() function takes one argument, the character that separates the parts of the string.", "message = \"I like cats and dogs, but I'd much rather own a dog.\"\nwords = message.split(' ')\nprint(words)", "Notice that the punctuation is left in the substrings.\nIt is more common to split strings that are really lists, separated by something like a comma. The split() function gives you an easy way to turn comma-separated strings, which you can't do much with in Python, into lists. Once you have your data in a list, you can work with it in much more powerful ways.", "animals = \"dog, cat, tiger, mouse, liger, bear\"\n\n# Rewrite the string as a list, and store it in the same variable\nanimals = animals.split(',')\nprint(animals)", "Notice that in this case, the spaces are also ignored. It is a good idea to test the output of the split() function and make sure it is doing what you want with the data you are interested in.\nOne use of this is to work with spreadsheet data in your Python programs. Most spreadsheet applications allow you to dump your data into a comma-separated text file. You can read this file into your Python program, or even copy and paste from the text file into your program file, and then turn the data into a list. 
You can then process your spreadsheet data using a for loop.\n<a name='other_string_methods'></a>Other string methods\nThere are a number of other string methods that we won't go into right here, but you might want to take a look at them. Most of these methods should make sense to you at this point. You might not have use for any of them right now, but it is good to know what you can do with strings. This way you will have a sense of how to solve certain problems, even if it means referring back to the list of methods to remind yourself how to write the correct syntax when you need it.\n<a name='exercises_strings_as_lists'></a>Exercises\nListing a Sentence\n\nStore a single sentence in a variable. Use a for loop to print each character from your sentence on a separate line.\n\nSentence List\n\nStore a single sentence in a variable. Create a list from your sentence. Print your raw list (don't use a loop, just print the list).\n\nSentence Slices\n\nStore a sentence in a variable. Using slices, print out the first five characters, any five consecutive characters from the middle of the sentence, and the last five characters of the sentence.\n\nFinding Python\n\nStore a sentence in a variable, making sure you use the word Python at least twice in the sentence.\nUse the in keyword to prove that the word Python is actually in the sentence.\nUse the find() function to show where the word Python first appears in the sentence.\nUse the rfind() function to show the last place Python appears in the sentence.\nUse the count() function to show how many times the word Python appears in your sentence.\nUse the split() function to break your sentence into a list of words. Print the raw list, and use a loop to print each word on its own line.\nUse the replace() function to change Python to Ruby in your sentence.", "# Ex 3.22 : Listing a Sentence\n\n# put your code here\n\n# Ex 3.23 : Sentence List\n\n# put your code here\n\n# Ex 3.24 : Sentence Slices\n\n# put your code here\n\n# Ex 3.25 : Finding Python\n\n# put your code here", "<a name='challenges_strings_as_lists'></a>Challenges\nCounting DNA Nucleotides\n\nProject Rosalind is a problem set based on biotechnology concepts. It is meant to show how programming skills can help solve problems in genetics and biology.\nIf you have understood this section on strings, you have enough information to solve the first problem in Project Rosalind, Counting DNA Nucleotides. Give the sample problem a try.\nIf you get the sample problem correct, log in and try the full version of the problem!\n\nTranscribing DNA into RNA\n\nYou also have enough information to try the second problem, Transcribing DNA into RNA. Solve the sample problem.\nIf you solved the sample problem, log in and try the full version!\n\nComplementing a Strand of DNA\n\nYou guessed it, you can now try the third problem as well: Complementing a Strand of DNA. Try the sample problem, and then try the full version if you are successful.", "# Challenge: Counting DNA Nucleotides\n\n# Put your code here\n\n# Challenge: Transcribing DNA into RNA\n\n# Put your code here\n\n# Challenge: Complementing a Strand of DNA\n\n# Put your code here", "top\n<a name='tuples'></a>Tuples\nTuples are basically lists that can never be changed. Lists are quite dynamic; they can grow as you append and insert items, and they can shrink as you remove items. You can modify any element you want to in a list. 
Sometimes we like this behavior, but other times we may want to ensure that no user or no part of a program can change a list. That's what tuples are for.\nTechnically, lists are mutable objects and tuples are immutable objects. Mutable objects can change (think of mutations), and immutable objects can not change.\n<a name='defining_tuples'></a>Defining tuples, and accessing elements\nYou define a tuple just like you define a list, except you use parentheses instead of square brackets. Once you have a tuple, you can access individual elements just like you can with a list, and you can loop through the tuple with a for loop:", "colors = ('red', 'green', 'blue')\nprint(\"The first color is: \" + colors[0])\n\nprint(\"\\nThe available colors are:\")\nfor color in colors:\n print(\"- \" + color)", "If you try to add something to a tuple, you will get an error:", "colors = ('red', 'green', 'blue')\ncolors.append('purple')", "The same kind of thing happens when you try to remove something from a tuple, or modify one of its elements. Once you define a tuple, you can be confident that its values will not change.\n<a name='tuples_strings'></a>Using tuples to make strings\nWe have seen that it is pretty useful to be able to mix raw English strings with values that are stored in variables, as in the following:", "animal = 'dog'\nprint(\"I have a \" + animal + \".\")", "This was especially useful when we had a series of similar statements to make:", "animals = ['dog', 'cat', 'bear']\nfor animal in animals:\n print(\"I have a \" + animal + \".\")", "I like this approach of using the plus sign to build strings because it is fairly intuitive. We can see that we are adding several smaller strings together to make one longer string. This is intuitive, but it is a lot of typing. There is a shorter way to do this, using placeholders.\nPython ignores most of the characters we put inside of strings. There are a few characters that Python pays attention to, as we saw with strings such as \"\\t\" and \"\\n\". Python also pays attention to \"%s\" and \"%d\". These are placeholders. When Python sees the \"%s\" placeholder, it looks ahead and pulls in the first argument after the % sign:", "animal = 'dog'\nprint(\"I have a %s.\" % animal)", "This is a much cleaner way of generating strings that include values. We compose our sentence all in one string, and then tell Python what values to pull into the string, in the appropriate places.\nThis is called string formatting, and it looks the same when you use a list:", "animals = ['dog', 'cat', 'bear']\nfor animal in animals:\n print(\"I have a %s.\" % animal)", "If you have more than one value to put into the string you are composing, you have to pack the values into a tuple:", "animals = ['dog', 'cat', 'bear']\nprint(\"I have a %s, a %s, and a %s.\" % (animals[0], animals[1], animals[2]))", "String formatting with numbers\nIf you recall, printing a number with a string can cause an error:", "number = 23\nprint(\"My favorite number is \" + number + \".\")", "Python knows that you could be talking about the value 23, or the characters '23'. So it throws an error, forcing us to clarify that we want Python to treat the number as a string. We do this by casting the number into a string using the str() function:", "number = 23\nprint(\"My favorite number is \" + str(number) + \".\")", "The format string \"%d\" takes care of this for us. 
Watch how clean this code is:", "number = 23\nprint(\"My favorite number is %d.\" % number)", "If you want to use a series of numbers, you pack them into a tuple just like we saw with strings:", "numbers = [7, 23, 42]\nprint(\"My favorite numbers are %d, %d, and %d.\" % (numbers[0], numbers[1], numbers[2]))", "Just for clarification, look at how much longer the code is if you use concatenation instead of string formatting:", "numbers = [7, 23, 42]\nprint(\"My favorite numbers are \" + str(numbers[0]) + \", \" + str(numbers[1]) + \", and \" + str(numbers[2]) + \".\")", "You can mix string and numerical placeholders in any order you want.", "names = ['eric', 'ever']\nnumbers = [23, 2]\nprint(\"%s's favorite number is %d, and %s's favorite number is %d.\" % (names[0].title(), numbers[0], names[1].title(), numbers[1]))", "There are more sophisticated ways to do string formatting in Python 3, but we will save that for later because it's a bit less intuitive than this approach. For now, you can use whichever approach consistently gets you the output that you want to see.\n<a name='tuples_exercises'></a>Exercises\nGymnast Scores\n\nA gymnast can earn a score between 1 and 10 from each judge; nothing lower, nothing higher. All scores are integer values; there are no decimal scores from a single judge.\nStore the possible scores a gymnast can earn from one judge in a tuple.\nPrint out the sentence, \"The lowest possible score is ___, and the highest possible score is ___.\" Use the values from your tuple.\nPrint out a series of sentences, \"A judge can give a gymnast ___ points.\"\nDon't worry if your first sentence reads \"A judge can give a gymnast 1 points.\"\nHowever, you get 1000 bonus internet points if you can use a for loop, and have correct grammar. hint\n\n\n\nRevision with Tuples\n\nChoose a program you have already written that uses string concatenation.\nSave the program with the same filename, but add _tuple.py to the end. For example, gymnast_scores.py becomes gymnast_scores_tuple.py.\nRewrite your string sections using %s and %d instead of concatenation.\nRepeat this with two other programs you have already written.", "# Ex 3.26 : Gymnast Scores\n\n# put your code here\n\n# Ex 3.27 : Revision with Tuples\n\n# put your code here", "top\n<a name='sets'></a>Sets\nA set object is an unordered collection of distinct hashable objects. Common uses include membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference.", "shapes = ['circle','square','triangle','circle']\nset_of_shapes = set(shapes)\nset_of_shapes\n\nshapes = {'circle','square','triangle','circle'}\nfor shape in set_of_shapes:\n print(shape)\n\nset_of_shapes.add('polygon') \nprint(set_of_shapes)", "Exists (Check)", "# Test if circle is IN the set (i.e. exist)\nprint('Circle is in the set: ', ('circle' in set_of_shapes))\nprint('Rhombus is in the set:', ('rhombus' in set_of_shapes))", "Operations", "favourites_shapes = set(['circle','triangle','hexagon'])\n\n# Intersection\nset_of_shapes.intersection(favourites_shapes)\n\n# Union\nset_of_shapes.union(favourites_shapes)\n\n# Difference\nset_of_shapes.difference(favourites_shapes)", "<a name='challenges_overall'></a>Overall Challenges\nProgramming Words\n\nMake a list of the most important words you have learned in programming so far. You should have terms such as list,\nMake a corresponding list of definitions. 
Fill your list with 'definition'.\nUse a for loop to print out each word and its corresponding definition.\nMaintain this program until you get to the section on Python's Dictionaries.", "# Overall Challenges: Programming Words\n\n# Put your code here", "top\nHints\nThese are placed at the bottom, so you can have a chance to solve exercises without seeing any hints.\n<a name='hints_gymnast_scores'></a>\nGymnast Scores\n\nHint: Use a slice." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chrislit/abydos
binder/Reversed Metaphone using Keras seq2seq.ipynb
gpl-3.0
[ "Reversed Metaphone using Keras seq2seq\nThis is a quick demo just for fun, demonstrating a Keras seq2seq model 'translating' from Double Metaphone back to German surnames.\nThis notebook based on the Keras example found at https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py. The only changes are the data (now a set of 8000 German surnames) and the use of Double Metaphone to transform those surnames.", "from keras.models import Model\nfrom keras.layers import Input, LSTM, Dense\nimport numpy as np\n\nfrom abydos.phonetic import DoubleMetaphone\ndm = DoubleMetaphone()\n\nbatch_size = 64 # Batch size for training.\nepochs = 100 # Number of epochs to train for.\nlatent_dim = 256 # Latent dimensionality of the encoding space.\nnum_samples = 10000 # Number of samples to train on.", "The data is the nachnamen.csv file from the tests/corpora directory. Only the first field, which contains the surnames, is used.", "data_path = '../tests/corpora/nachnamen.csv'", "Below, as each line is read from the file, the first field is retained and the first Double Metaphone encoding is calculated. For training, the Double Metaphone is the input and the original name is the target value.", "# Vectorize the data.\ninput_texts = []\ntarget_texts = []\ninput_characters = set()\ntarget_characters = set()\nwith open(data_path, 'r', encoding='utf-8') as f:\n lines = f.read().split('\\n')\nfor line in lines[: min(num_samples, len(lines) - 1)]:\n target_text = line.split(',')[0]\n input_text = dm.encode(target_text)[0]\n # We use \"tab\" as the \"start sequence\" character\n # for the targets, and \"\\n\" as \"end sequence\" character.\n target_text = '\\t' + target_text + '\\n'\n input_texts.append(input_text)\n target_texts.append(target_text)\n for char in input_text:\n if char not in input_characters:\n input_characters.add(char)\n for char in target_text:\n if char not in target_characters:\n target_characters.add(char)\n\ninput_characters = sorted(list(input_characters))\ntarget_characters = sorted(list(target_characters))\nnum_encoder_tokens = len(input_characters)\nnum_decoder_tokens = len(target_characters)\nmax_encoder_seq_length = max([len(txt) for txt in input_texts])\nmax_decoder_seq_length = max([len(txt) for txt in target_texts])\n\nprint('Number of samples:', len(input_texts))\nprint('Number of unique input tokens:', num_encoder_tokens)\nprint('Number of unique output tokens:', num_decoder_tokens)\nprint('Max sequence length for inputs:', max_encoder_seq_length)\nprint('Max sequence length for outputs:', max_decoder_seq_length)\n\ninput_token_index = dict(\n [(char, i) for i, char in enumerate(input_characters)])\ntarget_token_index = dict(\n [(char, i) for i, char in enumerate(target_characters)])\n\nencoder_input_data = np.zeros(\n (len(input_texts), max_encoder_seq_length, num_encoder_tokens),\n dtype='float32')\ndecoder_input_data = np.zeros(\n (len(input_texts), max_decoder_seq_length, num_decoder_tokens),\n dtype='float32')\ndecoder_target_data = np.zeros(\n (len(input_texts), max_decoder_seq_length, num_decoder_tokens),\n dtype='float32')\n\nfor i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):\n for t, char in enumerate(input_text):\n encoder_input_data[i, t, input_token_index[char]] = 1.\n for t, char in enumerate(target_text):\n # decoder_target_data is ahead of decoder_input_data by one timestep\n decoder_input_data[i, t, target_token_index[char]] = 1.\n if t > 0:\n # decoder_target_data will be ahead by one timestep\n # and will not include 
the start character.\n decoder_target_data[i, t - 1, target_token_index[char]] = 1.\n\n# Define an input sequence and process it.\nencoder_inputs = Input(shape=(None, num_encoder_tokens))\nencoder = LSTM(latent_dim, return_state=True)\nencoder_outputs, state_h, state_c = encoder(encoder_inputs)\n# We discard `encoder_outputs` and only keep the states.\nencoder_states = [state_h, state_c]\n\n# Set up the decoder, using `encoder_states` as initial state.\ndecoder_inputs = Input(shape=(None, num_decoder_tokens))\n# We set up our decoder to return full output sequences,\n# and to return internal states as well. We don't use the\n# return states in the training model, but we will use them in inference.\ndecoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)\ndecoder_outputs, _, _ = decoder_lstm(decoder_inputs,\n initial_state=encoder_states)\ndecoder_dense = Dense(num_decoder_tokens, activation='softmax')\ndecoder_outputs = decoder_dense(decoder_outputs)\n\n# Define the model that will turn\n# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`\nmodel = Model([encoder_inputs, decoder_inputs], decoder_outputs)", "Train and save the model below:", "# Run training\nmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy')\nmodel.fit([encoder_input_data, decoder_input_data], decoder_target_data,\n batch_size=batch_size,\n epochs=epochs,\n validation_split=0.2)\n# Save model\nmodel.save('s2s.h5')\n\n# Next: inference mode (sampling).\n# Here's the drill:\n# 1) encode input and retrieve initial decoder state\n# 2) run one step of decoder with this initial state\n# and a \"start of sequence\" token as target.\n# Output will be the next target token\n# 3) Repeat with the current target token and current states\n\n# Define sampling models\nencoder_model = Model(encoder_inputs, encoder_states)\n\ndecoder_state_input_h = Input(shape=(latent_dim,))\ndecoder_state_input_c = Input(shape=(latent_dim,))\ndecoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]\ndecoder_outputs, state_h, state_c = decoder_lstm(\n decoder_inputs, initial_state=decoder_states_inputs)\ndecoder_states = [state_h, state_c]\ndecoder_outputs = decoder_dense(decoder_outputs)\ndecoder_model = Model(\n [decoder_inputs] + decoder_states_inputs,\n [decoder_outputs] + decoder_states)\n\n# Reverse-lookup token index to decode sequences back to\n# something readable.\nreverse_input_char_index = dict(\n (i, char) for char, i in input_token_index.items())\nreverse_target_char_index = dict(\n (i, char) for char, i in target_token_index.items())\n\ndef decode_sequence(input_seq):\n # Encode the input as state vectors.\n states_value = encoder_model.predict(input_seq)\n\n # Generate empty target sequence of length 1.\n target_seq = np.zeros((1, 1, num_decoder_tokens))\n # Populate the first character of target sequence with the start character.\n target_seq[0, 0, target_token_index['\\t']] = 1.\n\n # Sampling loop for a batch of sequences\n # (to simplify, here we assume a batch of size 1).\n stop_condition = False\n decoded_sentence = ''\n while not stop_condition:\n output_tokens, h, c = decoder_model.predict(\n [target_seq] + states_value)\n\n # Sample a token\n sampled_token_index = np.argmax(output_tokens[0, -1, :])\n sampled_char = reverse_target_char_index[sampled_token_index]\n decoded_sentence += sampled_char\n\n # Exit condition: either hit max length\n # or find stop character.\n if (sampled_char == '\\n' or\n len(decoded_sentence) > max_decoder_seq_length):\n stop_condition 
= True\n\n # Update the target sequence (of length 1).\n target_seq = np.zeros((1, 1, num_decoder_tokens))\n target_seq[0, 0, sampled_token_index] = 1.\n\n # Update states\n states_value = [h, c]\n\n return decoded_sentence", "Finally, some decoded examples are shown below. The decoded sequences are generally German-like surnames (some quite common) and are names that would be encoded to the input sequence by Double Metaphone.", "for seq_index in range(100):\n # Take one sequence (part of the training set)\n # for trying out decoding.\n input_seq = encoder_input_data[seq_index: seq_index + 1]\n decoded_sentence = decode_sequence(input_seq)\n print('Input sequence:', input_texts[seq_index])\n print('Decoded sequence:', decoded_sentence)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
0x4a50/udacity-0x4a50-deep-learning-nanodegree
language-translation/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n \n source_id_text = []\n for line in source_text.split('\\n'):\n source_id_text.append([source_vocab_to_int[word] for word in line.split()])\n\n target_id_text = []\n for line in target_text.split('\\n'):\n target_id_text.append([target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']])\n \n return (source_id_text, target_id_text)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. 
If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n inputs = tf.placeholder(tf.int32, shape=(None, None), name=\"input\")\n targets = tf.placeholder(tf.int32, shape=(None, None))\n learning_rate = tf.placeholder(tf.float32, shape=(), name=\"learning_rate\")\n keep_prob = tf.placeholder(tf.float32, shape=(), name=\"keep_prob\")\n target_sequence_length = tf.placeholder(tf.int32, shape=(None,), name=\"target_sequence_length\")\n max_target_length = tf.reduce_max(target_sequence_length, name=\"max_target_len\")\n source_sequence_length= tf.placeholder(tf.int32, shape=(None,), name=\"source_sequence_length\")\n \n return (inputs, targets, learning_rate, keep_prob, target_sequence_length,\n max_target_length, source_sequence_length)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.", "def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n sliced = tf.strided_slice(target_data, [0,0], [batch_size, -1], strides=[1,1])\n return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), sliced], 1)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()", "from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n \n embedding = tf.contrib.layers.embed_sequence(\n rnn_inputs, vocab_size=source_vocab_size, embed_dim=encoding_embedding_size\n )\n stacked_lstm = tf.contrib.rnn.MultiRNNCell(\n [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), keep_prob) for _ in range(num_layers)]\n )\n \n output, state = tf.nn.dynamic_rnn(\n stacked_lstm,\n embedding,\n sequence_length=source_sequence_length,\n dtype=tf.float32\n )\n return output, state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN 
THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n \n helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)\n decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, initial_state=encoder_state, output_layer=output_layer)\n return tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)[0]\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target vocabulary\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')\n helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)\n decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer=output_layer)\n return tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)[0]\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, 
dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n \n dec_cells = tf.contrib.rnn.MultiRNNCell(\n [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), keep_prob) for _ in range(num_layers)]\n )\n \n output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))\n \n with tf.variable_scope('decode'):\n training_decoder_logits = decoding_layer_train(\n encoder_state,\n dec_cells,\n dec_embed_input,\n target_sequence_length,\n max_target_sequence_length,\n output_layer,\n keep_prob\n )\n with tf.variable_scope('decode', reuse=True):\n infer_decoder_logits = decoding_layer_infer(\n encoder_state,\n dec_cells,\n dec_embeddings,\n target_vocab_to_int['<GO>'],\n target_vocab_to_int['<EOS>'],\n max_target_sequence_length,\n target_vocab_size,\n output_layer,\n batch_size,\n keep_prob\n )\n \n return (training_decoder_logits, infer_decoder_logits)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.", "def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence 
Lengths of target sequences in the batch\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n \n enc_layer = encoding_layer(\n input_data,\n rnn_size,\n num_layers,\n keep_prob,\n source_sequence_length,\n source_vocab_size,\n enc_embedding_size\n )\n \n dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n \n return decoding_layer(\n dec_input,\n enc_layer[1],\n target_sequence_length,\n max_target_sentence_length,\n rnn_size,\n num_layers,\n target_vocab_to_int,\n target_vocab_size,\n batch_size,\n keep_prob,\n dec_embedding_size\n )\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to state how many steps between each debug output statement", "# Number of Epochs\nepochs = 2\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 128\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 128\ndecoding_embedding_size = 128\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.7\ndisplay_step = 10", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n 
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Batch and pad the source and target sequences", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n 
target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n sentence = sentence.lower()\n word_ids = []\n for word in sentence.split():\n if word in vocab_to_int:\n word_ids.append(vocab_to_int[word])\n else:\n word_ids.append(vocab_to_int[\"<UNK>\"])\n return word_ids\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n", "Imperfect Translation\nYou might notice that some sentences translate better than others. 
Since the dataset you're using only has a vocabulary of 227 English words out of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and covers a much richer range of topics. However, it will take days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hunterherrin/phys202-2015-work
assignments/assignment03/NumpyEx04.ipynb
mit
[ "Numpy Exercise 4\nImports", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "Complete graph Laplacian\nIn discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.\nA Complete Graph $K_n$ on $n$ nodes has an edge that connects each node to every other node.\nHere is $K_5$:", "import networkx as nx\nK_5=nx.complete_graph(5)\nnx.draw(K_5)", "The Laplacian Matrix is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.\nThe degree matrix for $K_n$ is an $n \\times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.", "def complete_deg(n):\n \"\"\"Return the integer valued degree matrix D for the complete graph K_n.\"\"\"\n z=np.zeros((n,n), dtype=int)\n np.fill_diagonal(z,(n-1))\n return z\n\n\nD = complete_deg(5)\nassert D.shape==(5,5)\nassert D.dtype==np.dtype(int)\nassert np.all(D.diagonal()==4*np.ones(5))\nassert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))", "The adjacency matrix for $K_n$ is an $n \\times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.", "def complete_adj(n):\n \"\"\"Return the integer valued adjacency matrix A for the complete graph K_n.\"\"\"\n u = np.zeros((n,n), dtype=int)\n u = u + 1\n np.fill_diagonal(u,0)\n return u\n\nA = complete_adj(5)\nassert A.shape==(5,5)\nassert A.dtype==np.dtype(int)\nassert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))", "Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.", "def Laplacian(n):\n return complete_deg(n) - complete_adj(n)\n\nfor n in range(1,10):\n print(np.linalg.eigvals(Laplacian(n)))", "YOUR ANSWER HERE\nAs $n$ increases the pattern stays the same: one eigenvalue is always (numerically) zero, while every other eigenvalue equals $n$, and the eigenvalue $n$ appears $n-1$ times. Conjecture: the general Laplace spectrum of $K_n$ consists of the eigenvalue $0$ with multiplicity $1$ and the eigenvalue $n$ with multiplicity $n-1$." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/mlops-on-gcp
immersion/kubeflow_pipelines/cicd/solutions/lab-03.ipynb
apache-2.0
[ "CI/CD for a KFP pipeline\nLearning Objectives:\n1. Learn how to create a custom Cloud Build builder to pilot CAIP Pipelines\n1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP\n1. Learn how to set up a Cloud Build GitHub trigger to rebuild the KFP\nIn this lab you will walk through the authoring of a Cloud Build CI/CD workflow that automatically builds and deploys a KFP pipeline. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.\nConfiguring environment settings\nUpdate the ENDPOINT constant with the settings reflecting your lab environment.\nThe endpoint of the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.\n\nOpen the SETTINGS for your instance\nUse the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.", "ENDPOINT = '<YOUR_ENDPOINT>'\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]", "Creating the KFP CLI builder\nReview the Dockerfile describing the KFP CLI builder", "!cat kfp-cli/Dockerfile", "Build the image and push it to your project's Container Registry.", "IMAGE_NAME='kfp-cli'\nTAG='latest'\nIMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)\n\n!gcloud builds submit --timeout 15m --tag {IMAGE_URI} kfp-cli", "Understanding the Cloud Build workflow\nReview the cloudbuild.yaml file to understand how the CI/CD workflow is implemented and how environment-specific settings are abstracted using Cloud Build variables.\nThe CI/CD workflow automates the steps you walked through manually during lab-02-kfp-pipeline:\n1. Builds the trainer image\n1. Builds the base image for custom components\n1. Compiles the pipeline\n1. Uploads the pipeline to the KFP environment\n1. Pushes the trainer and base images to your project's Container Registry\nAlthough the KFP backend supports pipeline versioning, this feature has not yet been enabled through the KFP CLI. As a temporary workaround, the Cloud Build configuration appends the value of the TAG_NAME variable to the name of the pipeline.\nThe Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates the KFP CLI.\nManually triggering CI/CD runs\nYou can manually trigger Cloud Build runs using the gcloud builds submit command.", "SUBSTITUTIONS=\"\"\"\n_ENDPOINT={},\\\n_TRAINER_IMAGE_NAME=trainer_image,\\\n_BASE_IMAGE_NAME=base_image,\\\nTAG_NAME=test,\\\n_PIPELINE_FOLDER=.,\\\n_PIPELINE_DSL=covertype_training_pipeline.py,\\\n_PIPELINE_PACKAGE=covertype_training_pipeline.yaml,\\\n_PIPELINE_NAME=covertype_continuous_training,\\\n_RUNTIME_VERSION=1.15,\\\n_PYTHON_VERSION=3.7,\\\n_USE_KFP_SA=True,\\\n_COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/\n\"\"\".format(ENDPOINT).strip()\n\n!gcloud builds submit . --config cloudbuild.yaml --substitutions {SUBSTITUTIONS}", "Setting up GitHub integration\nIn this exercise you integrate your CI/CD workflow with GitHub, using the Cloud Build GitHub App.\nYou will set up a trigger that starts the CI/CD workflow when a new tag is applied to the GitHub repo managing the pipeline source code. 
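When the trigger fires, Cloud Build exposes the applied tag through the built-in TAG_NAME substitution, and, as described above, the cloudbuild.yaml configuration appends that value to the pipeline name. As an illustrative example only (the exact separator depends on how cloudbuild.yaml composes the name), applying a tag such as v1.0 would register a pipeline named along the lines of covertype_continuous_training_v1.0 in your AI Platform Pipelines environment. 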
You will use a fork of this repo as your source GitHub repository.\nCreate a fork of this repo\nFollow the GitHub documentation to fork this repo\nCreate a Cloud Build trigger\nConnect the fork you created in the previous step to your Google Cloud project and create a trigger following the steps in the Creating GitHub app trigger article. Use the following values on the Edit trigger form:\n|Field|Value|\n|-----|-----|\n|Name|[YOUR TRIGGER NAME]|\n|Description|[YOUR TRIGGER DESCRIPTION]|\n|Event| Tag|\n|Source| [YOUR FORK]|\n|Tag (regex)|.*|\n|Build Configuration|Cloud Build configuration file (yaml or json)|\n|Cloud Build configuration file location| ./immersion/kubeflow_pipelines/cicd/solutions/cloudbuild.yaml|\nUse the following values for the substitution variables:\n|Variable|Value|\n|--------|-----|\n|_BASE_IMAGE_NAME|base_image|\n|_COMPONENT_URL_SEARCH_PREFIX|https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/|\n|_ENDPOINT|[Your inverting proxy host]|\n|_PIPELINE_DSL|covertype_training_pipeline.py|\n|_PIPELINE_FOLDER|immersion/kubeflow_pipelines/cicd/solutions|\n|_PIPELINE_NAME|covertype_training_deployment|\n|_PIPELINE_PACKAGE|covertype_training_pipeline.yaml|\n|_PYTHON_VERSION|3.7|\n|_RUNTIME_VERSION|1.15|\n|_TRAINER_IMAGE_NAME|trainer_image|\n|_USE_KFP_SA|False|\nTrigger the build\nTo start an automated build create a new release of the repo in GitHub. Alternatively, you can start the build by applying a tag using git. \ngit tag [TAG NAME]\ngit push origin --tags\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/mpi-esm-1-2-lr/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: MPI-M\nSource ID: MPI-ESM-1-2-LR\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:17\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/feature_engineering/labs/4_keras_adv_feat_eng.ipynb
apache-2.0
[ "Advanced Feature Engineering in Keras\nLearning Objectives\n\nProcess temporal feature columns in Keras\nUse Lambda layers to perform feature engineering on geolocation features \nCreate bucketized and crossed feature columns\n\nOverview\nIn this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides. \nWe will start by importing the necessary libraries for this lab.", "import datetime\nimport logging\nimport os\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow import feature_column as fc\nfrom tensorflow.keras import layers, models\n\n# set TF error log verbosity\nlogging.getLogger(\"tensorflow\").setLevel(logging.ERROR)\n\nprint(tf.version.VERSION)", "Load taxifare dataset\nThe Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict. \nLet's check that the files look like we expect them to.", "!ls -l ../data/taxi-*.csv\n\n!head ../data/taxi-*.csv", "Create an input pipeline\nTypically, you will use a two step proces to build the pipeline. Step one is to define the columns of data; i.e., which column we're predicting for, and the default values. Step 2 is to define two functions - a function to define the features and label you want to use and a function to load the training data. Also, note that pickup_datetime is a string and we will need to handle this in our feature engineered model.", "CSV_COLUMNS = [\n \"fare_amount\",\n \"pickup_datetime\",\n \"pickup_longitude\",\n \"pickup_latitude\",\n \"dropoff_longitude\",\n \"dropoff_latitude\",\n \"passenger_count\",\n \"key\",\n]\nLABEL_COLUMN = \"fare_amount\"\nSTRING_COLS = [\"pickup_datetime\"]\nNUMERIC_COLS = [\n \"pickup_longitude\",\n \"pickup_latitude\",\n \"dropoff_longitude\",\n \"dropoff_latitude\",\n \"passenger_count\",\n]\nDEFAULTS = [[0.0], [\"na\"], [0.0], [0.0], [0.0], [0.0], [0.0], [\"na\"]]\nDAYS = [\"Sun\", \"Mon\", \"Tue\", \"Wed\", \"Thu\", \"Fri\", \"Sat\"]\n\n# A function to define features and labesl\ndef features_and_labels(row_data):\n for unwanted_col in [\"key\"]:\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label\n\n\n# A utility method to create a tf.data dataset from a Pandas Dataframe\ndef load_dataset(pattern, batch_size=1, mode=\"eval\"):\n dataset = tf.data.experimental.make_csv_dataset(\n pattern, batch_size, CSV_COLUMNS, DEFAULTS\n )\n dataset = dataset.map(features_and_labels) # features, label\n if mode == \"train\":\n dataset = dataset.shuffle(1000).repeat()\n # take advantage of multi-threading; 1=AUTOTUNE\n dataset = dataset.prefetch(1)\n return dataset", "Create a Baseline DNN Model in Keras\nNow let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a linear regression baseline model with no feature engineering. 
Recall that a baseline model gives us a simple reference point against which to compare our feature-engineered models.", "# Build a simple Keras DNN using its Functional API\ndef rmse(y_true, y_pred): # Root mean square error\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_dnn_model():\n # input layer\n inputs = {\n colname: layers.Input(name=colname, shape=(), dtype=\"float32\")\n for colname in NUMERIC_COLS\n }\n\n # feature_columns\n feature_columns = {\n colname: fc.numeric_column(colname) for colname in NUMERIC_COLS\n }\n\n # Constructor for DenseFeatures takes a list of numeric columns\n dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)\n\n # two hidden layers of [32, 8] just like in the BQML DNN\n h1 = layers.Dense(32, activation=\"relu\", name=\"h1\")(dnn_inputs)\n h2 = layers.Dense(8, activation=\"relu\", name=\"h2\")(h1)\n\n # final output is a linear activation because this is regression\n output = layers.Dense(1, activation=\"linear\", name=\"fare\")(h2)\n model = models.Model(inputs, output)\n\n # compile model\n model.compile(optimizer=\"adam\", loss=\"mse\", metrics=[rmse, \"mse\"])\n\n return model", "We'll build our DNN model and inspect the model architecture.", "model = build_dnn_model()\n\ntf.keras.utils.plot_model(\n model, \"dnn_model.png\", show_shapes=False, rankdir=\"LR\"\n)", "Train the model\nTo train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.\nWe start by setting up the training parameters, creating the input pipeline datasets, and then training our baseline DNN model.", "TRAIN_BATCH_SIZE = 32\nNUM_TRAIN_EXAMPLES = 7333 * 30\nNUM_EVALS = 30\nNUM_EVAL_EXAMPLES = 1571\n\ntrainds = load_dataset(\"../data/taxi-train*\", TRAIN_BATCH_SIZE, \"train\")\nevalds = load_dataset(\"../data/taxi-valid*\", 1000, \"eval\").take(\n NUM_EVAL_EXAMPLES // 1000\n)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nhistory = model.fit(\n trainds,\n validation_data=evalds,\n epochs=NUM_EVALS,\n steps_per_epoch=steps_per_epoch,\n)", "Visualize the model loss curve\nNext, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and validation (orange) sets.", "def plot_curves(history, metrics):\n nrows = 1\n ncols = 2\n fig = plt.figure(figsize=(10, 5))\n\n for idx, key in enumerate(metrics):\n ax = fig.add_subplot(nrows, ncols, idx + 1)\n plt.plot(history.history[key])\n plt.plot(history.history[f\"val_{key}\"])\n plt.title(f\"model {key}\")\n plt.ylabel(key)\n plt.xlabel(\"epoch\")\n plt.legend([\"train\", \"validation\"], loc=\"upper left\");\n\nplot_curves(history, [\"loss\", \"mse\"])", "Predict with the model locally\nTo predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. 
Next, note the fare amount predicted for this geolocation and pickup_datetime.", "model.predict(\n {\n \"pickup_longitude\": tf.convert_to_tensor([-73.982683]),\n \"pickup_latitude\": tf.convert_to_tensor([40.742104]),\n \"dropoff_longitude\": tf.convert_to_tensor([-73.983766]),\n \"dropoff_latitude\": tf.convert_to_tensor([40.755174]),\n \"passenger_count\": tf.convert_to_tensor([3.0]),\n \"pickup_datetime\": tf.convert_to_tensor(\n [\"2010-02-08 09:17:00 UTC\"], dtype=tf.string\n ),\n },\n steps=1,\n)", "Improve Model Performance Using Feature Engineering\nWe now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation. \nTemporal Feature Columns\nExercise. Processing temporal feature columns in Keras\nWe incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature. (One possible completion of these helper functions is sketched at the end of this notebook.)", "def parse_datetime(s):\n if type(s) is not str:\n s = s.numpy().decode(\"utf-8\")\n return # TODO: Your code here\n\n\ndef get_dayofweek(s):\n ts = parse_datetime(s)\n return # TODO: Your code here\n\n\n@tf.function\ndef dayofweek(ts_in):\n return tf.map_fn(\n # TODO: Your code here,\n ts_in\n )", "Geolocation/Coordinate Feature Columns\nThe pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.\nRecall that latitude and longitude allow us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.\nComputing Euclidean distance\nThe dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.", "def euclidean(params):\n lon1, lat1, lon2, lat2 = params\n londiff = lon2 - lon1\n latdiff = lat2 - lat1\n return tf.sqrt(londiff * londiff + latdiff * latdiff)", "Scaling latitude and longitude\nIt is very important for numerical variables to get scaled before they are \"fed\" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation features. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.\nFirst, we create a function named 'scale_longitude', where we pass in all the longitudinal values and add 78 to each value. Note that our longitude values range from -78 to -70, so adding 78 shifts the minimum to 0. The delta or difference between -78 and -70 is 8. We add 78 to each longitudinal value and then divide by 8 to return a value scaled to the range 0 to 1.", "def scale_longitude(lon_column):\n return (lon_column + 78) / 8.0", "Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that our latitude values range from 37 to 45, so 37 is the minimum latitudinal value. The delta or difference between 37 and 45 is 8. We subtract 37 from each latitudinal value and then divide by 8 to return a scaled value.", "def scale_latitude(lat_column):\n return (lat_column - 37) / 8.0", "Putting it all together\nWe will use the \"euclidean\" function created above on our geolocation parameters. We then create a function called transform. The transform function passes our numerical and string column features as inputs to the model, scales the geolocation features, then creates the Euclidean distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features.\nExercise. We will use Lambda layers to create two new \"geo\" functions for our model.\nExercise. Creating the bucketized and crossed feature columns (a possible completion is sketched at the end of this notebook)", "def transform(inputs, numeric_cols, string_cols, nbuckets):\n print(f\"Inputs before features transformation: {inputs.keys()}\")\n\n # Pass-through columns\n transformed = inputs.copy()\n del transformed[\"pickup_datetime\"]\n\n feature_columns = {\n colname: tf.feature_column.numeric_column(colname)\n for colname in numeric_cols\n }\n\n # Scaling longitude from range [-78, -70] to [0, 1]\n # TODO: Your code here\n\n # Scaling latitude from range [37, 45] to [0, 1]\n # TODO: Your code here\n\n # add Euclidean distance\n transformed[\"euclidean\"] = layers.Lambda(euclidean, name=\"euclidean\")(\n [\n inputs[\"pickup_longitude\"],\n inputs[\"pickup_latitude\"],\n inputs[\"dropoff_longitude\"],\n inputs[\"dropoff_latitude\"],\n ]\n )\n feature_columns[\"euclidean\"] = fc.numeric_column(\"euclidean\")\n\n # create bucketized features\n latbuckets = np.linspace(0, 1, nbuckets).tolist()\n lonbuckets = np.linspace(0, 1, nbuckets).tolist()\n b_plat = fc.bucketized_column(\n # TODO: Your code here\n )\n b_dlat = fc.bucketized_column(\n # TODO: Your code here\n )\n b_plon = fc.bucketized_column(\n # TODO: Your code here\n )\n b_dlon = fc.bucketized_column(\n # TODO: Your code here\n )\n\n # create crossed columns\n ploc = fc.crossed_column(\n # TODO: Your code here\n )\n dloc = fc.crossed_column(\n # TODO: Your code here\n )\n pd_pair = fc.crossed_column(\n # TODO: Your code here\n )\n\n # create embedding columns\n feature_columns[\"pickup_and_dropoff\"] = fc.embedding_column(pd_pair, 100)\n\n print(f\"Transformed features: {transformed.keys()}\")\n print(f\"Feature columns: {feature_columns.keys()}\")\n return transformed, feature_columns", "Next, we'll create our DNN model, now with the engineered features. 
We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.", "NBUCKETS = 10\n\n\n# DNN MODEL\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_dnn_model():\n # input layer is all float except for pickup_datetime which is a string\n inputs = {\n colname: layers.Input(name=colname, shape=(), dtype=\"float32\")\n for colname in NUMERIC_COLS\n }\n inputs.update(\n {\n colname: tf.keras.layers.Input(\n name=colname, shape=(), dtype=\"string\"\n )\n for colname in STRING_COLS\n }\n )\n\n # transforms\n transformed, feature_columns = transform(\n inputs,\n numeric_cols=NUMERIC_COLS,\n string_cols=STRING_COLS,\n nbuckets=NBUCKETS,\n )\n dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)\n\n # two hidden layers of [32, 8] just like in the BQML DNN\n h1 = layers.Dense(32, activation=\"relu\", name=\"h1\")(dnn_inputs)\n h2 = layers.Dense(8, activation=\"relu\", name=\"h2\")(h1)\n\n # final output is a linear activation because this is regression\n output = layers.Dense(1, activation=\"linear\", name=\"fare\")(h2)\n model = models.Model(inputs, output)\n\n # Compile model\n model.compile(optimizer=\"adam\", loss=\"mse\", metrics=[rmse, \"mse\"])\n return model\n\nmodel = build_dnn_model()", "Let's see how our model architecture has changed now.", "tf.keras.utils.plot_model(\n model, \"dnn_model_engineered.png\", show_shapes=False, rankdir=\"LR\"\n)\n\ntrainds = load_dataset(\"../data/taxi-train*\", TRAIN_BATCH_SIZE, \"train\")\nevalds = load_dataset(\"../data/taxi-valid*\", 1000, \"eval\").take(\n NUM_EVAL_EXAMPLES // 1000\n)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nhistory = model.fit(\n trainds,\n validation_data=evalds,\n epochs=NUM_EVALS + 3,\n steps_per_epoch=steps_per_epoch,\n)", "As before, let's visualize the training and validation loss curves.", "plot_curves(history, [\"loss\", \"mse\"])", "Let's make a prediction with this new feature-engineered model on the example we used above.", "model.predict(\n {\n \"pickup_longitude\": tf.convert_to_tensor([-73.982683]),\n \"pickup_latitude\": tf.convert_to_tensor([40.742104]),\n \"dropoff_longitude\": tf.convert_to_tensor([-73.983766]),\n \"dropoff_latitude\": tf.convert_to_tensor([40.755174]),\n \"passenger_count\": tf.convert_to_tensor([3.0]),\n \"pickup_datetime\": tf.convert_to_tensor(\n [\"2010-02-08 09:17:00 UTC\"], dtype=tf.string\n ),\n },\n steps=1,\n)", "Below we summarize our training results comparing our baseline model with our model with engineered features.\n| Model | Taxi Fare | Description |\n|--------------------|-----------|-------------------------------------------|\n| Baseline | value? | Baseline model - no feature engineering |\n| Feature Engineered | value? | Feature Engineered Model |\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
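The temporal-feature exercise in this notebook leaves `parse_datetime`, `get_dayofweek` and `dayofweek` as TODOs. Below is a minimal sketch of one possible completion; it is not the lab's official solution. It assumes the timestamp format seen in the predict example ("2010-02-08 09:17:00 UTC") and reuses the `DAYS` list and the `datetime`/`tensorflow` imports defined at the top of the notebook.

```python
# One possible completion of the temporal-feature helpers (sketch, not the
# official lab solution). Assumes timestamps look like "2010-02-08 09:17:00 UTC".

def parse_datetime(s):
    if type(s) is not str:
        s = s.numpy().decode("utf-8")  # string tensors arrive as byte strings
    return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")


def get_dayofweek(s):
    ts = parse_datetime(s)
    # Python's weekday() is Mon=0..Sun=6, while DAYS starts at "Sun",
    # so shift by one to line the two conventions up.
    return DAYS[(ts.weekday() + 1) % 7]


@tf.function
def dayofweek(ts_in):
    # Wrap the pure-Python helper in tf.py_function so it can run on string
    # tensors, and map it over every timestamp in the batch.
    return tf.map_fn(
        lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
        ts_in,
    )
```

The resulting day-of-week string would then typically be turned into a categorical feature (for example with `fc.categorical_column_with_vocabulary_list` over `DAYS`) inside the transform step, but that wiring is part of the exercise and is not prescribed here.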
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
napsternxg/gensim
docs/notebooks/FastText_Tutorial.ipynb
gpl-3.0
[ "Using FastText via Gensim\nThis tutorial is about using fastText model in Gensim. There are two ways you can use fastText in Gensim - Gensim's native implementation of fastText and Gensim wrapper for fastText's original C++ code. Here, we'll learn to work with fastText library for training word-embedding models, saving & loading them and performing similarity operations & vector lookups analogous to Word2Vec.\nWhen to use FastText?\nThe main principle behind fastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings.\nfastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to be the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.\nAccording to a detailed comparison of Word2Vec and FastText in this notebook, fastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of training corpus increases.\nTraining time for fastText is significantly higher than the Gensim version of Word2Vec (15min 42s vs 6min 42s on text8, 17 mil tokens, 5 epochs, and a vector size of 100).\nfastText can be used to obtain vectors for out-of-vocabulary (OOV) words, by summing up vectors for its component char-ngrams, provided at least one of the char-ngrams was present in the training data.\nTraining models\nFor the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim) for training our model.\nFor using the wrapper for fastText, you need to have fastText setup locally to be able to train models. See installation instructions for fastText if you don't have fastText installed already.\nUsing Gensim's implementation of fastText", "from gensim.models.fasttext import FastText as FT_gensim\nfrom gensim.test.utils import datapath\n\n# Set file names for train and test data\ncorpus_file = datapath('lee_background.cor')\n\nmodel_gensim = FT_gensim(size=100)\n\n# build the vocabulary\nmodel_gensim.build_vocab(corpus_file=corpus_file)\n\n# train the model\nmodel_gensim.train(\n corpus_file=corpus_file, epochs=model_gensim.epochs,\n total_examples=model_gensim.corpus_count, total_words=model_gensim.corpus_total_words\n)\n\nprint(model_gensim)", "Using wrapper for fastText's C++ code", "from gensim.models.wrappers.fasttext import FastText as FT_wrapper\n\n# Set FastText home to the path to the FastText executable\nft_home = '/home/misha/src/fastText-0.1.0/fasttext'\n\n# train the model\nmodel_wrapper = FT_wrapper.train(ft_home, corpus_file)\n\nprint(model_wrapper)", "Training hyperparameters\nHyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec - \n - model: Training architecture. 
Allowed values: cbow, skipgram (Default cbow)\n - size: Size of embeddings to be learnt (Default 100)\n - alpha: Initial learning rate (Default 0.025)\n - window: Context window size (Default 5)\n - min_count: Ignore words with number of occurrences below this (Default 5)\n - loss: Training objective. Allowed values: ns, hs, softmax (Default ns)\n - sample: Threshold for downsampling higher-frequency words (Default 0.001)\n - negative: Number of negative words to sample, for ns (Default 5)\n - iter: Number of epochs (Default 5)\n - sorted_vocab: Sort vocab by descending frequency (Default 1)\n - threads: Number of threads to use (Default 12)\nIn addition, FastText has three additional parameters - \n - min_n: min length of char ngrams (Default 3)\n - max_n: max length of char ngrams (Default 6)\n - bucket: number of buckets used for hashing ngrams (Default 2000000)\nParameters min_n and max_n control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If max_n is set to 0, or to a value less than min_n, no character ngrams are used, and the model effectively reduces to Word2Vec.\nTo bound the memory requirements of the model being trained, a hashing function is used that maps ngrams to integers in 1 to K. For hashing these character sequences, the Fowler-Noll-Vo hashing function (FNV-1a variant) is employed.\nNote: As in the case of Word2Vec, you can continue to train your model while using Gensim's native implementation of fastText.\nSaving/loading models\nModels can be saved and loaded via the load and save methods.", "# saving a model trained via Gensim's fastText implementation\nmodel_gensim.save('saved_model_gensim')\nloaded_model = FT_gensim.load('saved_model_gensim')\nprint(loaded_model)\n\n# saving a model trained via fastText wrapper\nmodel_wrapper.save('saved_model_wrapper')\nloaded_model = FT_wrapper.load('saved_model_wrapper')\nprint(loaded_model)", "The save_word2vec_format method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model. \nWord vector lookup\nNote: Operations like word vector lookups and similarity queries can be performed in exactly the same manner for both the implementations of fastText, so they have been demonstrated using only the fastText wrapper here.\nFastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.", "print('night' in model_wrapper.wv.vocab)\nprint('nights' in model_wrapper.wv.vocab)\nprint(model_wrapper['night'])\nprint(model_wrapper['nights'])", "The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example -", "# Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data\ntry:\n    model_wrapper['axe']\nexcept KeyError:\n    #\n    # trap the error here so it does not interfere\n    # with the execution of the cells below\n    #\n    pass\nelse:\n    assert False, 'the above code should have raised a KeyError'", "The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary.
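That distinction exists because fastText can compose a vector for any word whose character ngrams overlap the training data. As a rough, self-contained illustration of the character ngram decomposition described earlier — a hypothetical helper written for this tutorial, not gensim's internal API — the following sketch enumerates the ngrams fastText would consider for a word:

def char_ngrams(word, min_n=3, max_n=6):
    # Enumerate character ngrams of the word, including the '<' and '>'
    # boundary markers that fastText wraps around each word.
    extended = "<" + word + ">"
    ngrams = []
    for n in range(min_n, max_n + 1):
        for i in range(len(extended) - n + 1):
            ngrams.append(extended[i:i + n])
    return ngrams

print(char_ngrams("nights"))  # ['<ni', 'nig', 'igh', 'ght', 'hts', 'ts>', '<nig', ...]

Because many of these ngrams also occur in words seen during training (such as 'night'), a vector can still be composed for 'nights' even if it never appeared in the corpus.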
To test whether a word is present in the training word vocabulary -", "# Tests if word present in vocab\nprint(\"word\" in model_wrapper.wv.vocab)\n# Tests if vector present for word\nprint(\"word\" in model_wrapper)", "Similarity operations\nSimilarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.", "print(\"nights\" in model_wrapper.wv.vocab)\nprint(\"night\" in model_wrapper.wv.vocab)\nmodel_wrapper.similarity(\"night\", \"nights\")", "Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here.\nOther similarity operations", "# The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only\nmodel_wrapper.most_similar(\"nights\")\n\nmodel_wrapper.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant'])\n\nmodel_wrapper.doesnt_match(\"breakfast cereal dinner lunch\".split())\n\nmodel_wrapper.most_similar(positive=['baghdad', 'england'], negative=['london'])\n\nmodel_wrapper.accuracy(questions=datapath('questions-words.txt'))\n\n# Word Movers distance\nsentence_obama = 'Obama speaks to the media in Illinois'.lower().split()\nsentence_president = 'The president greets the press in Chicago'.lower().split()\n\n# Remove their stopwords.\nfrom nltk.corpus import stopwords\nstopwords = stopwords.words('english')\nsentence_obama = [w for w in sentence_obama if w not in stopwords]\nsentence_president = [w for w in sentence_president if w not in stopwords]\n\n# Compute WMD.\ndistance = model_wrapper.wmdistance(sentence_obama, sentence_president)\ndistance" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]