| repo_name stringlengths 6–77 | path stringlengths 8–215 | license stringclasses 15 | cells list | types list |
|---|---|---|---|---|
monicathieu/cu-psych-r-tutorial
|
public/tutorials/python/3-data-manipulation/index.ipynb
|
mit
|
[
"title: \"Data Manipulation in Python\"\nsubtitle: \"CU Psych Scientific Computing Workshop\"\nweight: 1301\ntags: [\"core\", \"python\"]\n\nGoals of this lesson\nStudents will learn:\n\nHow to group and categorize data in Python\nHow to generative descriptive statistics in Python\n\nLinks to Files\nThe files for all tutorials can be downloaded from the Columbia Psychology Scientific Computing GitHub page. This particular file is located here: /content/tutorials/python/3-data-manipulation/index.ipynb.",
"# load packages we will be using for this lesson\nimport pandas as pd",
"0. Open dataset and load package\nThis dataset examines the relationship between multitasking and working memory. Link here to original paper by Uncapher et al. 2016.",
"# use pd.read_csv to open data into python\ndf = pd.read_csv(\"uncapher_2016_repeated_measures_dataset.csv\")",
"1. Familiarize yourself with the data\nQuick review from data cleaning: take a look at the basic data structure, number of rows and columns.",
"df.head()\n\ndf.shape\n\ndf.columns",
"2. Selecting relevant variables\nSometimes datasets have many variables that are unnecessary for a given analysis. To simplify your life, and your code, we can select only the given variables we'd like to use for now.",
"df = df[[\"subjNum\", \"groupStatus\", \"adhd\", \"hitRate\", \"faRate\", \"dprime\"]]\ndf.head()",
"3. Basic Descriptives\nSummarizing data\nLet's learn how to make simple tables of summary statistics.\nFirst, we will calculate summary info across all data using describe(), a useful function for creating summaries. Note that we're not creating a new object for this summary (i.e. not using the = symbol), so this will print but not save.",
"df.describe()",
"3. Grouping data\nNext, we will learn how to group data based on certain variables of interest.\nWe will use the groupby() function in pandas, which will automatically group any subsequent actions called on the data.",
"df.groupby([\"groupStatus\"]).mean()",
"We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking: high or low). \nWe will first make a factor for ADHD (median-split), and add it as a grouping variable using the qcut() function in pandas:",
"df[\"adhdF\"] = pd.qcut(df[\"adhd\"],q=2,labels=[\"Low\",\"High\"])",
"Then we'll check how evenly split these groups are by using groupby() the size() functions:",
"df.groupby([\"groupStatus\",\"adhdF\"]).size()",
"Then we'll calculate some summary info about these groups:",
"df.groupby([\"groupStatus\",\"adhdF\"]).mean()",
"A note on piping / stringing commands together\nIn R, we often use the pipe %>% to string a series of steps together. We can do the same in python with many functions in a row\nThis is how we're able to take the output of df.groupby([\"groupStatus\",\"adhdF\"]) and then send that output into the mean() function\n\n5. Extra: Working with a long dataset\nThis is a repeated measures (\"long\") dataset, with multiple rows per subject. This makes things a bit tricker, but we are going to show you some tools for how to work with \"long\" datasets.\nHow many unique subjects are in the data?",
"subList = df[\"subjNum\"].unique()\nnSubs = len(subList)\nnSubs",
"How many trials were there per subject?",
"nTrialsPerSubj = df.groupby([\"subjNum\"]).size().reset_index(name=\"nTrials\")\nnTrialsPerSubj.head()",
"Combine summary statistics with the full data frame\nFor some analyses, you might want to add a higher level variable (e.g. subject average hitRate) alongside your long data. We can do this by summarizing the data in a new data frame and then merging it with the full data.",
"avgHR = df.groupby([\"subjNum\"])[\"hitRate\"].mean().reset_index(name=\"avgHR\")\navgHR.head()\n\ndf = df.merge(avgHR,on=\"subjNum\")\ndf.head()",
"You should now have an avgHR column in df, which will repeat within each subject, but vary across subjects.\nNext: Plotting in Python"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
daviddesancho/PREFUR
|
examples/free_energy_model_local.ipynb
|
gpl-3.0
|
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set(style=\"ticks\", color_codes=True, font_scale=1.4)\nsns.set_style({\"xtick.direction\": \"in\", \"ytick.direction\": \"in\"})\n%matplotlib inline",
"Splitting stabilization energy\nWe start by importing the thermo module from the prefur package.",
"from prefur import thermo",
"Using the option enthalpy_global we can introduce different amounts of local and non-local stabilization energy in the total enthalpy of the protein. This allows for the exploration of different folding regimes (\"downhill\", two-state and high barrier) that determine the global kinetics.",
"fig, ax = plt.subplots(2,2, figsize=(7,5), sharex=True)\nax = ax.flatten()\n\nFES = thermo.FES(40)\nFES.gen_enthalpy_global(DHloc=1.31, DHnonloc=5.5)\nFES.gen_free(temp=298)\nax[0].plot(FES.nat, FES.DHo)\nax[0].plot(FES.nat, FES.DHo_loc, lw=1)\nax[0].plot(FES.nat, FES.DHo_nonloc, lw=1)\nax[1].plot(FES.nat, FES.DG, 'k', label='Barrier\\nlimited')\nax[1].set_ylim(-5,35)\nax[0].set_ylim(0,300)\nax[0].set_ylabel('$\\Delta H(n)$ (kJ/mol)', fontsize=14)\nax[1].set_ylabel('$\\Delta G(n)$ (kJ/mol)', fontsize=14)\n\nFES = thermo.FES(40)\nFES.gen_enthalpy_global(DHloc=3.1, DHnonloc=4.52)\nFES.gen_free(temp=298)\nax[2].plot(FES.nat, FES.DHo)\nax[2].plot(FES.nat, FES.DHo_loc, lw=1)\nax[2].plot(FES.nat, FES.DHo_nonloc, lw=1)\nax[3].plot(FES.nat, FES.DG, 'k', label='Downhill')\nax[3].set_ylim(-5,40)\nax[2].set_ylim(0,300)\nax[1].set_yticks(range(0,40,10))\nax[3].set_yticks(range(0,40,10))\nax[2].set_ylabel('$\\Delta H(n)$ (kJ/mol)', fontsize=14)\nax[3].set_ylabel('$\\Delta G(n)$ (kJ/mol)', fontsize=14)\nax[2].set_xlabel('$n$', fontsize=14)\nax[3].set_xlabel('$n$', fontsize=14)\n\nax[1].legend(loc=1, prop={'size': 12})\nax[3].legend(loc=1, prop={'size': 12})\n\nax[0].annotate(\"A\", xy=(-0.3, 0.95), fontsize=24, xycoords=ax[0].get_window_extent)\nax[2].annotate(\"B\", xy=(-0.3, 0.95), fontsize=24, xycoords=ax[2].get_window_extent)\n\nplt.tight_layout(h_pad=2, w_pad=0)\nfig.savefig(\"regimes.png\", dpi=300)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
andrewosh/notebooks
|
worker/notebooks/thunder/tutorials/thunder_context.ipynb
|
mit
|
[
"Thunder context\nThe ThunderContext is the entry point for loading data and interacting with remote services (e.g. Amazon).\nSetup plotting",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_context('notebook')\nfrom thunder import Colorize\nimage = Colorize.image",
"Construction\nA ThunderContext (you'll only need one) is automatically provided as the variable tsc when you start the interactive shell using the command line call thunder. It also be created manually, in two different ways, which can be useful when writing standalone analysis scripts (see examples in thunder.standalone). First, it can be created from an existing instance of a SparkContext:",
"from thunder import ThunderContext\ntsc = ThunderContext(sc)",
"Or it can be created directly using the same arguments provided to a SparkContext (we don't run this line here because you can't run multiple SparkContexts at once):\ntsc = ThunderContext.start(appName='myapp')\nLoading data\nThe primary methods for loading data are loadSeries and loadImages, for loading a Series or Images object, respectively. Here we show example syntax for loading two example data sets included with thunder, and in each case inspect the first element. (To use these example data sets, we'll first figure out their path on our system.) See the Input Format tutorial for more information on loading and data types.",
"import os.path as pth\ndatapath = pth.join(pth.dirname(pth.realpath(thunder.__file__)), 'utils/data/')\n\ndata = tsc.loadImages(datapath + '/mouse/images/', startIdx=0, stopIdx=10)\n\nimage(data.values().first())\n\ndata = tsc.loadSeries(datapath + '/iris/iris.bin', inputFormat='binary')\ndata.first()",
"Currently, loadImages can load tif, png, or binary images (or volumes) from a local file system, networked file system, Amazon S3, or Google Storage. loadSeries can load data from one or more text or binary files on a local file system, networked file system, Amazon S3, or HDFS.\nThe methods loadImagesFromArray and loadSeriesFromArray can be used to used to load data directly from numpy arrays.",
"from numpy import random\ndata = tsc.loadSeriesFromArray(random.randn(50,10))\n\ndata.nrecords\n\ndata.index\n\ndata = tsc.loadImagesFromArray(random.randn(50,10,10))\n\ndata.nrecords\n\ndata.dims.count",
"Finally, loadSeries can also load data stored in local arrays in either numpy npy or Matlab MAT format (if loading from a MAT file, you must additionally provide a variable name). This is especially useful for smaller local datasets, or for distributing a smaller data set for performing intensive computations. In the latter case, the number of partitions should be set to be approximately equal to 2-3 times the number of cores available on your cluster, so that different cores can work on different portions of the data.",
"data = tsc.loadSeries(datapath + '/iris/iris.mat', inputFormat='mat', varName='data', minPartitions=5)\ndata.first()\n\ndata = tsc.loadSeries(datapath + '/iris/iris.npy', inputFormat='npy', minPartitions=5)\ndata.first()",
"Loading examples\nThe makeExample method makes it easy to generate example data for testing purposes, by calling methods from the DataSets class:",
"data = tsc.makeExample('kmeans', k=2, ndims=10, nrecords=10, noise=0.5)\n\nfrom numpy import asarray\nts = data.collectValuesAsArray()\nplt.plot(ts.T);",
"You can see the list of available generated datasets by calling without an argument",
"tsc.makeExample()",
"The loadExample method directly loads one of the small example datasets. This are highly compressed and downsampled, and meant only to demonstrate basic functionality and help explore the API, not to represent anything meaningful about the data itself.",
"data = tsc.loadExample('mouse-images')\nimg = data.values().first()\n\nimage(img)\n\ndata = tsc.loadExample('fish-series')\nimg = data.seriesMean().pack()\n\nimage(img[:,:,0])",
"You can see the list of avaiable example data sets:",
"tsc.loadExample()",
"Example large data sets are available Amazon S3 through the CodeNeuro data repository. If you are running Thunder on an Amazon EC2 clsuter (see the instructions), these data sets can be can be loaded using the loadExampleS3 method. We show the operation without calling it here, because we assume this notebook is being run locally:\ndata, params = tsc.loadExampleS3('ahrens.lab/direction.selectivity')\nYou can also check the available data sets:",
"tsc.loadExampleS3()",
"Many of these data sets have notebooks associated with them for showing how to load the data"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
WomensCodingCircle/CodingCirclePython
|
Lesson14_NumpyAndMatplotlib/numpy.ipynb
|
mit
|
[
"Adapted from Scientific Python: Part 1 (lessons/thw-numpy/numpy.ipynb)\nIntroducing NumPy\nNumPy is a Python package implementing efficient collections of specific types of data (generally numerical), similar to the standard array\nmodule (but with many more features). NumPy arrays differ from lists and tuples in that the data is contiguous in memory. A Python list, \n[0, 1, 2], in contrast, is actually an array of pointers to Python objects representing each number. This allows NumPy arrays to be\nconsiderably faster for numerical operations than Python lists/tuples.",
"# by convention, we typically import numpy as the alias np\nimport numpy as np",
"Let's see what numpy can do.",
"#np?\n#np.",
"We can try out some of those constants and functions:",
"print((np.sqrt(4)))\nprint((np.pi)) # a constant\nprint((np.sin(np.pi)))",
"\"That's great,\" you're thinking. \"math already has all of those functions and constants.\" But that's not the real beauty of NumPy.\nTRY IT\nFind the square root of pi using numpy functions and constants\nNumpy arrays (ndarrays)\nCreating a NumPy array is as simple as passing a sequence to numpy.array:\nNumpy arrays are collections of things, all of which must be the same type, that work\nsimilarly to lists (as we've described them so far). The most important are:\n\nYou can easily perform elementwise operations (and matrix algebra) on arrays\nArrays can be n-dimensional\nArrays must be pre-allocated (ie, there is no equivalent to append)\n\nArrays can be created from existing collections such as lists, or instantiated \"from scratch\" in a \nfew useful ways.",
"arr1 = np.array([1, 2.3, 4]) \n# Type of a numpy array\nprint((type(arr1)))\n# Type of the data inside a numpy array dtype=data type\nprint((arr1.dtype)) ",
"TRY IT\nCreate an array from the list [0,1,2] and print out it's dtype\nDatatype options\nChoose your datatype based on how large the largest values could be, and how much memory you expect to use\n\nbool_ - Boolean (True or False) stored as a byte\nint_ - Default integer type (same as C long; normally either int64 or int32)\nint8 - Byte (-128 to 127)\nint16 - Integer (-32768 to 32767)\nint32 - Integer (-2147483648 to 2147483647)\nint64 - Integer (-9223372036854775808 to 9223372036854775807)\nuint8 - Unsigned integer (0 to 255)\nuint16 - Unsigned integer (0 to 65535)\nuint32 - Unsigned integer (0 to 4294967295)\nuint64 - Unsigned integer (0 to 18446744073709551615)\nfloat_ - Shorthand for float64.\nfloat16 - Half precision float: sign bit, 5 bits exponent, 10 bits mantissa\nfloat32 - Single precision float: sign bit, 8 bits exponent, 23 bits mantissa\nfloat64 - Double precision float: sign bit, 11 bits exponent, 52 bits mantissa\ncomplex_ - Shorthand for complex128.\ncomplex64 - Complex number, represented by two 32-bit floats (real and imaginary components)\ncomplex128 - Complex number, represented by two 64-bit floats (real and imaginary components)\n\nCreating Arrays\nThere are many other ways to create NumPy arrays, such as np.identity, np.zeros, np.zeros_like, np.ones, np.ones_like",
"print(('2 rows, 3 columns of zeros:\\n', np.zeros((2,3)))) \nprint(('4x4 identity matrix:\\n', np.identity(4)))\nsquared = []\nfor x in range(5):\n squared.append(x**2)\nprint(squared)\na = np.array(squared)\nb = np.zeros_like(a)\n\nprint(('a:\\n', a))\nprint(('b:\\n', b))",
"These arrays have attributes, like .ndim and .shape that tell us about the number and length of the dimensions.\nThe dimension of an array is the number of indices needed to select an element. Thus, if the array is seen as a function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete subset. Thus a one-dimensional array is a list of data, a two-dimensional array a rectangle of data, a three-dimensional array a block of data, etc.\nThe shape is the number of elements in each dimension of data",
"c = np.ones((15, 30))\nprint(('number of dimensions of c:', c.ndim)) \nprint(('length of c in each dimension:', c.shape))\n\nx = np.array([[[1,2,3],[4,5,6],[7,8,9]] , [[0,0,0],[0,0,0],[0,0,0]]])\nprint(('number of dimensions of x:', x.ndim)) \nprint(('length of x in each dimension:', x.shape))",
"NumPy has its own range() function, np.arange() (stands for array-range), that is more efficient for building larger arrays. It functions in much the same way as range().\nNumPy also has linspace() and logspace(), that can generate equally spaced samples between a start-point and an end-point. Find out more with np.linspace?.",
"print(\"Arange\")\nprint((np.arange(5)))\n\n# Args: start, stop, number of elements\nprint(\"Linspace\")\nprint((np.linspace(5, 10, 5)))\n\n# logspace can also take a base argument, by default it is 10\nprint(\"Logspace\")\nprint((np.logspace(0, 1, 5)))\nprint((np.logspace(0, 1, 5, base=2)))",
"TRY IT\nCreate a numpy array with 8 rows and 50 columns of 0's\nCreating numpy arrays from text files\nYou can use loadtxt to load data from a text file (csv or tab-delimited data)",
"np.loadtxt?",
"The simplest way to use it is to just give it a file name. By default, your data will be loaded as floats with whitespace being the delimiter\nmy_arr = np.loadtxt('myfile.txt')\nMore likely you will need to use some of the keyword arguments. like dtype, delimiter, skiprows, or usecols Docs available here: http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html\nmy_array = loadtxt('myfile.csv', usecols=[1,2,3,4,5,6,7,8,9,10,11,12], delimiter=',')",
"np.loadtxt('simple.csv', delimiter=',')",
"TRY IT\nLoad the file 'example.tsv' a tab delimited file. Once you have that working, only load the odd numbered columns (1,3,5).\nArithmetic with ndarrays\nStandard arithmetic operators perform element-wise operations on arrays of the same size.",
"A = np.arange(5)\nB = np.arange(5, 10)\n\nprint(('A', A))\nprint(('B', B))\n\nprint(('A+B', A+B))\nprint(('B-A', B-A))\nprint(('A*B', A*B))",
"In addition, if one of the arguments is a scalar, that value will be applied to all the elements of the array.\nscalar - a quantity possessing only magnitude. (In this case we mean a single number either an int or a float)",
"A = np.arange(5)\nprint(('A', A))\nprint(('A+10', A+10))\nprint(('2 * A', 2*A))\nprint(('A ** 2', A**2)) ",
"Linear algebra with arrays\nYou can use arrays as vectors and matrices in linear algebra operations\nSpecifically, you can perform matrix/vector multiplication between arrays, by using the .dot method, or the np.dot function\ndot product - the dot product between two vectors is based on the projection of one vector onto another.",
"print((A.dot(B)))\nprint((np.dot(A, B)))",
"If you are planning on doing serious linear algebra, you might be better off using the np.matrix object instead of np.array.\nNumpy 'gotchas'\nMultiplication and Addition\nAs you may have noticed above, since NumPy arrays are modeled more closely after vectors and matrices, multiplying by a scalar will multiply each element of the array, whereas multiplying a list by a scalar will repeat that list N times.",
"# Numpy arrays\nA = np.arange(5)*2\nprint(A)\n# Lists\nB = list(range(5))*2\nprint(B)",
"Similarly, when adding two numpy arrays together, we get the vector sum back, whereas when adding two lists together, we get the concatenation back.",
"# Numpy arrays\nA = np.arange(5) + np.arange(5)\nprint(A)\n# Lists\nB = list(range(5)) + list(range(5))\nprint(B)",
"Boolean operators work on arrays too, and they return boolean arrays\nMuch like the basic arithmetic operations we discussed above, comparison operations are performed element-wise. That is, rather than returning a\nsingle boolean, comparison operators compare each element in both arrays pairwise, and return an array of booleans (if the sizes of the input\narrays are incompatible, the comparison will simply return False). For example:",
"arr1 = np.array([1, 2, 3, 4, 5])\narr2 = np.array([1, 1, 3, 3, 5])\nprint((arr1 == arr2))\nc = (arr1 == arr2)\nprint((type(c)))\nprint((c.dtype))",
"You can get a portion of an array by using a boolean array as the index. It will return an array where only true values are returned",
"print(arr1)\nprint(c)\nprint((arr1[c]))",
"Note: You can use the methods .any() and .all() or the functions np.any and np.all to return a single boolean indicating whether any or all values in the array are True, respectively.",
"print((np.all(c)))\nprint((c.all()))\nprint((c.any()))",
"TRY IT\nCreate a boolean array for arr1 for where values are >= 3\nViews vs. Copies\nIn order to be as efficient as possible, numpy uses \"views\" instead of copies wherever possible. That is, numpy arrays derived from another base array generally refer to the ''exact same data'' as the base array. The consequence of this is that modification of these derived arrays will also modify the base array. The result of an array indexed by an array of indices is a ''copy'', but an array indexed by an array of booleans is a ''view''. \nSpecifically, slices of arrays are always views, unlike slices of lists or tuples, which are always copies.",
"A = np.arange(5)\nB = A[0:1]\nB[0] = 42\nprint(A)\n\nA = list(range(5))\nB = A[0:1]\nB[0] = 42\nprint(A)",
"Indexing arrays\nIn addition to the usual methods of indexing lists with an integer (or with a series of colon-separated integers for a slice), numpy allows you\nto index arrays in a wide variety of different ways for more advanced operations.\nFirst, the simple way:",
"a = np.array([1,2,3])\nprint((a[0:2]))",
"How can we index if the array has more than one dimension?",
"c = np.random.rand(3,3)\nprint(c)\nprint((c[1:3,0:2]))\nprint(a)\nc[0,:] = a\nprint(c)",
"TRY IT\nCreate a random 4x4 array, print out the second row, second column."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tdhopper/notes-on-dirichlet-processes
|
pages/2015-10-07-econtalk-topics.ipynb
|
mit
|
[
"Note: This is best viewed on NBViewer. It is part of a series on Dirichlet Processes and Nonparametric Bayes.\nNonparametric Latent Dirichlet Allocation\nAnalysis of the topics of Econtalk\nIn 2003, a groundbreaking statistical model called \"Latent Dirichlet Allocation\" was presented by David Blei, Andrew Ng, and Michael Jordan.\nLDA provides a method for summarizing the topics discussed in a document. LDA defines topics to be discrete probability distrbutions over words. For an introduction to LDA, see Edwin Chen's post.\nThe original LDA model requires the number of topics in the document to be specfied as a known parameter of the model. In 2005, Yee Whye Teh and others published a \"nonparametric\" version of this model that doesn't require the number of topics to be specified. This model uses a prior distribution over the topics called a hierarchical Dirichlet process. I wrote an introduction to this HDP-LDA model earlier this year.\nFor the last six months, I have been developing a Python-based Gibbs sampler for the HDP-LDA model. This is part of a larger library of \"robust, validated Bayesian nonparametric models for discovering structure in data\" known as Data Microscopes.\nThis notebook demonstrates the functionality of this implementation.\nThe Data Microscopes library is available on anaconda.org for Linux and OS X. microscopes-lda can be installed with:\n$ conda install -c datamicroscopes -c distributions microscopes-lda",
"%matplotlib inline\nimport pyLDAvis\nimport json\nimport sys\nimport cPickle\n\nfrom microscopes.common.rng import rng\nfrom microscopes.lda.definition import model_definition\nfrom microscopes.lda.model import initialize\nfrom microscopes.lda import utils\nfrom microscopes.lda import model, runner\n\nfrom numpy import genfromtxt \nfrom numpy import linalg\nfrom numpy import array",
"dtm.csv contains a document-term matrix representation of the words used in Econtalk transcripts. The columns of the matrix correspond to the words in vocab.txt. The rows in the matrix correspond to the show urls in urls.txt.\nOur LDA implementation takes input data as a list of lists of hashable objects (typically words). We can use a utility function to convert the document-term matrix to the list of tokenized documents.",
"vocab = genfromtxt('./econtalk-data/vocab.txt', delimiter=\",\", dtype='str').tolist()\ndtm = genfromtxt('./econtalk-data/dtm.csv', delimiter=\",\", dtype=int)\ndocs = utils.docs_from_document_term_matrix(dtm, vocab=vocab)\nurls = [s.strip() for s in open('./econtalk-data/urls.txt').readlines()]\n\ndtm.shape[1] == len(vocab)\n\ndtm.shape[0] == len(urls)",
"Here's a utility method to get the title of a webpage that we'll use later.",
"def get_title(url):\n \"\"\"Scrape webpage title\n \"\"\"\n import lxml.html\n t = lxml.html.parse(url)\n return t.find(\".//title\").text.split(\"|\")[0].strip()",
"Let's set up our model. First we created a model definition describing the basic structure of our data. Next we initialize an MCMC state object using the model definition, documents, random number generator, and hyper-parameters.",
"N, V = len(docs), len(vocab)\ndefn = model_definition(N, V)\nprng = rng(12345)\nstate = initialize(defn, docs, prng,\n vocab_hp=1,\n dish_hps={\"alpha\": .6, \"gamma\": 2})\nr = runner.runner(defn, docs, state, )",
"When we first create a state object, the words are randomly assigned to topics. Thus, our perplexity (model score) is quite high. After we start to run the MCMC, the score will drop quickly.",
"print \"randomly initialized model:\"\nprint \" number of documents\", defn.n\nprint \" vocabulary size\", defn.v\nprint \" perplexity:\", state.perplexity(), \"num topics:\", state.ntopics()",
"Run one iteration of the MCMC to make sure everything is working.",
"%%time\nr.run(prng, 1)",
"Now lets run 1000 generations of the MCMC.\nUnfortunately, MCMC is slow going.",
"%%time\nr.run(prng, 500)\n\nwith open('./econtalk-data/2015-10-07-state.pkl', 'w') as f:\n cPickle.dump(state, f)\n\n%%time\nr.run(prng, 500)\n\nwith open('./econtalk-data/2015-10-07-state.pkl', 'w') as f:\n cPickle.dump(state, f)",
"Now that we've run the MCMC, the perplexity has dropped significantly.",
"print \"after 1000 iterations:\"\nprint \" perplexity:\", state.perplexity(), \"num topics:\", state.ntopics()",
"pyLDAvis projects the topics into two dimensions using techniques described by Carson Sievert.",
"vis = pyLDAvis.prepare(**state.pyldavis_data())\npyLDAvis.display(vis)",
"We can extract the term relevance (shown in the right hand side of the visualization) right from our state object. Here are the 10 most relevant words for each topic:",
"relevance = state.term_relevance_by_topic()\nfor i, topic in enumerate(relevance):\n print \"topic\", i, \":\",\n for term, _ in topic[:10]:\n print term, \n print ",
"We could assign titles to each of these topics. For example, Topic 5 appears to be about the foundations of classical liberalism. Topic 6 is obviously Bitcoin and Software. Topic 0 is the financial system and monetary policy. Topic 4 seems to be generic words used in most episodes; unfortunately, the prevalence of \"don\" is a result of my preprocessing which splits up the contraction \"don't\".\nWe can also get the topic distributions for each document.",
"topic_distributions = state.topic_distribution_by_document()",
"Topic 5 appears to be about the theory of classical liberalism. Let's find the 20 episodes which have the highest proportion of words from that topic.",
"austrian_topic = 5\nfoundations_episodes = sorted([(dist[austrian_topic], url) for url, dist in zip(urls, topic_distributions)], reverse=True)\nfor url in [url for _, url in foundations_episodes][:20]:\n print get_title(url), url",
"We could also find the episodes that have notable discussion of both politics AND the financial system.",
"topic_a = 0\ntopic_b = 1\njoint_episodes = [url for url, dist in zip(urls, topic_distributions) if dist[0] > 0.18 and dist[1] > 0.18]\nfor url in joint_episodes:\n print get_title(url), url",
"We can look at the topic distributions as projections of the documents into a much lower dimension (16). \nWe can try to find shows that are similar by comparing the topic distributions of the documents.",
"def find_similar(url, max_distance=0.2):\n \"\"\"Find episodes most similar to input url.\n \"\"\"\n index = urls.index(url)\n for td, url in zip(topic_distributions, urls):\n if linalg.norm(array(topic_distributions[index]) - array(td)) < max_distance:\n print get_title(url), url",
"Which Econtalk episodes are most similar, in content, to \"Mike Munger on the Division of Labor\"?",
"find_similar('http://www.econtalk.org/archives/2007/04/mike_munger_on.html')",
"How about episodes similar to \"Kling on Freddie and Fannie and the Recent History of the U.S. Housing Market\"?",
"find_similar('http://www.econtalk.org/archives/2008/09/kling_on_freddi.html')",
"The model also gives us distributions over words for each topic.",
"word_dists = state.word_distribution_by_topic()",
"We can use this to find the topics a word is most likely to occur in.",
"def bars(x, scale_factor=10000):\n return int(x * scale_factor) * \"=\"\n\ndef topics_related_to_word(word, n=10):\n for wd, rel in zip(word_dists, relevance):\n score = wd[word]\n rel_words = ' '.join([w for w, _ in rel][:n]) \n if bars(score):\n print bars(score), rel_words",
"What topics are most likely to contain the word \"Munger\" (as in Mike Munger). The number of equal signs indicates the probability the word is generated by the topic. If a topic isn't shown, it's extremely unlikley to generate the word.",
"topics_related_to_word('munger')",
"Where does Munger come up? In discussing the moral foundations of classical liberalism and microeconomics!\nHow about the word \"lovely\"? Russ Roberts uses it often when talking about the Theory of Moral Sentiments. It looks like it also comes up when talking about schools.",
"topics_related_to_word('lovely')",
"If you have feedback on this implementation of HDP-LDA, you can reach me on Twitter or open an issue on Github."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ghvn7777/ghvn7777.github.io
|
content/fluent_python/13_operator.ipynb
|
apache-2.0
|
[
"我们本章会讨论:\n\nPython 如何处理终追运算符中不同类型的操作数\n使用鸭子类型或显式类型检查处理不同类型的操作数\n中缀运算符如何表明自己无法处理的操作数\n众多比较运算符(如 ==,>,<= 等等)的特殊行为\n增量赋值运算符(如 += )的默认处理方式和重载方式\n\n运算符重载基础\n在某些圈子里,运算符重载名声不太好,因为总被滥用.Python 加了一些限制,做好了灵活性,可用性和安全性的平衡\n\n不能重载内置运算符\n不能新建运算符,只能重载现有的\n某些运算符不能重载 -- is,and,or 和 not(不过位运算 &,| 和 ~ 可以)\n\n一元运算符\n- (__neg__) 一元取负运算符,如果 x 是 -2, -x == 2\n+ (__pos__) 一元取正运算符,通常 x == +x,但也有一些例外\n~ (__invert__) 对整数按位去饭,定义 ~x == -(x + 1),如果 x 是 2, ~x == -3\n支持一元操作符只需要实现相应的特殊方法,这些方法只有一个 self 参数,然后使用符合所在类的逻辑实现。不过,要遵守运算符的一个基本规则:始终返回一个新对象。也就是不能修改 self\n对于 - 和 + 来说,结果可能是与 self 属于同一类的实例,多数的时候, + 最好返回 self 的副本。abs(...) 的结果应该是一个标量,但是对于 ~ 来说,很难说明什么结果是合理的,因为可能处理的不是整数,例如 ORM 中,SQL WHERE 子句应该返回反集\n我们将把 - 和 + 运算符添加到第 10 章的例子中:",
"from array import array\nimport reprlib\nimport math\nimport numbers\nimport functools\nimport operator\nimport itertools\n\nclass Vector:\n typecode = 'd'\n \n def __init__(self, components):\n self._components = array(self.typecode, components) \n \n def __iter__(self):\n return iter(self._components)\n \n def __repr__(self):\n components = reprlib.repr(self._components) \n components = components[components.find('['):-1] \n return 'Vector({})'.format(components)\n \n def __str__(self):\n return str(tuple(self))\n \n def __bytes__(self):\n return (bytes([ord(self.typecode)]) + \n bytes(self._components)) \n \n def __eq__(self, other):\n return len(self) == len(other) and all(a == b for a, b in zip(self, other))\n \n def __abs__(self):\n return math.sqrt(sum(x * x for x in self)) \n \n def __bool__(self):\n return bool(abs(self))\n \n @classmethod\n def frombytes(cls, octets):\n typecode = chr(octets[0])\n memv = memoryview(octets[1:]).cast(typecode)\n return cls(memv) \n \n # everything above is the same as before\n def __len__(self):\n return len(self._components)\n\n def __getitem__(self, index):\n cls = type(self) # get the class of the instance\n if isinstance(index, slice):\n return cls(self._components[index])\n elif isinstance(index, numbers.Integral): # index is an int or some other integral type\n return self._components[index]\n else:\n msg = '{cls.__name__} indices must be integers'\n raise TypeError(msg.format(cls=cls)) \n\n shortcut_names = 'xyzt'\n\n def __getattr__(self, name):\n cls = type(self)\n\n if len(name) == 1:\n pos = cls.shortcut_names.find(name)\n if 0 <= pos < len(self._components):\n return self._components[pos]\n msg = '{.__name__!r} object has no attribute {!r}'\n raise AttributeError(msg.format(cls, name))\n \n def __setattr__(self, name, value):\n cls = type(self)\n if len(name) == 1:\n if name in cls.shortcut_names:\n error = 'readonly attribute {attr_name!r}'\n elif name.islower():\n error = \"can't set attributes 'a' to 'z' in {cls_name!r}\"\n else:\n error = ''\n if error:\n msg = error.format(cls_name=cls.__name__, attr_name=name) # handy: whichever error applies, the same call fills in the values\n raise AttributeError(msg)\n super().__setattr__(name, value) # default case: delegate to the superclass __setattr__ for the standard behavior\n \n def __hash__(self):\n hashes = (hash(x) for x in self._components) # a generator expression, not a list comprehension, to save memory\n return functools.reduce(operator.xor, hashes)\n\n def angle(self, n):\n r = math.sqrt(sum(x * x for x in self[n:]))\n a = math.atan2(r, self[n-1])\n if (n == len(self) - 1) and (self[-1] < 0):\n return math.pi * 2 - a\n else:\n return a\n \n def angles(self):\n return (self.angle(n) for n in range(1, len(self)))\n \n def __format__(self, fmt_spec=''):\n if fmt_spec.endswith('h'):\n fmt_spec = fmt_spec[:-1]\n coords = itertools.chain([abs(self)], # chain builds an iterator running seamlessly over the magnitude and the angular coordinates\n self.angles())\n outer_fmt = '<{}>' # hyperspherical coordinates\n \n else:\n coords = self\n outer_fmt = '({})' # Cartesian coordinates\n components = (format(c, fmt_spec) for c in coords)\n return outer_fmt.format(', '.join(components))\n \n \n def __neg__(self):\n return Vector(-x for x in self)\n \n def __pos__(self):\n return Vector(self)",
"Because Vector instances are iterable, and Vector.__init__ takes an iterable argument, our implementations of __neg__ and __pos__ are short and sweet\nWe don't plan to implement __invert__, so if a user tries ~v, Python raises TypeError.\nWhen x and +x are not equal\nIn Python, x == +x in nearly every case, but there are two cases in the standard library where x != +x\nThe first involves the decimal.Decimal class: if x is a Decimal instance created in an arithmetic context and +x is then evaluated in a different context, then x != +x. For example, x is created in a context with a certain precision, and the precision has changed by the time +x is computed, as shown below:",
"import decimal\nctx = decimal.getcontext()\nctx.prec = 40 # set the precision to 40\none_third = decimal.Decimal('1') / decimal.Decimal('3')\none_third\n\none_third == +one_third\n\nctx.prec = 28 # set the precision to 28\none_third == +one_third\n\n+one_third",
"Although each +one_third expression builds a new Decimal instance from the value of one_third, it does so using the precision of the current arithmetic context\nThe second case appears in the collections.Counter documentation. The Counter class implements several arithmetic operators; for example, infix + adds the tallies of two Counter instances. In practice, however, negative and zero counts are dropped from the result of Counter addition, and unary + is equivalent to adding an empty Counter: it produces a new Counter preserving only the tallies that are greater than zero.",
"from collections import Counter\n\nct = Counter('abracadabra')\nct\n\nct['r'] = -3\nct['d'] = 0\nct\n\n+ct",
"Overloading + for vector addition\nWe want Vector addition to support operands of any (and possibly different) lengths",
"from array import array\nimport reprlib\nimport math\nimport numbers\nimport functools\nimport operator\nimport itertools\n\nclass Vector:\n typecode = 'd'\n \n def __init__(self, components):\n self._components = array(self.typecode, components) \n \n def __iter__(self):\n return iter(self._components)\n \n def __repr__(self):\n components = reprlib.repr(self._components) \n components = components[components.find('['):-1] \n return 'Vector({})'.format(components)\n \n def __str__(self):\n return str(tuple(self))\n \n def __bytes__(self):\n return (bytes([ord(self.typecode)]) + \n bytes(self._components)) \n \n def __eq__(self, other):\n return len(self) == len(other) and all(a == b for a, b in zip(self, other))\n \n def __abs__(self):\n return math.sqrt(sum(x * x for x in self)) \n \n def __bool__(self):\n return bool(abs(self))\n \n @classmethod\n def frombytes(cls, octets):\n typecode = chr(octets[0])\n memv = memoryview(octets[1:]).cast(typecode)\n return cls(memv) \n \n # everything above is the same as before\n def __len__(self):\n return len(self._components)\n\n def __getitem__(self, index):\n cls = type(self) # get the class of the instance\n if isinstance(index, slice):\n return cls(self._components[index])\n elif isinstance(index, numbers.Integral): # index is an int or some other integral type\n return self._components[index]\n else:\n msg = '{cls.__name__} indices must be integers'\n raise TypeError(msg.format(cls=cls)) \n\n shortcut_names = 'xyzt'\n\n def __getattr__(self, name):\n cls = type(self)\n\n if len(name) == 1:\n pos = cls.shortcut_names.find(name)\n if 0 <= pos < len(self._components):\n return self._components[pos]\n msg = '{.__name__!r} object has no attribute {!r}'\n raise AttributeError(msg.format(cls, name))\n \n def __setattr__(self, name, value):\n cls = type(self)\n if len(name) == 1:\n if name in cls.shortcut_names:\n error = 'readonly attribute {attr_name!r}'\n elif name.islower():\n error = \"can't set attributes 'a' to 'z' in {cls_name!r}\"\n else:\n error = ''\n if error:\n msg = error.format(cls_name=cls.__name__, attr_name=name) # handy: whichever error applies, the same call fills in the values\n raise AttributeError(msg)\n super().__setattr__(name, value) # default case: delegate to the superclass __setattr__ for the standard behavior\n \n def __hash__(self):\n hashes = (hash(x) for x in self._components) # a generator expression, not a list comprehension, to save memory\n return functools.reduce(operator.xor, hashes)\n\n def angle(self, n):\n r = math.sqrt(sum(x * x for x in self[n:]))\n a = math.atan2(r, self[n-1])\n if (n == len(self) - 1) and (self[-1] < 0):\n return math.pi * 2 - a\n else:\n return a\n \n def angles(self):\n return (self.angle(n) for n in range(1, len(self)))\n \n def __format__(self, fmt_spec=''):\n if fmt_spec.endswith('h'):\n fmt_spec = fmt_spec[:-1]\n coords = itertools.chain([abs(self)], # chain builds an iterator running seamlessly over the magnitude and the angular coordinates\n self.angles())\n outer_fmt = '<{}>' # hyperspherical coordinates\n \n else:\n coords = self\n outer_fmt = '({})' # Cartesian coordinates\n components = (format(c, fmt_spec) for c in coords)\n return outer_fmt.format(', '.join(components))\n \n \n def __neg__(self):\n return Vector(-x for x in self)\n \n def __pos__(self):\n return Vector(self)\n \n \n def __add__(self, other):\n pairs = itertools.zip_longest(self, other, fillvalue=0.0)\n return Vector(a + b for a, b in pairs)",
"pairs is a generator that yields tuples (a, b), where a comes from self and b from other; if self and other have different lengths, fillvalue pads out the shorter of the two iterables\n\nWhen implementing unary and infix operators, never modify the operands; only augmented assignment expressions may change the first operand.\n\nOur addition now also supports objects other than Vector",
"v1 = Vector([3, 4, 5])\nv1 + [10, 20, 30]\n\nv2 = Vector([1, 2])\nv1 + v2",
"zip_longest(...) can consume any iterable, and the generator expression that builds the new Vector instance merely adds the value pairs produced by zip_longest (a + b), so any iterable producing numbers will do\nHowever, if we swap the operands, the mixed-type addition fails:",
"v1 = Vector([3, 4, 5])\n(10, 20, 30) + v1",
"To support operations involving operands of different types, Python implements a special dispatching mechanism for the infix operator special methods. For an expression a + b, the interpreter performs these steps:\n\n\nIf a has an __add__ method and it does not return NotImplemented, call a.__add__(b) and return the result\n\n\nIf a doesn't have __add__, or calling it returns NotImplemented, call b.__radd__(a) and return the result\n\n\nIf b doesn't have __radd__, or calling it returns NotImplemented, raise TypeError. (The r stands for reflected or reversed.)\n\n\nSo, to make mixed-type additions work correctly, we implement a Vector.__radd__ method, which serves as a fallback mechanism\n\nDon't confuse NotImplemented with NotImplementedError: the former is a special singleton value that an infix operator special method returns to the interpreter when it cannot handle a given operand, while the latter is an exception that stub methods in abstract classes raise to warn that subclasses must override them\n\nThe simplest possible __radd__ implementation is:",
"def __radd__(self, other):\n return self + other",
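The three-step dispatch described above can be traced with a small standalone example (Meters is a hypothetical toy class invented for illustration, not code from the book):

```python
class Meters:
    """Toy length type whose __add__ only understands other Meters."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.value + other.value)
        return NotImplemented            # step 1 fails: let the other operand try

    def __radd__(self, other):           # step 2: called as b.__radd__(a)
        if isinstance(other, (int, float)):
            return Meters(other + self.value)
        return NotImplemented            # step 3: Python will raise TypeError

m = Meters(3) + Meters(4)   # handled by Meters.__add__
n = 10 + Meters(4)          # int.__add__ returns NotImplemented,
                            # so Python calls Meters.__radd__(Meters(4), 10)
```

The second expression works only because `int.__add__` returns `NotImplemented` for a `Meters` operand, which makes the interpreter try the reflected call.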
"If the right operand of + is not iterable, the __add__ above cannot handle it, and the error message it produces is not very helpful",
"v1 + 1\n\nv1 + 'ABC'",
"The problem revealed above is worse than the obscure error message: if a special method cannot return a valid result because of type incompatibility, it should return NotImplemented rather than raise TypeError. By returning NotImplemented, the type of the other operand still gets a chance to perform the operation, because Python then tries the reversed method\nIn keeping with the spirit of duck typing, we refrain from testing the type of the other operand, or of its elements; instead we catch the exception and return NotImplemented. If the interpreter has not yet reversed the operands, it will try to do so; if the reversed method also returns NotImplemented, Python raises TypeError with a standard error message. Here is the final version of the addition special methods for Vector:",
"from array import array\nimport reprlib\nimport math\nimport numbers\nimport functools\nimport operator\nimport itertools\n\nclass Vector:\n typecode = 'd'\n \n def __init__(self, components):\n self._components = array(self.typecode, components) \n \n def __iter__(self):\n return iter(self._components)\n \n def __repr__(self):\n components = reprlib.repr(self._components) \n components = components[components.find('['):-1] \n return 'Vector({})'.format(components)\n \n def __str__(self):\n return str(tuple(self))\n \n def __bytes__(self):\n return (bytes([ord(self.typecode)]) + \n bytes(self._components)) \n \n def __eq__(self, other):\n return len(self) == len(other) and all(a == b for a, b in zip(self, other))\n \n def __abs__(self):\n return math.sqrt(sum(x * x for x in self)) \n \n def __bool__(self):\n return bool(abs(self))\n \n @classmethod\n def frombytes(cls, octets):\n typecode = chr(octets[0])\n memv = memoryview(octets[1:]).cast(typecode)\n return cls(memv) \n \n # everything above is the same as before\n def __len__(self):\n return len(self._components)\n\n def __getitem__(self, index):\n cls = type(self) # get the class of the instance\n if isinstance(index, slice):\n return cls(self._components[index])\n elif isinstance(index, numbers.Integral): # index is an int or some other integral type\n return self._components[index]\n else:\n msg = '{cls.__name__} indices must be integers'\n raise TypeError(msg.format(cls=cls)) \n\n shortcut_names = 'xyzt'\n\n def __getattr__(self, name):\n cls = type(self)\n\n if len(name) == 1:\n pos = cls.shortcut_names.find(name)\n if 0 <= pos < len(self._components):\n return self._components[pos]\n msg = '{.__name__!r} object has no attribute {!r}'\n raise AttributeError(msg.format(cls, name))\n \n def __setattr__(self, name, value):\n cls = type(self)\n if len(name) == 1:\n if name in cls.shortcut_names:\n error = 'readonly attribute {attr_name!r}'\n elif name.islower():\n error = \"can't set attributes 'a' to 'z' in {cls_name!r}\"\n else:\n error = ''\n if error:\n msg = error.format(cls_name=cls.__name__, attr_name=name) # handy: whichever error applies, the same call fills in the values\n raise AttributeError(msg)\n super().__setattr__(name, value) # default case: delegate to the superclass __setattr__ for the standard behavior\n \n def __hash__(self):\n hashes = (hash(x) for x in self._components) # a generator expression, not a list comprehension, to save memory\n return functools.reduce(operator.xor, hashes)\n\n def angle(self, n):\n r = math.sqrt(sum(x * x for x in self[n:]))\n a = math.atan2(r, self[n-1])\n if (n == len(self) - 1) and (self[-1] < 0):\n return math.pi * 2 - a\n else:\n return a\n \n def angles(self):\n return (self.angle(n) for n in range(1, len(self)))\n \n def __format__(self, fmt_spec=''):\n if fmt_spec.endswith('h'):\n fmt_spec = fmt_spec[:-1]\n coords = itertools.chain([abs(self)], # chain builds an iterator running seamlessly over the magnitude and the angular coordinates\n self.angles())\n outer_fmt = '<{}>' # hyperspherical coordinates\n \n else:\n coords = self\n outer_fmt = '({})' # Cartesian coordinates\n components = (format(c, fmt_spec) for c in coords)\n return outer_fmt.format(', '.join(components))\n \n \n def __neg__(self):\n return Vector(-x for x in self)\n \n def __pos__(self):\n return Vector(self)\n \n \n def __add__(self, other):\n try:\n pairs = itertools.zip_longest(self, other, fillvalue=0.0)\n return Vector(a + b for a, b in pairs)\n except TypeError:\n return NotImplemented\n \n def __radd__(self, other):\n return self + other",
"Overloading * for scalar multiplication\nWhat does Vector([1, 2, 3]) * x mean? If x is a number, it's a scalar product, and the result is a new Vector instance. The other meaning is the dot product of two vectors, i.e. the matrix product of a 1 × N and an N × 1 matrix.\nThe current practice in the NumPy library is not to overload * with both meanings: * computes only the scalar product; in NumPy, for example, the dot product is computed with the numpy.dot() function\n\nSince Python 3.5, the @ sign can be used as an infix dot-product operator\n\nFor the scalar product, we again start with the simplest __mul__ and __rmul__ methods that could possibly work:",
"def __mul__(self, scalar):\n return Vector(n * scalar for n in self)\n\ndef __rmul__(self, scalar):\n return self * scalar",
"These methods do work, but they cause problems when given incompatible operands. The scalar argument has to be a number such that multiplying it by a float produces another float (because our Vector class stores its components in an array of floats). So a complex number won't do, but the scalar may be an int, a bool (a subclass of int), or even a fractions.Fraction instance.\nWe could use the duck typing technique above -- catch the exception and return NotImplemented -- but this problem has a more readable and also more sensible solution: goose typing. We'll use isinstance() to check the type of scalar, but instead of hardcoding concrete types we'll check against the numbers.Real abstract base class. It covers all the types we need, plus any future numeric type declared as an actual or virtual subclass of numbers.Real. The listing below shows goose typing in practice -- an explicit check against an abstract type\n\nAs noted in Chapter 16, decimal.Decimal does not register itself as a virtual subclass of numbers.Real, so Vector will not handle decimal.Decimal numbers\n\nAdding the * operator methods:",
"from array import array\nimport reprlib\nimport math\nimport numbers\nimport functools\nimport operator\nimport itertools\n\nclass Vector:\n typecode = 'd'\n \n def __init__(self, components):\n self._components = array(self.typecode, components) \n \n def __iter__(self):\n return iter(self._components)\n \n def __repr__(self):\n components = reprlib.repr(self._components) \n components = components[components.find('['):-1] \n return 'Vector({})'.format(components)\n \n def __str__(self):\n return str(tuple(self))\n \n def __bytes__(self):\n return (bytes([ord(self.typecode)]) + \n bytes(self._components)) \n \n def __eq__(self, other):\n return len(self) == len(other) and all(a == b for a, b in zip(self, other))\n \n def __abs__(self):\n return math.sqrt(sum(x * x for x in self)) \n \n def __bool__(self):\n return bool(abs(self))\n \n @classmethod\n def frombytes(cls, octets):\n typecode = chr(octets[0])\n memv = memoryview(octets[1:]).cast(typecode)\n return cls(memv) \n \n # everything above is the same as before\n def __len__(self):\n return len(self._components)\n\n def __getitem__(self, index):\n cls = type(self) # get the class of the instance\n if isinstance(index, slice):\n return cls(self._components[index])\n elif isinstance(index, numbers.Integral): # index is an int or some other integral type\n return self._components[index]\n else:\n msg = '{cls.__name__} indices must be integers'\n raise TypeError(msg.format(cls=cls)) \n\n shortcut_names = 'xyzt'\n\n def __getattr__(self, name):\n cls = type(self)\n\n if len(name) == 1:\n pos = cls.shortcut_names.find(name)\n if 0 <= pos < len(self._components):\n return self._components[pos]\n msg = '{.__name__!r} object has no attribute {!r}'\n raise AttributeError(msg.format(cls, name))\n \n def __setattr__(self, name, value):\n cls = type(self)\n if len(name) == 1:\n if name in cls.shortcut_names:\n error = 'readonly attribute {attr_name!r}'\n elif name.islower():\n error = \"can't set attributes 'a' to 'z' in {cls_name!r}\"\n else:\n error = ''\n if error:\n msg = error.format(cls_name=cls.__name__, attr_name=name) # handy: whichever error applies, the same call fills in the values\n raise AttributeError(msg)\n super().__setattr__(name, value) # default case: delegate to the superclass __setattr__ for the standard behavior\n \n def __hash__(self):\n hashes = (hash(x) for x in self._components) # a generator expression, not a list comprehension, to save memory\n return functools.reduce(operator.xor, hashes)\n\n def angle(self, n):\n r = math.sqrt(sum(x * x for x in self[n:]))\n a = math.atan2(r, self[n-1])\n if (n == len(self) - 1) and (self[-1] < 0):\n return math.pi * 2 - a\n else:\n return a\n \n def angles(self):\n return (self.angle(n) for n in range(1, len(self)))\n \n def __format__(self, fmt_spec=''):\n if fmt_spec.endswith('h'):\n fmt_spec = fmt_spec[:-1]\n coords = itertools.chain([abs(self)], # chain builds an iterator running seamlessly over the magnitude and the angular coordinates\n self.angles())\n outer_fmt = '<{}>' # hyperspherical coordinates\n \n else:\n coords = self\n outer_fmt = '({})' # Cartesian coordinates\n components = (format(c, fmt_spec) for c in coords)\n return outer_fmt.format(', '.join(components))\n \n \n def __neg__(self):\n return Vector(-x for x in self)\n \n def __pos__(self):\n return Vector(self)\n \n \n def __add__(self, other):\n try:\n pairs = itertools.zip_longest(self, other, fillvalue=0.0)\n return Vector(a + b for a, b in pairs)\n except TypeError:\n return NotImplemented\n \n def __radd__(self, other):\n return self + other\n \n def __mul__(self, scalar):\n if isinstance(scalar, numbers.Real):\n return Vector(n * scalar for n in self)\n else:\n return NotImplemented\n \n def __rmul__(self, scalar):\n return self * scalar\n\nv1 = Vector([1.0, 2.0, 3.0])\n14 * v1\n\nv1 * True\n\nfrom fractions import Fraction\nv1 * Fraction(1, 3)",
"Rich comparison operators\nThe Python interpreter handles the rich comparison operators (==, !=, >, <, >=, <=) much like the infix operators described above, but with two important differences:\n\n\nThe same set of methods is used in forward and reversed calls. For ==, both the forward and the reversed calls invoke __eq__, only with the arguments swapped, while a forward call to __gt__ falls back on a reversed call to __lt__ with the arguments swapped\n\n\nFor == and !=, if the reversed call fails, Python compares the object IDs instead of raising TypeError\n\n\nThe fallback behavior of the comparison operators changed after Python 2. For __ne__, Python 3 now returns the negated result of __eq__; for the ordering operators, Python 3 raises TypeError with a message like 'unorderable types: int() < tuple()'. In Python 2 those comparisons produced weird results that took object types and IDs into account, with no discernible rule. However, comparing an integer to a tuple really is meaningless, so raising TypeError here is a real improvement in the language\nWith these rules in mind, let's analyze and improve the behavior of Vector.__eq__",
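Before turning to Vector.__eq__, the first rule -- that an ordering operator falls back on the reflected partner method -- can be verified with a minimal standalone class (Height is a made-up example, not from the book):

```python
class Height:
    """Only implements __lt__; Python still evaluates > via the reflected call."""
    def __init__(self, cm):
        self.cm = cm

    def __lt__(self, other):
        if isinstance(other, Height):
            return self.cm < other.cm
        return NotImplemented

short, tall = Height(150), Height(190)
# Height defines no __gt__, so for tall > short the interpreter
# falls back on the reflected call short.__lt__(tall)
result = tall > short
```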
"va = Vector([1.0, 2.0, 3.0])\nvb = Vector(range(1, 4))\nva == vb\n\nt3 = (1, 2, 3)\nva == t3",
"The result of comparing a Vector to a tuple may not be desirable; the author's view is that it should be decided by the application context. But the Zen of Python says: \"In the face of ambiguity, refuse the temptation to guess.\"\nIn Python, [1, 2] == (1, 2) is False, so we too add a type check in __eq__:",
"from array import array\nimport reprlib\nimport math\nimport numbers\nimport functools\nimport operator\nimport itertools\n\nclass Vector:\n typecode = 'd'\n \n def __init__(self, components):\n self._components = array(self.typecode, components) \n \n def __iter__(self):\n return iter(self._components)\n \n def __repr__(self):\n components = reprlib.repr(self._components) \n components = components[components.find('['):-1] \n return 'Vector({})'.format(components)\n \n def __str__(self):\n return str(tuple(self))\n \n def __bytes__(self):\n return (bytes([ord(self.typecode)]) + \n bytes(self._components)) \n \n def __abs__(self):\n return math.sqrt(sum(x * x for x in self)) \n \n def __bool__(self):\n return bool(abs(self))\n \n @classmethod\n def frombytes(cls, octets):\n typecode = chr(octets[0])\n memv = memoryview(octets[1:]).cast(typecode)\n return cls(memv) \n \n # everything above is the same as before\n def __len__(self):\n return len(self._components)\n\n def __getitem__(self, index):\n cls = type(self) # get the class of the instance\n if isinstance(index, slice):\n return cls(self._components[index])\n elif isinstance(index, numbers.Integral): # index is an int or some other integral type\n return self._components[index]\n else:\n msg = '{cls.__name__} indices must be integers'\n raise TypeError(msg.format(cls=cls)) \n\n shortcut_names = 'xyzt'\n\n def __getattr__(self, name):\n cls = type(self)\n\n if len(name) == 1:\n pos = cls.shortcut_names.find(name)\n if 0 <= pos < len(self._components):\n return self._components[pos]\n msg = '{.__name__!r} object has no attribute {!r}'\n raise AttributeError(msg.format(cls, name))\n \n def __setattr__(self, name, value):\n cls = type(self)\n if len(name) == 1:\n if name in cls.shortcut_names:\n error = 'readonly attribute {attr_name!r}'\n elif name.islower():\n error = \"can't set attributes 'a' to 'z' in {cls_name!r}\"\n else:\n error = ''\n if error:\n msg = error.format(cls_name=cls.__name__, attr_name=name) # handy: whichever error applies, the same call fills in the values\n raise AttributeError(msg)\n super().__setattr__(name, value) # default case: delegate to the superclass __setattr__ for the standard behavior\n \n def __hash__(self):\n hashes = (hash(x) for x in self._components) # a generator expression, not a list comprehension, to save memory\n return functools.reduce(operator.xor, hashes)\n\n def angle(self, n):\n r = math.sqrt(sum(x * x for x in self[n:]))\n a = math.atan2(r, self[n-1])\n if (n == len(self) - 1) and (self[-1] < 0):\n return math.pi * 2 - a\n else:\n return a\n \n def angles(self):\n return (self.angle(n) for n in range(1, len(self)))\n \n def __format__(self, fmt_spec=''):\n if fmt_spec.endswith('h'):\n fmt_spec = fmt_spec[:-1]\n coords = itertools.chain([abs(self)], # chain builds an iterator running seamlessly over the magnitude and the angular coordinates\n self.angles())\n outer_fmt = '<{}>' # hyperspherical coordinates\n \n else:\n coords = self\n outer_fmt = '({})' # Cartesian coordinates\n components = (format(c, fmt_spec) for c in coords)\n return outer_fmt.format(', '.join(components))\n \n \n def __neg__(self):\n return Vector(-x for x in self)\n \n def __pos__(self):\n return Vector(self)\n \n \n def __add__(self, other):\n try:\n pairs = itertools.zip_longest(self, other, fillvalue=0.0)\n return Vector(a + b for a, b in pairs)\n except TypeError:\n return NotImplemented\n \n def __radd__(self, other):\n return self + other\n \n def __mul__(self, scalar):\n if isinstance(scalar, numbers.Real):\n return Vector(n * scalar for n in self)\n else:\n return NotImplemented\n \n def __rmul__(self, scalar):\n return self * scalar\n \n def __eq__(self, other):\n if isinstance(other, Vector):\n print('.......') # debug marker: shows when Vector.__eq__ handles the comparison itself\n return (len(self) == len(other) and all(a == b for a, b in zip(self, other)))\n else:\n return NotImplemented\n\nt3 = (1, 2, 3)\nva = Vector([1.0, 2.0, 3.0])\nva == t3",
"Here Python first calls Vector.__eq__(va, t3)\nThat method verifies that t3 is not a Vector instance and returns NotImplemented\nPython gets the NotImplemented result and tries tuple.__eq__(t3, va)\ntuple knows nothing about Vector, so it returns NotImplemented\nFor ==, if the reversed call returns NotImplemented, Python compares the object IDs as a last resort.\nFor !=, we should not implement it, because the fallback behavior of the __ne__ inherited from object suits us: when __eq__ is defined and does not return NotImplemented, __ne__ returns that result negated\nIn other words, comparing with the != operator gives a consistent result:",
"va != t3",
"__ne__ works roughly like this:",
"def __ne__(self, other):\n eq_result = self == other\n if eq_result is NotImplemented:\n return NotImplemented\n else:\n return not eq_result",
"As you can see, the __ne__ that Python 3 provides is good enough for us; there's usually no need to overload it.\nAugmented assignment operators\nOur class now supports the augmented assignment operators += and *=:",
"v1 = Vector([1, 2, 3])\nv1_alias = v1\nid(v1)\n\nv1 += Vector([4, 5, 6])\nv1\n\nid(v1)\n\nv1_alias\n\nv1 *= 11\nv1\n\nid(v1)",
"The augmented assignment operators here are just syntactic sugar: a += b behaves exactly like a = a + b. That's the expected behavior for immutable types, and if a class defines __add__ then += works with no additional code\nHowever, if a class implements an in-place operator method such as __iadd__, that method is called to compute the result of a += b. As their name implies, such operators change the left operand in place instead of creating a new object as the result\n\nImmutable types like our Vector class must never implement the in-place special methods. That's fairly obvious, but worth stating anyway\n\nTo show how an in-place operator is implemented, we'll extend the BingoCage class from Chapter 11 to implement __add__ and __iadd__",
"import abc\n\nclass Tombola(abc.ABC):\n @abc.abstractmethod\n def load(self, iterable):\n '''Add items from an iterable'''\n \n @abc.abstractmethod # abstract methods are marked with this decorator\n def pick(self):\n '''Remove a random item and return it.\n This method raises LookupError when the instance is empty.\n '''\n \n def loaded(self):\n '''Return True if there is at least one item, otherwise False'''\n return bool(self.inspect()) # concrete methods in an ABC may rely only on the interface the ABC itself defines (its other concrete methods, abstract methods, or properties)\n \n def inspect(self):\n '''Return a sorted tuple built from the current items'''\n items = []\n while 1: # we don't know how concrete subclasses store the items, so to build the inspect result we empty the Tombola by calling pick repeatedly\n try:\n items.append(self.pick())\n except LookupError:\n break\n self.load(items) # then put the items back\n return tuple(sorted(items))\n \n \nimport random\n\nclass BingoCage(Tombola):\n \n def __init__(self, items):\n self._randomizer = random.SystemRandom()\n self._items = []\n self.load(items)\n \n def load(self, items):\n self._items.extend(items)\n self._randomizer.shuffle(self._items)\n \n def pick(self):\n try:\n return self._items.pop()\n except IndexError:\n raise LookupError('pick from empty BingoCage')\n \n def __call__(self):\n return self.pick() # return the picked item\n\n\n# ==== add\n\nclass AddableBingoCage(BingoCage):\n def __add__(self, other): # the second operand of __add__ can only be a Tombola instance\n if isinstance(other, Tombola): # other is a Tombola instance: grab its items\n return AddableBingoCage(self.inspect() + other.inspect())\n else:\n return NotImplemented\n \n def __iadd__(self, other):\n if isinstance(other, Tombola): \n other_iterable = other.inspect()\n else:\n try:\n other_iterable = iter(other) # otherwise, try to build an iterator over it\n except TypeError:\n self_cls = type(self).__name__\n msg = 'right operand in += must be {!r} or an iterable'\n raise TypeError(msg.format(self_cls))\n self.load(other_iterable)\n return self # crucial: augmented assignment special methods must return self",
"One last point: by design, AddableBingoCage does not need a __radd__ method, because there is no use for it. If the right operand is of the same type, the forward method __add__ handles it; so when Python evaluates a + b where a is an AddableBingoCage instance and b is not, we return NotImplemented, and the best Python can do is raise TypeError, since we cannot handle b\n\nIn general, if the forward method of an infix operator (e.g. __mul__) is designed to work only with operands of the same type as self, there is no need to implement the corresponding reversed method, because, by definition, the reversed method exists to handle operands of a different type\n\nFinally, let's see it in action:",
"vowels = 'AEIOU'\nglobe = AddableBingoCage(vowels)\nglobe.inspect()\n\nglobe.pick() in vowels\n\nlen(globe.inspect())\n\nglobe2 = AddableBingoCage('XYZ')\nglobe3 = globe + globe2\nlen(globe3.inspect())\n\nvoid = globe + [10, 20]\n\nglobe_orig = globe\nlen(globe.inspect())\n\nglobe += globe2\nlen(globe.inspect())\n\nglobe += ['M', 'N']\nlen(globe.inspect())\n\nglobe is globe_orig\n\nglobe += 1"
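As a closing illustration of the syntactic-sugar point made earlier: a built-in tuple has no __iadd__, so += rebinds the name to a new object, while a list implements __iadd__ as an in-place extend and keeps its identity. A quick standalone check:

```python
# Immutable: tuple has no __iadd__, so t += ... rebinds t to a new object
t = (1, 2)
t_id = id(t)
t += (3,)
tuple_rebound = id(t) != t_id

# Mutable: list.__iadd__ extends in place, so the identity is preserved
l = [1, 2]
l_id = id(l)
l += [3]
list_in_place = id(l) == l_id
```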
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dcavar/python-tutorial-for-ipython
|
notebooks/Perceptron Learning in Python.ipynb
|
apache-2.0
|
[
"Perceptron Learning in Python\n(C) 2017-2019 by Damir Cavar\nDownload: This and various other Jupyter notebooks are available from my GitHub repo.\nLicense: Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)\nThis is a tutorial related to the discussion of machine learning and NLP in the class Machine Learning for NLP: Deep Learning (Topics in Artificial Intelligence) taught at Indiana University in Spring 2018.\nWhat is a Perceptron?\nThere are many online examples and tutorials on perceptrons and learning. Here is a list of some articles:\n- Wikipedia on Perceptrons\n- Jurafsky and Martin (ed. 3), Chapter 8\nExample\nThis is an example that I have taken from a draft of the 3rd edition of Jurafsky and Martin, with slight modifications:\nWe import numpy and use its exp function. We could use the same function from the math module, or some other module like scipy. The sigmoid function is defined as in the textbook:",
"import numpy as np\n\ndef sigmoid(z):\n return 1 / (1 + np.exp(-z))",
"Our example data, weights $w$, bias $b$, and input $x$ are defined as:",
"w = np.array([0.2, 0.3, 0.8])\nb = 0.5\nx = np.array([0.5, 0.6, 0.1])",
"Our neural unit would compute $z$ as the dot-product $w \\cdot x$ and add the bias $b$ to it. The sigmoid function defined above will convert this $z$ value to the activation value $a$ of the unit:",
"z = w.dot(x) + b\nprint(\"z:\", z)\nprint(\"a:\", sigmoid(z))",
"The XOR Problem\nThe power of neural units comes from combining them into larger networks. Minsky and Papert (1969): A single neural unit cannot compute the simple logical function XOR.\nThe task is to implement a simple perceptron to compute logical operations like AND, OR, and XOR.\n\nInput: $x_1$ and $x_2$\nBias: $b = -1$ for AND; $b = 0$ for OR\nWeights: $w = [1, 1]$\n\nwith the following activation function:\n$$\ny = \\begin{cases}\n \\ 0 & \\quad \\text{if } w \\cdot x + b \\leq 0\\\n \\ 1 & \\quad \\text{if } w \\cdot x + b > 0\n \\end{cases}\n$$\nWe can define this activation function in Python as:",
"def activation(z):\n if z > 0:\n return 1\n return 0",
"For AND we could implement a perceptron as:",
"w = np.array([1, 1])\nb = -1\nx = np.array([0, 0])\nprint(\"0 AND 0:\", activation(w.dot(x) + b))\nx = np.array([1, 0])\nprint(\"1 AND 0:\", activation(w.dot(x) + b))\nx = np.array([0, 1])\nprint(\"0 AND 1:\", activation(w.dot(x) + b))\nx = np.array([1, 1])\nprint(\"1 AND 1:\", activation(w.dot(x) + b))",
"For OR we could implement a perceptron as:",
"w = np.array([1, 1])\nb = 0\nx = np.array([0, 0])\nprint(\"0 OR 0:\", activation(w.dot(x) + b))\nx = np.array([1, 0])\nprint(\"1 OR 0:\", activation(w.dot(x) + b))\nx = np.array([0, 1])\nprint(\"0 OR 1:\", activation(w.dot(x) + b))\nx = np.array([1, 1])\nprint(\"1 OR 1:\", activation(w.dot(x) + b))",
"There is no way to implement a perceptron for XOR this way.\n(C) 2017-2019 by Damir Cavar"
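The limitation is specific to a single unit: composing the same kind of units in two layers does compute XOR. The sketch below is not part of the original notebook; the weights and biases are one hand-picked solution (OR and NAND units feeding an AND unit), not learned values:

```python
import numpy as np

def activation(z):
    # same step activation as above
    return 1 if z > 0 else 0

def unit(w, b, x):
    # a single perceptron unit: step(w . x + b)
    return activation(np.dot(w, x) + b)

def xor(x1, x2):
    x = np.array([x1, x2])
    h1 = unit(np.array([1, 1]), 0, x)        # OR unit
    h2 = unit(np.array([-1, -1]), 1.5, x)    # NAND unit: fires unless both inputs are 1
    # AND of the hidden outputs: true exactly when inputs differ
    return unit(np.array([1, 1]), -1.5, np.array([h1, h2]))
```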
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dereneaton/ipyrad
|
tests/cookbook-PCA-pedicularis.ipynb
|
gpl-3.0
|
[
"Cookbook: PCA analyses\nAs part of the ipyrad.analysis toolkit we've created convenience functions for easily performing exploratory principal component analysis (PCA) on your data. PCA is a very standard dimension-reduction technique that is often used to get a general sense of how samples are related to one another. PCA has the advantage over STRUCTURE-type analyses in that it is very fast. Similar to STRUCTURE, PCA can be used to produce simple and intuitive plots that can be used to guide downstream analysis. There are three very nice papers that talk about the application and interpretation of PCA in the context of population genetics:\nReich et al (2008) Principal component analysis of genetic data\nNovembre & Stephens (2008) Interpreting principal component analyses of spatial population genetic variation\nMcVean (2009) A genealogical interpretation of principal components analysis\nA note on Jupyter/IPython\nThis is a Jupyter notebook, a reproducible and executable document. The code in this notebook is Python (2.7), and should be executed either in a jupyter-notebook, like this one, or in an IPython terminal. Execute each cell in order to reproduce our entire analysis. The example data set used in this analysis is from the empirical example ipyrad tutorial.\nRequired software\nYou can easily install the required software for this notebook with a locally installed conda environment. Just run the commented code below in a terminal. If you are working on an HPC cluster you do not need administrator privileges to install the software in this way, since it is only installed locally.",
"## conda install ipyrad -c ipyrad\n## conda install -c conda-forge scikit-allel",
"Import Python libraries\nThe call to %matplotlib inline here enables support for plotting directly inside the notebook.",
"%matplotlib inline\nimport ipyrad\nimport ipyrad.analysis as ipa ## ipyrad analysis toolkit",
"Quick guide (tldr;)\nThe following cell shows the quickest way to results. Further explanation of all of the features and options is provided further below.",
"## Load your assembly\ndata = ipyrad.load_json(\"/tmp/ipyrad-test/rad.json\")\n## Create the pca object\npca = ipa.pca(data)\n## Bam!\npca.plot()",
"Full guide\nIn the most common use you'll want to plot the first two PCs, then inspect the output, remove any obvious outliers, and then redo the pca. It's often desirable to import a vcf file directly rather than to use the ipyrad assembly, so here we'll demonstrate this.",
"## Path to the input vcf, in this case it's just the vcf from our ipyrad pedicularis assembly\nvcffile = \"/home/isaac/ipyrad/test-data/pedicularis/ped_outfiles/ped.vcf\"",
"Here we can just load the vcf file directly into the pca analysis module. Then ask for the samples in samples_vcforder, which is the order in which they are written in the vcf.",
"pca = ipa.pca(vcffile)\nprint(pca.samples_vcforder)",
"Now construct the default plot, which shows all samples and PCs 1 and 2. By default all samples are assigned to one population, so everything will be the same color.",
"pca.plot()",
"Population assignment for sample colors\nIn the tl;dr example the assembly of our simulated data had included a pop_assign_file so the pca() was smart enough to find this and color samples accordingly. In some cases you might not have used a pops file, so it's also possible to specify population assignments in a dictionary. The format of the dictionary should have populations as keys and lists of samples as values. Sample names need to be identical to the names in the vcf file, which we can verify with the samples_vcforder property of the pca object.",
"pops_dict = {\n \"superba\":[\"29154_superba_SRR1754715\"],\n \"thamno\":[\"30556_thamno_SRR1754720\", \"33413_thamno_SRR1754728\"],\n \"cyathophylla\":[\"30686_cyathophylla_SRR1754730\"],\n \"przewalskii\":[\"32082_przewalskii_SRR1754729\", \"33588_przewalskii_SRR1754727\"],\n \"rex\":[\"35236_rex_SRR1754731\", \"35855_rex_SRR1754726\", \"38362_rex_SRR1754725\",\\\n \"39618_rex_SRR1754723\", \"40578_rex_SRR1754724\"],\n \"cyathophylloides\":[\"41478_cyathophylloides_SRR1754722\", \"41954_cyathophylloides_SRR1754721\"]\n}\n\npca = ipa.pca(vcffile, pops_dict)\npca.plot()",
"This is just much nicer looking now, and it's also much more straightforward to interpret.\nRemoving \"bad\" samples and replotting.\nIn PC analysis, it's common for \"bad\" samples to dominate several of the first PCs, and thus \"pop out\" in a degenerate looking way. Bad samples of this kind can often be attributed to poor sequence quality or sample misidentification. Samples with lots of missing data tend to pop way out on their own, causing distortion in the signal in the PCs. Normally it's best to evaluate the quality of the sample, and if it can be seen to be of poor quality, to remove it and replot the PCA. The Pedicularis dataset is actually very nice, and clean, but for the sake of demonstration let's imagine the cyathophylloides samples are \"bad samples\".\nWe can see that the cyathophylloides samples have particularly high values of PC2, so we can target them for removal in this way.",
"## pca.pcs is a property of the pca object that is populated after the plot() function is called. It contains\n## the first 10 PCs for each sample. We construct a 'mask' based on the value of PC2, which here is the '1' in\n## the first line of code (numpy arrays are 0-indexed and it's typical for PCs to be 1-indexed)\nmask = pca.pcs.values[:, 1] > 500\nprint(mask)\n\n## You can see here that the mask is a list of booleans that is the same length as the number of samples.\n## We can use this list to print out the names of just the samples of interest\nprint(pca.samples_vcforder[mask])\n\n## We can then use this list of \"bad\" samples in a call to pca.remove_samples\n## and then replot the new pca\npca.remove_samples(pca.samples_vcforder[mask])\n\n## Lets prove that they're gone now\nprint(pca.samples_vcforder)\n\n## and do the plot\npca.plot()",
"Inspecting PCs directly\nAt any time after calling plot() you can inspect the PCs for all the samples using the pca.pcs property. The PC values are saved internally in a convenient pandas dataframe format.",
"pca.pcs",
"Looking at PCs other than 1 & 2\nPCs 1 and 2 by definition explain the most variation in the data, but sometimes PCs further down the chain can also be useful and informative. The plot function makes it simple to ask for PCs directly.",
"## Lets reload the full dataset so we have all the samples\npca = ipa.pca(vcffile, pops_dict)\npca.plot(pcs=[3,4])\n\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(12, 5))\nax1 = fig.add_subplot(1, 2, 1)\nax2 = fig.add_subplot(1, 2, 2)\n\npca.plot(ax=ax1, pcs=[1, 2])\npca.plot(ax=ax2, pcs=[3, 4])",
"It's nice to see PCs 1-4 here, but it's kind of stupid to plot the legend twice, so we can just turn off the legend on the first plot.",
"fig = plt.figure(figsize=(12, 5))\nax1 = fig.add_subplot(1, 2, 1)\nax2 = fig.add_subplot(1, 2, 2)\n\npca.plot(ax=ax1, pcs=[1, 2], legend=False)\npca.plot(ax=ax2, pcs=[3, 4])",
"Controlling colors\nYou might notice the default color scheme is unobtrusive, but perhaps not to your liking. There are two ways of modifying the color scheme, one simple and one more complicated, but the latter gives extremely fine-grained control over colors.\nColors for the more complicated method can be specified according to python color conventions. I find this visual page of python color names useful.",
"## Here's the simple way, just pass in a matplotlib cmap, or even better, the name of a cmap\npca.plot(cmap=\"jet\")\n\n## Here's the harder way that gives you uber control. Pass in a dictionary mapping populations to colors.\nmy_colors = {\n \"rex\":\"aliceblue\",\n \"thamno\":\"crimson\",\n \"przewalskii\":\"deeppink\",\n \"cyathophylloides\":\"fuchsia\",\n \"cyathophylla\":\"goldenrod\",\n \"superba\":\"black\"\n}\npca.plot(cdict=my_colors)",
"Dealing with missing data\nRAD-seq datasets are often characterized by moderate to high levels of missing data. While there may be many thousands or tens of thousands of loci recovered overall, the number of loci that are recovered in all sequenced samples is often quite small. The distribution of depth of coverage per locus is a complicated function of the size of the genome of the focal organism, the restriction enzyme(s) used, the size selection tolerances, and the sequencing effort. \nBoth model-based (STRUCTURE and the like) and model-free (PCA/sNMF/etc) genetic \"clustering\" methods are sensitive to missing data. Light to moderate missing data that is distributed randomly among samples is often not enough to seriously impact the results. These are, after all, only exploratory methods. However, if missing data is biased in some way then it can distort the number of inferred populations and/or the relationships among them. For example, if several unrelated samples recover relatively few loci, for whatever reason (mistakes during library prep, failed sequencing, etc), clustering methods may erroneously identify this as true \"similarity\" with respect to the rest of the samples, and create spurious clusters.\nIn the end, all these methods must do something with sites that are uncalled in some samples. Some methods adopt a strategy of silently assigning missing sites the \"Reference\" base. Others assign missing sites the average base. \nThere are several ways of dealing with this:\n\nOne method is to simply eliminate all loci with missing data. This can be ok for SNP-chip type data, where missingness is very sparse. For RAD-Seq type data, eliminating every locus with missing data often results in a drastic reduction in the size of the final data matrix. Assemblies with thousands of loci can be pared down to only tens or hundreds of loci.\nAnother method is to impute missing data. This is rarely done for RAD-Seq type data, comparatively speaking. Or at least it is rarely done intentionally. \nA third method is to downsample using a hypergeometric projection. This is the strategy adopted by dadi in the construction of the SFS (which abhors missing data). It's a little complicated though, so we'll only look at the first two strategies.\n\nInspect the amount of missing data under various conditions\nThe pca module has various functions for inspecting missing data. The simplest is the get_missing_per_sample() function, which does exactly what it says: it displays the number of ungenotyped snps per sample in the final data matrix. Here you can see that since we are using simulated data the amount of missing data is very low, but in real data these numbers will be considerable.",
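Although the third strategy is deferred here, the core of the hypergeometric projection is compact enough to sketch. For a site with i derived alleles among n sampled chromosomes, projecting down to m chromosomes spreads the site across SFS bins with hypergeometric weights (the function name and usage below are illustrative, not part of the ipyrad or dadi API):

```python
from scipy.special import comb

def project_site(i, n, m):
    """Probability of observing j derived alleles (j = 0..m) when a
    site with i derived alleles out of n sampled chromosomes is
    downsampled to m chromosomes, via the hypergeometric distribution."""
    return [comb(i, j) * comb(n - i, m - j) / comb(n, m) for j in range(m + 1)]

# A site with 3 derived alleles among 10 chromosomes, projected down to 4
probs = project_site(3, 10, 4)
print(probs)       # weights contributed to SFS bins 0..4
print(sum(probs))  # the weights sum to 1
```

Each site thus contributes fractional counts to several SFS bins instead of being dropped, which is why the projection avoids discarding partially genotyped sites.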
"pca.get_missing_per_sample()",
"This is useful, but it doesn't give us a clear direction for how to go about dealing with the missingness. One way to reduce missing data is to lower the tolerance for samples ungenotyped at a snp. The other way is to remove samples with very poor sequencing. To this end, the .missingness() function shows a table of the number of retained snps under various combinations of these conditions.",
"pca.missingness()",
"Here the columns indicate progressive removal of the samples with the fewest number of snps. So \"Full\" indicates retention of all samples. \"2E_0\" shows # snps after removing this sample (as it has the most missing data). \"2F_0\" shows the # snps after removing both this sample & \"2E_0\". And so on. You can see as we move from left to right the total number of snps goes down, but also so does the amount of missingness.\nRows indicate thresholds for number of allowed missing samples per snp. The \"0\" row shows the condition of allowing 0 missing samples, so this is the complete data matrix. The \"1\" row shows # of snps retained if you allow 1 missing sample. And so on.\nFilter by missingness threshold - trim_missing()\nThe trim_missing() function takes one argument, namely the maximum number of missing samples per snp. Then it removes all sites that don't pass this threshold.",
"pca.trim_missing(1)\npca.missingness()",
"You can see that this also has the effect of reducing the amount of missingness per sample.",
"pca.get_missing_per_sample()",
"NB: This operation is destructive of the data inside the pca object. It doesn't do anything to your data on file, though, so if you want to rewind you can just reload your vcf file.",
"## Voila. Back to the full dataset.\npca = ipa.pca(vcffile, pops_dict)\npca.missingness()",
"Imputing missing genotypes\nMcVean (2008) recommends filling missing sites with the average genotype of the population, so that's what we're doing here. For each population, we determine the average genotype at any site with missing data, and then fill in the missing sites with this average. In this case, if the average \"genotype\" is \"./.\", then this is what gets filled in, so essentially any site missing more than 50% of the data isn't getting imputed. If two genotypes occur with equal frequency then the average is just picked as the first one.",
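The idea behind this imputation can be sketched in plain numpy. Note this is an illustration of the strategy, not ipyrad's internal implementation; the function name and the -9 missing code are invented for the example, and we fill each missing call with the modal (most frequent) genotype in that sample's population, matching the tie-breaking description above:

```python
import numpy as np

def impute_by_population(geno, pops, missing=-9):
    """geno: (n_samples, n_snps) int array of genotype codes.
    pops: per-row population labels.
    Missing calls are replaced by the most frequent non-missing
    genotype at that site within the same population; a site that is
    missing in the whole population stays missing."""
    geno = geno.copy()
    pops = np.asarray(pops)
    for pop in np.unique(pops):
        rows = np.where(pops == pop)[0]
        for snp in range(geno.shape[1]):
            col = geno[rows, snp]
            observed = col[col != missing]
            if observed.size and (col == missing).any():
                vals, counts = np.unique(observed, return_counts=True)
                col[col == missing] = vals[np.argmax(counts)]
                geno[rows, snp] = col
    return geno

g = np.array([[0, 2, -9],
              [0, -9, 1],
              [2, 2, 2]])
print(impute_by_population(g, ["A", "A", "B"]))
```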
"pca.fill_missing()\npca.missingness()",
"In comparing this missingness matrix with the previous one, you can see that indeed some snps are being recovered (though not many, again because of the clean simulated data). \nYou can also examine the effect of imputation on the amount of missingness per sample. You can see it doesn't have as drastic an effect as trimming, but it does have some effect, plus you are retaining more data!",
"pca.get_missing_per_sample()",
"Dealing with unequal sampling\nUnequal sampling of populations can potentially distort PC analysis (see for example Bradburd et al 2016). Model-based ancestry analysis suffers a similar limitation (Puechmaille 2016). McVean (2008) recommends downsampling larger populations, but nobody likes throwing away data. Weighted PCA was proposed, but has not been adopted by the community.",
"{x:len(y) for x, y in pca.pops.items()}",
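If you did want to follow the downsampling suggestion anyway, a minimal sketch (the function and seed here are illustrative, operating on a plain pops dict rather than the pca object) subsamples every population to the size of the smallest:

```python
import random

def downsample_pops(pops_dict, seed=42):
    """Randomly subsample each population to the size of the smallest,
    so every population contributes equally to the PCA."""
    rng = random.Random(seed)
    n_min = min(len(samples) for samples in pops_dict.values())
    return {pop: sorted(rng.sample(samples, n_min))
            for pop, samples in pops_dict.items()}

pops = {"rex": ["r1", "r2", "r3", "r4", "r5"],
        "superba": ["s1"],
        "thamno": ["t1", "t2"]}
balanced = downsample_pops(pops)
print({pop: len(s) for pop, s in balanced.items()})  # every pop kept 1 sample
```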
"Dealing with linked snps",
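One common way to handle linked snps is to thin the data to a single snp per RAD locus, so that tightly linked sites are not treated as independent observations. A minimal pandas sketch with hypothetical column names (not an ipyrad API call):

```python
import pandas as pd

# Hypothetical snp table: one row per snp, tagged with the RAD locus it came from
snps = pd.DataFrame({
    "locus": ["loc1", "loc1", "loc1", "loc2", "loc3", "loc3"],
    "pos":   [12, 45, 80, 5, 33, 60],
})

# Keep one randomly chosen snp per locus so linked sites aren't double counted
thinned = snps.groupby("locus").sample(n=1, random_state=42)
print(thinned)
```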
"\nprettier_labels = {\n \n \"32082_przewalskii\":\"przewalskii\", \n \"33588_przewalskii\":\"przewalskii\",\n \"41478_cyathophylloides\":\"cyathophylloides\", \n \"41954_cyathophylloides\":\"cyathophylloides\", \n \"29154_superba\":\"superba\",\n \"30686_cyathophylla\":\"cyathophylla\", \n \"33413_thamno\":\"thamno\", \n \"30556_thamno\":\"thamno\", \n \"35236_rex\":\"rex\", \n \"40578_rex\":\"rex\", \n \"35855_rex\":\"rex\",\n \"39618_rex\":\"rex\", \n \"38362_rex\":\"rex\"\n}",
"Copying this notebook to your computer/cluster\nYou can easily copy this notebook and then just replace my file names with your filenames to run your analysis. Just click on the [Download Notebook] link at the top of this page. Then run jupyter-notebook from a terminal and open this notebook from the dashboard."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/gcp-getting-started-lab-jp
|
data_analytics/sample.ipynb
|
apache-2.0
|
[
"By writing SQL after `%%bigquery`, you can submit queries to BigQuery.\nFor example, the \"count the number of bike stations without duplicates\" query that we ran from the web UI can be executed as follows",
"%%bigquery\nSELECT\n COUNT(DISTINCT station_id) as cnt\nFROM\n `bigquery-public-data.new_york.citibike_stations`",
"In the same way, let's run the various other queries we executed from the web UI.\nBike stations currently in operation",
"%%bigquery\nSELECT\n COUNT(station_id) as cnt\nFROM\n `bigquery-public-data.new_york.citibike_stations`\nWHERE\n is_installed = TRUE\n AND is_renting = TRUE\n AND is_returning = TRUE",
"User billing models",
"%%bigquery\nSELECT\n usertype,\n gender,\n COUNT(gender) AS cnt\nFROM\n `bigquery-public-data.new_york.citibike_trips`\nGROUP BY\n usertype,\n gender\nORDER BY\n cnt DESC",
"Trends in how bikes are rented",
"%%bigquery\nSELECT\n start_station_name,\n end_station_name,\n COUNT(end_station_name) AS cnt\nFROM\n `bigquery-public-data.new_york.citibike_trips`\nGROUP BY\n start_station_name,\n end_station_name\nORDER BY\n cnt DESC",
"Interpreting the results (one example)\n\nThere is a subway station just south of Central Park\nTourists use the bikes to sightsee around Central Park\n\n\n12 Ave & W 40 St => West St & Chambers St\nCommuter use (moving from residential areas to the office district)\n\n\nMost trips run east-west rather than north-south\nSubway stations are laid out along the north-south axis\nNY residents tend to ride bikes east-west and take the subway north-south\n\n\n\nBeyond simply running queries against BigQuery, features such as lightweight data visualization are also provided.\nInvestigating riders\nWe examine trip durations for the most heavily used pair, start_station_name=\"Central Park S & 6 Ave\", end_station_name=\"Central Park S & 6 Ave\".\nBy passing a variable name after the %%bigquery command, the BigQuery result can be saved as a pandas DataFrame.",
"%%bigquery utilization_time\nSELECT\n starttime, stoptime, \n TIMESTAMP_DIFF(stoptime, starttime, MINUTE) as minute,\n usertype, birth_year, gender\nFROM\n `bigquery-public-data.new_york.citibike_trips`\nWHERE\n start_station_name = 'Central Park S & 6 Ave' and end_station_name = 'Central Park S & 6 Ave'\n\n# Inspect the contents of utilization_time\nutilization_time",
"Data visualization with Python\nTo get an overview of the data, we draw a histogram (a chart for checking how the data is spread out).",
"# Import the libraries we need and suppress warning output\nimport pandas as pd\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# Draw the histogram\nutilization_time['minute'].hist(bins=range(0,100,2))",
"We can confirm that trips of around 30 minutes are the most common."
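The same conclusion can be read directly off the data rather than the plot. Here is a self-contained sketch using stand-in values for the `minute` column (the notebook's real frame is the BigQuery result `utilization_time`):

```python
import pandas as pd

# Stand-in for the BigQuery result; in the notebook `utilization_time`
# already holds a `minute` column
utilization_time = pd.DataFrame({"minute": [28, 31, 29, 30, 55, 12, 30, 33]})

# Count trips per 2-minute histogram bin and report the fullest bin
binned = (utilization_time["minute"] // 2 * 2).value_counts()
print("most common bin starts at:", binned.idxmax())
print("median trip length:", utilization_time["minute"].median())
```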
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
iancze/Starfish
|
examples/setup.ipynb
|
bsd-3-clause
|
[
"Setup\nHere I will go over setting up our interfaces and emulators from a raw spectral library to prepare us for fitting some data in further examples.\nGetting the Grid\nTo begin, we need a spectral model library that we will use for our fitting. One common example is the PHOENIX library, most recently computed by T.O. Husser. We provide many interfaces directly with different libraries, which can be viewed in Raw Grid Interfaces.\nAs a convenience, we provide a helper to download PHOENIX models from the Goettingen servers. Note this will skip any files already on disk.",
"import numpy as np\n\nfrom Starfish.grid_tools import download_PHOENIX_models\n\nranges = [[5700, 8600], [4.0, 6.0], [-0.5, 0.5]] # T, logg, Z\n\ndownload_PHOENIX_models(path=\"PHOENIX\", ranges=ranges)",
"Now that we have the files downloaded, let's set up a grid interface",
"from Starfish.grid_tools import PHOENIXGridInterfaceNoAlpha\n\ngrid = PHOENIXGridInterfaceNoAlpha(path=\"PHOENIX\")",
"From here, we will want to set up our HDF5 interface that will allow us to go on to using the spectral emulator, but first we need to determine our model subset and instrument.\nSetting up the HDF5 Interface\nWe set up an HDF5 interface in order to allow much quicker reading and writing than compared to loading FITS files over and over again. In addition, when considering the application to our likelihood methods, we know that for a given dataset, any effects characteristic of the instrument can be pre-applied to our models, saving on computation time during the maximum likelihood estimation.\nLooking towards our fitting examples, we know we will try fitting some data from TRES Spectrograph. This instrument is available in our grid tools, but if yours isn't, you can always supply the FWHM in km/s. The FWHM ($\\Gamma$) can be found using the resolving power, $R$\n$$ \\Gamma = \\frac{c}{R} $$\nwith $c$ in km/s. Let’s also say that, for a given dataset, we want to only use a reasonable subset of our original model grid. The data provided in future examples is a ~F3V star, so we will limit our model parameter ranges appropriately.",
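The FWHM formula above is a one-liner to compute. As an illustration, the R value below is a stand-in; check your spectrograph's documentation for its actual resolving power:

```python
# Gamma = c / R, with c in km/s
c_kms = 299792.458

def fwhm_kms(resolving_power):
    """FWHM of the instrumental kernel in km/s from resolving power R."""
    return c_kms / resolving_power

print(fwhm_kms(44_000))  # roughly 6.8 km/s for R ~ 44,000
```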
"from Starfish.grid_tools.instruments import SPEX\nfrom Starfish.grid_tools import HDF5Creator\n\ncreator = HDF5Creator(\n grid, \"F_SPEX_grid.hdf5\", instrument=SPEX(), wl_range=(0.9e4, np.inf), ranges=ranges\n)\ncreator.process_grid()",
"Setting up the Spectral Emulator\nOnce we have our pre-processed grid, we can make our spectral emulator and train its Gaussian process hyperparameters.",
"from Starfish.emulator import Emulator\n\n# can load from string or HDF5Interface\nemu = Emulator.from_grid(\"F_SPEX_grid.hdf5\")\nemu\n\n%time emu.train(options=dict(maxiter=1e5))\nemu",
"<div class=\"alert alert-info\">\n\n**Note:** If the emulator does not optimize the first time you use ``train``, just run it again. You can also tweak the arguments passed to ``scipy.optimize.minimize`` by passing them as keyword arguments to the call.\n\n</div>\n\n<div class=\"alert alert-warning\">\n\n**Warning:** Training the emulator will take on the order of minutes to complete. The more eigenspectra that are used as well as the resolution of the spectrograph will mainly dominate this runtime.\n\n</div>\n\nWe can do a sanity check on the optimization by looking at slice of the emulator's parameter space and the corresponding Gaussian process fit. We should see a smooth line connecting all the parameter values with some uncertainty that grows with large gaps or turbulent weights.",
"%matplotlib inline\nfrom Starfish.emulator.plotting import plot_emulator\n\nplot_emulator(emu)",
"If we are satisfied, let's save this emulator and move on to fitting some data.",
"emu.save(\"F_SPEX_emu.hdf5\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io
|
v0.12.1/examples/notebooks/generated/statespace_sarimax_internet.ipynb
|
bsd-3-clause
|
[
"SARIMAX: Model selection, missing data\nThe example mirrors Durbin and Koopman (2012), Chapter 8.4, in applying Box-Jenkins methodology to fit ARMA models. The novel feature is the ability of the model to work on datasets with missing values.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nimport requests\nfrom io import BytesIO\nfrom zipfile import ZipFile\n\n# Download the dataset\ndk = requests.get('http://www.ssfpack.com/files/DK-data.zip').content\nf = BytesIO(dk)\nzipped = ZipFile(f)\ndf = pd.read_table(\n BytesIO(zipped.read('internet.dat')),\n skiprows=1, header=None, sep='\\s+', engine='python',\n names=['internet','dinternet']\n)",
"Model Selection\nAs in Durbin and Koopman, we force a number of the values to be missing.",
"# Get the basic series\ndta_full = df.dinternet[1:].values\ndta_miss = dta_full.copy()\n\n# Remove datapoints\nmissing = np.r_[6,16,26,36,46,56,66,72,73,74,75,76,86,96]-1\ndta_miss[missing] = np.nan",
"Then we can consider model selection using the Akaike information criterion (AIC), by running the model for each variant and selecting the model with the lowest AIC value.\nThere are a couple of things to note here:\n\nWhen running such a large batch of models, particularly when the autoregressive and moving average orders become large, there is the possibility of poor maximum likelihood convergence. Below we ignore the warnings since this example is illustrative.\nWe use the option enforce_invertibility=False, which allows the moving average polynomial to be non-invertible, so that more of the models are estimable.\nSeveral of the models do not produce good results, and their AIC value is set to NaN. This is not surprising, as Durbin and Koopman note numerical problems with the high order models.",
"import warnings\n\naic_full = pd.DataFrame(np.zeros((6,6), dtype=float))\naic_miss = pd.DataFrame(np.zeros((6,6), dtype=float))\n\nwarnings.simplefilter('ignore')\n\n# Iterate over all ARMA(p,q) models with p,q from 0 to 5\nfor p in range(6):\n for q in range(6):\n if p == 0 and q == 0:\n continue\n \n # Estimate the model with no missing datapoints\n mod = sm.tsa.statespace.SARIMAX(dta_full, order=(p,0,q), enforce_invertibility=False)\n try:\n res = mod.fit(disp=False)\n aic_full.iloc[p,q] = res.aic\n except Exception:\n aic_full.iloc[p,q] = np.nan\n \n # Estimate the model with missing datapoints\n mod = sm.tsa.statespace.SARIMAX(dta_miss, order=(p,0,q), enforce_invertibility=False)\n try:\n res = mod.fit(disp=False)\n aic_miss.iloc[p,q] = res.aic\n except Exception:\n aic_miss.iloc[p,q] = np.nan",
"For the models estimated over the full (non-missing) dataset, the AIC chooses ARMA(1,1) or ARMA(3,0). Durbin and Koopman suggest the ARMA(1,1) specification is better due to parsimony.\n$$\n\\text{Replication of:}\\\n\\textbf{Table 8.1} ~~ \\text{AIC for different ARMA models.}\\\n\\newcommand{\\r}[1]{{\\color{red}{#1}}}\n\\begin{array}{lrrrrrr}\n\\hline\nq & 0 & 1 & 2 & 3 & 4 & 5 \\\n\\hline\np & {} & {} & {} & {} & {} & {} \\\n0 & 0.00 & 549.81 & 519.87 & 520.27 & 519.38 & 518.86 \\\n1 & 529.24 & \\r{514.30} & 516.25 & 514.58 & 515.10 & 516.28 \\\n2 & 522.18 & 516.29 & 517.16 & 515.77 & 513.24 & 514.73 \\\n3 & \\r{511.99} & 513.94 & 515.92 & 512.06 & 513.72 & 514.50 \\\n4 & 513.93 & 512.89 & nan & nan & 514.81 & 516.08 \\\n5 & 515.86 & 517.64 & nan & nan & nan & nan \\\n\\hline\n\\end{array}\n$$\nFor the models estimated over missing dataset, the AIC chooses ARMA(1,1)\n$$\n\\text{Replication of:}\\\n\\textbf{Table 8.2} ~~ \\text{AIC for different ARMA models with missing observations.}\\\n\\begin{array}{lrrrrrr}\n\\hline\nq & 0 & 1 & 2 & 3 & 4 & 5 \\\n\\hline\np & {} & {} & {} & {} & {} & {} \\\n0 & 0.00 & 488.93 & 464.01 & 463.86 & 462.63 & 463.62 \\\n1 & 468.01 & \\r{457.54} & 459.35 & 458.66 & 459.15 & 461.01 \\\n2 & 469.68 & nan & 460.48 & 459.43 & 459.23 & 460.47 \\\n3 & 467.10 & 458.44 & 459.64 & 456.66 & 459.54 & 460.05 \\\n4 & 469.00 & 459.52 & nan & 463.04 & 459.35 & 460.96 \\\n5 & 471.32 & 461.26 & nan & nan & 461.00 & 462.97 \\\n\\hline\n\\end{array}\n$$\nNote: the AIC values are calculated differently than in Durbin and Koopman, but show overall similar trends.\nPostestimation\nUsing the ARMA(1,1) specification selected above, we perform in-sample prediction and out-of-sample forecasting.",
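Picking the winning order out of a grid of AIC values like this is a one-liner with `stack()` and `idxmin()`. A self-contained illustration on a toy table (the notebook's real tables are `aic_full` and `aic_miss`):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the aic_full table: rows are p, columns are q
aic = pd.DataFrame([[np.nan, 549.81, 519.87],
                    [529.24, 514.30, 516.25],
                    [522.18, 516.29, 517.16]])

# stack() drops the NaN entries; idxmin() returns the (p, q) index pair
p, q = aic.stack().idxmin()
print("best order: ARMA(%d, %d)" % (p, q))  # ARMA(1, 1)
```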
"# Statespace\nmod = sm.tsa.statespace.SARIMAX(dta_miss, order=(1,0,1))\nres = mod.fit(disp=False)\nprint(res.summary())\n\n# In-sample one-step-ahead predictions, and out-of-sample forecasts\nnforecast = 20\npredict = res.get_prediction(end=mod.nobs + nforecast)\nidx = np.arange(len(predict.predicted_mean))\npredict_ci = predict.conf_int(alpha=0.5)\n\n# Graph\nfig, ax = plt.subplots(figsize=(12,6))\nax.xaxis.grid()\nax.plot(dta_miss, 'k.')\n\n# Plot\nax.plot(idx[:-nforecast], predict.predicted_mean[:-nforecast], 'gray')\nax.plot(idx[-nforecast:], predict.predicted_mean[-nforecast:], 'k--', linestyle='--', linewidth=2)\nax.fill_between(idx, predict_ci[:, 0], predict_ci[:, 1], alpha=0.15)\n\nax.set(title='Figure 8.9 - Internet series');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bspalding/research_public
|
lectures/correlation/Linear Correlation Analysis.ipynb
|
apache-2.0
|
[
"The Correlation Coefficient\nBy Evgenia \"Jenny\" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.\n\nThe correlation coefficient measures the extent to which the relationship between two variables is linear. Its value is always between -1 and 1. A positive coefficient indicates that the variables are directly related, i.e. when one increases the other one also increases. A negative coefficient indicates that the variables are inversely related, so that when one increases the other decreases. The closer to 0 the correlation coefficient is, the weaker the relationship between the variables.\nThe correlation coefficient of two series $X$ and $Y$ is defined as\n$$r = \\frac{Cov(X,Y)}{std(X)std(Y)}$$\nwhere $Cov$ is the covariance and $std$ is the standard deviation.\nTwo random sets of data will have a correlation coefficient close to 0:\nCorrelation vs. Covariance\nCorrelation is simply a normalized form of covariance. They are otherwise the same and are often used semi-interchangeably in everyday conversation. It is obviously important to be precise with language when discussing the two, but conceptually they are almost identical.\nCovariance isn't that meaningful by itself\nLet's say we have two variables $X$ and $Y$ and we take the covariance of the two.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nX = np.random.rand(50)\nY = 2 * X + np.random.normal(0, 0.1, 50)\n\nnp.cov(X, Y)[0, 1]",
"So now what? What does this mean? Correlation uses information about the variance of X and Y to normalize this metric. Once we've normalized the metric to the -1 to 1 scale, we can make meaningful statements and compare correlations.\nTo see how this is done consider the formula.\n$$\\frac{Cov(X, Y)}{std(X)std(Y)}$$\n$$= \\frac{Cov(X, Y)}{\\sqrt{var(X)}\\sqrt{var(Y)}}$$\n$$= \\frac{Cov(X, Y)}{\\sqrt{Cov(X, X)}\\sqrt{Cov(Y, Y)}}$$\nTo demonstrate this let's compare the correlation and covariance of two series.",
"X = np.random.rand(50)\nY = 2 * X + 4\n\nprint 'Covariance of X and Y: \\n' + str(np.cov(X, Y))\nprint 'Correlation of X and Y: \\n' + str(np.corrcoef(X, Y))",
"Why do both np.cov and np.corrcoef return matrices?\nThe covariance matrix is an important concept in statistics. Often people will refer to the covariance of two variables $X$ and $Y$, but in reality that is just one entry in the covariance matrix of $X$ and $Y$. For each input variable we have one row and one column. The diagonal is just the variance of that variable, or $Cov(X, X)$, entries off the diagonal are covariances between different variables. The matrix is symmetric across the diagonal. Let's check that this is true.",
"cov_matrix = np.cov(X, Y)\n\n# We need to manually set the degrees of freedom on X to 1, as numpy defaults to 0 for variance\n# This is usually fine, but will result in a slight mismatch as np.cov defaults to 1\nerror = cov_matrix[0, 0] - X.var(ddof=1)\n\nprint 'error: ' + str(error)\n\nX = np.random.rand(50)\nY = np.random.rand(50)\n\nplt.scatter(X,Y)\nplt.xlabel('X Value')\nplt.ylabel('Y Value')\n\n# taking the relevant value from the matrix returned by np.cov\nprint 'Correlation: ' + str(np.cov(X,Y)[0,1]/(np.std(X)*np.std(Y)))\n# Let's also use the builtin correlation function\nprint 'Built-in Correlation: ' + str(np.corrcoef(X, Y)[0, 1])",
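The cell above checks the diagonal; the symmetry claim is just as quick to verify by comparing the matrix to its transpose:

```python
import numpy as np

X = np.random.rand(50)
Y = 2 * X + np.random.normal(0, 0.1, 50)

cov_matrix = np.cov(X, Y)

# Both off-diagonal entries are Cov(X, Y), so the matrix equals its transpose
print(np.allclose(cov_matrix, cov_matrix.T))  # True
```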
"Now let's see what two correlated sets of data look like.",
"X = np.random.rand(50)\nY = X + np.random.normal(0, 0.1, 50)\n\nplt.scatter(X,Y)\nplt.xlabel('X Value')\nplt.ylabel('Y Value')\n\nprint 'Correlation: ' + str(np.corrcoef(X, Y)[0, 1])",
"Let's dial down the relationship by introducing more noise.",
"X = np.random.rand(50)\nY = X + np.random.normal(0, .2, 50)\n\nplt.scatter(X,Y)\nplt.xlabel('X Value')\nplt.ylabel('Y Value')\n\nprint 'Correlation: ' + str(np.corrcoef(X, Y)[0, 1])",
"Finally, let's see what an inverse relationship looks like.",
"X = np.random.rand(50)\nY = -X + np.random.normal(0, .1, 50)\n\nplt.scatter(X,Y)\nplt.xlabel('X Value')\nplt.ylabel('Y Value')\n\nprint 'Correlation: ' + str(np.corrcoef(X, Y)[0, 1])",
"We see a little bit of rounding error, but they are clearly the same value.\nHow is this useful in finance?\nDetermining related assets\nOnce we've established that two series are probably related, we can use that in an effort to predict future values of the series. For example, let's look at the price of Apple and a semiconductor equipment manufacturer, Lam Research Corporation.",
"# Pull the pricing data for our two stocks and S&P 500\nstart = '2013-01-01'\nend = '2015-01-01'\nbench = get_pricing('SPY', fields='price', start_date=start, end_date=end)\na1 = get_pricing('LRCX', fields='price', start_date=start, end_date=end)\na2 = get_pricing('AAPL', fields='price', start_date=start, end_date=end)\n\nplt.scatter(a1,a2)\nplt.xlabel('LRCX')\nplt.ylabel('AAPL')\nplt.title('Stock prices from ' + start + ' to ' + end)\nprint \"Correlation coefficients\"\nprint \"LRCX and AAPL: \", np.corrcoef(a1,a2)[0,1]\nprint \"LRCX and SPY: \", np.corrcoef(a1,bench)[0,1]\nprint \"AAPL and SPY: \", np.corrcoef(bench,a2)[0,1]",
"Constructing a portfolio of uncorrelated assets\nAnother reason that correlation is useful in finance is that uncorrelated assets produce the best portfolios. The intuition for this is that if the assets are uncorrelated, a drawdown in one will not correspond with a drawdown in another. This leads to a very stable return stream when many uncorrelated assets are combined.\nLimitations\nSignificance\nIt's hard to rigorously determine whether or not a correlation is significant, especially when, as here, the variables are not normally distributed. Their correlation coefficient is close to 1, so it's pretty safe to say that the two stock prices are correlated over the time period we use, but is this indicative of future correlation? If we examine the correlation of each of them with the S&P 500, we see that it is also quite high. So, AAPL and LRCX are slightly more correlated with each other than with the average stock.\nOne fundamental problem is that it is easy to datamine correlations by picking the right time period. To avoid this, one should compute the correlation of two quantities over many historical time periods and examine the distribution of the correlation coefficient. More details on why single point estimates are bad will be covered in future notebooks.\nAs an example, remember that the correlation of AAPL and LRCX from 2013-1-1 to 2015-1-1 was 0.95. Let's take the rolling 60 day correlation between the two to see how that varies.",
"rolling_correlation = pd.rolling_corr(a1, a2, 60)\nplt.plot(rolling_correlation)\nplt.xlabel('Day')\nplt.ylabel('60-day Rolling Correlation')",
"Non-Linear Relationships\nThe correlation coefficient can be useful for examining the strength of the relationship between two variables. However, it's important to remember that two variables may be associated in different, predictable ways which this analysis would not pick up. For instance, one variable might precisely follow the behavior of a second, but with a delay. There are techniques for dealing with this lagged correlation. Alternatively, a variable may be related to the rate of change of another. Neither of these relationships are linear, but can be very useful if detected.\nAdditionally, the correlation coefficient can be very sensitive to outliers. This means that including or excluding even a couple of data points can alter your result, and it is not always clear whether these points contain information or are simply noise.\nAs an example, let's make the noise distribution poisson rather than normal and see what happens.",
"X = np.random.rand(100)\nY = X + np.random.poisson(size=100)\n\nplt.scatter(X, Y)\n\nnp.corrcoef(X, Y)[0, 1]",
"In conclusion, correlation is a powerful technique, but as always in statistics, one should be careful not to interpret results where there are none."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_sensors_decoding.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Decoding sensor space data\nDecoding, a.k.a. MVPA or supervised machine learning applied to MEG\ndata in sensor space. Here the classifier is applied to every time\npoint.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.cross_validation import StratifiedKFold\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.decoding import TimeDecoding, GeneralizationAcrossTime\n\ndata_path = sample.data_path()\n\nplt.close('all')",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.2, 0.5\nevent_id = dict(aud_l=1, vis_l=3)\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(2, None, method='iir') # replace baselining with high-pass\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=None, preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))\n\nepochs_list = [epochs[k] for k in event_id]\nmne.epochs.equalize_epoch_counts(epochs_list)\ndata_picks = mne.pick_types(epochs.info, meg=True, exclude='bads')",
"Temporal decoding\nWe'll use the default classifier for a binary classification problem,\nwhich is a linear Support Vector Machine (SVM).",
"td = TimeDecoding(predict_mode='cross-validation', n_jobs=1)\n\n# Fit\ntd.fit(epochs)\n\n# Compute accuracy\ntd.score(epochs)\n\n# Plot scores across time\ntd.plot(title='Sensor space decoding')",
"Generalization Across Time\nHere we'll use a stratified cross-validation scheme.",
"# make response vector\ny = np.zeros(len(epochs.events), dtype=int)\ny[epochs.events[:, 2] == 3] = 1\ncv = StratifiedKFold(y=y) # do a stratified cross-validation\n\n# define the GeneralizationAcrossTime object\ngat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1,\n cv=cv, scorer=roc_auc_score)\n\n# fit and score\ngat.fit(epochs, y=y)\ngat.score(epochs)\n\n# let's visualize now\ngat.plot()\ngat.plot_diagonal()",
"Exercise\n\nCan you improve the performance using full epochs and a common spatial\n pattern (CSP) used by most BCI systems?\nExplore other datasets from MNE (e.g. Face dataset from SPM to predict\n Face vs. Scrambled)\n\nHave a look at the example\n:ref:sphx_glr_auto_examples_decoding_plot_decoding_csp_space.py"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
alephcero/adsProject
|
olds/testeoModelos.ipynb
|
gpl-3.0
|
[
"import getEPH\nimport categorize\nimport schoolYears\nimport make_dummy\nimport functionsForModels\nimport pandas as pd\n#http://statsmodels.sourceforge.net/devel/examples/generated/example_wls.html\nimport numpy as np\nfrom scipy import stats\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom statsmodels.sandbox.regression.predstd import wls_prediction_std\nfrom statsmodels.iolib.table import (SimpleTable, default_txt_fmt)\nnp.random.seed(1024)\n%matplotlib inline",
"NOTAS\n\nCALCULAR EN BASE AL MODELO DE CURVAS LAS DERIVADAS Y DONDE HACE EL PICO EN EDAD\n\n\nDOWNLOAD DATA",
"#get data\ngetEPHdbf('t310')\n\ndata1 = pd.read_csv('data/cleanDatat310.csv')\n\ndata2 = categorize.categorize(data1)\ndata3 = schoolYears.schoolYears(data2)\ndata = make_dummy.make_dummy(data3)\n\ndataModel = functionsForModels.prepareDataForModel(data)\n\ndataModel.head()",
"NEW VARIABLES FOR MODEL\nGraficos exploratorios",
"fig = plt.figure(figsize=(16,12))\nax1 = fig.add_subplot(2,2,1)\nax2 = fig.add_subplot(2,2,2)\nax3 = fig.add_subplot(2,2,3)\nax4 = fig.add_subplot(2,2,4)\n\nax1.plot(dataModel.education,dataModel.P47T,'ro')\nax1.set_ylabel('Ingreso total')\nax1.set_xlabel('Educacion')\nax2.plot(dataModel.age,dataModel.P47T,'ro')\nax2.set_xlabel('Edad')\nax3.plot(dataModel.education,dataModel.P21,'bo')\nax3.set_ylabel('Ingreso Laboral')\nax3.set_xlabel('Educacion')\nax4.plot(dataModel.age,dataModel.P21,'bo')\nax4.set_xlabel('Edad')\n\nfig = plt.figure(figsize=(16,12))\nax1 = fig.add_subplot(2,2,1)\nax2 = fig.add_subplot(2,2,2)\nax3 = fig.add_subplot(2,2,3)\nax4 = fig.add_subplot(2,2,4)\n\n\nsns.kdeplot(dataModel.P47T,ax=ax1,color = 'red')\nsns.kdeplot(dataModel.lnIncomeT,ax=ax2,color = 'red')\nsns.kdeplot(dataModel.P21,ax=ax3)\nsns.kdeplot(dataModel.lnIncome,ax=ax4)\n\nprint 'mean:', dataModel.lnIncome.mean(), 'std:', dataModel.lnIncome.std()\n\nprint 'mean:', dataModel.P21.mean(), 'std:', dataModel.P21.std()\n\nplt.boxplot(list(dataModel.P21), 0, 'gD')",
"PLOTS FOR LnINCOME ~ EDUC AND AGE",
"g = sns.JointGrid(x=\"education\", y=\"lnIncome\", data=dataModel) \ng.plot_joint(sns.regplot, order=2) \ng.plot_marginals(sns.distplot)\n\ng2 = sns.JointGrid(x=\"age\", y=\"lnIncome\", data=dataModel) \ng2.plot_joint(sns.regplot, order=2) \ng2.plot_marginals(sns.distplot)",
"Modelos\nTomo el de mejor performance para evaluar en el test set. Basicamente son dos posibiliades INDEC o ALTERNATIVO (que habiamos propuesto no cortar las edades y los años de escolaridad, sino usar las variables y directamente usar el cuadrado). Cada uno lo pruebo con ingresos laborales (con y sin constante) y con el log del ingreso laboral.\n1 CEPAL con ingresos laborales",
"dataModel1 = runModel(dataModel, income = 'P21')",
"2 - CEPAL con Log ingresos laborales",
"dataModel2 = functionsForModels.runModel(dataModel, income = 'lnIncome', variables= [\n 'primary','secondary','university',\n 'male_14to24','male_25to34',\n 'female_14to24', 'female_25to34', 'female_35more'])",
"3 - CEPAL con ingresos totales",
"dataModel3 = functionsForModels.runModel(dataModel, income = 'P47T')",
"4 - CEPAL con Log ingresos totales",
"dataModel4 = functionsForModels.runModel(dataModel, income = 'lnIncomeT')",
"5 - ALTERNATIVO con Log ingresos totales",
"dataModel5 = functionsForModels.runModel(dataModel, income = 'lnIncomeT', variables=['education','education2',\n 'age','age2','female'])",
"6 - ALTERNATIVO con log Income laboral",
"dataModel6 = functionsForModels.runModel(dataModel, income = 'lnIncome', variables=['education','education2',\n 'age','age2','female'])"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
felipescobarv/notebooks
|
laplace/2D_Laplace_equation.ipynb
|
bsd-3-clause
|
[
"Relax and hold steady\nMany problems in physics have no time dependence, yet are rich with physical meaning: the gravitational field produced by a massive object, the electrostatic potential of a charge distribution, the displacement of a stretched membrane and the steady flow of fluid through a porous medium ... all these can be modeled by Poisson's equation:\n\\begin{equation}\n\\nabla^2 u = f\n\\end{equation}\nwhere the unknown $u$ and the known $f$ are functions of space, in a domain $\\Omega$. To find the solution, we require boundary conditions. These could be Dirichlet boundary conditions, specifying the value of the solution on the boundary,\n\\begin{equation}\nu = b_1 \\text{ on } \\partial\\Omega,\n\\end{equation}\nor Neumann boundary conditions, specifying the normal derivative of the solution on the boundary,\n\\begin{equation}\n\\frac{\\partial u}{\\partial n} = b_2 \\text{ on } \\partial\\Omega.\n\\end{equation}\nA boundary-value problem consists of finding $u$, given the above information. Numerically, we can do this using relaxation methods, which start with an initial guess for $u$ and then iterate towards the solution. Let's find out how!\nLaplace's equation\nThe particular case of $f=0$ (homogeneous case) results in Laplace's equation:\n\\begin{equation}\n\\nabla^2 u = 0\n\\end{equation}\nFor example, the equation for steady, two-dimensional heat conduction is:\n\\begin{equation}\n\\frac{\\partial ^2 T}{\\partial x^2} + \\frac{\\partial ^2 T}{\\partial y^2} = 0\n\\end{equation}\nwhere $T$ is a temperature that has reached steady state. The Laplace equation models the equilibrium state of a system under the supplied boundary conditions.\nThe study of solutions to Laplace's equation is called potential theory, and the solutions themselves are often potential fields. 
Let's use $p$ from now on to represent our generic dependent variable, and write Laplace's equation again (in two dimensions):\n\\begin{equation}\n\\frac{\\partial ^2 p}{\\partial x^2} + \\frac{\\partial ^2 p}{\\partial y^2} = 0\n\\end{equation}\nLike in the diffusion equation, we discretize the second-order derivatives with central differences. If you need to refresh your mind, check out this lesson and try to discretize the equation by yourself. On a two-dimensional Cartesian grid, it gives:\n\\begin{equation}\n\\frac{p_{i+1, j} - 2p_{i,j} + p_{i-1,j} }{\\Delta x^2} + \\frac{p_{i,j+1} - 2p_{i,j} + p_{i, j-1} }{\\Delta y^2} = 0\n\\end{equation}\nWhen $\\Delta x = \\Delta y$, we end up with the following equation:\n\\begin{equation}\np_{i+1, j} + p_{i-1,j} + p_{i,j+1} + p_{i, j-1}- 4 p_{i,j} = 0\n\\end{equation}\nThis tells us that the Laplacian differential operator at grid point $(i,j)$ can be evaluated discretely using the value of $p$ at that point (with a factor $-4$) and the four neighboring points to the left and right, above and below grid point $(i,j)$.\nThe stencil of the discrete Laplacian operator is shown in Figure 1. It is typically called the five-point stencil, for obvious reasons.\n<img src=\"./figures/laplace.svg\">\nFigure 1: Laplace five-point stencil.\nThe discrete equation above is valid for every interior point in the domain. If we write the equations for all interior points, we have a linear system of algebraic equations. We could solve the linear system directly (e.g., with Gaussian elimination), but we can be more clever than that!\nNotice that the coefficient matrix of such a linear system has mostly zeroes. For a uniform spatial grid, the matrix is block diagonal: it has diagonal blocks that are tridiagonal with $-4$ on the main diagonal and $1$ on two off-center diagonals, and two more diagonals with $1$. All of the other elements are zero. 
Iterative methods are particularly suited for a system with this structure, and save us from storing all those zeroes.\nWe will start with an initial guess for the solution, $p_{i,j}^{0}$, and use the discrete Laplacian to get an update, $p_{i,j}^{1}$, then continue on computing $p_{i,j}^{k}$ until we're happy. Note that $k$ is not a time index here, but an index corresponding to the number of iterations we perform in the relaxation scheme. \nAt each iteration, we compute updated values $p_{i,j}^{k+1}$ in a (hopefully) clever way so that they converge to a set of values satisfying Laplace's equation. The system will reach equilibrium only as the number of iterations tends to $\infty$, but we can approximate the equilibrium state by iterating until the change between one iteration and the next is very small. \nThe most intuitive method of iterative solution is known as the Jacobi method, in which the values at the grid points are replaced by the corresponding weighted averages:\n\begin{equation}\np^{k+1}_{i,j} = \frac{1}{4} \left(p^{k}_{i,j-1} + p^k_{i,j+1} + p^{k}_{i-1,j} + p^k_{i+1,j} \right)\n\end{equation}\nThis method does indeed converge to the solution of Laplace's equation. Thank you Professor Jacobi!\nChallenge task\nGrab a piece of paper and write out the coefficient matrix for a discretization with 7 grid points in the $x$ direction (5 interior points) and 5 points in the $y$ direction (3 interior). The system should have 15 unknowns, and the coefficient matrix three diagonal blocks. 
Assume prescribed Dirichlet boundary conditions on all sides (not necessarily zero).\nBoundary conditions and relaxation\nSuppose we want to model steady-state heat transfer on (say) a computer chip with one side insulated (zero Neumann BC), two sides held at a fixed temperature (Dirichlet condition) and one side touching a component that has a sinusoidal distribution of temperature.\nWe would need to solve Laplace's equation with boundary conditions like\n\\begin{equation}\n \\begin{gathered}\np=0 \\text{ at } x=0\\\n\\frac{\\partial p}{\\partial x} = 0 \\text{ at } x = L\\\np = 0 \\text{ at }y = 0 \\\np = \\sin \\left( \\frac{\\frac{3}{2}\\pi x}{L} \\right) \\text{ at } y = H.\n \\end{gathered}\n\\end{equation}\nWe'll take $L=1$ and $H=1$ for the sizes of the domain in the $x$ and $y$ directions.\nOne of the defining features of elliptic PDEs is that they are \"driven\" by the boundary conditions. In the iterative solution of Laplace's equation, boundary conditions are set and the solution relaxes from an initial guess to join the boundaries together smoothly, given those conditions. Our initial guess will be $p=0$ everywhere. Now, let's relax!\nFirst, we import our usual smattering of libraries (plus a few new ones!)",
"from matplotlib import pyplot\nimport numpy\n%matplotlib inline\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16",
"To visualize 2D data, we can use pyplot.imshow(), but a 3D plot can sometimes show a more intuitive view the solution. Or it's just prettier!\nBe sure to enjoy the many examples of 3D plots in the mplot3d section of the Matplotlib Gallery. \nWe'll import the Axes3D library from Matplotlib and also grab the cm package, which provides different colormaps for visualizing plots.",
"from mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm",
"Let's define a function for setting up our plotting environment, to avoid repeating this set-up over and over again. It will save us some typing.",
"def plot_3D(x, y, p):\n '''Creates 3D plot with appropriate limits and viewing angle\n \n Parameters:\n ----------\n x: array of float\n nodal coordinates in x\n y: array of float\n nodal coordinates in y\n p: 2D array of float\n calculated potential field\n \n '''\n fig = pyplot.figure(figsize=(11,7), dpi=100)\n ax = fig.gca(projection='3d')\n X,Y = numpy.meshgrid(x,y)\n surf = ax.plot_surface(X,Y,p[:], rstride=1, cstride=1, cmap=cm.viridis,\n linewidth=0, antialiased=False)\n\n ax.set_xlim(0,1)\n ax.set_ylim(0,1)\n ax.set_xlabel('$x$')\n ax.set_ylabel('$y$')\n ax.set_zlabel('$z$')\n ax.view_init(30,45)\n",
"Note\nThis plotting function uses Viridis, a new (and awesome) colormap available in Matplotlib versions 1.5 and greater. If you see an error when you try to plot using <tt>cm.viridis</tt>, just update Matplotlib using <tt>conda</tt> or <tt>pip</tt>.\nAnalytical solution\nThe Laplace equation with the boundary conditions listed above has an analytical solution, given by\n\\begin{equation}\np(x,y) = \\frac{\\sinh \\left( \\frac{\\frac{3}{2} \\pi y}{L}\\right)}{\\sinh \\left( \\frac{\\frac{3}{2} \\pi H}{L}\\right)} \\sin \\left( \\frac{\\frac{3}{2} \\pi x}{L} \\right)\n\\end{equation}\nwhere $L$ and $H$ are the length of the domain in the $x$ and $y$ directions, respectively.\nWe will use numpy.meshgrid to plot our 2D solutions. This is a function that takes two vectors (x and y, say) and returns two 2D arrays of $x$ and $y$ coordinates that we then use to create the contour plot. Always useful, linspace creates 1-row arrays of equally spaced numbers: it helps for defining $x$ and $y$ axes in line plots, but now we want the analytical solution plotted for every point in our domain. To do this, we'll use in the analytical solution the 2D arrays generated by numpy.meshgrid.",
"def p_analytical(x, y):\n X, Y = numpy.meshgrid(x,y)\n \n p_an = numpy.sinh(1.5*numpy.pi*Y / x[-1]) /\\\n (numpy.sinh(1.5*numpy.pi*y[-1]/x[-1]))*numpy.sin(1.5*numpy.pi*X/x[-1])\n \n return p_an",
"Ok, let's try out the analytical solution and use it to test the plot_3D function we wrote above.",
"nx = 41\nny = 41\n\nx = numpy.linspace(0,1,nx)\ny = numpy.linspace(0,1,ny)\n\np_an = p_analytical(x,y)\n\nplot_3D(x,y,p_an)",
"It worked! This is what the solution should look like when we're 'done' relaxing. (And isn't viridis a cool colormap?) \nHow long do we iterate?\nWe noted above that there is no time dependence in the Laplace equation. So it doesn't make a lot of sense to use a for loop with nt iterations.\nInstead, we can use a while loop that continues to iteratively apply the relaxation scheme until the difference between two successive iterations is small enough. \nBut how small is small enough? That's a good question. We'll try to work that out as we go along. \nTo compare two successive potential fields, a good option is to use the L2 norm of the difference. It's defined as\n\\begin{equation}\n|\\textbf{x}| = \\sqrt{\\sum_{i=0, j=0}^k \\left|p^{k+1}{i,j} - p^k{i,j}\\right|^2}\n\\end{equation}\nBut there's one flaw with this formula. We are summing the difference between successive iterations at each point on the grid. So what happens when the grid grows? (For example, if we're refining the grid, for whatever reason.) There will be more grid points to compare and so more contributions to the sum. The norm will be a larger number just because of the grid size!\nThat doesn't seem right. We'll fix it by normalizing the norm, dividing the above formula by the norm of the potential field at iteration $k$. \nFor two successive iterations, the relative L2 norm is then calculated as\n\\begin{equation}\n|\\textbf{x}| = \\frac{\\sqrt{\\sum_{i=0, j=0}^k \\left|p^{k+1}{i,j} - p^k{i,j}\\right|^2}}{\\sqrt{\\sum_{i=0, j=0}^k \\left|p^k_{i,j}\\right|^2}}\n\\end{equation}\nOur Python code for this calculation is a one-line function:",
"def L2_error(p, pn):\n return numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))",
"Now, let's define a function that will apply Jacobi's method for Laplace's equation. Three of the boundaries are Dirichlet boundaries and so we can simply leave them alone. Only the Neumann boundary needs to be explicitly calculated at each iteration, and we'll do it by discretizing the derivative in its first order approximation:",
"def laplace2d(p, l2_target):\n '''Iteratively solves the Laplace equation using the Jacobi method\n \n Parameters:\n ----------\n p: 2D array of float\n Initial potential distribution\n l2_target: float\n target for the difference between consecutive solutions\n \n Returns:\n -------\n p: 2D array of float\n Potential distribution after relaxation\n '''\n \n l2norm = 1\n pn = numpy.empty_like(p)\n while l2norm > l2_target:\n pn = p.copy()\n p[1:-1,1:-1] = .25 * (pn[1:-1,2:] + pn[1:-1, :-2] \\\n + pn[2:, 1:-1] + pn[:-2, 1:-1])\n \n ##Neumann B.C. along x = L\n p[1:-1, -1] = p[1:-1, -2] # 1st order approx of a derivative \n l2norm = L2_error(p, pn)\n \n return p",
"Rows and columns, and index order\nThe physical problem has two dimensions, so we also store the temperatures in two dimensions: in a 2D array. \nWe chose to store it with the $y$ coordinates corresponding to the rows of the array and $x$ coordinates varying with the columns (this is just a code design decision!). If we are consistent with the stencil formula (with $x$ corresponding to index $i$ and $y$ to index $j$), then $p_{i,j}$ will be stored in array format as p[j,i].\nThis might be a little confusing as most of us are used to writing coordinates in the format $(x,y)$, but our preference is to have the data stored so that it matches the physical orientation of the problem. Then, when we make a plot of the solution, the visualization will make sense to us, with respect to the geometry of our set-up. That's just nicer than to have the plot rotated!\n<img src=\"./figures/rowcolumn.svg\" width=\"400px\">\nFigure 2: Row-column data storage\nAs you can see on Figure 2 above, if we want to access the value $18$ we would write those coordinates as $(x_2, y_3)$. You can also see that its location is the 3rd row, 2nd column, so its array address would be p[3,2].\nAgain, this is a design decision. However you can choose to manipulate and store your data however you like; just remember to be consistent!\nLet's relax!\nThe initial values of the potential field are zero everywhere (initial guess), except at the boundary: \n$$p = \\sin \\left( \\frac{\\frac{3}{2}\\pi x}{L} \\right) \\text{ at } y=H$$\nTo initialize the domain, numpy.zeros will handle everything except that one Dirichlet condition. Let's do it!",
"##variable declarations\nnx = 41\nny = 41\n\n\n##initial conditions\np = numpy.zeros((ny,nx)) ##create a XxY vector of 0's\n\n\n##plotting aids\nx = numpy.linspace(0,1,nx)\ny = numpy.linspace(0,1,ny)\n\n##Dirichlet boundary conditions\np[-1,:] = numpy.sin(1.5*numpy.pi*x/x[-1])\n",
"Now let's visualize the initial conditions using the plot_3D function, just to check we've got it right.",
"plot_3D(x, y, p)",
"The p array is equal to zero everywhere, except along the boundary $y = 1$. Hopefully you can see how the relaxed solution and this initial condition are related. \nNow, run the iterative solver with a target L2-norm difference between successive iterations of $10^{-8}$.",
"p = laplace2d(p.copy(), 1e-8)",
"Let's make a gorgeous plot of the final field using the newly minted plot_3D function.",
"plot_3D(x,y,p)",
"Awesome! That looks pretty good. But we'll need more than a simple visual check, though. The \"eyeball metric\" is very forgiving!\nConvergence analysis\nConvergence, Take 1\nWe want to make sure that our Jacobi function is working properly. Since we have an analytical solution, what better way than to do a grid-convergence analysis? We will run our solver for several grid sizes and look at how fast the L2 norm of the difference between consecutive iterations decreases.\nLet's make our lives easier by writing a function to \"reset\" the initial guess for each grid so we don't have to keep copying and pasting them.",
"def laplace_IG(nx):\n '''Generates initial guess for Laplace 2D problem for a \n given number of grid points (nx) within the domain [0,1]x[0,1]\n \n Parameters:\n ----------\n nx: int\n number of grid points in x (and implicitly y) direction\n \n Returns:\n -------\n p: 2D array of float\n Pressure distribution after relaxation\n x: array of float\n linspace coordinates in x\n y: array of float\n linspace coordinates in y\n '''\n\n ##initial conditions\n p = numpy.zeros((nx,nx)) ##create a XxY vector of 0's\n\n ##plotting aids\n x = numpy.linspace(0,1,nx)\n y = x\n\n ##Dirichlet boundary conditions\n p[:,0] = 0\n p[0,:] = 0\n p[-1,:] = numpy.sin(1.5*numpy.pi*x/x[-1])\n \n return p, x, y",
"Now run Jacobi's method on the Laplace equation using four different grids, with the same exit criterion of $10^{-8}$ each time. Then, we look at the error versus the grid size in a log-log plot. What do we get?",
"nx_values = [11, 21, 41, 81]\nl2_target = 1e-8\n\nerror = numpy.empty_like(nx_values, dtype=numpy.float)\n\nfor i, nx in enumerate(nx_values):\n p, x, y = laplace_IG(nx)\n \n p = laplace2d(p.copy(), l2_target)\n \n p_an = p_analytical(x, y)\n \n error[i] = L2_error(p, p_an)\n \n\npyplot.figure(figsize=(6,6))\npyplot.grid(True)\npyplot.xlabel(r'$n_x$', fontsize=18)\npyplot.ylabel(r'$L_2$-norm of the error', fontsize=18)\n\npyplot.loglog(nx_values, error, color='k', ls='--', lw=2, marker='o')\npyplot.axis('equal');",
"Hmm. That doesn't look like 2nd-order convergence, but we're using second-order finite differences. What's going on? The culprit is the boundary conditions. Dirichlet conditions are order-agnostic (a set value is a set value), but the scheme we used for the Neumann boundary condition is 1st-order. \nRemember when we said that the boundaries drive the problem? One boundary that's 1st-order completely tanked our spatial convergence. Let's fix it!\n2nd-order Neumann BCs\nUp to this point, we have used the first-order approximation of a derivative to satisfy Neumann B.C.'s. For a boundary located at $x=0$ this reads,\n\\begin{equation}\n\\frac{p^{k+1}{1,j} - p^{k+1}{0,j}}{\\Delta x} = 0\n\\end{equation}\nwhich, solving for $p^{k+1}_{0,j}$ gives us\n\\begin{equation}\np^{k+1}{0,j} = p^{k+1}{1,j}\n\\end{equation}\nUsing that Neumann condition will limit us to 1st-order convergence. Instead, we can start with a 2nd-order approximation (the central-difference approximation):\n\\begin{equation}\n\\frac{p^{k+1}{1,j} - p^{k+1}{-1,j}}{2 \\Delta x} = 0\n\\end{equation}\nThat seems problematic, since there is no grid point $p^{k}_{-1,j}$. But no matter … let's carry on. According to the 2nd-order approximation,\n\\begin{equation}\np^{k+1}{-1,j} = p^{k+1}{1,j}\n\\end{equation}\nRecall the finite-difference Jacobi equation with $i=0$:\n\\begin{equation}\np^{k+1}{0,j} = \\frac{1}{4} \\left(p^{k}{0,j-1} + p^k_{0,j+1} + p^{k}{-1,j} + p^k{1,j} \\right)\n\\end{equation}\nNotice that the equation relies on the troublesome (nonexistent) point $p^k_{-1,j}$, but according to the equality just above, we have a value we can substitute, namely $p^k_{1,j}$. Ah! 
We've completed the 2nd-order Neumann condition:\n\begin{equation}\np^{k+1}_{0,j} = \frac{1}{4} \left(p^{k}_{0,j-1} + p^k_{0,j+1} + 2p^{k}_{1,j} \right)\n\end{equation}\nThat's a bit more complicated than the first-order version, but it's relatively straightforward to code.\nNote\nDo not confuse $p^{k+1}_{-1,j}$ with <tt>p[-1]</tt>:\n<tt>p[-1]</tt> is a piece of Python code used to refer to the last element of a list or array named <tt>p</tt>. $p^{k+1}_{-1,j}$ is a 'ghost' point that describes a position that lies outside the actual domain.\nConvergence, Take 2\nWe can copy the previous Jacobi function and replace only the line implementing the Neumann boundary condition. \nCareful!\nRemember that our problem has the Neumann boundary located at $x = L$ and not $x = 0$ as we assumed in the derivation above.",
"def laplace2d_neumann(p, l2_target):\n '''Iteratively solves the Laplace equation using the Jacobi method\n with second-order Neumann boundary conditions\n \n Parameters:\n ----------\n p: 2D array of float\n Initial potential distribution\n l2_target: float\n target for the difference between consecutive solutions\n \n Returns:\n -------\n p: 2D array of float\n Potential distribution after relaxation\n '''\n \n l2norm = 1\n pn = numpy.empty_like(p)\n while l2norm > l2_target:\n pn = p.copy()\n p[1:-1,1:-1] = .25 * (pn[1:-1,2:] + pn[1:-1, :-2] \\\n + pn[2:, 1:-1] + pn[:-2, 1:-1])\n \n ##2nd-order Neumann B.C. along x = L\n p[1:-1,-1] = .25 * (2*pn[1:-1,-2] + pn[2:, -1] + pn[:-2, -1])\n \n l2norm = L2_error(p, pn)\n \n return p",
"Again, this is the exact same code as before, but now we're running the Jacobi solver with a 2nd-order Neumann boundary condition. Let's do a grid-refinement analysis, and plot the error versus the grid spacing.",
"nx_values = [11, 21, 41, 81]\nl2_target = 1e-8\n\nerror = numpy.empty_like(nx_values, dtype=numpy.float)\n\n\nfor i, nx in enumerate(nx_values):\n p, x, y = laplace_IG(nx)\n \n p = laplace2d_neumann(p.copy(), l2_target)\n \n p_an = p_analytical(x, y)\n \n error[i] = L2_error(p, p_an)\n\npyplot.figure(figsize=(6,6))\npyplot.grid(True)\npyplot.xlabel(r'$n_x$', fontsize=18)\npyplot.ylabel(r'$L_2$-norm of the error', fontsize=18)\n\npyplot.loglog(nx_values, error, color='k', ls='--', lw=2, marker='o')\npyplot.axis('equal');",
"Nice! That's much better. It might not be exactly 2nd-order, but it's awfully close. (What is \"close enough\" in regards to observed convergence rates is a thorny question.)\nNow, notice from this plot that the error on the finest grid is around $0.0002$. Given this, perhaps we didn't need to continue iterating until a target difference between two solutions of $10^{-8}$. The spatial accuracy of the finite difference approximation is much worse than that! But we didn't know it ahead of time, did we? That's the \"catch 22\" of iterative solution of systems arising from discretization of PDEs.\nFinal word\nThe Jacobi method is the simplest relaxation scheme to explain and to apply. It is also the worst iterative solver! In practice, it is seldom used on its own as a solver, although it is useful as a smoother with multi-grid methods. There are much better iterative methods! If you are curious you can find them in this lesson.",
"#Ignore this cell, It simply loads a style for the notebook.\n\nfrom IPython.core.display import HTML\ndef css_styling():\n try:\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\n except:\n pass\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
abhay1/tf_rundown
|
notebooks/Logistic Regression.ipynb
|
mit
|
[
"Classification - MNIST dataset\n\nExploring the popular MNIST dataset. \nTensorflow provides a function to ingest the data.",
"# Necessary imports\nimport time\nfrom IPython import display\n\nimport numpy as np\nfrom matplotlib.pyplot import imshow\nfrom PIL import Image, ImageOps\nimport tensorflow as tf\n\n%matplotlib inline\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\n# Read the mnist dataset\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)",
"A little exploration",
"# Explore mnist\nprint(\"Shape of MNIST Images.\\nShape = (num_examples * num_features/pixels)\\n\")\nprint(\"Train : \", mnist.train.images.shape)\nprint(\"Validation : \", mnist.validation.images.shape)\nprint(\"Train : \", mnist.test.images.shape)\nprint(\"-\"*25)\nprint(\"Shape of MNIST Labels.\\nShape = (num_examples * num_labels/classes)\\n\")\nprint(\"Train : \", mnist.train.labels.shape)\nprint(\"Validation : \", mnist.validation.labels.shape)\nprint(\"Train : \", mnist.test.labels.shape)",
"Lets look at a random image and its label",
"# Pull out a random image & its label\nrandom_image_index = 200\nrandom_image = mnist.train.images[random_image_index]\nrandom_image_label = mnist.train.labels[random_image_index]\n\n# Print the label and the image as grayscale\nprint(\"Image label: %d\"%(random_image_label.argmax()))\npil_image = Image.fromarray(((random_image.reshape(28,28)) * 256).astype('uint8'), \"L\")\nimshow(ImageOps.invert(pil_image), cmap='gray')",
"Logistic Regression - Softmax\n\nNow let's build a softmax classifier (linear) to classify MNIST images. We will use Mini-batch gradient descent for optimization\nFirst, declare some of the hyperparameters that will be used by our softmax",
"# Softmax hyperparameters\nlearning_rate = 0.5\ntraining_epochs = 5\nbatch_size = 100",
"Step 1: Create placeholders to hold the images. \n\nUsing None for a dimension in shape means it can be any number.",
"# Create placeholders\nx = tf.placeholder(tf.float32, shape=(None, 784))\ny = tf.placeholder(tf.float32, shape=(None, 10))",
"Step 2: Create variables to hold the weight matrix and the bias vector",
"# Model parameters that have to be learned\n# Initialize with zeros\nW = tf.Variable(tf.zeros([784, 10]))\nb = tf.Variable(tf.zeros([10]))",
"Step 3: Lets compute the label distribution. Apply the linear function W * X + b for each of the 10 classes. Then apply the softmax function to get a probability distribution of likelihoods for classes.\n Recall that softmax(x) = exp(x)/ sum_i(exp(i)) where i represents each class\n\n\nStep 4: Compute the loss function as the cross entropy between the predicted distribution of the labels and its true distribution.\n Cross entropy H(Y) = - sum_i( true_dist(i) * log (computed_dist(i))",
"# Get all the logits i.e. W * X + b for each of the class\nlogits = tf.matmul(x, W) + b\n# Take a softmax of the logits. \ny_predicted = tf.nn.softmax(logits)\n\n# Make sure you reduce the sum across columns.\n# The y_predicted has a shape of number_of_examples * 10\n# Cross entropy should first sum across columns to get individual cost and then average this error over all examples\ncross_entropy_loss = tf.reduce_mean(- tf.reduce_sum(y * tf.log(y_predicted ), axis=1))\n\n# This can apparently be numerically unstable. \n# Tensorflow provides a function that computes the logits, applies softmax and computes the cross entropy\n# The example above is split only for pedagogical purposes\n# logits = tf.matmul(x, W) + b\n# cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits)) ",
"Step 5: Lets create a gradient descent optimizer to minimize the cross entropy loss",
"# Create an optimizer with the learning rate\noptimizer = tf.train.GradientDescentOptimizer(learning_rate)\n# Use the optimizer to minimize the loss\ntrain_step = optimizer.minimize(cross_entropy_loss)",
"Step 6: Lets compute the accuracy",
"# First create the correct prediction by taking the maximum value from the prediction class\n# and checking it with the actual class. The result is a boolean column vector\ncorrect_predictions = tf.equal(tf.argmax(y_predicted, 1), tf.argmax(y, 1))\n# Calculate the accuracy over all the images\n# Cast the boolean vector into float (1s & 0s) and then compute the average. \naccuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))",
"Now lets run our graph as usual",
"# Initializing global variables\ninit = tf.global_variables_initializer()\n\n# Create a session to run the graph\nwith tf.Session() as sess:\n # Run initialization\n sess.run(init)\n\n # For the set number of epochs\n for epoch in range(training_epochs):\n \n # Compute the total number of batches\n num_batches = int(mnist.train.num_examples/batch_size)\n \n # Iterate over all the examples (1 epoch)\n for batch_num in range(num_batches):\n \n # Get a batch of examples\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # Now run the session \n curr_loss, cur_accuracy, _ = sess.run([cross_entropy_loss, accuracy, train_step], \n feed_dict={x: batch_xs, y: batch_ys})\n \n if batch_num % 50 == 0:\n display.clear_output(wait=True)\n time.sleep(0.1)\n # Print the loss\n print(\"Epoch: %d/%d. Batch #: %d/%d. Current loss: %.5f. Train Accuracy: %.2f\"\n %(epoch, training_epochs, batch_num, num_batches, curr_loss, cur_accuracy))\n\n # Run the session to compute the value and print it\n test_accuracy = sess.run(accuracy,\n feed_dict={x: mnist.test.images, \n y: mnist.test.labels})\n print(\"Test Accuracy: %.2f\"%test_accuracy)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
icrtiou/coursera-ML
|
ex8-anomaly detection and recommendation/1- Anomaly detection.ipynb
|
mit
|
[
"note:\n\ncovariance matrix\nmultivariate_normal\nseaborn bivariate kernel density estimate",
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(context=\"notebook\", style=\"white\", palette=sns.color_palette(\"RdBu\"))\n\nimport numpy as np\nimport pandas as pd\nimport scipy.io as sio\nfrom scipy import stats\n\nimport sys\nsys.path.append('..')\n\nfrom helper import anomaly\n\nfrom sklearn.cross_validation import train_test_split",
"You want to divide data into 3 set. \n1. Training set\n2. Cross Validation set\n3. Test set. \nYou shouldn't be doing prediction using training data or Validation data as it does in the exercise.",
"mat = sio.loadmat('./data/ex8data1.mat')\nmat.keys()\n\nX = mat.get('X')",
"divide original validation data into validation and test set",
"Xval, Xtest, yval, ytest = train_test_split(mat.get('Xval'),\n mat.get('yval').ravel(),\n test_size=0.5)",
"Visualize training data",
"sns.regplot('Latency', 'Throughput',\n data=pd.DataFrame(X, columns=['Latency', 'Throughput']), \n fit_reg=False,\n scatter_kws={\"s\":20,\n \"alpha\":0.5})",
"estimate multivariate Gaussian parameters $\\mu$ and $\\sigma^2$\n\naccording to data, X1, and X2 is not independent",
"mu = X.mean(axis=0)\nprint(mu, '\\n')\n\ncov = np.cov(X.T)\nprint(cov)\n\n# example of creating 2d grid to calculate probability density\nnp.dstack(np.mgrid[0:3,0:3])\n\n# create multi-var Gaussian model\nmulti_normal = stats.multivariate_normal(mu, cov)\n\n# create a grid\nx, y = np.mgrid[0:30:0.01, 0:30:0.01]\npos = np.dstack((x, y))\n\nfig, ax = plt.subplots()\n\n# plot probability density\nax.contourf(x, y, multi_normal.pdf(pos), cmap='Blues')\n\n# plot original data points\nsns.regplot('Latency', 'Throughput',\n data=pd.DataFrame(X, columns=['Latency', 'Throughput']), \n fit_reg=False,\n ax=ax,\n scatter_kws={\"s\":10,\n \"alpha\":0.4})",
"select threshold $\\epsilon$\n\nuse training set $X$ to model the multivariate Gaussian\nuse cross validation set $(Xval, yval)$ to find the best $\\epsilon$ by finding the best F-score\n\n<img style=\"float: left;\" src=\"../img/f1_score.png\">",
"e, fs = anomaly.select_threshold(X, Xval, yval)\nprint('Best epsilon: {}\\nBest F-score on validation data: {}'.format(e, fs))",
"visualize prediction of Xval using learned $\\epsilon$\n\nuse CV data to find the best $\\epsilon$\nuse all data (training + validation) to create model\ndo the prediction on test data",
"multi_normal, y_pred = anomaly.predict(X, Xval, e, Xtest, ytest)\n\n# construct test DataFrame\ndata = pd.DataFrame(Xtest, columns=['Latency', 'Throughput'])\ndata['y_pred'] = y_pred\n\n# create a grid for graphing\nx, y = np.mgrid[0:30:0.01, 0:30:0.01]\npos = np.dstack((x, y))\n\nfig, ax = plt.subplots()\n\n# plot probability density\nax.contourf(x, y, multi_normal.pdf(pos), cmap='Blues')\n\n# plot original Xval points\nsns.regplot('Latency', 'Throughput',\n data=data,\n fit_reg=False,\n ax=ax,\n scatter_kws={\"s\":10,\n \"alpha\":0.4})\n\n# mark the predicted anamoly of CV data. We should have a test set for this...\nanamoly_data = data[data['y_pred']==1]\nax.scatter(anamoly_data['Latency'], anamoly_data['Throughput'], marker='x', s=50)",
"high dimension data",
"mat = sio.loadmat('./data/ex8data2.mat')\n\nX = mat.get('X')\nXval, Xtest, yval, ytest = train_test_split(mat.get('Xval'),\n mat.get('yval').ravel(),\n test_size=0.5)\n\ne, fs = anomaly.select_threshold(X, Xval, yval)\nprint('Best epsilon: {}\\nBest F-score on validation data: {}'.format(e, fs))\n\nmulti_normal, y_pred = anomaly.predict(X, Xval, e, Xtest, ytest)\n\nprint('find {} anamolies'.format(y_pred.sum()))",
"The huge difference between my result, and the official 117 anamolies in the ex8 is due to:\n1. my use of multivariate Gaussian\n2. I split data very differently"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.18/_downloads/e327516e16a734ff2984dcae335b2433/plot_left_cerebellum_volume_source.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Generate a left cerebellum volume source space\nGenerate a volume source space of the left cerebellum and plot its vertices\nrelative to the left cortical surface source space and the freesurfer\nsegmentation file.",
"# Author: Alan Leggitt <alan.leggitt@ucsf.edu>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nfrom scipy.spatial import ConvexHull\nfrom mayavi import mlab\nfrom mne import setup_source_space, setup_volume_source_space\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nsubj = 'sample'\naseg_fname = subjects_dir + '/sample/mri/aseg.mgz'",
"Setup the source spaces",
"# setup a cortical surface source space and extract left hemisphere\nsurf = setup_source_space(subj, subjects_dir=subjects_dir, add_dist=False)\nlh_surf = surf[0]\n\n# setup a volume source space of the left cerebellum cortex\nvolume_label = 'Left-Cerebellum-Cortex'\nsphere = (0, 0, 0, 120)\nlh_cereb = setup_volume_source_space(subj, mri=aseg_fname, sphere=sphere,\n volume_label=volume_label,\n subjects_dir=subjects_dir)",
"Plot the positions of each source space",
"# extract left cortical surface vertices, triangle faces, and surface normals\nx1, y1, z1 = lh_surf['rr'].T\nfaces = lh_surf['use_tris']\nnormals = lh_surf['nn']\n# normalize for mayavi\nnormals /= np.sum(normals * normals, axis=1)[:, np.newaxis]\n\n# extract left cerebellum cortex source positions\nx2, y2, z2 = lh_cereb[0]['rr'][lh_cereb[0]['inuse'].astype(bool)].T\n\n# open a 3d figure in mayavi\nmlab.figure(1, bgcolor=(0, 0, 0))\n\n# plot the left cortical surface\nmesh = mlab.pipeline.triangular_mesh_source(x1, y1, z1, faces)\nmesh.data.point_data.normals = normals\nmlab.pipeline.surface(mesh, color=3 * (0.7,))\n\n# plot the convex hull bounding the left cerebellum\nhull = ConvexHull(np.c_[x2, y2, z2])\nmlab.triangular_mesh(x2, y2, z2, hull.simplices, color=3 * (0.5,), opacity=0.3)\n\n# plot the left cerebellum sources\nmlab.points3d(x2, y2, z2, color=(1, 1, 0), scale_factor=0.001)\n\n# adjust view parameters\nmlab.view(173.78, 101.75, 0.30, np.array([-0.03, -0.01, 0.03]))\nmlab.roll(85)",
"Compare volume source locations to segmentation file in freeview",
"# Export source positions to nift file\nnii_fname = data_path + '/MEG/sample/mne_sample_lh-cerebellum-cortex.nii'\n\n# Combine the source spaces\nsrc = surf + lh_cereb\n\nsrc.export_volume(nii_fname, mri_resolution=True)\n\n# Uncomment the following lines to display source positions in freeview.\n'''\n# display image in freeview\nfrom mne.utils import run_subprocess\nmri_fname = subjects_dir + '/sample/mri/brain.mgz'\nrun_subprocess(['freeview', '-v', mri_fname, '-v',\n '%s:colormap=lut:opacity=0.5' % aseg_fname, '-v',\n '%s:colormap=jet:colorscale=0,2' % nii_fname, '-slice',\n '157 75 105'])\n'''"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
materialsvirtuallab/megnet
|
multifidelity/codes_for_plots/Figure4/plot_figure4.ipynb
|
bsd-3-clause
|
[
"Literature results\nBand gap engineering in amorphous $Al_xGa_{1-x}N$ Experiment and ab initio calculations, Appl. Phys. Lett. 77, 1117 (2000)",
"import matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n%matplotlib inline\n\nxs = [0.0,\n 0.3305234864554154,\n 0.5015690020887643,\n 0.5719846500105247,\n 0.6616169303259445,\n 0.7943943392865815,\n 1.0]\n\nexp = [3.27,\n 3.973509933774835,\n 4.56953642384106,\n 4.56953642384106,\n 4.668874172185431,\n 5.178807947019868,\n 5.95]\n\npredicted = [\n [3.3234272, 3.443913, 2.8061478, 3.6181135, 2.7264364, 3.5291746],\n [3.3851593, 4.1667223, 3.6109998, 4.2762685, 3.2637808, 3.9524834],\n [3.7519681, 4.5286236, 4.0720024, 4.8389845, 3.7563384, 4.3596983],\n [3.9960487, 4.735317, 4.2778697, 5.0086565, 3.9825325, 4.511471],\n [4.405121, 5.0085635, 4.5971274, 5.240199, 4.266697, 4.6896057],\n [5.000762, 5.3106785, 5.1751385, 5.5603375, 4.6557055, 4.8134403],\n [5.612711, 5.6376424, 6.206752, 5.748086, 5.1321826, 3.567781]]\n\n\n#predicted = [3.241202, 3.7759016, 4.2179356, 4.418649, 4.701219, 5.0860105, 5.3175263]\n#errors = [0.34811336, 0.38219276, 0.39871, 0.37541276, 0.334767, 0.3028004, 0.84278023]\n\nplt.rcParams['font.size'] = 22\nplt.rcParams['font.family'] = 'Arial'\nplt.figure(figsize=(5.8, 5))\n\ncolor = '#F67F12'\n\n# plot. 
Set color of marker edge\nflierprops = dict(marker='o', markerfacecolor=color, markersize=6,\n linestyle='none', markeredgecolor=color)\n\nbox = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.1, flierprops=flierprops, patch_artist=True,\n boxprops=dict(facecolor=color, color='k'))\nfor i in box['boxes']:\n plt.setp(i, zorder=0)\n \nfor i in box['medians']:\n plt.setp(i, color='w')\nh, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment')\n\n\nplt.xlabel('$x$ in $\\mathregular{Al_{x}Ga_{1-x}N}$')\nplt.ylabel('$E_g$ (eV)')\nplt.legend([ h, box['boxes'][1]], ['Experiment', 'Model'], frameon=False)\nplt.xlim([-0.1, 1.1])\nplt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0], [0, 0.2, 0.4, 0.6, 0.8, 1.0])\nplt.yticks([3.0, 4.0, 5.0, 6.0], [3.0, 4.0, 5.0, 6.0])\nplt.tight_layout()\n\nplt.savefig('AlxGa1-xN.pdf')",
"Band gap engineering of mixed Cd(1-x)Zn (x) Se thin films, J. Alloys Compd., 703, 40-44, (2017)",
"xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]\nexp = [1.67, 1.82, 2.03, 2.2, 2.35, 2.6] \npredicted = [[1.8370335, 1.8265239, 1.84641, 1.7628204, 1.6473178, 2.057174],\n [2.0188313, 1.7471551, 1.84461, 1.8407636, 1.6486685, 2.1046824],\n [2.296093, 1.7560242, 1.9315724, 2.1663737, 2.0004115, 2.1264758],\n [2.4811623, 2.1458392, 2.4289386, 2.3242218, 2.259788, 2.2726748],\n [2.2881293, 2.6813347, 2.914834, 2.524885, 2.53475, 2.9929655],\n [2.343427, 2.7570426, 2.9132087, 2.7734382, 2.847426, 2.5522652]]\n\nplt.figure(figsize=(6, 5))\n# plot. Set color of marker edge\nflierprops = dict(marker='o', markerfacecolor=color, markersize=6,\n linestyle='none', markeredgecolor=color)\n\nbox = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.1, flierprops=flierprops, patch_artist=True,\n boxprops=dict(facecolor=color, color='k'))\nfor i in box['boxes']:\n plt.setp(i, zorder=0)\n \nfor i in box['medians']:\n plt.setp(i, color='w')\nh, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment')\nplt.xlim([-0.1, 1.1])\nplt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0], [0, 0.2, 0.4, 0.6, 0.8, 1.0])\nplt.xlabel('$x$ in $\\mathregular{Cd_{1-x}Zn_xSe}$')\nplt.ylabel('$E_g$ (eV)')\n\nplt.tight_layout()\nplt.savefig('Cd1-xZnxSe.pdf')",
"Band gap engineering of ZnO by doping with Mg, Phys. Scr. 90(8), 085502, (2015)",
"xs = [0.0, 0.01, 0.04, 0.08, 0.12, 0.16]\n\nexp = [3.1389807162534433,\n 3.1685950413223143,\n 3.229201101928375,\n 3.3201101928374657,\n 3.36900826446281,\n 3.4413223140495868]\n\npredicted = [[3.0789735, 3.5106642, 3.297291, 3.0242383, 3.6874614, 3.4847476],\n [3.3934112, 3.4003148, 3.426836, 3.158033, 3.5232787, 3.4476407],\n [3.441858, 3.4407954, 3.8043046, 3.2040539, 3.5728483, 3.547456],\n [3.4638472, 3.4949489, 4.0739813, 3.2903996, 3.6263444, 3.6790297],\n [3.4503715, 3.5805423, 3.5919178, 3.4007187, 3.655845, 3.8145654],\n [3.4545772, 3.717163, 2.974183, 3.5161405, 3.6517656, 3.9443142]]\n\n\nplt.figure(figsize=(5.8, 5))\nflierprops = dict(marker='o', markerfacecolor=color, markersize=6,\n linestyle='none', markeredgecolor=color)\n\nbox = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.02, flierprops=flierprops, patch_artist=True,\n boxprops=dict(facecolor=color, color='k'))\n\nfor i in box['boxes']:\n plt.setp(i, zorder=0)\n \nfor i in box['medians']:\n plt.setp(i, color='w')\nh, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment')\n\n\nplt.xlabel('$x$ in $\\mathregular{Zn_{1-x}Mg_xO}$')\nplt.ylabel('$E_g$ (eV)')\n# plt.legend(frameon=False)\nplt.xticks([0, 0.05, 0.1, 0.15], [0, 0.05, 0.1, 0.15])\nplt.xlim([-0.016, 0.177])\nplt.yticks([3, 3.5, 4], [3.0, 3.5, 4.0])\n#plt.yticks([0., 0.5, 1, 1.5])\nplt.tight_layout()\nplt.savefig('MgxZn1-xO.pdf')",
"Band-gap engineering for removing shallow traps in rare-earth Lu3Al5O12 garnet scintillators using Ga3+ doping, Phys. Rev. B, 84(8) 081102, (2011)",
"xs = [0.0, 0.05, 0.1, 0.2, 0.4, 0.6, 1.0]\nexp = [5.551115123125783e-17,\n 0.003409090909090917,\n -0.030681818181818143,\n -0.2079545454545454,\n -0.4363636363636363,\n -0.7465909090909091,\n -1.5681818181818183]\n\npredicted = [[-0.22912502, 0.13428593, 0.3091731, -0.06367111, 0.21906424, -0.3697219],\n [-0.2466507, 0.19774532, 0.28103304, -0.1059885, 0.23758173, -0.40776634],\n [-0.2611394, 0.23995829, 0.23054123, -0.16277695, 0.2702756, -0.46076345],\n [-0.2759061, 0.23510027, 0.055459976, -0.36338377, 0.37191725, -0.63695955],\n [-0.33351135, 0.03725624, -0.13293362, -2.0082474, 0.2829337, -1.0578184],\n [-0.68473625, -0.51482725, -0.41922426, -4.0862846, -0.273458, -1.3728428],\n [-1.9137676, -1.2256684, -0.9274936, -5.6792936, -1.0770473, -2.3557308]]\n\nerrors = [0.24319291,\n 0.26221794,\n 0.28499138,\n 0.3530279,\n 0.7791653,\n 1.3266295,\n 1.6350949]\n\nplt.figure(figsize=(5.9, 5))\nplt.rcParams['font.family'] = 'Arial'\nplt.rcParams['font.size'] = 22\nflierprops = dict(marker='o', markerfacecolor=color, markersize=6,\n linestyle='none', markeredgecolor=color)\n\nbox = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.1, flierprops=flierprops, patch_artist=True,\n boxprops=dict(facecolor=color, color='k'))\n\nfor i in box['boxes']:\n plt.setp(i, zorder=0)\n \nfor i in box['medians']:\n plt.setp(i, color='w')\nh, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment')\n\nplt.xlabel('$x$ in $\\mathregular{Lu_3(Ga_xAl_{1-x})_5O_{12}}$')\nplt.ylabel('$\\Delta E_g$ (eV)')\n#plt.legend(frameon=False)\nplt.xlim([-0.1, 1.1])\nplt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0])\nplt.yticks([-4, -2, 0], [-4.0, -2.0, 0])\n# plt.yticks([0.5, 1., 1.5])\nplt.tight_layout()\nplt.savefig('Ga_Lu3Al5O12.pdf')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LFPy/LFPy
|
examples/LFPy-example-09.ipynb
|
gpl-3.0
|
[
"%matplotlib inline",
"Example plot for LFPy: Passive cell model adapted from Mainen and Sejnokwski (1996)\nThis is an example scripts using LFPy with a passive cell model adapted from\nMainen and Sejnowski, Nature 1996, for the original files, see\nhttp://senselab.med.yale.edu/modeldb/ShowModel.asp?model=2488\nHere, excitatory and inhibitory neurons are distributed on different parts of\nthe morphology, with stochastic spike times produced by the\nNEURON's NetStim objects associated with each individual synapse.\nOtherwise similar to LFPy-example-8.ipynb\nCopyright (C) 2017 Computational Neuroscience Group, NMBU.\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.",
"# importing some modules, setting some matplotlib values for pl.plot.\nimport LFPy\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams.update({'font.size' : 12,\n 'figure.facecolor' : '1',\n 'figure.subplot.wspace' : 0.5,\n 'figure.subplot.hspace' : 0.5})\n\n#seed for random generation\nnp.random.seed(1234)",
"Function declaration:",
"def insert_synapses(synparams, section, n, netstimParameters):\n '''find n compartments to insert synapses onto'''\n idx = cell.get_rand_idx_area_norm(section=section, nidx=n)\n\n #Insert synapses in an iterative fashion\n for i in idx:\n synparams.update({'idx' : int(i)})\n\n # Create synapse(s) and setting times using the Synapse class in LFPy\n s = LFPy.Synapse(cell, **synparams)\n s.set_spike_times_w_netstim(**netstimParameters)",
"Parameters etc.:\nDefine parameters, using dictionaries. It is possible to set a few more parameters for each class or functions, but\nwe chose to show only the most important ones here.",
"# Define cell parameters used as input to cell-class\ncellParameters = {\n 'morphology' : 'morphologies/L5_Mainen96_wAxon_LFPy.hoc',\n 'cm' : 1.0, # membrane capacitance\n 'Ra' : 150, # axial resistance\n 'v_init' : -65, # initial crossmembrane potential\n 'passive' : True, # switch on passive mechs\n 'passive_parameters' : {'g_pas' : 1./30000, 'e_pas' : -65}, # passive params \n 'nsegs_method' : 'lambda_f',# method for setting number of segments,\n 'lambda_f' : 100, # segments are isopotential at this frequency\n 'dt' : 2**-4, # dt of LFP and NEURON simulation.\n 'tstart' : -100, #start time, recorders start at t=0\n 'tstop' : 200, #stop time of simulation\n #'custom_code' : ['active_declarations_example3.hoc'], # will run this file\n}\n\n# Synaptic parameters taken from Hendrickson et al 2011\n# Excitatory synapse parameters:\nsynapseParameters_AMPA = {\n 'e' : 0, #reversal potential\n 'syntype' : 'Exp2Syn', #conductance based exponential synapse\n 'tau1' : 1., #Time constant, rise\n 'tau2' : 3., #Time constant, decay\n 'weight' : 0.005, #Synaptic weight\n 'record_current' : True, #record synaptic currents\n}\n# Excitatory synapse parameters\nsynapseParameters_NMDA = { \n 'e' : 0,\n 'syntype' : 'Exp2Syn',\n 'tau1' : 10.,\n 'tau2' : 30.,\n 'weight' : 0.005,\n 'record_current' : True,\n}\n# Inhibitory synapse parameters\nsynapseParameters_GABA_A = { \n 'e' : -80,\n 'syntype' : 'Exp2Syn',\n 'tau1' : 1.,\n 'tau2' : 12.,\n 'weight' : 0.005,\n 'record_current' : True\n}\n# where to insert, how many, and which input statistics\ninsert_synapses_AMPA_args = {\n 'section' : 'apic',\n 'n' : 100,\n 'netstimParameters': {\n 'number' : 1000,\n 'start' : 0,\n 'noise' : 1,\n 'interval' : 20,\n }\n}\ninsert_synapses_NMDA_args = {\n 'section' : ['dend', 'apic'],\n 'n' : 15,\n 'netstimParameters': {\n 'number' : 1000,\n 'start' : 0,\n 'noise' : 1,\n 'interval' : 90,\n }\n}\ninsert_synapses_GABA_A_args = {\n 'section' : 'dend',\n 'n' : 100,\n 'netstimParameters': {\n 'number' : 
1000,\n 'start' : 0,\n 'noise' : 1,\n 'interval' : 20,\n }\n}\n\n# Define electrode geometry corresponding to a laminar electrode, where contact\n# points have a radius r, surface normal vectors N, and LFP calculated as the\n# average LFP in n random points on each contact:\nN = np.empty((16, 3))\nfor i in range(N.shape[0]): N[i,] = [1, 0, 0] #normal unit vec. to contacts\n# put parameters in dictionary\nelectrodeParameters = {\n 'sigma' : 0.3, # Extracellular potential\n 'x' : np.zeros(16) + 25, # x,y,z-coordinates of electrode contacts\n 'y' : np.zeros(16),\n 'z' : np.linspace(-500, 1000, 16),\n 'n' : 20,\n 'r' : 10,\n 'N' : N,\n}\n\n# Parameters for the cell.simulate() call, recording membrane- and syn.-currents\nsimulationParameters = {\n 'rec_imem' : True, # Record Membrane currents during simulation\n}",
"Main simulation procedure:",
"# Initialize cell instance, using the LFPy.Cell class\ncell = LFPy.Cell(**cellParameters)\n\n# Align apical dendrite with z-axis\ncell.set_rotation(x=4.98919, y=-4.33261, z=0.)\n\n# Insert synapses using the function defined earlier\ninsert_synapses(synapseParameters_AMPA, **insert_synapses_AMPA_args)\ninsert_synapses(synapseParameters_NMDA, **insert_synapses_NMDA_args)\ninsert_synapses(synapseParameters_GABA_A, **insert_synapses_GABA_A_args)\n\n# Perform NEURON simulation, results saved as attributes in the cell instance\ncell.simulate(**simulationParameters)\n\n# Initialize electrode geometry, then calculate the LFP, using the\n# LFPy.RecExtElectrode class. Note that now cell is given as input to electrode\n# and created after the NEURON simulations are finished\nelectrode = LFPy.RecExtElectrode(cell, **electrodeParameters)\nelectrode.data = electrode.get_transformation_matrix() @ cell.imem",
"Plot:",
"#plotting some variables and geometry, saving output to .pdf.\nfrom example_suppl import plot_ex3\nfig = plot_ex3(cell, electrode)\n#fig.savefig('LFPy-example-09.pdf', dpi=300)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ihmeuw/dismod_mr
|
examples/checking_convergence.ipynb
|
agpl-3.0
|
[
"Checking convergence in DisMod-MR\nThis notebook provides some examples of running multiple chains and checking convergence in DisMod-MR. Checking convergence is an important part of MCMC estimation.",
"import numpy as np, pandas as pd, dismod_mr, pymc as pm, matplotlib.pyplot as plt, seaborn as sns\n%matplotlib inline\n\n# set a random seed to ensure reproducible simulation results\nnp.random.seed(123456)\n\n# simulate data\nn = 20\n\ndata = dict(age=np.random.randint(0, 10, size=n)*10,\n year=np.random.randint(1990, 2010, size=n))\ndata = pd.DataFrame(data)\ndata['value'] = (.1 + .001 * data.age) + np.random.normal(0., .01, size=n)\n\ndata['data_type'] = 'p'\n\ndata['age_start'] = data.age\ndata['age_end'] = data.age+10\n\n# for prettier display, include jittered age near midpoint of age interval\ndata['jittered_age'] = .5*(data.age_start + data.age_end) + np.random.normal(size=n)\n\n# keep things simple, no spatial random effects, no sex effect\ndata['area'] = 'all'\ndata['sex'] = 'total'\n\n# quantification of uncertainty that says these numbers are believed to be quite precise\ndata['standard_error'] = -99\ndata['upper_ci'] = np.nan\ndata['lower_ci'] = np.nan\ndata['effective_sample_size'] = 1.e8\n\n\ndef new_model(data):\n # build the dismod_mr model\n dm = dismod_mr.data.ModelData()\n\n # set simple model parameters, for decent, fast computation\n dm.set_knots('p', [0,100])\n dm.set_level_bounds('p', lower=0, upper=1)\n dm.set_level_value('p', age_before=0, age_after=100, value=0)\n dm.set_heterogeneity('p', value='Slightly')\n dm.set_effect_prior('p', cov='x_sex', value=dict(dist='Constant', mu=0))\n \n # copy data into model \n dm.input_data = data.copy()\n \n return dm",
"Fit the model with too few iterations of MCMC",
"dm1 = new_model(data)\ndm1.setup_model('p', rate_model='neg_binom')\n%time dm1.fit(how='mcmc', iter=10, burn=0, thin=1)\ndm1.plot()",
"Fitting it again gives a different answer:",
"dm2 = new_model(data)\ndm2.setup_model('p', rate_model='neg_binom')\n%time dm2.fit(how='mcmc', iter=10, burn=0, thin=1)\ndm2.plot()\n\ndm1.vars['p']['gamma'][1].trace().mean()\n\ndm2.vars['p']['gamma'][1].trace().mean()",
"Fit with more MCMC iterations",
"dm1 = new_model(data)\ndm1.setup_model('p', rate_model='neg_binom')\n%time dm1.fit(how='mcmc', iter=10_000, burn=5_000, thin=5)\n\ndm2 = new_model(data)\ndm2.setup_model('p', rate_model='neg_binom')\n%time dm2.fit(how='mcmc', iter=10_000, burn=5_000, thin=5)\n\ndm1.vars['p']['gamma'][1].trace().mean()\n\ndm2.vars['p']['gamma'][1].trace().mean()\n\ndismod_mr.plot.plot_trace(dm1)\n\ndismod_mr.plot.plot_acorr(dm1)",
"Running multiple chains\nIt is simple to run multiple chains sequentially in DisMod-MR, although I worry that this gives a false sense of security about the convergence.",
"# setup a model and run the chain once\n\ndm = new_model(data)\ndm.setup_model('p', rate_model='neg_binom')\n%time dm.fit(how='mcmc', iter=2_000, burn=1_000, thin=1)\n\n# to run it more times, use the sample method of the dm.mcmc object\n# use the same iter/burn/thin settings for future convenience\n\nfor i in range(4):\n dm.mcmc.sample(iter=2_000, burn=1_000, thin=1)\n\n# calculate Gelman-Rubin statistic for all model variables\nR_hat = pm.gelman_rubin(dm.mcmc)\n\n# examine for gamma_p_100\nR_hat['gamma_p_100']\n\n!date"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
metpy/MetPy
|
v0.12/_downloads/f8c7f51c50c58b17901913e49a5b977e/Inverse_Distance_Verification.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Inverse Distance Verification: Cressman and Barnes\nCompare inverse distance interpolation methods\nTwo popular interpolation schemes that use inverse distance weighting of observations are the\nBarnes and Cressman analyses. The Cressman analysis is relatively straightforward and uses\nthe ratio between distance of an observation from a grid cell and the maximum allowable\ndistance to calculate the relative importance of an observation for calculating an\ninterpolation value. Barnes uses the inverse exponential ratio of each distance between\nan observation and a grid cell and the average spacing of the observations over the domain.\nAlgorithmically:\n\nA KDTree data structure is built using the locations of each observation.\nAll observations within a maximum allowable distance of a particular grid cell are found in\n O(log n) time.\nUsing the weighting rules for Cressman or Barnes analyses, the observations are given a\n proportional value, primarily based on their distance from the grid cell.\nThe sum of these proportional values is calculated and this value is used as the\n interpolated value.\nSteps 2 through 4 are repeated for each grid cell.",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import cKDTree\n\nfrom metpy.interpolate.geometry import dist_2\nfrom metpy.interpolate.points import barnes_point, cressman_point\nfrom metpy.interpolate.tools import average_spacing, calc_kappa\n\n\ndef draw_circle(ax, x, y, r, m, label):\n th = np.linspace(0, 2 * np.pi, 100)\n nx = x + r * np.cos(th)\n ny = y + r * np.sin(th)\n ax.plot(nx, ny, m, label=label)",
"Generate random x and y coordinates, and observation values proportional to x * y.\nSet up two test grid locations at (30, 30) and (60, 60).",
"np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = xp * xp / 1000\n\nsim_gridx = [30, 60]\nsim_gridy = [30, 60]",
"Set up a cKDTree object and query all of the observations within \"radius\" of each grid point.\nThe variable indices represents the index of each matched coordinate within the\ncKDTree's data list.",
"grid_points = np.array(list(zip(sim_gridx, sim_gridy)))\n\nradius = 40\nobs_tree = cKDTree(list(zip(xp, yp)))\nindices = obs_tree.query_ball_point(grid_points, r=radius)",
"For grid 0, we will use Cressman to interpolate its value.",
"x1, y1 = obs_tree.data[indices[0]].T\ncress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1)\ncress_obs = zp[indices[0]]\n\ncress_val = cressman_point(cress_dist, cress_obs, radius)",
"For grid 1, we will use barnes to interpolate its value.\nWe need to calculate kappa--the average distance between observations over the domain.",
"x2, y2 = obs_tree.data[indices[1]].T\nbarnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)\nbarnes_obs = zp[indices[1]]\n\nkappa = calc_kappa(average_spacing(list(zip(xp, yp))))\n\nbarnes_val = barnes_point(barnes_dist, barnes_obs, kappa)",
"Plot all of the affiliated information and interpolation values.",
"fig, ax = plt.subplots(1, 1, figsize=(15, 10))\nfor i, zval in enumerate(zp):\n ax.plot(pts[i, 0], pts[i, 1], '.')\n ax.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))\n\nax.plot(sim_gridx, sim_gridy, '+', markersize=10)\n\nax.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')\nax.plot(x2, y2, 'ks', fillstyle='none', markersize=10, label='grid 1 matches')\n\ndraw_circle(ax, sim_gridx[0], sim_gridy[0], m='k-', r=radius, label='grid 0 radius')\ndraw_circle(ax, sim_gridx[1], sim_gridy[1], m='b-', r=radius, label='grid 1 radius')\n\nax.annotate('grid 0: cressman {:.3f}'.format(cress_val), xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.annotate('grid 1: barnes {:.3f}'.format(barnes_val), xy=(sim_gridx[1] + 2, sim_gridy[1]))\n\nax.set_aspect('equal', 'datalim')\nax.legend()",
"For each point, we will do a manual check of the interpolation values by doing a step by\nstep and visual breakdown.\nPlot the grid point, observations within radius of the grid point, their locations, and\ntheir distances from the grid point.",
"fig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.annotate('grid 0: ({}, {})'.format(sim_gridx[0], sim_gridy[0]),\n xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)\n\nmx, my = obs_tree.data[indices[0]].T\nmz = zp[indices[0]]\n\nfor x, y, z in zip(mx, my, mz):\n d = np.sqrt((sim_gridx[0] - x)**2 + (y - sim_gridy[0])**2)\n ax.plot([sim_gridx[0], x], [sim_gridy[0], y], '--')\n\n xave = np.mean([sim_gridx[0], x])\n yave = np.mean([sim_gridy[0], y])\n\n ax.annotate('distance: {}'.format(d), xy=(xave, yave))\n ax.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))\n\nax.set_xlim(0, 80)\nax.set_ylim(0, 80)\nax.set_aspect('equal', 'datalim')",
"Step through the cressman calculations.",
"dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])\nvalues = np.array([0.064, 1.156, 3.364, 0.225])\n\ncres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists)\ntotal_weights = np.sum(cres_weights)\nproportion = cres_weights / total_weights\nvalue = values * proportion\n\nval = cressman_point(cress_dist, cress_obs, radius)\n\nprint('Manual cressman value for grid 1:\\t', np.sum(value))\nprint('Metpy cressman value for grid 1:\\t', val)",
"Now repeat for grid 1, except use barnes interpolation.",
"fig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.annotate('grid 1: ({}, {})'.format(sim_gridx[1], sim_gridy[1]),\n xy=(sim_gridx[1] + 2, sim_gridy[1]))\nax.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)\n\nmx, my = obs_tree.data[indices[1]].T\nmz = zp[indices[1]]\n\nfor x, y, z in zip(mx, my, mz):\n d = np.sqrt((sim_gridx[1] - x)**2 + (y - sim_gridy[1])**2)\n ax.plot([sim_gridx[1], x], [sim_gridy[1], y], '--')\n\n xave = np.mean([sim_gridx[1], x])\n yave = np.mean([sim_gridy[1], y])\n\n ax.annotate('distance: {}'.format(d), xy=(xave, yave))\n ax.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))\n\nax.set_xlim(40, 80)\nax.set_ylim(40, 100)\nax.set_aspect('equal', 'datalim')",
"Step through barnes calculations.",
"dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779])\nvalues = np.array([2.809, 6.241, 4.489, 2.704])\n\nweights = np.exp(-dists**2 / kappa)\ntotal_weights = np.sum(weights)\nvalue = np.sum(values * (weights / total_weights))\n\nprint('Manual barnes value:\\t', value)\nprint('Metpy barnes value:\\t', barnes_point(barnes_dist, barnes_obs, kappa))\n\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cshallue/models
|
research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb
|
apache-2.0
|
[
"Neural Style Transfer with tf.keras\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/models/blob/master/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/models/blob/master/research/nst_blogpost/4_Neural_Style_Transfer_with_Eager_Execution.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nOverview\nIn this tutorial, we will learn how to use deep learning to compose images in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as neural style transfer! This is a technique outlined in Leon A. Gatys' paper, A Neural Algorithm of Artistic Style, which is a great read, and you should definitely check it out. 
\nBut, what is neural style transfer?\nNeural style transfer is an optimization technique used to take three images, a content image, a style reference image (such as an artwork by a famous painter), and the input image you want to style -- and blend them together such that the input image is transformed to look like the content image, but “painted” in the style of the style image.\nFor example, let’s take an image of this turtle and Katsushika Hokusai's The Great Wave off Kanagawa:\n<img src=\"https://github.com/tensorflow/models/blob/master/research/nst_blogpost/Green_Sea_Turtle_grazing_seagrass.jpg?raw=1\" alt=\"Drawing\" style=\"width: 200px;\"/>\n<img src=\"https://github.com/tensorflow/models/blob/master/research/nst_blogpost/The_Great_Wave_off_Kanagawa.jpg?raw=1\" alt=\"Drawing\" style=\"width: 200px;\"/>\nImage of Green Sea Turtle\n-By P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Common\nNow how would it look like if Hokusai decided to paint the picture of this Turtle exclusively with this style? Something like this?\n<img src=\"https://github.com/tensorflow/models/blob/master/research/nst_blogpost/wave_turtle.png?raw=1\" alt=\"Drawing\" style=\"width: 500px;\"/>\nIs this magic or just deep learning? Fortunately, this doesn’t involve any witchcraft: style transfer is a fun and interesting technique that showcases the capabilities and internal representations of neural networks. \nThe principle of neural style transfer is to define two distance functions, one that describes how different the content of two images are , $L_{content}$, and one that describes the difference between two images in terms of their style, $L_{style}$. Then, given three images, a desired style image, a desired content image, and the input image (initialized with the content image), we try to transform the input image to minimize the content distance with the content image and its style distance with the style image. 
\nIn summary, we’ll take the base input image, a content image that we want to match, and the style image that we want to match. We’ll transform the base input image by minimizing the content and style distances (losses) with backpropagation, creating an image that matches the content of the content image and the style of the style image. \nSpecific concepts that will be covered:\nIn the process, we will build practical experience and develop intuition around the following concepts\n\nEager Execution - use TensorFlow's imperative programming environment that evaluates operations immediately \nLearn more about eager execution\nSee it in action\n Using Functional API to define a model - we'll build a subset of our model that will give us access to the necessary intermediate activations using the Functional API \nLeveraging feature maps of a pretrained model - Learn how to use pretrained models and their feature maps \nCreate custom training loops - we'll examine how to set up an optimizer to minimize a given loss with respect to input parameters\n\nWe will follow the general steps to perform style transfer:\n\nVisualize data\nBasic Preprocessing/preparing our data\nSet up loss functions \nCreate model\nOptimize for loss function\n\nAudience: This post is geared towards intermediate users who are comfortable with basic machine learning concepts. To get the most out of this post, you should: \n* Read Gatys' paper - we'll explain along the way, but the paper will provide a more thorough understanding of the task\n* Understand reducing loss with gradient descent\nTime Estimated: 30 min\nSetup\nDownload Images",
"import os\nimg_dir = '/tmp/nst'\nif not os.path.exists(img_dir):\n os.makedirs(img_dir)\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg\n!wget --quiet -P /tmp/nst/ https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg",
"Import and configure modules",
"import matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['figure.figsize'] = (10,10)\nmpl.rcParams['axes.grid'] = False\n\nimport numpy as np\nfrom PIL import Image\nimport time\nimport functools\n\nimport tensorflow as tf\nimport tensorflow.contrib.eager as tfe\n\nfrom tensorflow.python.keras.preprocessing import image as kp_image\nfrom tensorflow.python.keras import models \nfrom tensorflow.python.keras import losses\nfrom tensorflow.python.keras import layers\nfrom tensorflow.python.keras import backend as K",
"We’ll begin by enabling eager execution. Eager execution allows us to work through this technique in the clearest and most readable way.",
"tf.enable_eager_execution()\nprint(\"Eager execution: {}\".format(tf.executing_eagerly()))\n\n# Set up some global values here\ncontent_path = '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg'\nstyle_path = '/tmp/nst/The_Great_Wave_off_Kanagawa.jpg'",
"Visualize the input",
"def load_img(path_to_img):\n  max_dim = 512\n  img = Image.open(path_to_img)\n  long_dim = max(img.size)\n  scale = max_dim/long_dim\n  img = img.resize((round(img.size[0]*scale), round(img.size[1]*scale)), Image.ANTIALIAS)\n  \n  img = kp_image.img_to_array(img)\n  \n  # We need to broadcast the image array such that it has a batch dimension \n  img = np.expand_dims(img, axis=0)\n  return img\n\ndef imshow(img, title=None):\n  # Remove the batch dimension\n  out = np.squeeze(img, axis=0)\n  # Cast to uint8 for display\n  out = out.astype('uint8')\n  plt.imshow(out)\n  if title is not None:\n    plt.title(title)",
"These are input content and style images. We hope to \"create\" an image with the content of our content image, but with the style of the style image.",
"plt.figure(figsize=(10,10))\n\ncontent = load_img(content_path).astype('uint8')\nstyle = load_img(style_path).astype('uint8')\n\nplt.subplot(1, 2, 1)\nimshow(content, 'Content Image')\n\nplt.subplot(1, 2, 2)\nimshow(style, 'Style Image')\nplt.show()",
"Prepare the data\nLet's create methods that will allow us to load and preprocess our images easily. We perform the same preprocessing steps that are expected by the VGG training process. VGG networks are trained on images with each channel normalized by mean = [103.939, 116.779, 123.68] and with the channels in BGR order.",
"def load_and_process_img(path_to_img):\n img = load_img(path_to_img)\n img = tf.keras.applications.vgg19.preprocess_input(img)\n return img",
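For intuition, here is a minimal NumPy sketch of what VGG-style preprocessing does: reverse the channel order from RGB to BGR and subtract the ImageNet channel means. The function name `vgg_preprocess` is illustrative only, not the actual Keras API; the mean values are the ones used elsewhere in this notebook.

```python
import numpy as np

# ImageNet BGR channel means used by the VGG preprocessing step
VGG_MEANS = np.array([103.939, 116.779, 123.68])

def vgg_preprocess(img_rgb):
    """Sketch of VGG-style preprocessing: RGB -> BGR, subtract channel means.

    Expects a float array of shape (height, width, 3) with values in [0, 255].
    """
    img_bgr = img_rgb[..., ::-1].astype(np.float64)  # reverse channel order
    return img_bgr - VGG_MEANS                       # broadcast over H and W

# A solid mid-gray image maps every pixel to the same mean-subtracted triple
gray = np.full((2, 2, 3), 128.0)
out = vgg_preprocess(gray)
print(out[0, 0])  # one pixel's BGR, mean-subtracted
```

This also makes clear why the deprocessing step below adds the means back and reverses the channels again.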
"In order to view the outputs of our optimization, we are required to perform the inverse preprocessing step. Furthermore, since our optimized image may take its values anywhere between $- \\infty$ and $\\infty$, we must clip to keep our values within the 0-255 range.",
"def deprocess_img(processed_img):\n  x = processed_img.copy()\n  if len(x.shape) == 4:\n    x = np.squeeze(x, 0)\n  assert len(x.shape) == 3, (\"Input to deprocess image must be an image of \"\n                             \"dimension [1, height, width, channel] or [height, width, channel]\")\n  \n  # perform the inverse of the preprocessing step\n  x[:, :, 0] += 103.939\n  x[:, :, 1] += 116.779\n  x[:, :, 2] += 123.68\n  x = x[:, :, ::-1]\n\n  x = np.clip(x, 0, 255).astype('uint8')\n  return x",
"Define content and style representations\nIn order to get both the content and style representations of our image, we will look at some intermediate layers within our model. As we go deeper into the model, these intermediate layers represent higher and higher order features. In this case, we are using the network architecture VGG19, a pretrained image classification network. These intermediate layers are necessary to define the representation of content and style from our images. For an input image, we will try to match the corresponding style and content target representations at these intermediate layers. \nWhy intermediate layers?\nYou may be wondering why these intermediate outputs within our pretrained image classification network allow us to define style and content representations. At a high level, this phenomenon can be explained by the fact that in order for a network to perform image classification (which our network has been trained to do), it must understand the image. This involves taking the raw image as input pixels and building an internal representation through transformations that turn the raw image pixels into a complex understanding of the features present within the image. This is also partly why convolutional neural networks are able to generalize well: they’re able to capture the invariances and defining features within classes (e.g., cats vs. dogs) that are agnostic to background noise and other nuisances. Thus, somewhere between where the raw image is fed in and the classification label is output, the model serves as a complex feature extractor; hence by accessing intermediate layers, we’re able to describe the content and style of input images. \nSpecifically we’ll pull out these intermediate layers from our network:",
"# Content layer from which we will pull our feature maps\ncontent_layers = ['block5_conv2'] \n\n# Style layers we are interested in\nstyle_layers = ['block1_conv1',\n                'block2_conv1',\n                'block3_conv1', \n                'block4_conv1', \n                'block5_conv1'\n               ]\n\nnum_content_layers = len(content_layers)\nnum_style_layers = len(style_layers)",
"Build the Model\nIn this case, we load VGG19, and feed in our input tensor to the model. This will allow us to extract the feature maps (and subsequently the content and style representations) of the content, style, and generated images.\nWe use VGG19, as suggested in the paper. In addition, since VGG19 is a relatively simple model (compared with ResNet, Inception, etc.), the feature maps actually work better for style transfer. \nIn order to access the intermediate layers corresponding to our style and content feature maps, we get the corresponding outputs and, using the Keras Functional API, define our model with the desired output activations. \nWith the Functional API, defining a model simply involves defining the input and output: \nmodel = Model(inputs, outputs)",
"def get_model():\n \"\"\" Creates our model with access to intermediate layers. \n \n This function will load the VGG19 model and access the intermediate layers. \n These layers will then be used to create a new model that will take input image\n and return the outputs from these intermediate layers from the VGG model. \n \n Returns:\n returns a keras model that takes image inputs and outputs the style and \n content intermediate layers. \n \"\"\"\n # Load our model. We load pretrained VGG, trained on imagenet data\n vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet')\n vgg.trainable = False\n # Get output layers corresponding to style and content layers \n style_outputs = [vgg.get_layer(name).output for name in style_layers]\n content_outputs = [vgg.get_layer(name).output for name in content_layers]\n model_outputs = style_outputs + content_outputs\n # Build model \n return models.Model(vgg.input, model_outputs)",
"In the above code snippet, we’ll load our pretrained image classification network. Then we grab the layers of interest as we defined earlier. Then we define a Model by setting the model’s inputs to an image and the outputs to the outputs of the style and content layers. In other words, we created a model that will take an input image and output the content and style intermediate layers! \nDefine and create our loss functions (content and style distances)\nContent Loss\nOur content loss definition is actually quite simple. We’ll pass the network both the desired content image and our base input image. This will return the intermediate layer outputs (from the layers defined above) from our model. Then we simply take the Euclidean distance between the two intermediate representations of those images. \nMore formally, content loss is a function that describes the distance of content between our output image $x$ and our content image, $p$. Let $C_{nn}$ be a pre-trained deep convolutional neural network. Again, in this case we use VGG19. Let $x$ be any image; then $C_{nn}(x)$ is the network fed by $x$. Let $F^l_{ij}(x) \\in C_{nn}(x)$ and $P^l_{ij}(p) \\in C_{nn}(p)$ describe the respective intermediate feature representation of the network with inputs $x$ and $p$ at layer $l$. Then we describe the content distance (loss) formally as: $$L^l_{content}(p, x) = \\sum_{i, j} (F^l_{ij}(x) - P^l_{ij}(p))^2$$\nWe perform backpropagation in the usual way such that we minimize this content loss. We thus change the initial image until it generates a similar response in a certain layer (defined in content_layer) as the original content image.\nThis can be implemented quite simply. Again it will take as input the feature maps at a layer L in a network fed by x, our input image, and p, our content image, and return the content distance.\nComputing content loss\nWe will actually add our content losses at each desired layer.
This way, each iteration when we feed our input image through the model (which in eager execution is simply model(input_image)!), all the content losses through the model will be properly computed, and because we are executing eagerly, all the gradients will be computed as well.",
"def get_content_loss(base_content, target):\n return tf.reduce_mean(tf.square(base_content - target))",
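To make the content loss concrete, the same mean squared distance can be checked by hand on a tiny NumPy example (the arrays below are made up purely for illustration):

```python
import numpy as np

# Two toy "feature maps": the content loss is just their mean squared difference
base_content = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.0, 2.0], [3.0, 6.0]])

content_loss = np.mean((base_content - target) ** 2)
print(content_loss)  # (0 + 0 + 0 + 4) / 4 = 1.0
```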
"Style Loss\nComputing style loss is a bit more involved, but follows the same principle, this time feeding our network the base input image and the style image. However, instead of comparing the raw intermediate outputs of the base input image and the style image, we instead compare the Gram matrices of the two outputs. \nMathematically, we describe the style loss of the base input image, $x$, and the style image, $a$, as the distance between the style representations (the Gram matrices) of these images. We describe the style representation of an image as the correlation between different filter responses given by the Gram matrix $G^l$, where $G^l_{ij}$ is the inner product between the vectorized feature maps $i$ and $j$ in layer $l$. We can see that $G^l_{ij}$ generated over the feature map for a given image represents the correlation between feature maps $i$ and $j$. \nTo generate a style for our base input image, we perform gradient descent from the content image to transform it into an image that matches the style representation of the style image. We do so by minimizing the mean squared distance between the feature correlation map of the style image and the input image. The contribution of each layer to the total style loss is described by\n$$E_l = \\frac{1}{4N_l^2M_l^2} \\sum_{i,j}(G^l_{ij} - A^l_{ij})^2$$\nwhere $G^l_{ij}$ and $A^l_{ij}$ are the respective style representations in layer $l$ of $x$ and $a$. $N_l$ describes the number of feature maps, each of size $M_l = height * width$. Thus, the total style loss across each layer is \n$$L_{style}(a, x) = \\sum_{l \\in L} w_l E_l$$\nwhere we weight the contribution of each layer's loss by some factor $w_l$. In our case, we weight each layer equally ($w_l = \\frac{1}{|L|}$).\nComputing style loss\nAgain, we implement our loss as a distance metric.",
"def gram_matrix(input_tensor):\n # We make the image channels first \n channels = int(input_tensor.shape[-1])\n a = tf.reshape(input_tensor, [-1, channels])\n n = tf.shape(a)[0]\n gram = tf.matmul(a, a, transpose_a=True)\n return gram / tf.cast(n, tf.float32)\n\ndef get_style_loss(base_style, gram_target):\n \"\"\"Expects two images of dimension h, w, c\"\"\"\n # height, width, num filters of each layer\n # We scale the loss at a given layer by the size of the feature map and the number of filters\n height, width, channels = base_style.get_shape().as_list()\n gram_style = gram_matrix(base_style)\n \n return tf.reduce_mean(tf.square(gram_style - gram_target))# / (4. * (channels ** 2) * (width * height) ** 2)",
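To see what the Gram matrix computation does, here is an equivalent NumPy sketch on a toy (height, width, channels) tensor: each entry is the inner product of two flattened channel maps, divided by the number of spatial positions (shapes and values here are illustrative only):

```python
import numpy as np

def gram_matrix_np(feature_map):
    """NumPy sketch of gram_matrix above: (H, W, C) -> (C, C)."""
    channels = feature_map.shape[-1]
    a = feature_map.reshape(-1, channels)  # n spatial positions x C channels
    n = a.shape[0]
    return a.T @ a / n                     # channel-by-channel inner products

# A 1x2 "image" with 2 channels
fm = np.array([[[1.0, 0.0], [0.0, 2.0]]])  # shape (1, 2, 2)
g = gram_matrix_np(fm)
print(g)
```

The off-diagonal entries measure how strongly two channels co-activate across spatial positions, which is exactly the "correlation between feature maps" described above.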
"Apply style transfer to our images\nRun Gradient Descent\nIf you aren't familiar with gradient descent/backpropagation or need a refresher, you should definitely check out this awesome resource.\nIn this case, we use the Adam* optimizer in order to minimize our loss. We iteratively update our output image such that it minimizes our loss: we don't update the weights associated with our network, but instead we train our input image to minimize loss. In order to do this, we must know how we calculate our loss and gradients. \n* Note that L-BFGS (which is recommended if you are familiar with that algorithm) isn’t used in this tutorial, because a primary motivation behind this tutorial was to illustrate best practices with eager execution, and, by using Adam, we can demonstrate the autograd/gradient tape functionality with custom training loops.\nWe’ll define a little helper function that will load our content and style images and feed them forward through our network, which will then output the content and style feature representations from our model.",
"def get_feature_representations(model, content_path, style_path):\n \"\"\"Helper function to compute our content and style feature representations.\n\n This function will simply load and preprocess both the content and style \n images from their path. Then it will feed them through the network to obtain\n the outputs of the intermediate layers. \n \n Arguments:\n model: The model that we are using.\n content_path: The path to the content image.\n style_path: The path to the style image\n \n Returns:\n returns the style features and the content features. \n \"\"\"\n # Load our images in \n content_image = load_and_process_img(content_path)\n style_image = load_and_process_img(style_path)\n \n # batch compute content and style features\n style_outputs = model(style_image)\n content_outputs = model(content_image)\n \n \n # Get the style and content feature representations from our model \n style_features = [style_layer[0] for style_layer in style_outputs[:num_style_layers]]\n content_features = [content_layer[0] for content_layer in content_outputs[num_style_layers:]]\n return style_features, content_features",
"Computing the loss and gradients\nHere we use tf.GradientTape to compute the gradient. It allows us to take advantage of the automatic differentiation available by tracing operations for computing the gradient later. It records the operations during the forward pass and then is able to compute the gradient of our loss function with respect to our input image for the backwards pass.",
"def compute_loss(model, loss_weights, init_image, gram_style_features, content_features):\n  \"\"\"This function will compute the total loss.\n  \n  Arguments:\n    model: The model that will give us access to the intermediate layers\n    loss_weights: The weights of each contribution of each loss function. \n      (style weight and content weight)\n    init_image: Our initial base image. This image is what we are updating with \n      our optimization process. We apply the gradients wrt the loss we are \n      calculating to this image.\n    gram_style_features: Precomputed gram matrices corresponding to the \n      defined style layers of interest.\n    content_features: Precomputed outputs from defined content layers of \n      interest.\n      \n  Returns:\n    returns the total loss, style loss, and content loss\n  \"\"\"\n  style_weight, content_weight = loss_weights\n  \n  # Feed our init image through our model. This will give us the content and \n  # style representations at our desired layers. Since we're using eager,\n  # our model is callable just like any other function!\n  model_outputs = model(init_image)\n  \n  style_output_features = model_outputs[:num_style_layers]\n  content_output_features = model_outputs[num_style_layers:]\n  \n  style_score = 0\n  content_score = 0\n\n  # Accumulate style losses from all layers\n  # Here, we equally weight each contribution of each loss layer\n  weight_per_style_layer = 1.0 / float(num_style_layers)\n  for target_style, comb_style in zip(gram_style_features, style_output_features):\n    style_score += weight_per_style_layer * get_style_loss(comb_style[0], target_style)\n    \n  # Accumulate content losses from all layers \n  weight_per_content_layer = 1.0 / float(num_content_layers)\n  for target_content, comb_content in zip(content_features, content_output_features):\n    content_score += weight_per_content_layer * get_content_loss(comb_content[0], target_content)\n  \n  style_score *= style_weight\n  content_score *= content_weight\n\n  # Get total loss\n  loss = style_score + content_score \n  return loss, style_score, content_score",
"Then computing the gradients is easy:",
"def compute_grads(cfg):\n with tf.GradientTape() as tape: \n all_loss = compute_loss(**cfg)\n # Compute gradients wrt input image\n total_loss = all_loss[0]\n return tape.gradient(total_loss, cfg['init_image']), all_loss",
"Optimization loop",
"import IPython.display\n\ndef run_style_transfer(content_path, \n style_path,\n num_iterations=1000,\n content_weight=1e3, \n style_weight=1e-2): \n # We don't need to (or want to) train any layers of our model, so we set their\n # trainable to false. \n model = get_model() \n for layer in model.layers:\n layer.trainable = False\n \n # Get the style and content feature representations (from our specified intermediate layers) \n style_features, content_features = get_feature_representations(model, content_path, style_path)\n gram_style_features = [gram_matrix(style_feature) for style_feature in style_features]\n \n # Set initial image\n init_image = load_and_process_img(content_path)\n init_image = tfe.Variable(init_image, dtype=tf.float32)\n # Create our optimizer\n opt = tf.train.AdamOptimizer(learning_rate=5, beta1=0.99, epsilon=1e-1)\n\n # For displaying intermediate images \n iter_count = 1\n \n # Store our best result\n best_loss, best_img = float('inf'), None\n \n # Create a nice config \n loss_weights = (style_weight, content_weight)\n cfg = {\n 'model': model,\n 'loss_weights': loss_weights,\n 'init_image': init_image,\n 'gram_style_features': gram_style_features,\n 'content_features': content_features\n }\n \n # For displaying\n num_rows = 2\n num_cols = 5\n display_interval = num_iterations/(num_rows*num_cols)\n start_time = time.time()\n global_start = time.time()\n \n norm_means = np.array([103.939, 116.779, 123.68])\n min_vals = -norm_means\n max_vals = 255 - norm_means \n \n imgs = []\n for i in range(num_iterations):\n grads, all_loss = compute_grads(cfg)\n loss, style_score, content_score = all_loss\n opt.apply_gradients([(grads, init_image)])\n clipped = tf.clip_by_value(init_image, min_vals, max_vals)\n init_image.assign(clipped)\n end_time = time.time() \n \n if loss < best_loss:\n # Update best loss and best image from total loss. 
\n best_loss = loss\n best_img = deprocess_img(init_image.numpy())\n\n if i % display_interval== 0:\n start_time = time.time()\n \n # Use the .numpy() method to get the concrete numpy array\n plot_img = init_image.numpy()\n plot_img = deprocess_img(plot_img)\n imgs.append(plot_img)\n IPython.display.clear_output(wait=True)\n IPython.display.display_png(Image.fromarray(plot_img))\n print('Iteration: {}'.format(i)) \n print('Total loss: {:.4e}, ' \n 'style loss: {:.4e}, '\n 'content loss: {:.4e}, '\n 'time: {:.4f}s'.format(loss, style_score, content_score, time.time() - start_time))\n print('Total time: {:.4f}s'.format(time.time() - global_start))\n IPython.display.clear_output(wait=True)\n plt.figure(figsize=(14,4))\n for i,img in enumerate(imgs):\n plt.subplot(num_rows,num_cols,i+1)\n plt.imshow(img)\n plt.xticks([])\n plt.yticks([])\n \n return best_img, best_loss \n\nbest, best_loss = run_style_transfer(content_path, \n style_path, num_iterations=1000)\n\nImage.fromarray(best)",
"To download the image from Colab uncomment the following code:",
"#from google.colab import files\n#files.download('wave_turtle.png')",
"Visualize outputs\nWe \"deprocess\" the output image in order to remove the processing that was applied to it.",
"def show_results(best_img, content_path, style_path, show_large_final=True):\n plt.figure(figsize=(10, 5))\n content = load_img(content_path) \n style = load_img(style_path)\n\n plt.subplot(1, 2, 1)\n imshow(content, 'Content Image')\n\n plt.subplot(1, 2, 2)\n imshow(style, 'Style Image')\n\n if show_large_final: \n plt.figure(figsize=(10, 10))\n\n plt.imshow(best_img)\n plt.title('Output Image')\n plt.show()\n\nshow_results(best, content_path, style_path)",
"Try it on other images\nImage of Tuebingen \nPhoto By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons\nStarry night + Tuebingen",
"best_starry_night, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')\n\nshow_results(best_starry_night, '/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg')",
"Pillars of Creation + Tuebingen",
"best_poc_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg', \n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')\n\nshow_results(best_poc_tubingen, \n '/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')",
"Kandinsky Composition 7 + Tuebingen",
"best_kandinsky_tubingen, best_loss = run_style_transfer('/tmp/nst/Tuebingen_Neckarfront.jpg', \n '/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')\n\nshow_results(best_kandinsky_tubingen, \n '/tmp/nst/Tuebingen_Neckarfront.jpg',\n '/tmp/nst/Vassily_Kandinsky,_1913_-_Composition_7.jpg')",
"Pillars of Creation + Sea Turtle",
"best_poc_turtle, best_loss = run_style_transfer('/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg', \n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')\n\nshow_results(best_poc_turtle, \n '/tmp/nst/Green_Sea_Turtle_grazing_seagrass.jpg',\n '/tmp/nst/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg')",
"Key Takeaways\nWhat we covered:\n\nWe built several different loss functions and used backpropagation to transform our input image in order to minimize these losses\nIn order to do this we had to load in a pretrained model and use its learned feature maps to describe the content and style representations of our images.\nOur main loss functions were primarily computing the distance in terms of these different representations\n\n\nWe implemented this with a custom model and eager execution\nWe built our custom model with the Functional API \nEager execution allows us to dynamically work with tensors, using a natural Python control flow\nWe manipulated tensors directly, which makes debugging and working with tensors easier. \nWe iteratively updated our image by applying our optimizer's update rule, using gradients computed with tf.GradientTape. The optimizer minimized a given loss with respect to our input image. \n\nImage of Tuebingen \nPhoto By: Andreas Praefcke [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], from Wikimedia Commons\nImage of Green Sea Turtle\nBy P.Lindgren [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
PiercingDan/mat245
|
Labs/Lab9/MAT245 Lab 9.ipynb
|
mit
|
[
"MAT245 Lab 9\nClassification using Logistic Regression\nBackground\nIn a binary classification problem we have samples of data $x \\in \\mathbb{R}^n$, and we want to predict the value of a target variable $y \\in \\{0, 1\\}$. For instance, a farmer might want to know if a $32 \\times 32$ image $X \\in \\mathbb{R}^{32\\times 32}$ contains a picture of a cucumber or not. We model absence or presence of a cucumber with outputs of $0$ or $1$ respectively.\nThe logistic regression approach to classification uses a hypothesis function $h_\\theta$ of the form\n$$\nh_\\theta(x) \n = \ng(\\theta^T x)\n =\n\\frac{1}{1 + e^{-\\theta^T x}}.\n$$\nThe parameter $\\theta$ is what we're going to want to optimize. Since $h_\\theta(x) \\in [0, 1]$, we can interpret its value as the probability of $x$ having a certain label:\n\\begin{align}\n  \\mathbb{P}(y = 1 ~|~ x, \\theta) &= h_\\theta(x) \\\n  \\mathbb{P}(y = 0 ~|~ x, \\theta) &= 1 - h_\\theta(x).\n\\end{align}\nSo if $h_\\theta(x) \\geq 0.5$, we predict $y = 1$, otherwise we predict $y = 0$. Written differently, this is\n$$\n  \\mathbb{P}(y ~|~ x, \\theta) = h_\\theta(x)^y (1 - h_\\theta(x))^{1-y}.\n$$\nNow, suppose we have $m$ independently generated samples in our dataset. As usual, we arrange these $m$ samples into an $m\\times n$ matrix whose rows each represent individual samples. The likelihood of the parameter $\\theta$ is given by\n\\begin{align}\nL(\\theta)\n  &=\n\\mathbb{P}(y ~|~ X, \\theta) \\\n  &= \n\\prod_{i=1}^m \\mathbb{P}(y^{(i)} ~|~ X^{(i)}, \\theta) \\\n  &=\n\\prod_{i=1}^m h_\\theta(x^{(i)})^{y^{(i)}} (1 - h_\\theta(x^{(i)}))^{1-y^{(i)}}.\n\\end{align}\nOur goal is then to choose $\\theta$ to maximize this likelihood.
In practice, it is easier to maximize the log-likelihood function:\n\\begin{align}\n  l(\\theta) \n  &= \n  \\log(L(\\theta)) \\ \n  &= \n  \\sum_{i=1}^m y^{(i)} \\log [ h_\\theta(x^{(i)})] + (1 - y^{(i)}) \\log[1 - h_\\theta(x^{(i)})].\n\\end{align}\nWe can maximize the log-likelihood by performing stochastic gradient ascent. In other words, we choose a training pair $(x, y) = (x^{(i)}, y^{(i)})$ at random, and compute the gradient of $l$ at this pair using the formula:\n\\begin{align}\n\\frac{\\partial }{\\partial\\theta_j} l (\\theta)\n  &=\n\\left(y \\frac{1}{g(\\theta^T x)} - (1 - y) \\frac{1}{1 - g(\\theta^T x)}\\right) \\frac{\\partial}{\\partial\\theta_j}g(\\theta^T x) \\\n  &=\n\\left(y \\frac{1}{g(\\theta^T x)} - (1 - y) \\frac{1}{1 - g(\\theta^T x)}\\right) \n  g(\\theta^T x)(1 - g(\\theta^Tx)) \\frac{\\partial}{\\partial\\theta_j}\\theta^Tx \\\n  &=\n(y(1 - g(\\theta^Tx)) - (1 - y)g(\\theta^Tx))x_j \\\n  &=\n(y - h_\\theta(x))x_j.\n\\end{align}\nAbove we used the derivative identity $g'(z) = g(z)(1-g(z))$. To choose new $\\theta$ values, we want to take a small step in the direction of the gradient (since we are maximizing $l(\\theta)$). This gives the update rule of\n$$\n\\theta_j = \\theta_j + \\alpha (y^{(i)} - h_\\theta(x^{(i)}))x_j^{(i)}\n$$\nwhere $\\alpha$ is the learning rate parameter. \nApplication: breast cancer detection\nThe sklearn breast cancer dataset consists of $569$ $30$-dimensional data points. The goal is to classify each data point as representing either a malignant or benign tumor. You can load the data with the following code:",
"from sklearn import datasets\n\nbc = datasets.load_breast_cancer()\nsamples, targets = bc.data, bc.target",
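Before diving in, the hypothesis function and a single stochastic gradient ascent update from the derivation above can be sketched in NumPy (the vectors and learning rate below are toy values for illustration only):

```python
import numpy as np

def h(theta, x):
    """Logistic hypothesis h_theta(x) = 1 / (1 + exp(-theta^T x))."""
    return 1.0 / (1.0 + np.exp(-theta @ x))

def sga_step(theta, x, y, alpha=0.01):
    """One stochastic gradient ascent update on the log-likelihood."""
    return theta + alpha * (y - h(theta, x)) * x

# Toy check: with theta = 0, h_theta(x) = 0.5 for any x
theta = np.zeros(3)
x = np.array([1.0, 2.0, -1.0])
print(h(theta, x))            # 0.5

# A step with label y = 1 moves the prediction toward 1
theta_new = sga_step(theta, x, y=1)
print(h(theta_new, x) > 0.5)  # True
```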
"Goals (1):\n\nSplit the breast cancer data into 70% training and 30% validation sets. \nWrite a python implementation of the logistic regression function $(\\theta, x) \\mapsto h_\\theta(x)$. \nImplement the stochastic gradient ascent (SGA) algorithm described above to choose the best parameter $\\theta$ for the hypothesis function $h_\\theta$. How do different learning rates affect convergence? Typical choices are in the range 0.001 - 0.1. \nValidate your model's classification accuracy on the validation set (the sklearn.metrics.accuracy_score function may come in handy here).\nHow many iterations of SGA do you need to consistently get >85% classification accuracy on the validation set?\n\nPrincipal component analysis\nBackground\nPrincipal component analysis (PCA) is a dimensionality reduction technique. The idea is to project the data down to lower dimension by 'dropping' those directions/dimensions that don't contain much variance. For instance, consider the following sample of data points in 2D:\n<img src=\"pca.svg\" alt=\"Gaussian data in 2D\" style=\"width: 300px;\"/>\nThe goal of a PCA in this case would be to project all of the data points onto the axis spanned by the longer arrow; since the short arrow is orthogonal to the large one, it would be ideal if we could project along the short arrow. The new dataset will be 1-dimensional, and since most of the variation in the data was along the direction spanned by the long arrow, hopefully we haven't lost much information.\nFor more details about the mathematics of PCA, see Andrew Ng's great notes here.\nIdentifying digits with PCA and k-Nearest Neighbors.\nThe sklearn digits dataset contains images of handwritten digits, much like the famous MNIST dataset. Here's a sample:",
"import matplotlib.pyplot as plt\nimport numpy as np\n\ndigits = datasets.load_digits()\nsamples, targets = digits.data, digits.target\n\n%matplotlib inline\nplt.imshow(np.reshape(samples[0], (8,8)), cmap='Greys')",
"The images are each 8x8, for a total number of 64 dimensions. \nGoals (2):\n\nSplit the digits dataset into 70% training and 30% validation.\nUse sklearn's PCA implementation to reduce the dimensionality of the digits dataset (see sklearn.decomposition.PCA).\nOnce you've used PCA to reduce the dimensionality of the entire dataset, use the k-nearest neighbor algorithm to classify the digits by finding the class of the digits nearest to your reduced example (see sklearn.neighbors.KNeighborsClassifier). \nValidate your PCA + kNN model on the test set. How does accuracy change with the number of principal components you select?"
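For intuition about what the PCA projection itself computes (independently of sklearn's implementation), here is a NumPy sketch using the SVD of the centered data; the toy dataset is constructed so that nearly all of its variance lies along one direction:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components (NumPy sketch)."""
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by singular value
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:k].T

# Toy 2D data that varies almost entirely along one direction
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t + 0.01 * rng.normal(size=(100, 1))])

Z = pca_project(X, k=1)
print(Z.shape)  # (100, 1)
```

Because the second dimension is nearly a multiple of the first, the 1D projection retains almost all of the variance, which is the situation PCA exploits on the digits data.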
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
wanderer2/pymc3
|
docs/source/notebooks/GLM-robust-with-outlier-detection.ipynb
|
apache-2.0
|
[
"GLM: Robust Regression with Outlier Detection\nA minimal reproducible example of Robust Regression with Outlier Detection using the Hogg 2010 Signal vs Noise method.\n\nThis is a complementary approach to the Student-T robust regression as illustrated in Thomas Wiecki's notebook in the PyMC3 documentation; that approach is also compared here.\nThis model returns a robust estimate of linear coefficients and an indication of which datapoints (if any) are outliers.\nThe likelihood evaluation is essentially a copy of eqn 17 in \"Data analysis recipes: Fitting a model to data\" - Hogg 2010.\nThe model is adapted specifically from Jake Vanderplas' implementation (3rd model tested).\nThe dataset is tiny and hardcoded into this Notebook. It contains errors in both the x and y, but we will deal here with only errors in y.\n\nNote:\n\nPython 3.4 project using latest available PyMC3\nDeveloped using ContinuumIO Anaconda distribution on a Macbook Pro 3GHz i7, 16GB RAM, OSX 10.10.5.\nDuring development I've found that 3 data points are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is slightly unstable between runs: the posterior surface appears to have a small number of solutions with similar probability. \nFinally, if runs become unstable or Theano throws weird errors, try clearing the cache $> theano-cache clear and rerunning the notebook.\n\nPackage Requirements (shown as a conda-env YAML):\n```\n$> less conda_env_pymc3_examples.yml\nname: pymc3_examples\n channels:\n - defaults\n dependencies:\n - python=3.4\n - ipython\n - ipython-notebook\n - ipython-qtconsole\n - numpy\n - scipy\n - matplotlib\n - pandas\n - seaborn\n - patsy\n - pip\n$> conda env create --file conda_env_pymc3_examples.yml\n$> source activate pymc3_examples\n$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3\n```\nSetup",
"%matplotlib inline\n%qtconsole --colors=linux\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom scipy import optimize\nimport pymc3 as pm\nimport theano as thno\nimport theano.tensor as T \n\n# configure some basic options\nsns.set(style=\"darkgrid\", palette=\"muted\")\npd.set_option('display.notebook_repr_html', True)\nplt.rcParams['figure.figsize'] = 12, 8\nnp.random.seed(0)",
"Load and Prepare Data\nWe'll use the Hogg 2010 data available at https://github.com/astroML/astroML/blob/master/astroML/datasets/hogg2010test.py\nIt's a very small dataset so for convenience, it's hardcoded below",
"#### cut & pasted directly from the fetch_hogg2010test() function\n## identical to the original dataset as hardcoded in the Hogg 2010 paper\n\ndfhogg = pd.DataFrame(np.array([[1, 201, 592, 61, 9, -0.84],\n [2, 244, 401, 25, 4, 0.31],\n [3, 47, 583, 38, 11, 0.64],\n [4, 287, 402, 15, 7, -0.27],\n [5, 203, 495, 21, 5, -0.33],\n [6, 58, 173, 15, 9, 0.67],\n [7, 210, 479, 27, 4, -0.02],\n [8, 202, 504, 14, 4, -0.05],\n [9, 198, 510, 30, 11, -0.84],\n [10, 158, 416, 16, 7, -0.69],\n [11, 165, 393, 14, 5, 0.30],\n [12, 201, 442, 25, 5, -0.46],\n [13, 157, 317, 52, 5, -0.03],\n [14, 131, 311, 16, 6, 0.50],\n [15, 166, 400, 34, 6, 0.73],\n [16, 160, 337, 31, 5, -0.52],\n [17, 186, 423, 42, 9, 0.90],\n [18, 125, 334, 26, 8, 0.40],\n [19, 218, 533, 16, 6, -0.78],\n [20, 146, 344, 22, 5, -0.56]]),\n columns=['id','x','y','sigma_y','sigma_x','rho_xy'])\n\n\n## for convenience zero-base the 'id' and use as index\ndfhogg['id'] = dfhogg['id'] - 1\ndfhogg.set_index('id', inplace=True)\n\n## standardize (mean center and divide by 1 sd)\ndfhoggs = (dfhogg[['x','y']] - dfhogg[['x','y']].mean(0)) / dfhogg[['x','y']].std(0)\ndfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)\ndfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)\n\n## create xlims ylims for plotting\nxlims = (dfhoggs['x'].min() - np.ptp(dfhoggs['x'])/5\n ,dfhoggs['x'].max() + np.ptp(dfhoggs['x'])/5)\nylims = (dfhoggs['y'].min() - np.ptp(dfhoggs['y'])/5\n ,dfhoggs['y'].max() + np.ptp(dfhoggs['y'])/5)\n\n## scatterplot the standardized data\ng = sns.FacetGrid(dfhoggs, size=8)\n_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker=\"o\", ls='')\n_ = g.axes[0][0].set_ylim(ylims)\n_ = g.axes[0][0].set_xlim(xlims)\n\nplt.subplots_adjust(top=0.92)\n_ = g.fig.suptitle('Scatterplot of Hogg 2010 dataset after standardization', fontsize=16)",
"Observe: \n\nEven judging just by eye, you can see these datapoints mostly fall on / around a straight line with positive gradient\nIt looks like a few of the datapoints may be outliers from such a line\n\nCreate Conventional OLS Model\nThe linear model is really simple and conventional:\n$$\\bf{y} = \\beta^{T} \\bf{X} + \\bf{\\sigma}$$\nwhere: \n$\\beta$ = coefs = ${1, \\beta_{j \\in X_{j}}}$\n$\\sigma$ = the measured error in $y$ in the dataset sigma_y\nDefine model\nNOTE:\n+ We're using a simple linear OLS model with Normally distributed priors so that it behaves like a ridge regression",
"with pm.Model() as mdl_ols:\n \n ## Define weakly informative Normal priors to give Ridge regression\n b0 = pm.Normal('b0_intercept', mu=0, sd=100)\n b1 = pm.Normal('b1_slope', mu=0, sd=100)\n \n ## Define linear model\n yest = b0 + b1 * dfhoggs['x']\n \n ## Use y error from dataset, convert into theano variable\n sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],\n dtype=thno.config.floatX), name='sigma_y')\n\n ## Define Normal likelihood\n likelihood = pm.Normal('likelihood', mu=yest, sd=sigma_y, observed=dfhoggs['y'])\n",
"Sample",
"with mdl_ols:\n\n ## find MAP using Powell, seems to be more robust\n start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)\n\n ## take samples\n traces_ols = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True)",
"View Traces\nNOTE: I'll 'burn' the traces to only retain the final 1000 samples",
"_ = pm.traceplot(traces_ols[-1000:], figsize=(12,len(traces_ols.varnames)*1.5),\n lines={k: v['mean'] for k, v in pm.df_summary(traces_ols[-1000:]).iterrows()})",
"NOTE: We'll illustrate this OLS fit and compare to the datapoints in the final plot\n\n\nCreate Robust Model: Student-T Method\nI've added this brief section in order to directly compare the Student-T based method demonstrated in Thomas Wiecki's notebook in the PyMC3 documentation.\nInstead of using a Normal distribution for the likelihood, we use a Student-T, which has fatter tails. In theory this allows outliers to have a smaller mean square error in the likelihood, and thus have less influence on the regression estimation. This method does not produce inlier / outlier flags but is simpler and faster to run than the Signal Vs Noise model below, so a comparison seems worthwhile.\nNote: we'll constrain the Student-T 'degrees of freedom' parameter nu to be an integer, but otherwise leave it as just another stochastic to be inferred: no need for prior knowledge.\nDefine Model",
"with pm.Model() as mdl_studentt:\n \n ## Define weakly informative Normal priors to give Ridge regression\n b0 = pm.Normal('b0_intercept', mu=0, sd=100)\n b1 = pm.Normal('b1_slope', mu=0, sd=100)\n \n ## Define linear model\n yest = b0 + b1 * dfhoggs['x']\n \n ## Use y error from dataset, convert into theano variable\n sigma_y = thno.shared(np.asarray(dfhoggs['sigma_y'],\n dtype=thno.config.floatX), name='sigma_y')\n \n ## define prior for Student T degrees of freedom\n nu = pm.DiscreteUniform('nu', lower=1, upper=100)\n\n ## Define Student T likelihood\n likelihood = pm.StudentT('likelihood', mu=yest, sd=sigma_y, nu=nu\n ,observed=dfhoggs['y'])\n",
"Sample",
"with mdl_studentt:\n\n ## find MAP using Powell, seems to be more robust\n start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)\n\n ## two-step sampling to allow Metropolis for nu (which is discrete)\n step1 = pm.NUTS([b0, b1])\n step2 = pm.Metropolis([nu])\n \n ## take samples\n traces_studentt = pm.sample(2000, start=start_MAP, step=[step1, step2], progressbar=True)",
"View Traces",
"_ = pm.traceplot(traces_studentt[-1000:]\n ,figsize=(12,len(traces_studentt.varnames)*1.5)\n ,lines={k: v['mean'] for k, v in pm.df_summary(traces_studentt[-1000:]).iterrows()})",
"Observe:\n\nBoth parameters b0 and b1 show quite a skew to the right, possibly this is the action of a few samples regressing closer to the OLS estimate which is towards the left\nThe nu parameter seems very happy to stick at nu = 1, indicating that a fat-tailed Student-T likelihood has a better fit than a thin-tailed (Normal-like) Student-T likelihood.\nThe inference sampling also ran very quickly, almost as quickly as the conventional OLS\n\nNOTE: We'll illustrate this Student-T fit and compare to the datapoints in the final plot\n\n\nCreate Robust Model with Outliers: Hogg Method\nPlease read the paper (Hogg 2010) and Jake Vanderplas' code for more complete information about the modelling technique.\nThe general idea is to create a 'mixture' model whereby datapoints can be described by either the linear model (inliers) or a modified linear model with different mean and larger variance (outliers).\nThe likelihood is evaluated over a mixture of two likelihoods, one for 'inliers', one for 'outliers'. A Bernoulli distribution is used to randomly assign datapoints in N to either the inlier or outlier groups, and we sample the model as usual to infer robust model parameters and inlier / outlier flags:\n$$\n\\log \\mathcal{L} = \\sum_{i}^{i=N} \\log \\left[ \\frac{(1 - B_{i})}{\\sqrt{2 \\pi \\sigma_{in}^{2}}} \\exp \\left( - \\frac{(x_{i} - \\mu_{in})^{2}}{2\\sigma_{in}^{2}} \\right) \\right] + \\sum_{i}^{i=N} \\log \\left[ \\frac{B_{i}}{\\sqrt{2 \\pi (\\sigma_{in}^{2} + \\sigma_{out}^{2})}} \\exp \\left( - \\frac{(x_{i} - \\mu_{out})^{2}}{2(\\sigma_{in}^{2} + \\sigma_{out}^{2})} \\right) \\right]\n$$\nwhere:\n$\\bf{B}$ is Bernoulli-distributed $B_{i} \\in [0_{(inlier)},1_{(outlier)}]$\nDefine model",
"def logp_signoise(yobs, is_outlier, yest_in, sigma_y_in, yest_out, sigma_y_out):\n '''\n Define custom loglikelihood for inliers vs outliers. \n NOTE: in this particular case we don't need to use theano's @as_op \n decorator because (as stated by Twiecki in conversation) that's only \n required if the likelihood cannot be expressed as a theano expression.\n We also now get the gradient computation for free.\n ''' \n \n # likelihood for inliers\n pdfs_in = T.exp(-(yobs - yest_in + 1e-4)**2 / (2 * sigma_y_in**2)) \n pdfs_in /= T.sqrt(2 * np.pi * sigma_y_in**2)\n logL_in = T.sum(T.log(pdfs_in) * (1 - is_outlier))\n\n # likelihood for outliers\n pdfs_out = T.exp(-(yobs - yest_out + 1e-4)**2 / (2 * (sigma_y_in**2 + sigma_y_out**2))) \n pdfs_out /= T.sqrt(2 * np.pi * (sigma_y_in**2 + sigma_y_out**2))\n logL_out = T.sum(T.log(pdfs_out) * is_outlier)\n\n return logL_in + logL_out\n\n\nwith pm.Model() as mdl_signoise:\n \n ## Define weakly informative Normal priors to give Ridge regression\n b0 = pm.Normal('b0_intercept', mu=0, sd=100)\n b1 = pm.Normal('b1_slope', mu=0, sd=100)\n \n ## Define linear model\n yest_in = b0 + b1 * dfhoggs['x']\n\n ## Define weakly informative priors for the mean and variance of outliers\n yest_out = pm.Normal('yest_out', mu=0, sd=100)\n sigma_y_out = pm.HalfNormal('sigma_y_out', sd=100)\n\n ## Define Bernoulli inlier / outlier flags according to a hyperprior \n ## fraction of outliers, itself constrained to [0,.5] for symmetry\n frac_outliers = pm.Uniform('frac_outliers', lower=0., upper=.5)\n is_outlier = pm.Bernoulli('is_outlier', p=frac_outliers, shape=dfhoggs.shape[0]) \n \n ## Extract observed y and sigma_y from dataset, encode as theano objects\n yobs = thno.shared(np.asarray(dfhoggs['y'], dtype=thno.config.floatX), name='yobs')\n sigma_y_in = thno.shared(np.asarray(dfhoggs['sigma_y']\n , dtype=thno.config.floatX), name='sigma_y_in')\n \n ## Use custom likelihood using DensityDist\n likelihood = pm.DensityDist('likelihood', 
logp_signoise,\n observed={'yobs':yobs, 'is_outlier':is_outlier,\n 'yest_in':yest_in, 'sigma_y_in':sigma_y_in,\n 'yest_out':yest_out, 'sigma_y_out':sigma_y_out})\n",
"Sample",
"with mdl_signoise:\n\n ## two-step sampling to create Bernoulli inlier/outlier flags\n step1 = pm.NUTS([frac_outliers, yest_out, sigma_y_out, b0, b1])\n step2 = pm.BinaryMetropolis([is_outlier], tune_interval=100)\n\n ## find MAP using Powell, seems to be more robust\n start_MAP = pm.find_MAP(fmin=optimize.fmin_powell, disp=True)\n\n ## take samples\n traces_signoise = pm.sample(2000, start=start_MAP, step=[step1,step2], progressbar=True)",
"View Traces",
"_ = pm.traceplot(traces_signoise[-1000:], figsize=(12,len(traces_signoise.varnames)*1.5),\n lines={k: v['mean'] for k, v in pm.df_summary(traces_signoise[-1000:]).iterrows()})",
"NOTE:\n\nDuring development I've found that 3 datapoints id=[1,2,3] are always indicated as outliers, but the remaining ordering of datapoints by decreasing outlier-hood is unstable between runs: the posterior surface appears to have a small number of solutions with very similar probability.\nThe NUTS sampler seems to work okay, and indeed it's a nice opportunity to demonstrate a custom likelihood which is possible to express as a theano function (thus allowing a gradient-based sampler like NUTS). However, with a more complicated dataset, I would spend time understanding this instability and potentially prefer using more samples under Metropolis-Hastings.\n\n\n\nDeclare Outliers and Compare Plots\nView ranges for inliers / outlier predictions\nAt each step of the traces, each datapoint may be either an inlier or outlier. We hope that the datapoints spend an unequal time being one state or the other, so let's take a look at the simple count of states for each of the 20 datapoints.",
"outlier_melt = pd.melt(pd.DataFrame(traces_signoise['is_outlier', -1000:],\n columns=['[{}]'.format(int(d)) for d in dfhoggs.index]),\n var_name='datapoint_id', value_name='is_outlier')\nax0 = sns.pointplot(y='datapoint_id', x='is_outlier', data=outlier_melt,\n kind='point', join=False, ci=None, size=4, aspect=2)\n\n_ = ax0.vlines([0,1], 0, 19, ['b','r'], '--')\n\n_ = ax0.set_xlim((-0.1,1.1))\n_ = ax0.set_xticks(np.arange(0, 1.1, 0.1))\n_ = ax0.set_xticklabels(['{:.0%}'.format(t) for t in np.arange(0,1.1,0.1)])\n\n_ = ax0.yaxis.grid(True, linestyle='-', which='major', color='w', alpha=0.4)\n_ = ax0.set_title('Prop. of the trace where datapoint is an outlier')\n_ = ax0.set_xlabel('Prop. of the trace where is_outlier == 1')",
"Observe:\n\nThe plot above shows the number of samples in the traces in which each datapoint is marked as an outlier, expressed as a percentage.\nIn particular, 3 points [1, 2, 3] spend >=95% of their time as outliers\nContrastingly, points at the other end of the plot close to 0% are our strongest inliers.\nFor comparison, the mean posterior value of frac_outliers is ~0.35, corresponding to roughly 7 of the 20 datapoints. You can see these 7 datapoints in the plot above, all those with a value >50% or thereabouts.\nHowever, only 3 of these points are outliers >=95% of the time. \nSee note above regarding instability between runs.\n\nThe 95% cutoff we choose is subjective and arbitrary, but I prefer it for now, so let's declare these 3 to be outliers and see how it looks compared to Jake Vanderplas' outliers, which were declared in a slightly different way as points with means above 0.68.\nDeclare outliers\nNote:\n+ I will declare outliers to be datapoints that have value == 1 at the 5-percentile cutoff, i.e. in the percentiles from 5 up to 100, their values are 1. \n+ Try for yourself altering cutoff to larger values, which leads to an objective ranking of outlier-hood.",
"cutoff = 5\ndfhoggs['outlier'] = np.percentile(traces_signoise[-1000:]['is_outlier'],cutoff, axis=0)\ndfhoggs['outlier'].value_counts()",
"Posterior Prediction Plots for OLS vs StudentT vs SignalNoise",
"g = sns.FacetGrid(dfhoggs, size=8, hue='outlier', hue_order=[True,False],\n palette='Set1', legend_out=False)\n\nlm = lambda x, samp: samp['b0_intercept'] + samp['b1_slope'] * x\n\npm.glm.plot_posterior_predictive(traces_ols[-1000:],\n eval=np.linspace(-3, 3, 10), lm=lm, samples=200, color='#22CC00', alpha=.2)\n\npm.glm.plot_posterior_predictive(traces_studentt[-1000:], lm=lm,\n eval=np.linspace(-3, 3, 10), samples=200, color='#FFA500', alpha=.5)\n\npm.glm.plot_posterior_predictive(traces_signoise[-1000:], lm=lm,\n eval=np.linspace(-3, 3, 10), samples=200, color='#357EC7', alpha=.3)\n\n_ = g.map(plt.errorbar, 'x', 'y', 'sigma_y', 'sigma_x', marker=\"o\", ls='').add_legend()\n\n_ = g.axes[0][0].annotate('OLS Fit: Green\\nStudent-T Fit: Orange\\nSignal Vs Noise Fit: Blue',\n size='x-large', xy=(1,0), xycoords='axes fraction',\n xytext=(-160,10), textcoords='offset points')\n_ = g.axes[0][0].set_ylim(ylims)\n_ = g.axes[0][0].set_xlim(xlims)",
"Observe:\n\n\nThe posterior predictive fit for:\n\nthe OLS model is shown in Green and as expected, it doesn't appear to fit the majority of our datapoints very well, skewed by outliers\nthe Robust Student-T model is shown in Orange and does appear to fit the 'main axis' of datapoints quite well, ignoring outliers\nthe Robust Signal vs Noise model is shown in Blue and also appears to fit the 'main axis' of datapoints rather well, ignoring outliers.\n\n\n\nWe see that the Robust Signal vs Noise model also yields specific estimates of which datapoints are outliers:\n\n17 'inlier' datapoints, in Blue and\n3 'outlier' datapoints shown in Red.\nFrom a simple visual inspection, the classification seems fair, and agrees with Jake Vanderplas' findings.\n\n\n\nOverall, it seems that:\n\nthe Signal vs Noise model behaves as promised, yielding a robust regression estimate and explicit labelling of inliers / outliers, but\nthe Signal vs Noise model is quite complex and whilst the regression seems robust and stable, the actual inlier / outlier labelling seems slightly unstable\nif you simply want a robust regression without inlier / outlier labelling, the Student-T model may be a good compromise, offering a simple model, quick sampling, and a very similar estimate.\n\n\n\n\nExample originally contributed by Jonathan Sedar 2015-12-21 github.com/jonsedar"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ReactiveX/RxPY
|
notebooks/reactivex.io/Part VII - Meta Operations.ipynb
|
mit
|
[
"%run startup.py\n\n%%javascript\n$.getScript('./assets/js/ipython_notebook_toc.js')",
"A Decision Tree of Observable Operators\nPart 7: Meta Operations\n\nsource: http://reactivex.io/documentation/operators.html#tree.\n(transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, axiros) \n\nThis tree can help you find the ReactiveX Observable operator you’re looking for.\nSee Part 1 for Usage and Output Instructions. \nWe also require acquaintance with the marble diagrams feature of RxPy.\n<h2 id=\"tocheading\">Table of Contents</h2>\n<div id=\"toc\"></div>\n\nI want to convert the entire sequence of items emitted by an Observable into some other data structure to_iterable/to_list, to_blocking, to_dict, to_future, to_marbles, to_set",
"rst(O.to_iterable)\ns = marble_stream(\"a--b-c|\")\nl, ts = [], time.time()\ndef on_next(listed):\n print('got', listed, time.time()-ts)\n\nfor i in (1, 2): \n d = s.subscribe(on_next)\n # second run: only one value, the list.\n s = s.to_list()\n # both are started around same time -> check time deltas\n\n\nrst(O.to_blocking)\nts = time.time()\ns = O.interval(200).take(3)\nsb = s.to_blocking()\n# this is instant:\nassert time.time() - ts < 0.2\n\nprint('''In some implementations of ReactiveX, there is also an operator that converts an Observable into a “Blocking” Observable. A Blocking Observable extends the ordinary Observable by providing a set of methods, operating on the items emitted by the Observable, that block. Some of the To operators are in this Blocking Obsevable set of extended operations.''')\n# -> diffing dir(s) with dir(sb) we get:\n# __iter__\n# for_each\n# observable\nrst(sb.__iter__)\nfor i in (1, 2):\n # not interleaved results:\n for it in sb:\n log(it)\n\nrst(sb.for_each) \nsb.for_each(log)\n\nheader(\".observable -> getting async again\")\n# interleaved again:\nd = subs(sb.observable, name='observer 1')\nd = subs(sb.observable, name='observer 2')\n\nrst(O.to_dict)\nd = subs(O.from_('abc').to_dict(key_mapper=lambda x: x, element_mapper=lambda a: '%s%s' % (a, a)))\n\nrst(O.to_future)\ndef emit(obs): \n for ev in 'first', 'second':\n sleep(.5)\n log('emitting', ev)\n obs.on_next(ev)\n # vital for the future to get done:\n obs.on_completed()\n \n \ntry:\n # required for py2 (backport of guidos' tulip stuffs, now asyncio)\n # caution: people say this is not production ready and will never be.\n import trollius\n f = rx.Observable.create(emit).to_future(trollius.Future)\n # this is async, not a busy loop\n log('future.result():', f.result())\nexcept: # notebook should always run all cells\n print ('skipping this; pip install trollius required')\n\nrst(O.from_marbles)\nd = 
subs(rx.Observable.from_string(\"1-(42)-3-|\").to_blocking())\n\nrst(O.to_set)\nd = subs(O.from_(\"abcabc\").to_set())",
"I want an operator to operate on a particular Scheduler: subscribe_on\nAdvanced feature: Adding side effects to subscription and unsubscription events.\nThis is a good read:\n\nPlus see the other\nlinks on RX docu",
"rst(O.subscribe_on)\n\n# start simple:\nheader('Switching Schedulers')\ns = O.just(42, reactivex.scheduler.ImmediateScheduler())\nd = subs(s.subscribe_on(reactivex.scheduler.TimeoutScheduler()), name='SimpleSubs')\n\nsleep(0.1)\n\nheader('Custom Subscription Side Effects')\n\nfrom reactivex.scheduler.newthreadscheduler import NewThreadScheduler\nfrom reactivex.scheduler.eventloopscheduler import EventLoopScheduler\n\nclass MySched(NewThreadScheduler):\n '''For adding side effects at subscription and unsubscription time'''\n def schedule(self, action, state=None):\n log('new scheduling task', action)\n scheduler = EventLoopScheduler(\n thread_factory=self.thread_factory,\n exit_if_empty=True)\n return scheduler.schedule(action, state)\n \ns = O.interval(200).take(2)\ns = s.subscribe_on(MySched())\nd = subs(s, name=\"subs1\")\nd = subs(s, name=\"subs2\")\n",
"...when it notifies observers observe_on\nVia this you can add side effects on any notification to any subscriber.\nThis example shall demonstrate whats going on:",
"rst(O.observe_on)\nfrom reactivex.scheduler.newthreadscheduler import NewThreadScheduler\n\nheader('defining a custom thread factory for a custom scheduler')\ndef my_thread_factory(target, args=None): \n 'just to show that also here we can customize'\n t = threading.Thread(target=target, args=args or [])\n t.setDaemon(True)\n print ('\\ncreated %s\\n' % t.getName())\n return t\n\n\nclass MySched:\n def __init__(self):\n self.rx_sched = NewThreadScheduler(my_thread_factory)\n \n def __getattr__(self, a):\n 'called whenever the observe_on scheduler is on duty'\n log('RX called', a, 'on mysched\\n')\n return getattr(self.rx_sched, a)\n \nmysched = MySched() \ns = O.interval(200).take(3) #.delay(100, mysched)\n\nd = subs(s.observe_on(mysched))\n\nsleep(2)\nprint 'all threads after finish:' # all cleaned up\nprint (' '.join([t.name for t in threading.enumerate()]))",
"I want an Observable to invoke a particular action when certain events occur do_action/tap, finally_action",
"rst(O.do_action)\ndef say(v=None):\n if v:\n log('NI!', v)\n else:\n log('EOF')\n \nd = subs(O.range(10, 10).take(2).tap(say, on_completed=say)) \n\nrst(O.finally_action)\nd = subs(O.on_error('err').take(2).finally_action(say))",
"I want an Observable that will notify observers of an error throw**",
"rst(O.throw)\nd = subs(O.range(1, 3).concat(O.on_error(\"ups\")))",
"...if a specified period of time elapses without it emitting an item timeout / timeout_with_selector**",
"rst(O.timeout)\nd = subs(marble_stream(\"a-b---c|\").timeout(200, O.just('timeout')))\n# this also works with absolute time. See docstring:\n\nrst(O.timeout_with_selector)\nd = subs(marble_stream(\"2-2-1-1|\")\\\n .timeout_with_selector(\n # you get the value and can adjust the timeout accordingly:\n timeout_duration_mapper=lambda x: O.timer(100 * int(x)),\n other=O.just('timeout')))\n",
"I want an Observable to recover gracefully\n...from a timeout by switching to a backup Observable timeout / timeout_with_selector**\n(example: see above)\n...from an upstream error notification catch_exception, on_error_resume_next**",
"rst(O.catch_exception)\nfubar1 = O.on_error('Ups')\nfubar2 = O.on_error('Argh')\ngood = O.just(42)\nd = subs(O.catch(fubar1, fubar2, good))\n\n\nrst(O.on_error_resume_next)\n\nbucket = [0]\ndef emitter(obs):\n v = bucket[-1]\n bucket.append(v)\n for i in range(0, len(bucket) + 2):\n obs.on_next(i)\n if len(bucket) > 2:\n log('notify error')\n obs.on_error(\"ups\")\n log('notify complete')\n obs.on_completed()\n \n \n \nd = subs(O.on_error_resume_next(O.just('running'),\n O.create(emitter),\n O.create(emitter),\n O.just('all good')\n ))\n",
"... by attempting to resubscribe to the upstream Observable retry",
"rst(O.retry)\nts = time.time()\ndef emit(obs):\n dt = time.time() - ts\n obs.on_next('try %s' % dt)\n if dt < 1:\n sleep(0.2)\n log('error')\n obs.on_error('ups')\n obs.on_completed()\n \nd = subs(O.create(emit).retry(10))",
"I want to create a resource that has the same lifespan as the Observable using\nhttp://www.introtorx.com/Content/v1.0.10621.0/11_AdvancedErrorHandling.html#Using:\nThe Using factory method allows you to bind the lifetime of a resource to the lifetime of an observable sequence. The signature itself takes two factory methods; one to provide the resource and one to provide the sequence. This allows everything to be lazily evaluated.\nThis mechanism can find varied practical applications in the hands of an imaginative developer. The resource being an IDisposable is convenient; indeed, it makes it so that many types of resources can be bound to, such as other subscriptions, stream reader/writers, database connections, user controls and, with Disposable(Action), virtually anything else.",
"rst(O.using)\n#d = subs(O.interval(1000).take(2))\n\nlifetime = 2000\ndef result(disposable_resource_fac):\n return O.just(disposable_resource_fac).delay(lifetime)\n\nd2 = subs(O.using(lambda: subs(O.interval(100).take(1000), name='resource fac\\n'),\n result), name='outer stream\\n')\n",
"I want to subscribe to an Observable and receive a Future that blocks until the Observable completes start, start_async, to_async",
"rst(O.start)\ndef starter():\n # called only once, async:\n return 'start: ', time.time()\ns = O.start(starter).concat(O.from_('abc'))\nd = subs(s, name='sub1')\nd = subs(s, name='sub2')\n\nrst(O.start_async)\n\ndef emit(obs):\n \n for ev in 'first', 'second':\n sleep(.2)\n log('emitting', ev)\n obs.on_next(ev)\n # vital for the future to get done:\n obs.on_completed()\n \ndef future():\n # only called once:\n log('called future')\n future = trollius.Future()\n future.set_result(('42', time.time()))\n future.set_exception(Exception('ups'))\n return future\n \ntry:\n # required for py2 (backport of guidos' tulip stuffs, now asyncio)\n # caution: people say this is not production ready and will never be.\n import trollius\n s = O.start_async(future)\n d = subs(s, name='subs1')\n # same result:\n d = subs(s, name='subs2')\nexcept Exception as ex: # notebook should always run all cells\n print ('%s skipping this; pip install trollius required' % ex)\n\nrst(O.to_async)\nd = subs(O.to_async(lambda x, y: x + y)(4, 3) )"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ProfessorKazarinoff/staticsite
|
content/code/functions/functions_in_python.ipynb
|
gpl-3.0
|
[
"Functions are pieces of reusable code. Each function contains three discrete elements: name, input, and output. Functions take in input, called arguments or input arguments, and produce output. A function is called in Python by coding\noutput = function_name(input)\nNote that the output is written first, followed by the equals sign =, then the function name, with the input enclosed in parentheses ( ). \nPython has many useful built-in functions such as min(), max(), abs() and sum(). We can use these built-in functions without importing any modules.\nFor example:",
"out = sum([2, 3])",
"In the function above, the input is the list [2, 3]. The output of the function is assigned to the out variable and the function name is sum().\nWe can write our own functions in Python using the general form:\n```\ndef function_name(input):\n    code to run, indented\n\n    return output\n```\nA couple of important points about the general form above. The def keyword starts the function definition. Without the word def you are writing a regular line of Python code. After the function name, the input is enclosed in parentheses ( ) followed by a colon :. Don't forget the colon :. Without the colon : your function will not run. The code within the body of the function must be indented. Finally, the keyword return needs to be at the end of your function, followed by the output. The input and output variables can be anything you want, but the def, : and return must be used.\nLet's create our own function to convert kilograms (kg) to grams (g). Let's call our function kg2g.\nThe first thing to do is make sure that our function name kg2g is not assigned to another function or keyword by Python. We can check if the name kg2g has already been defined using Python's type function. We know that sum() is a function and def is a keyword, but how about kg2g?\nLet's first check if sum is already a function name:",
"type(sum)",
"Now let's check if def is a keyword.",
"from keyword import iskeyword\niskeyword('def')",
"OK, so how about kg2g: is it already a function name?",
"type(kg2g)",
"kg2g is not a function name. Now let's test whether kg2g is a keyword in Python:",
"from keyword import iskeyword\niskeyword('kg2g')",
"Once we know that our function name is available, we can build our function. Remember the parentheses, the colon, and the return statement.",
"def kg2g(kg):\n g = kg*1000\n \n return g",
"Now let's try our function. How many grams is 1.3 kilograms? We expect the output to be 1300 g.",
"kg2g(1.3)",
"It is good practice to add a doc string to our function. A doc string is used to give the user an idea of what a function does. The doc string is displayed when Python's help function is invoked. A typical doc string includes the following:\n\nA summary of the function\nThe function input, including data type\nThe function output, including data type\nAn example of the function run with sample input and the output produced",
"def kg2g(kg):\n \"\"\"\n \n Function kg2g converts kilograms (kg) to grams (g)\n \n input: a measurement in kilograms (kg), int or float\n output: measurement in grams (g), float\n \n Example:\n \n >>> kg2g(1.3)\n \n 1300.0\n\n \"\"\"\n \n g = kg * 1000\n \n return g\n\nhelp(kg2g)\n\nkg2g(1.3)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Benedicto/ML-Learning
|
Linear_Regression_Overfitting_Demo_Ridge_Lasso.ipynb
|
gpl-3.0
|
[
"Overfitting demo\nCreate a dataset based on a true sinusoidal relationship\nLet's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \\sin(4x)$:",
"import graphlab\nimport math\nimport random\nimport numpy\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"Create random values for x in interval [0,1)",
"random.seed(98103)\nn = 30\nx = graphlab.SArray([random.random() for i in range(n)]).sort()",
"Compute y",
"y = x.apply(lambda x: math.sin(4*x))",
"Add random Gaussian noise to y",
"random.seed(1)\ne = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])\ny = y + e",
"Put data into an SFrame to manipulate later",
"data = graphlab.SFrame({'X1':x,'Y':y})\ndata",
"Create a function to plot the data, since we'll do it many times",
"def plot_data(data): \n plt.plot(data['X1'],data['Y'],'k.')\n plt.xlabel('x')\n plt.ylabel('y')\n\nplot_data(data)",
"Define some useful polynomial regression functions\nDefine a function to create our features for a polynomial regression model of any degree:",
"def polynomial_features(data, deg):\n data_copy=data.copy()\n for i in range(1,deg):\n data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']\n return data_copy",
"Define a function to fit a polynomial linear regression model of degree \"deg\" to the data in \"data\":",
"def polynomial_regression(data, deg):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,l1_penalty=0.,\n validation_set=None,verbose=False)\n return model",
"Define function to plot data and predictions made, since we are going to use it many times.",
"def plot_poly_predictions(data, model):\n plot_data(data)\n\n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n \n # Create 200 points in the x axis and compute the predicted value for each point\n x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})\n y_pred = model.predict(polynomial_features(x_pred,deg))\n \n # plot predictions\n plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')\n plt.legend(loc='upper left')\n plt.axis([0,1,-1.5,2])",
"Create a function that prints the polynomial coefficients in a pretty way :)",
"def print_coefficients(model): \n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n\n # Get learned parameters as a list\n w = list(model.coefficients['value'])\n\n # Numpy has a nifty function to print out polynomials in a pretty way\n # (We'll use it, but it needs the parameters in the reverse order)\n print 'Learned polynomial for degree ' + str(deg) + ':'\n w.reverse()\n print numpy.poly1d(w)",
"Fit a degree-2 polynomial\nFit our degree-2 polynomial to the data generated above:",
"model = polynomial_regression(data, deg=2)",
"Inspect learned parameters",
"print_coefficients(model)",
"Form and plot our predictions along a grid of x values:",
"plot_poly_predictions(data,model)",
"Fit a degree-4 polynomial",
"model = polynomial_regression(data, deg=4)\nprint_coefficients(model)\nplot_poly_predictions(data,model)",
"Fit a degree-16 polynomial",
"model = polynomial_regression(data, deg=16)\nprint_coefficients(model)",
"Woah!!!! Those coefficients are crazy! On the order of 10^6.",
"plot_poly_predictions(data,model)",
"Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.\n\n\n# \n# \nRidge Regression\nRidge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\\|w\\|$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controled by a parameter lambda (here called \"L2_penalty\").\nDefine our function to solve the ridge objective for a polynomial regression model of any degree:",
"def polynomial_ridge_regression(data, deg, l2_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n return model",
"Perform a ridge fit of a degree-16 polynomial using a very small penalty strength",
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Perform a ridge fit of a degree-16 polynomial using a very large penalty strength",
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Let's look at fits for a sequence of increasing lambda values",
"for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:\n model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)\n print 'lambda = %.2e' % l2_penalty\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('Ridge, lambda = %.2e' % l2_penalty)\n\ndata",
"Perform a ridge fit of a degree-16 polynomial using a \"good\" penalty strength\nWe will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider \"leave one out\" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.",
"# LOO cross validation -- return the average MSE\ndef loo(data, deg, l2_penalty_values):\n # Create polynomial features\n data = polynomial_features(data, deg)\n \n # Create as many folds for cross validatation as number of data points\n num_folds = len(data)\n folds = graphlab.cross_validation.KFold(data,num_folds)\n \n # for each value of l2_penalty, fit a model for each fold and compute average MSE\n l2_penalty_mse = []\n min_mse = None\n best_l2_penalty = None\n for l2_penalty in l2_penalty_values:\n next_mse = 0.0\n for train_set, validation_set in folds:\n # train model\n model = graphlab.linear_regression.create(train_set,target='Y', \n l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n \n # predict on validation set \n y_test_predicted = model.predict(validation_set)\n # compute squared error\n next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()\n \n # save squared error in list of MSE for each l2_penalty\n next_mse = next_mse/num_folds\n l2_penalty_mse.append(next_mse)\n if min_mse is None or next_mse < min_mse:\n min_mse = next_mse\n best_l2_penalty = l2_penalty\n \n return l2_penalty_mse,best_l2_penalty",
"Run LOO cross validation for \"num\" values of lambda, on a log scale",
"l2_penalty_values = numpy.logspace(-4, 10, num=10)\nl2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)",
"Plot results of estimating LOO for each value of lambda",
"plt.plot(l2_penalty_values,l2_penalty_mse,'k-')\nplt.xlabel('$\\ell_2$ penalty')\nplt.ylabel('LOO cross validation error')\nplt.xscale('log')\nplt.yscale('log')",
"Find the value of lambda, $\\lambda_{\\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit",
"best_l2_penalty\n\nmodel = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Lasso Regression\nLasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called \"L1_penalty\"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\\|w\\|$.\nDefine our function to solve the lasso objective for a polynomial regression model of any degree:",
"def polynomial_lasso_regression(data, deg, l1_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,\n l1_penalty=l1_penalty,\n validation_set=None, \n solver='fista', verbose=False,\n max_iterations=3000, convergence_threshold=1e-10)\n return model",
"Explore the lasso solution as a function of a few different penalty strengths\nWe refer to lambda in the lasso case below as \"l1_penalty\"",
"for l1_penalty in [0.0001, 0.01, 0.1, 10]:\n model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)\n print 'l1_penalty = %e' % l1_penalty\n print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))",
"Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
datamicroscopes/release
|
examples/.ipynb_checkpoints/titanic-checkpoint.ipynb
|
bsd-3-clause
|
[
"Modeling with Mixed Datatypes using the Titanic Dataset\n\nThe RMS Titanic is a well known passenger ship that sank in 1912. The passenger list of the ship is a popular dataset of demographic information.\nWhile most models predict likelihood of surviving the crash given demographic information, we will instead use the dataset to demonstrate how to work with mixed datatypes and combine likelihoods.\nLet's set up our environment and load our data",
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom microscopes.models import dd as dirichlet_discrete\nfrom microscopes.models import bb as beta_bernoulli\nfrom microscopes.models import gp as gamma_poisson\nfrom microscopes.models import nich as normal_inverse_chisquared\nfrom sklearn.preprocessing import LabelEncoder as Encoder\nfrom microscopes.mixture.definition import model_definition\nfrom microscopes.common.rng import rng\nfrom microscopes.common.recarray.dataview import numpy_dataview\nfrom microscopes.mixture import model, runner, query\nfrom microscopes.kernels import parallel\nfrom microscopes.common.query import groups, zmatrix_heuristic_block_ordering, zmatrix_reorder\nsns.set_context('talk')\n\n\n%matplotlib inline\n\nti = sns.load_dataset('titanic')",
"Some of the columns in the data are reduntant, while others have missing data. Let's ignore columns with lots of missing data, and focus on columns with key information. We'll also drop rows with missing data.",
"unique_cols = ['survived','pclass','sex','age','sibsp','parch','embark_town','fare']\nto_cluster = ti[unique_cols].dropna()\nprint '%d passengers in dataset' % len(to_cluster.index)",
"For this example, we'll be using a Dirichlet Process Mixture Model to cluster the passengers\nIn this mixutre model, each column will have its own distributuion over values conditioned on the cluster assignment. We must specify the distribution by selecting which likelihood we'd like to use for each column.",
"to_cluster.head()",
"For modelling, we must encode all categorical data into integer values. To do this we'll use scikitlearn's LabelEncoder",
"en1 = Encoder()\nto_cluster['sex'] = en1.fit_transform(to_cluster['sex'])\nen2 = Encoder()\nto_cluster['embark_town'] = en2.fit_transform(to_cluster['embark_town'])\nto_cluster.head()",
"To decide on which likelihood model to use on each column, let's get some basic statistics on our data",
"to_cluster.describe()",
"The above data table gives us a good idea of what kinds of likelihoods to use for each column:\n\nsurvived, which indicates survival from the crash, is a binary column, which we will model with a beta-bernoulli distribution\npclass, which is passenger class in the ship, ranges from 1 to 3. This ordinal column we'll model with a gamma-poisson distribution\nsex is another binary column we can model as beta-bernoulli\nage is real valued, so we model this column as normally distributed\nsibsp, which is number of siblings/spouses on the ship, ranges from 1 to 5. Since this data is integer valued and of limited range, we'll model this as also gamma-poisson.\nparch, which is number of parents/children on the ship, has similar characteristics to sibsp so it'll also be modeled gamma-poisson\nembark_town indicates where the passenger boarded the ship. Since this is categorical data, we'll model it with a dirichlet-discrete distribution.\nfare is also real valued so we can model this as normal. However, the data is long tailed so we'll take the log of fare instead\n\nWe have two kinds of normal distributions, the normal inverse-wishart and the normal inverse-chi-square distribution. To determine which model we should use, let's find the correlation between age and logfare",
"to_cluster['fare'] = np.log(to_cluster['fare'])\n\nto_cluster[['age','fare']].corr()",
"Since the correlation between age and fare is near 0, we can model these columns with a normal inverse-chi-square distribution\nWe'll now define our model using the model_definition function and run a gibbs sampler for 30000 iterations to learn the latent clusters\nNote that for a the dirichlet-discrete likelihood we have to specify the number of categories in the data, in this case 3",
"nchains = 5\niters = 30000\nsplits = 7\n\ndefn = model_definition(to_cluster.shape[0], [beta_bernoulli, gamma_poisson, beta_bernoulli, normal_inverse_chisquared, gamma_poisson, gamma_poisson, dirichlet_discrete(3), normal_inverse_chisquared]) \n\nprng = rng()\n\ndata_rec = np.array([(list(to_cluster.ix[c]),) for c in to_cluster.index], dtype=[('', np.float32, to_cluster.shape[1])])\nview = numpy_dataview(data_rec)\nlatents = [model.initialize(defn, view, prng) for _ in xrange(nchains)]\nrunners = [runner.runner(defn, view, latent, kernel_config=['assign']) for latent in latents]\n\nfor i, rnr in enumerate(runners):\n print 'starting runner %d at %d iterations' % (i, iters)\n rnr.run(r=prng, niters=iters)\n print 'runner %d done' % i\n\ninfers = [r.get_latent() for r in runners]\n\nzmat = query.zmatrix(infers)\nzmat = zmatrix_reorder(zmat, zmatrix_heuristic_block_ordering(zmat))\nf, ax = plt.subplots(figsize=(12, 9))\nlabels = ['%d' % i if i % (zmat.shape[0]/splits) == 0 else '' for i in xrange(zmat.shape[0])]\nsns.heatmap(zmat, linewidths=0, square=True, ax=ax, xticklabels=labels, yticklabels=labels)\nplt.title('Z-Matrix of Titanic Dataset\\nwith %d Chains at %d Iterations' % (nchains, iters))\nplt.xlabel('people (sorted)')\nplt.ylabel('people (sorted)')\n\nassignments = infers[0].assignments()\n\nclusters = list(set(assignments))\nK = len(clusters)\n\n'%d clusters inferred' % K\n\nto_cluster['cluster'] = assignments",
"Now that we have our cluster assignments, let's plot the posterior distributions over each column in the data\nSince survived, pclass, sex, sibsp, and parch have few observed values, we can plot these with bar graphs",
"for c in ['survived', 'pclass', 'sex', 'sibsp', 'parch', 'embark_town']:\n dist = to_cluster.groupby(['cluster', c])['age'].count()/to_cluster.groupby('cluster')['age'].count()\n dist.unstack().plot(kind='bar', figsize=(12,6))\n plt.title('distribution over %s among clusters' % c)\n plt.ylabel('probability')\n plt.xticks(range(K), range(K))",
"Each of the clusters have their own distribution over values, as shown by the plots\nLet's now plot the columns we've modeled as normal inverse-chi-squared, age and fare, with a kernel density estimate\nRecall that we've transformed fare into $\\text{ln(fare)}$",
"for c in ['age', 'fare']:\n plt.figure(figsize=(12,9))\n for k in clusters:\n sns.kdeplot(to_cluster.loc[to_cluster['cluster'] == k, c], legend = False)\n if c == 'fare':\n c = 'log fare'\n plt.title('KDE of %s among titanic passengers clusters' % c)\n plt.ylabel('probability')\n plt.legend(range(K))\n plt.xlabel(c)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_object_epochs.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"The :class:Epochs <mne.Epochs> data structure: epoched data",
"from __future__ import print_function\n\nimport mne\nimport os.path as op\nimport numpy as np\nfrom matplotlib import pyplot as plt",
":class:Epochs <mne.Epochs> objects are a way of representing continuous\ndata as a collection of time-locked trials, stored in an array of\nshape(n_events, n_channels, n_times). They are useful for many statistical\nmethods in neuroscience, and make it easy to quickly overview what occurs\nduring a trial.\n:class:Epochs <mne.Epochs> objects can be created in three ways:\n 1. From a :class:Raw <mne.io.RawFIF> object, along with event times\n 2. From an :class:Epochs <mne.Epochs> object that has been saved as a\n .fif file\n 3. From scratch using :class:EpochsArray <mne.EpochsArray>. See\n tut_creating_data_structures",
"data_path = mne.datasets.sample.data_path()\n# Load a dataset that contains events\nraw = mne.io.read_raw_fif(\n op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))\n\n# If your raw object has a stim channel, you can construct an event array\n# easily\nevents = mne.find_events(raw, stim_channel='STI 014')\n\n# Show the number of events (number of rows)\nprint('Number of events:', len(events))\n\n# Show all unique event codes (3rd column)\nprint('Unique event codes:', np.unique(events[:, 2]))\n\n# Specify event codes of interest with descriptive labels.\n# This dataset also has visual left (3) and right (4) events, but\n# to save time and memory we'll just look at the auditory conditions\n# for now.\nevent_id = {'Auditory/Left': 1, 'Auditory/Right': 2}",
"Now, we can create an :class:mne.Epochs object with the events we've\nextracted. Note that epochs constructed in this manner will not have their\ndata available until explicitly read into memory, which you can do with\n:func:get_data <mne.Epochs.get_data>. Alternatively, you can use\npreload=True.\nExpose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event\nonsets",
"epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,\n baseline=(None, 0), preload=True)\nprint(epochs)",
"Epochs behave similarly to :class:mne.io.Raw objects. They have an\n:class:info <mne.Info> attribute that has all of the same\ninformation, as well as a number of attributes unique to the events contained\nwithin the object.",
"print(epochs.events[:3])\nprint()\nprint(epochs.event_id)",
"You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs>\nobject directly. Alternatively, if you have epoch names specified in\nevent_id then you may index with strings instead.",
"print(epochs[1:5])\nprint(epochs['Auditory/Right'])",
"It is also possible to iterate through :class:Epochs <mne.Epochs> objects\nin this way. Note that behavior is different if you iterate on Epochs\ndirectly rather than indexing:",
"# These will be epochs objects\nfor i in range(3):\n print(epochs[i])\n\n# These will be arrays\nfor ep in epochs[:2]:\n print(ep)",
"You can manually remove epochs from the Epochs object by using\n:func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat\nthresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.\nYou can also inspect the reason why epochs were dropped by looking at the\nlist stored in epochs.drop_log or plot them with\n:func:epochs.plot_drop_log() <mne.Epochs.plot_drop_log>. The indices\nfrom the original set of events are stored in epochs.selection.",
"epochs.drop([0], reason='User reason')\nepochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)\nprint(epochs.drop_log)\nepochs.plot_drop_log()\nprint('Selection from original events:\\n%s' % epochs.selection)\nprint('Removed events (from numpy setdiff1d):\\n%s'\n % (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))\nprint('Removed events (from list comprehension -- should match!):\\n%s'\n % ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))",
"If you wish to save the epochs as a file, you can do it with\n:func:mne.Epochs.save. To conform to MNE naming conventions, the\nepochs file names should end with '-epo.fif'.",
"epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')\nepochs.save(epochs_fname)",
"Later on you can read the epochs with :func:mne.read_epochs. For reading\nEEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use\npreload=False to save memory, loading the epochs from disk on demand.",
"epochs = mne.read_epochs(epochs_fname, preload=False)",
"If you wish to look at the average across trial types, then you may do so,\ncreating an :class:Evoked <mne.Evoked> object in the process. Instances\nof Evoked are usually created by calling :func:mne.Epochs.average. For\ncreating Evoked from other data structures see :class:mne.EvokedArray and\ntut_creating_data_structures.",
"ev_left = epochs['Auditory/Left'].average()\nev_right = epochs['Auditory/Right'].average()\n\nf, axs = plt.subplots(3, 2, figsize=(10, 5))\n_ = f.suptitle('Left / Right auditory', fontsize=20)\n_ = ev_left.plot(axes=axs[:, 0], show=False)\n_ = ev_right.plot(axes=axs[:, 1], show=False)\nplt.tight_layout()",
"To export and manipulate Epochs using Pandas see tut_io_export_pandas."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/pcmdi/cmip6/models/sandbox-2/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: PCMDI\nSource ID: SANDBOX-2\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:36\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
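"Properties with cardinality 0.N or 1.N, such as the lateral connectivity above, accept more than one value; each value is entered with a separate `DOC.set_value` call. A sketch with a hypothetical stand-in for the `DOC` object (`MockMultiDoc` is an assumption for illustration — it supposes repeated calls accumulate values, which the real pyesdoc client may handle differently):

```python
# Hypothetical stand-in for the ES-DOC DOC object (illustration only).
class MockMultiDoc:
    def __init__(self):
        self.values = {}       # property id -> list of entered values
        self.current_id = None

    def set_id(self, prop_id):
        # Select the property and start an empty value list for it.
        self.current_id = prop_id
        self.values.setdefault(prop_id, [])

    def set_value(self, value):
        # Multi-valued (0.N / 1.N) properties: each call appends one value.
        self.values[self.current_id].append(value)

DOC = MockMultiDoc()
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
DOC.set_value("perfect connectivity")
DOC.set_value("Darcian flow")
print(DOC.values['cmip6.land.soil.hydrology.lateral_connectivity'])
```",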
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lmoresi/UoM-VIEPS-Intro-to-Python
|
Notebooks/SphericalMeshing/CartesianTriangulations/Ex4-Gradients.ipynb
|
mit
|
[
"Example 4 - stripy gradients\nSRFPACK is a Fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points. It employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients.\nNotebook contents\n\nAnalytic function and derivatives\nEvaluating accuracy\n\nThe next example is Ex5-Smoothing\nDefine a computational mesh\nUse the (usual) icosahedron with face points included.",
"import stripy as stripy\n\nxmin = 0.0\nxmax = 10.0\nymin = 0.0\nymax = 10.0\nextent = [xmin, xmax, ymin, ymax]\n\nspacingX = 0.2\nspacingY = 0.2\n\nmesh = stripy.cartesian_meshes.elliptical_mesh(extent, spacingX, spacingY, refinement_levels=3)\n\nprint(\"number of points = {}\".format(mesh.npoints))",
"Analytic function\nDefine a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse",
"import numpy as np\n\ndef analytic(xs, ys, k1, k2):\n return np.cos(k1*xs) * np.sin(k2*ys)\n\ndef analytic_ddx(xs, ys, k1, k2):\n return -k1 * np.sin(k1*xs) * np.sin(k2*ys) / np.cos(ys)\n\ndef analytic_ddy(xs, ys, k1, k2):\n return k2 * np.cos(k1*xs) * np.cos(k2*ys) \n\nanalytic_sol = analytic(mesh.x, mesh.y, 0.1, 1.0)\nanalytic_sol_ddx = analytic_ddx(mesh.x, mesh.y, 0.1, 1.0)\nanalytic_sol_ddy = analytic_ddy(mesh.x, mesh.y, 0.1, 1.0)\n\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef axis_mesh_field(fig, ax, mesh, field, label):\n\n ax.axis('off')\n\n x0 = mesh.x\n y0 = mesh.y\n \n trip = ax.tripcolor(x0, y0, mesh.simplices, field, cmap=plt.cm.RdBu)\n fig.colorbar(trip, ax=ax)\n \n ax.set_title(str(label))\n return\n\n \nfig = plt.figure(figsize=(10, 8), facecolor=\"none\")\nax = fig.add_subplot(111)\naxis_mesh_field(fig, ax, mesh, analytic_sol, \"analytic solution\")",
"Derivatives of solution compared to analytic values\nThe gradient method of Triangulation takes a data array f representing values on the mesh vertices and returns the x,y derivatives.\n``` python\nTriangulation.gradient(f, nit=3, tol=0.001)\n```\nDerivatives of higher accuracy can be obtained by tweaking tol, which controls the convergence tolerance, or nit which controls the number of iterations to a solution. The default values are set to an optimal trade-off between speed and accuracy.",
"stripy_ddx, stripy_ddy = mesh.gradient(analytic_sol)\n\n\nfig, ax = plt.subplots(3,2, figsize=(12, 15), facecolor=\"none\")\n\naxis_mesh_field(fig, ax[0,0], mesh, analytic_sol, label=\"original\")\naxis_mesh_field(fig, ax[1,0], mesh, stripy_ddx, label=\"ddy\")\naxis_mesh_field(fig, ax[1,1], mesh, stripy_ddy, label=\"ddx\")\naxis_mesh_field(fig, ax[2,0], mesh, stripy_ddx-analytic_sol_ddx, label=\"ddx_err\")\naxis_mesh_field(fig, ax[2,1], mesh, stripy_ddy-analytic_sol_ddy, label=\"ddy_err\")\n\nax[0,1].axis('off')\n\nplt.show()",
"The next example is Ex5-Smoothing"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
BayesianTestsML/tutorial
|
Python/Correlated t-test for comparing classifiers performance on the same dataset.ipynb
|
gpl-3.0
|
[
"Bayesian Correlated t-test\nModule correlated_ttest in bayesiantests can be used to perform the correlated t-test on the performance of two classifiers that have been assessed by $m$-runs of $k$-fold cross-validation on the same dataset\nThis notebook demonstrates the use of the module.\nWe will load the classification accuracies of the naive Bayesian classifier (NBC) and AODE on the dataset Anneal from the file Data/nbc_anneal.csv and Data/aode_anneal.csv. The classifiers have been evaluated by 10-runs of 10-fold cross-validation.",
"import numpy as np\nAcc_nbc = np.loadtxt('Data/nbc_anneal.csv', delimiter=',', skiprows=1)\nAcc_aode = np.loadtxt('Data/aode_anneal.csv', delimiter=',', skiprows=1)\nnames = (\"AODE\", \"NBC\")\nx=np.zeros((len(Acc_nbc),2),'float')\nx[:,0]=Acc_aode/100\nx[:,1]=Acc_nbc/100\n#we consider the difference of accuracy scaled in (0,1)",
"Functions in the module accept the following arguments.\n\nx: a 2-d array with scores of two models (each row corresponding to a data set) or a vector of differences.\nrope: the region of practical equivalence. We consider two classifiers equivalent if the difference in their performance is smaller than rope. \nruns: number of repetitions of cross validation \nnames: the names of the two classifiers; if x is a vector of differences, positive values mean that the second (right) model had a higher score.\nverbose: when True the functions also prints out the probabilities",
"import bayesiantests as bt\nrope=0.01\nleft, within, right = bt.correlated_ttest(x, rope=rope,runs=10,verbose=True,names=names)",
"We can also plot the posterior distribution.",
"import warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as snb\n#generate samples from posterior (it is not necesssary because the posterior is a Student)\nsamples=bt.correlated_ttest_MC(x, rope=rope,runs=10,nsamples=50000)\n#plot posterior\nsnb.kdeplot(samples, shade=True) \n#plot rope region\nplt.axvline(x=-rope,color='orange')\nplt.axvline(x=rope,color='orange')\n#add label\nplt.xlabel('Nbc-Aode on Anneal dataset');",
"We will load the classification accuracies of NBC and AODE on the dataset Audiology from the file. The classifiers have been evaluated by 10-runs of 10-fold cross-validation.",
"import numpy as np\nAcc_nbc = np.loadtxt('Data/nbc_audiology.csv', delimiter=',', skiprows=1)\nAcc_aode = np.loadtxt('Data/aode_audiology.csv', delimiter=',', skiprows=1)\nnames = (\"AODE\", \"NBC\")\ndiff=(Acc_nbc-Acc_aode)/100.0 #we consider the difference of accuracy scaled in (0,1)\n\nimport bayesiantests as bt\nrope=0.01\nleft, within, right = bt.correlated_ttest(diff, rope=rope,runs=10,verbose=True,names=names)\n\nimport warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as snb\n#generate samples from posterior (it is not necesssary because the posterior is a Student)\nsamples=bt.correlated_ttest_MC(diff, rope=rope,runs=10,nsamples=50000)\n#plot posterior\nsnb.kdeplot(samples, shade=True) \n#plot rope region\nplt.axvline(x=-rope,color='orange')\nplt.axvline(x=rope,color='orange')\n#add label\nplt.xlabel('Nbc-Aode on Audiology dataset');",
"References\n@ARTICLE{bayesiantests2016,\n author = {{Benavoli}, A. and {Corani}, G. and {Demsar}, J. and {Zaffalon}, M.},\n title = \"{Time for a change: a tutorial for comparing multiple classifiers through Bayesian analysis}\",\n journal = {ArXiv e-prints},\n archivePrefix = \"arXiv\",\n eprint = {1606.04316},\n url={https://arxiv.org/abs/1606.04316},\n year = 2016,\n month = jun\n}\n@article{corani2015a,\n year = {2015},\n volume = {100},\n number = {2},\n journal = {Machine Learning},\n doi = {10.1007/s10994-015-5486-z},\n title = {{A Bayesian approach for comparing cross-validated algorithms on multiple data sets}},\n publisher = {Springer US},\n author = {Corani, Giorgio and Benavoli, Alessio},\n pages = {285--304},\n url = {http://www.idsia.ch/~alessio/corani2015a.pdf}\n}"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
chromium/chromium
|
third_party/tflite_support/src/tensorflow_lite_support/examples/colab/on_device_text_to_image_search_tflite.ipynb
|
bsd-3-clause
|
[
"Copyright 2022 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"On-device Text-to-Image Search with TensorFlow Lite Searcher Library\nIn this colab, we showcase an end to end example of how to train an image-text dual encoder model and how to perform retrieval with TFLite Searcher Library. We are going to use the COCO 2014 dataset, and in the end you'll be able to retrieve images using a text description.\nFirst, we need to encode the images into high-dimensional vectors. Then we index them with Model Maker Searcher API. During inference, a TFLite text embedder encodes the text query into another high-dimensional vector in the same embedding space, and invokes the on-device ScaNN searcher to retrieve similar images.\nYou can download the pre-trained searcher model packed with ScaNN index from here and skip to inference. Be sure to name it searcher_model.tflite and upload it to colab under the current working directory.",
"!pip install -q -U tensorflow tensorflow-hub tensorflow-addons\n!pip install -q -U tflite-support\n!pip install -q -U tflite-model-maker\n!pip install -q -U tensorflow-text==2.10.0b2\n!sudo apt-get -qq install libportaudio2 # Needed by tflite-support",
"Note you might need to restart the runtime after installation.",
"import json\nimport math\nimport os\nimport pickle\nimport random\nimport shutil\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow import keras\nimport tensorflow.compat.v1 as tf1\nfrom tensorflow.keras import layers\nimport tensorflow_addons as tfa\nimport tensorflow_hub as hub\nimport tensorflow_text as text\nfrom tensorflow_text.python.ops import fast_sentencepiece_tokenizer as sentencepiece_tokenizer\n\n# Suppressing tf.hub warnings\ntf.get_logger().setLevel('ERROR')\n\nDATASET_DIR = 'datasets'\nCAPTION_URL = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip'\nTRAIN_IMAGE_URL = 'http://images.cocodataset.org/zips/train2014.zip'\nVALID_IMAGE_URL = 'http://images.cocodataset.org/zips/val2014.zip'\nTRAIN_IMAGE_DIR = os.path.join(DATASET_DIR, 'train2014')\nVALID_IMAGE_DIR = os.path.join(DATASET_DIR, 'val2014')\nTRAIN_IMAGE_PREFIX = 'COCO_train2014_'\nVALID_IMAGE_PREFIX = 'COCO_val2014_'\n\nIMAGE_SIZE = (384, 384)\nEFFICIENT_NET_URL = 'https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2'\nUNIVERSAL_SENTENCE_ENCODER_URL = 'https://tfhub.dev/google/universal-sentence-encoder-lite/2'\n\nBATCH_SIZE = 256\nNUM_EPOCHS = 10\nSEQ_LENGTH = 128\nEMB_SIZE = 128",
"Get COCO dataset\nWe are not using Tensorflow Dataset to get the coco_captions dataset due to disk space concerns. The following code will download and process the dataset.",
"#@title Functions for downloading and parsing annotations.\n\ndef parse_annotation_json(json_path):\n # Assuming the json file is already downloaded.\n with open(json_path, 'r') as f:\n json_obj = json.load(f)\n\n # Parsing out the following information from the annotation json: the COCO\n # image id and their corresponding flickr post id, as well as the captions.\n mapping = dict()\n for caption in json_obj['annotations']:\n image_id = caption['image_id']\n if image_id not in mapping:\n mapping[image_id] = [[]]\n mapping[image_id][0].append(caption['caption'])\n for image in json_obj['images']:\n # The flickr url here is the CDN url. We need to split it to get the post\n # id.\n flickr_url = image['flickr_url']\n url_parts = flickr_url.split('/')\n flickr_id = url_parts[-1].split('_')[0]\n mapping[image['id']].append(flickr_id)\n return list(mapping.items())\n\n\ndef get_train_valid_captions():\n # Parse and cache the annotation for train and valid\n train_pickle_path = os.path.join(DATASET_DIR, 'train_captions.pickle')\n valid_pickle_path = os.path.join(DATASET_DIR, 'valid_captions.pickle')\n\n if not os.path.exists(train_pickle_path) or not os.path.exists(\n valid_pickle_path):\n # Parse and cache the annotations if they don't exist\n annotation_zip = tf.keras.utils.get_file(\n 'annotations.zip',\n cache_dir=os.path.abspath('.'),\n cache_subdir=os.path.join(DATASET_DIR, 'tmp'),\n origin=CAPTION_URL,\n extract=True,\n )\n os.remove(annotation_zip)\n train_img_cap = parse_annotation_json(\n os.path.join(DATASET_DIR, 'tmp', 'annotations',\n 'captions_train2014.json'))\n valid_img_cap = parse_annotation_json(\n os.path.join(DATASET_DIR, 'tmp', 'annotations',\n 'captions_val2014.json'))\n with open(train_pickle_path, 'wb') as f:\n pickle.dump(train_img_cap, f)\n with open(valid_pickle_path, 'wb') as f:\n pickle.dump(valid_img_cap, f)\n shutil.rmtree(os.path.join(DATASET_DIR, 'tmp'))\n else:\n # Load the cached annotations\n with open(train_pickle_path, 'rb') as f:\n 
train_img_cap = pickle.load(f)\n with open(valid_pickle_path, 'rb') as f:\n valid_img_cap = pickle.load(f)\n return train_img_cap, valid_img_cap\n\n#@title Functions for downloading the images and create the dataset.\n\ndef get_sentencepiece_tokenizer_in_tf2():\n # The universal sentence encoder model from TFHub is in TF1 Module format. We\n # need to directly access the asset_paths to get the sentencepiece tokenizer\n # proto path.\n module = hub.load(UNIVERSAL_SENTENCE_ENCODER_URL)\n spm_path = module.asset_paths[0].asset_path.numpy()\n with tf.io.gfile.GFile(spm_path, mode='rb') as f:\n return sentencepiece_tokenizer.FastSentencepieceTokenizer(f.read())\n\n\ndef prepare_dataset(id_image_info_list,\n image_file_prefix,\n image_dir,\n image_zip_url,\n shuffle=False):\n # Download and unzip the dataset if it's not there already.\n if not os.path.exists(image_dir):\n image_zip = tf.keras.utils.get_file(\n 'image.zip',\n cache_dir=os.path.abspath('.'),\n cache_subdir=os.path.join(DATASET_DIR),\n origin=image_zip_url,\n extract=True,\n )\n os.remove(image_zip)\n\n # Convert the lists into tensors so that we can index into it in the dataset\n # transformations later.\n coco_ids, image_info = zip(*id_image_info_list)\n captions, flickr_ids = zip(*image_info)\n file_names = list(\n map(\n lambda id: os.path.join(image_dir, '%s%012d.jpg' %\n (image_file_prefix, id)), coco_ids))\n coco_ids_tensor = tf.constant(coco_ids)\n captions_tensor = tf.ragged.constant(captions)\n file_names_tensor = tf.constant(file_names)\n flickr_ids_tensor = tf.constant(flickr_ids)\n\n # The initial dataset only contains the index. 
This is to make sure the\n # dataset has a known size.\n dataset = tf.data.Dataset.range(len(coco_ids))\n\n sp = get_sentencepiece_tokenizer_in_tf2()\n\n def _load_image_and_select_caption(i):\n image_id = coco_ids_tensor[i]\n captions = captions_tensor[i]\n image_path = file_names_tensor[i]\n flickr_id = flickr_ids_tensor[i]\n image = tf.image.decode_jpeg(tf.io.read_file(image_path), channels=3)\n\n # Randomly select one caption from the many captions we have for each image\n caption_idx = tf.random.uniform((1,),\n minval=0,\n maxval=tf.shape(captions)[0],\n dtype=tf.int32)[0]\n caption = captions[caption_idx]\n caption = tf.sparse.from_dense(sp.tokenize(caption))\n example = {\n 'image': image,\n 'image_id': image_id,\n 'caption': caption,\n 'flickr_id': flickr_id\n }\n return example\n\n def _resize_image(example):\n # Efficient net requires the pixels to be in range of [0, 1].\n example['image'] = tf.image.resize(example['image'], size=IMAGE_SIZE) / 255\n return example\n\n dataset = (\n # Load the images from disk and decode them into numpy arrays.\n dataset.map(\n _load_image_and_select_caption,\n num_parallel_calls=tf.data.AUTOTUNE,\n deterministic=not shuffle)\n\n # Resizing image is slow. We put the stage into a separate map so that it\n # could get more threads to not be the bottleneck.\n .map(\n _resize_image,\n num_parallel_calls=tf.data.AUTOTUNE,\n deterministic=not shuffle))\n\n if shuffle:\n dataset = dataset.shuffle(BATCH_SIZE * 10)\n\n dataset = dataset.batch(BATCH_SIZE)\n return dataset",
"Download the datasets and preprocess them.",
"# We parse the caption json files first.\ntrain_img_cap, valid_img_cap = get_train_valid_captions()\nprint(f'Train number of images: {len(train_img_cap)}')\nprint(f'Valid number of images: {len(valid_img_cap)}')\n\nexample = train_img_cap[0]\nprint(f'COCO image id: {example[0]}')\nprint(f'Captions: {example[1][0]}')\nprint(f'Flickr post url: http://flickr.com/photo.gne?id={example[1][1]}')\n\n# Shuffle both the train and validation sets\nrandom.shuffle(valid_img_cap)\nrandom.shuffle(train_img_cap)\n\n# We randomly sample 5000 image-caption pairs from validation set for validation\n# during training, to match the setup of\n# https://www.tensorflow.org/datasets/catalog/coco_captions. However, when\n# generating the retrieval database later on, we will use all the images in both\n# validation and training splits.\nvalid_dataset = prepare_dataset(\n valid_img_cap[:5000],\n VALID_IMAGE_PREFIX,\n VALID_IMAGE_DIR,\n VALID_IMAGE_URL)\ntrain_dataset = prepare_dataset(\n train_img_cap,\n TRAIN_IMAGE_PREFIX,\n TRAIN_IMAGE_DIR,\n TRAIN_IMAGE_URL,\n shuffle=True)",
"Define models\nThe image encoder and text encoder may not output the embeddings with the same amount of dimensions. We need to project them into the same embedding space",
"def project_embeddings(embeddings, num_projection_layers, projection_dims,\n dropout_rate):\n\n projected_embeddings = layers.Dense(units=projection_dims)(embeddings)\n for _ in range(num_projection_layers):\n x = tf.nn.relu(projected_embeddings)\n x = layers.Dense(projection_dims)(x)\n x = layers.Dropout(dropout_rate)(x)\n x = layers.Add()([projected_embeddings, x])\n projected_embeddings = layers.LayerNormalization()(x)\n\n # Finally we L2 normalize the embeddings. In general, L2 normalized embeddings\n # are easier to retrieve.\n projected_embeddings = tf.math.l2_normalize(projected_embeddings, axis=1)\n return projected_embeddings\n\ndef create_image_encoder(num_projection_layers,\n projection_dims,\n dropout_rate,\n trainable=False):\n efficient_net = hub.KerasLayer(EFFICIENT_NET_URL, trainable=trainable)\n inputs = layers.Input(shape=IMAGE_SIZE + (3,), name='image_input')\n embeddings = efficient_net(inputs)\n outputs = project_embeddings(embeddings, num_projection_layers,\n projection_dims, dropout_rate)\n return keras.Model(inputs, outputs, name='image_encoder')",
"We use Universal Sentence Encoder, a SOTA sentence embedding model, as the text encoder base model. The TFHub lite version is a TF1 saved model. To make it work well in TF2 and later TFLite conversion, we create two models, one is the frozen universal sentence encoder, and the other is the trainable projection layer.",
"def create_text_encoder():\n encoder = hub.KerasLayer(\n UNIVERSAL_SENTENCE_ENCODER_URL,\n name='universal_sentence_encoder',\n signature='default')\n encoder.trainable = False\n inputs = layers.Input(\n shape=(None,), dtype=tf.int64, name='text_input', sparse=True)\n embeddings = encoder(\n dict(\n values=inputs.values,\n indices=inputs.indices,\n dense_shape=inputs.dense_shape))\n return keras.Model(inputs, embeddings, name='text_encoder')\n\n\ndef create_text_embedder_projection(input_dim, num_projection_layers,\n projection_dims, dropout_rate):\n inputs = layers.Input(shape=(input_dim), dtype=tf.float32, name='text_input')\n outputs = project_embeddings(inputs, num_projection_layers, projection_dims,\n dropout_rate)\n return keras.Model(inputs, outputs, name='projection_layers')",
"This dual encoder model is derived from this Keras post",
"class DualEncoder(keras.Model):\n\n def __init__(self,\n text_encoder,\n text_encoder_projection,\n image_encoder,\n temperature,\n **kwargs):\n super(DualEncoder, self).__init__(**kwargs)\n self.text_encoder = text_encoder\n self.text_encoder_projection = text_encoder_projection\n self.image_encoder = image_encoder\n\n # Temperature controls the contrast of softmax output. In general, a low\n # temperature increases the contrast and a high temperature decreases it.\n self.temperature = temperature\n self.loss_tracker = keras.metrics.Mean(name='loss')\n\n @property\n def metrics(self):\n return [self.loss_tracker]\n\n def call(self, features, training=False):\n # If there are two GPUs present, we use one of them for image encoder and\n # one for text encoder. If there's only one GPU then they will be trained on\n # the same GPU.\n with tf.device('/gpu:0'):\n caption_embeddings = self.text_encoder(\n features['caption'], training=False)\n caption_embeddings = self.text_encoder_projection(\n caption_embeddings, training=training)\n with tf.device('/gpu:1'):\n image_embeddings = self.image_encoder(\n features['image'], training=training)\n return caption_embeddings, image_embeddings\n\n def compute_loss(self, caption_embeddings, image_embeddings):\n # Computing the loss with dot product similarity between image and text\n # embeddings.\n logits = (\n tf.matmul(caption_embeddings, image_embeddings, transpose_b=True) /\n self.temperature)\n images_similarity = tf.matmul(\n image_embeddings, image_embeddings, transpose_b=True)\n captions_similarity = tf.matmul(\n caption_embeddings, caption_embeddings, transpose_b=True)\n\n # The targets is the mean of the self-similarity of the captions and images.\n # This is more lenient to the similar examples appeared in the same batch.\n targets = keras.activations.softmax(\n (captions_similarity + images_similarity) / (2 * self.temperature))\n captions_loss = keras.losses.categorical_crossentropy(\n y_true=targets, y_pred=logits, 
from_logits=True)\n images_loss = keras.losses.categorical_crossentropy(\n y_true=tf.transpose(targets),\n y_pred=tf.transpose(logits),\n from_logits=True)\n return (captions_loss + images_loss) / 2\n\n def train_step(self, features):\n with tf.GradientTape() as tape:\n # Forward pass\n caption_embeddings, image_embeddings = self(features, training=True)\n loss = self.compute_loss(caption_embeddings, image_embeddings)\n\n # Backward pass\n gradients = tape.gradient(loss, self.trainable_variables)\n self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))\n self.loss_tracker.update_state(loss)\n return {'loss': self.loss_tracker.result()}\n\n def test_step(self, features):\n caption_embeddings, image_embeddings = self(features, training=False)\n loss = self.compute_loss(caption_embeddings, image_embeddings)\n self.loss_tracker.update_state(loss)\n return {'loss': self.loss_tracker.result()}",
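To see what compute_loss above computes, here is a plain-NumPy sketch of the same symmetric soft-target cross-entropy; random unit vectors stand in for the encoder outputs, and the batch size, dimension, and temperature values are illustrative assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
batch, dim, temperature = 4, 8, 0.05

# Random unit vectors stand in for caption and image embeddings.
cap = rng.normal(size=(batch, dim))
cap /= np.linalg.norm(cap, axis=1, keepdims=True)
img = rng.normal(size=(batch, dim))
img /= np.linalg.norm(img, axis=1, keepdims=True)

logits = cap @ img.T / temperature
# Soft targets: softmax of the mean of the two self-similarity matrices.
targets = softmax((cap @ cap.T + img @ img.T) / (2 * temperature))

# Cross-entropy in both directions, then averaged, as in compute_loss.
captions_loss = -(targets * np.log(softmax(logits))).sum(axis=1)
images_loss = -(targets.T * np.log(softmax(logits.T))).sum(axis=1)
loss = (captions_loss + images_loss) / 2

assert loss.shape == (batch,)
assert np.all(loss >= 0)
```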
"Train the Dual Encoder model\nLoad the models from Tensorflow Hub.",
"# The text embedder consists of two models. One is the frozen base universal\n# sentence encoder, and the other is the trainable projection layer. We are\n# doing this instead of one model to make later TFLite model conversion easier.\ntext_encoder = create_text_encoder()\nprojection_layers = create_text_embedder_projection(\n input_dim=512, # Universal sentence encoder output has 512 dimensions\n num_projection_layers=1,\n projection_dims=EMB_SIZE,\n dropout_rate=0.1)\n\nimage_encoder = create_image_encoder(\n num_projection_layers=1, projection_dims=EMB_SIZE, dropout_rate=0.1)\n\ndual_encoder = DualEncoder(\n text_encoder, projection_layers, image_encoder, temperature=0.05)\ndual_encoder.compile(\n optimizer=tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.001))",
"Train the dual encoder model.",
"# We train the first three epochs with the learning rate of 0.001 and\n# decrease it exponentially later on.\ndef lr_scheduler(epoch, lr):\n if epoch < 3:\n return lr\n else:\n return max(lr * tf.math.exp(-0.1), lr * 0.1)\n\n# In colab, training takes roughly 4s per step, around 24 mins per epoch\nearly_stopping = tf.keras.callbacks.EarlyStopping(\n monitor='val_loss', patience=2, restore_best_weights=True)\nhistory = dual_encoder.fit(\n train_dataset,\n epochs=NUM_EPOCHS,\n validation_data=valid_dataset,\n callbacks=[\n tf.keras.callbacks.LearningRateScheduler(lr_scheduler), early_stopping\n ],\n max_queue_size=2,\n)\n\n# Save the models. We are not going to save the text_encoder since it's frozen\n# and the TF2 saved model for text_encoder has problems converting to TFLite.\nprint('Training completed. Saving image and text encoders.')\ndual_encoder.image_encoder.save('image_encoder')\ndual_encoder.text_encoder_projection.save('text_encoder_projection')\nprint('Models are saved.')",
"Create the text-to-image Searcher model using Model Maker\nGenerate image embeddings\nLoad the valid and train dataset one more time. This time we are not going to shuffle the train split and we use the whole validataion split. Since images are not loaded until they are iterated, creating the datasets should be cheap.",
"combined_valid_dataset = prepare_dataset(\n valid_img_cap,\n VALID_IMAGE_PREFIX,\n VALID_IMAGE_DIR,\n VALID_IMAGE_URL)\ndeterministic_train_dataset = prepare_dataset(\n train_img_cap,\n TRAIN_IMAGE_PREFIX,\n TRAIN_IMAGE_DIR,\n TRAIN_IMAGE_URL)\n\nall_combined = deterministic_train_dataset.concatenate(combined_valid_dataset)",
"Create the metadata (image file names and the flickr post id) from the dataset. This will later be packed into the TFLite model.",
"def create_metadata(image_file_prefix, image_dir):\n\n def _create_metadata(image_info):\n # This is the same way we generated the image paths in the prepare_dataset\n # function above\n coco_id = image_info[0]\n flickr_id = image_info[1][1]\n return ('%s_%s' %\n (flickr_id,\n os.path.join(image_dir, '%s%012d.jpg' %\n (image_file_prefix, coco_id)))).encode('utf-8')\n\n return _create_metadata\n\n\n# We don't store the images in the index file, as that would be too big. We only\n# store the image path and the corresponding Flickr id.\nmetadata = list(\n map(create_metadata(TRAIN_IMAGE_PREFIX, TRAIN_IMAGE_DIR), train_img_cap))\nmetadata.extend(\n map(create_metadata(VALID_IMAGE_PREFIX, VALID_IMAGE_DIR), valid_img_cap))",
"Generate the embeddings for all the images we have. We do it in Tensorflow with GPU instead of Model Maker. Again, these will be packed into the TFLite model.",
"# Image encoder takes one input named `image_input` so we remove other values in\n# the dataset.\nimage_dataset = all_combined.map(\n lambda example: {'image_input': example['image']})\nimage_embeddings = dual_encoder.image_encoder.predict(image_dataset, verbose=1)\nprint(f'Embedding matrix shape: {image_embeddings.shape}')",
"Convert text embedder to TFLite\nWe need to convert the saved model to TF1 as the base Universal Sentence Encoder is a TF1 model. It'll create a saved model dir on disk called converted_model",
"#@title Prepare the saved model\n!rm -rf converted_model\n\n# This create a new TF1 SavedModel from 1). The tfhub USE, and 2). The\n# projection layers trained and saved from TF2.\nwith tf1.Graph().as_default() as g:\n with tf1.Session() as sess:\n # Reload the Universal Sentence Encoder model from tfhub. We can't just save\n # the USE in TF2 as we did for the projection layers as that causes issues\n # in the TFLite converter.\n module = hub.Module(UNIVERSAL_SENTENCE_ENCODER_URL)\n spm_path = sess.run(module(signature='spm_path'))\n with tf1.io.gfile.GFile(spm_path, mode='rb') as f:\n serialized_spm = f.read()\n spm_path = sess.run(module(signature='spm_path'))\n input_str = tf1.placeholder(dtype=tf1.string, shape=[None])\n tokenizer = sentencepiece_tokenizer.FastSentencepieceTokenizer(\n model=serialized_spm)\n tokenized = tf1.sparse.from_dense(tokenizer.tokenize(input_str).to_tensor())\n tokenized = tf1.cast(tokenized, dtype=tf1.int64)\n encodings = module(\n inputs=dict(\n values=tokenized.values,\n indices=tokenized.indices,\n dense_shape=tokenized.dense_shape))\n\n # Then combine it with the trained projection layers\n projection_layers = tf1.keras.models.load_model('text_encoder_projection')\n encodings = projection_layers(encodings)\n\n sess.run([tf1.global_variables_initializer(), tf1.tables_initializer()])\n\n # Save with SavedModelBuilder\n builder = tf1.saved_model.Builder('converted_model')\n sig_def = tf1.saved_model.predict_signature_def(\n inputs={'input': input_str}, outputs={'output': encodings})\n builder.add_meta_graph_and_variables(\n sess,\n tags=['serve'],\n signature_def_map={\n tf1.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY: sig_def\n },\n clear_devices=True)\n builder.save()\nprint('Model saved to converted_model/')",
"Convert and save the TFLite model. Here the model only has the text encoder. We will add in the retrieval index in the following steps.",
"converter = tf.lite.TFLiteConverter.from_saved_model('converted_model')\nconverter.experimental_new_converter = True\nconverter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]\nconverter.allow_custom_ops = True\nconverted_model_tflite = converter.convert()\nwith open('text_embedder.tflite', 'wb') as f:\n f.write(converted_model_tflite)",
"Create TFLite Searcher model\nIn general see the documentation of ScaNNOptions for how to configure the searcher for your dataset.",
"import tflite_model_maker as mm\n\nscann_options = mm.searcher.ScaNNOptions(\n # We use the dot product similarity as this is how the model is trained\n distance_measure='dot_product',\n # Enable space partitioning with K-Means tree\n tree=mm.searcher.Tree(\n # How many partitions to have. A rule of thumb is the square root of the\n # dataset size. In this case it's 351.\n num_leaves=int(math.sqrt(len(metadata))),\n # Searching 4 partitions seems to give reasonable result. Searching more\n # will definitely return better results, but it's more costly to run.\n num_leaves_to_search=4),\n # Compress each float to int8 in the embedding. See\n # https://www.tensorflow.org/lite/api_docs/python/tflite_model_maker/searcher/ScoreAH\n # for details\n score_ah=mm.searcher.ScoreAH(\n # Using 1 dimension per quantization block.\n 1,\n # Generally 0.2 works pretty well.\n anisotropic_quantization_threshold=0.2))\n\ndata = mm.searcher.DataLoader(\n embedder_path='text_embedder.tflite',\n dataset=image_embeddings,\n metadata=metadata)\n\nmodel = mm.searcher.Searcher.create_from_data(\n data=data, scann_options=scann_options)\nmodel.export(\n export_filename='searcher_model.tflite',\n userinfo='',\n export_format=mm.searcher.ExportFormat.TFLITE)",
"Run inference using Task Library",
"from tflite_support.task import text\nfrom tflite_support.task import core",
"Configure the searcher to return 6 results per query and not to L2 normalize the query embeddings because the text encoder has already normalized them. See source code on how to configure the TextSearcher.",
"options = text.TextSearcherOptions(\n base_options=core.BaseOptions(\n file_name='searcher_model.tflite'))\n\n# The searcher returns 6 results\noptions.search_options.max_results = 6\n\ntflite_searcher = text.TextSearcher.create_from_options(options)\n\ndef search_image_with_text(query_str, show_images=False):\n neighbors = tflite_searcher.search(query_str)\n\n for i, neighbor in enumerate(neighbors.nearest_neighbors):\n metadata = neighbor.metadata.decode('utf-8').split('_')\n flickr_id = metadata[0]\n print('Flickr url for %d: http://flickr.com/photo.gne?id=%s' %\n (i + 1, flickr_id))\n\n if show_images:\n plt.figure(figsize=(20, 13))\n for i, neighbor in enumerate(neighbors.nearest_neighbors):\n ax = plt.subplot(2, 3, i + 1)\n\n # Using negative distance since on-device ScaNN returns negative\n # dot-product distance.\n ax.set_title('%d: Similarity: %.05f' % (i + 1, -neighbor.distance))\n metadata = neighbor.metadata.decode('utf-8').split('_')\n image_path = '_'.join(metadata[1:])\n image = tf.image.decode_jpeg(\n tf.io.read_file(image_path), channels=3) / 255\n plt.imshow(image)\n plt.axis('off')",
"We will not show the image here due to copyright issues. You can set show_images=True to display them (note that you can't set it to True unless you've downloaded the images at this cell). Please check the post links for the license of each image.",
"search_image_with_text('A man riding on a bike')",
"Congratulations on finishing this colab! For next steps, you can try deploy the model on-device (inference + search on Pixel 6 is around 6 ms), or you can train the model with your own dataset. In the mean time, don't forget to checkout our documentations (Model Maker, Task Library) and the reference app, which searches news articles in CNN_DailyMail dataset"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mila-udem/summerschool2015
|
fuel_tutorial/fuel_logreg.ipynb
|
bsd-3-clause
|
[
"Fuel exercise: logistic regression\nThis notebook is a copy of logistic_regression.ipynb, without the code downloading and unpacking the dataset.\nYour goal is to use Fuel as a provider of data, instead of the previous approach. You will have to update the code in several places.\nThe solution is at fuel_logreg_solution.ipynb.\nGet the data\nThe model\nLogistic regression is a probabilistic, linear classifier. It is parametrized\nby a weight matrix $W$ and a bias vector $b$. Classification is\ndone by projecting an input vector onto a set of hyperplanes, each of which\ncorresponds to a class. The distance from the input to a hyperplane reflects\nthe probability that the input is a member of the corresponding class.\nMathematically, the probability that an input vector $x$ is a member of a\nclass $i$, a value of a stochastic variable $Y$, can be written as:\n$$P(Y=i|x, W,b) = softmax_i(W x + b) = \\frac {e^{W_i x + b_i}} {\\sum_j e^{W_j x + b_j}}$$\nThe model's prediction $y_{pred}$ is the class whose probability is maximal, specifically:\n$$ y_{pred} = {\\rm argmax}_i P(Y=i|x,W,b)$$\nNow, let us define our input variables. First, we need to define the dimension of our tensors:\n- n_in is the length of each training vector,\n- n_out is the number of classes.\nOur variables will be:\n- x is a matrix, where each row contains a different example of the dataset. Its shape is (batch_size, n_in), but batch_size does not have to be specified in advance, and can change during training.\n- W is a shared matrix, of shape (n_in, n_out), initialized with zeros. Column k of W represents the separation hyperplane for class k.\n- b is a shared vector, of length n_out, initialized with zeros. Element k of b represents the free parameter of hyperplane k.",
"import numpy\nimport theano\nfrom theano import tensor\n\n# Size of the data\nn_in = 28 * 28\n# Number of classes\nn_out = 10\n\nx = tensor.matrix('x')\nW = theano.shared(value=numpy.zeros((n_in, n_out), dtype=theano.config.floatX),\n name='W',\n borrow=True)\nb = theano.shared(value=numpy.zeros((n_out,), dtype=theano.config.floatX),\n name='b',\n borrow=True)",
"Now, we can build a symbolic expression for the matrix of class-membership probability (p_y_given_x), and for the class whose probability is maximal (y_pred).",
"p_y_given_x = tensor.nnet.softmax(tensor.dot(x, W) + b)\ny_pred = tensor.argmax(p_y_given_x, axis=1)",
"Defining a loss function\nLearning optimal model parameters involves minimizing a loss function. In the\ncase of multi-class logistic regression, it is very common to use the negative\nlog-likelihood as the loss. This is equivalent to maximizing the likelihood of the\ndata set $\\cal{D}$ under the model parameterized by $\\theta$. Let\nus first start by defining the likelihood $\\cal{L}$ and loss\n$\\ell$:\n$$\\mathcal{L} (\\theta={W,b}, \\mathcal{D}) =\n \\sum_{i=0}^{|\\mathcal{D}|} \\log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\\n \\ell (\\theta={W,b}, \\mathcal{D}) = - \\mathcal{L} (\\theta={W,b}, \\mathcal{D})\n$$\nAgain, we will express those expressions using Theano. We have one additional input, the actual target class y:\n- y is an input vector of integers, of length batch_size (which will have to match the length of x at runtime). The length of y can be symbolically expressed by y.shape[0].\n- log_prob is a (batch_size, n_out) matrix containing the log probabilities of class membership for each example.\n- arange(y.shape[0]) is a symbolic vector which will contain [0,1,2,... batch_size-1]\n- log_likelihood is a vector containing the log probability of the target, for each example.\n- loss is the mean of the negative log_likelihood over the examples in the minibatch.",
"y = tensor.lvector('y')\nlog_prob = tensor.log(p_y_given_x)\nlog_likelihood = log_prob[tensor.arange(y.shape[0]), y]\nloss = - log_likelihood.mean()",
"Training procedure\nThis notebook will use the method of stochastic gradient descent with mini-batches (MSGD) to find values of W and b that minimize the loss.\nWe can let Theano compute symbolic expressions for the gradient of the loss wrt W and b.",
"g_W, g_b = theano.grad(cost=loss, wrt=[W, b])",
"g_W and g_b are symbolic variables, which can be used as part of a computation graph. In particular, let us define the expressions for one step of gradient descent for W and b, for a fixed learning rate.",
"learning_rate = numpy.float32(0.13)\nnew_W = W - learning_rate * g_W\nnew_b = b - learning_rate * g_b",
"We can then define update expressions, or pairs of (shared variable, expression for its update), that we will use when compiling the Theano function. The updates will be performed each time the function gets called.\nThe following function, train_model, returns the loss on the current minibatch, then changes the values of the shared variables according to the update rules. It needs to be passed x and y as inputs, but not the shared variables, which are implicit inputs.\nThe entire learning algorithm thus consists in looping over all examples in the dataset, considering all the examples in one minibatch at a time, and repeatedly calling the train_model function.",
"train_model = theano.function(inputs=[x, y],\n outputs=loss,\n updates=[(W, new_W),\n (b, new_b)])",
"Testing the model\nWhen testing the model, we are interested in the number of misclassified examples (and not only in the likelihood). Here, we build a symbolic expression for retrieving the number of misclassified examples in a minibatch.\nThis will also be useful to apply on the validation and testing sets, in order to monitor the progress of the model during training, and to do early stopping.",
"misclass_nb = tensor.neq(y_pred, y)\nmisclass_rate = misclass_nb.mean()\n\ntest_model = theano.function(inputs=[x, y],\n outputs=misclass_rate)",
"Training the model\nHere is the main training loop of the algorithm:\n- For each epoch, or pass through the training set\n - split the training set in minibatches, and call train_model on each minibatch\n - split the validation set in minibatches, and call test_model on each minibatch to measure the misclassification rate\n - if the misclassification rate has not improved in a while, stop training\n- Measure performance on the test set\nThe early stopping procedure is what decide whether the performance has improved enough. There are many variants, and we will not go into the details of this one here.\nWe first need to define a few parameters for the training loop and the early stopping procedure.",
"import timeit\n\n## Define a couple of helper variables and functions for the optimization\nbatch_size = 500\n# compute number of minibatches for training, validation and testing\nn_train_batches = train_set_x.shape[0] / batch_size\nn_valid_batches = valid_set_x.shape[0] / batch_size\nn_test_batches = test_set_x.shape[0] / batch_size\n\ndef get_minibatch(i, dataset_x, dataset_y):\n start_idx = i * batch_size\n end_idx = (i + 1) * batch_size\n batch_x = dataset_x[start_idx:end_idx]\n batch_y = dataset_y[start_idx:end_idx]\n return (batch_x, batch_y)\n\n## early-stopping parameters\n# maximum number of epochs\nn_epochs = 1000\n# look as this many examples regardless\npatience = 5000\n# wait this much longer when a new best is found\npatience_increase = 2\n# a relative improvement of this much is considered significant\nimprovement_threshold = 0.995\n\n# go through this many minibatches before checking the network on the validation set;\n# in this case we check every epoch\nvalidation_frequency = min(n_train_batches, patience / 2)\n\nfrom six.moves import xrange\n\nbest_validation_loss = numpy.inf\ntest_score = 0.\nstart_time = timeit.default_timer()\n\ndone_looping = False\nepoch = 0\nwhile (epoch < n_epochs) and (not done_looping):\n epoch = epoch + 1\n for minibatch_index in xrange(n_train_batches):\n minibatch_x, minibatch_y = get_minibatch(minibatch_index, train_set_x, train_set_y)\n minibatch_avg_cost = train_model(minibatch_x, minibatch_y)\n\n # iteration number\n iter = (epoch - 1) * n_train_batches + minibatch_index\n if (iter + 1) % validation_frequency == 0:\n # compute zero-one loss on validation set\n validation_losses = []\n for i in xrange(n_valid_batches):\n valid_xi, valid_yi = get_minibatch(i, valid_set_x, valid_set_y)\n validation_losses.append(test_model(valid_xi, valid_yi))\n this_validation_loss = numpy.mean(validation_losses)\n print('epoch %i, minibatch %i/%i, validation error %f %%' %\n (epoch,\n minibatch_index + 1,\n n_train_batches,\n 
this_validation_loss * 100.))\n\n # if we got the best validation score until now\n if this_validation_loss < best_validation_loss:\n # improve patience if loss improvement is good enough\n if this_validation_loss < best_validation_loss * improvement_threshold:\n patience = max(patience, iter * patience_increase)\n\n best_validation_loss = this_validation_loss\n\n # test it on the test set\n test_losses = []\n for i in xrange(n_test_batches):\n test_xi, test_yi = get_minibatch(i, test_set_x, test_set_y)\n test_losses.append(test_model(test_xi, test_yi))\n\n test_score = numpy.mean(test_losses)\n print(' epoch %i, minibatch %i/%i, test error of best model %f %%' %\n (epoch,\n minibatch_index + 1,\n n_train_batches,\n test_score * 100.))\n\n # save the best parameters\n numpy.savez('best_model.npz', W=W.get_value(), b=b.get_value())\n\n if patience <= iter:\n done_looping = True\n break\n\nend_time = timeit.default_timer()\nprint('Optimization complete with best validation score of %f %%, '\n 'with test performance %f %%' %\n (best_validation_loss * 100., test_score * 100.))\n\nprint('The code ran for %d epochs, with %f epochs/sec' %\n (epoch, 1. * epoch / (end_time - start_time)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
alexandrnikitin/algorithm-sandbox
|
courses/DAT256x/Module02/02-01-Rate of Change.ipynb
|
mit
|
[
"Rate of Change\nFunctions are often visualized as a line on a graph, and this line shows how the value returned by the function changes based on changes in the input value.\nLinear Rate of Change\nFor example, imagine a function that returns the number of meters travelled by a cyclist based on the number of seconds that the cyclist has been cycling.\nHere is such a function:\n\\begin{equation}q(x) = 2x + 1\\end{equation}\nWe can plot the output for this function for a period of 10 seconds like this:",
"%matplotlib inline\n\ndef q(x):\n return 2*x + 1\n\n# Plot the function\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from 0 to 10\nx = np.array(range(0, 11))\n\n# Set up the graph\nplt.xlabel('Seconds')\nplt.ylabel('Meters')\nplt.xticks(range(0,11, 1))\nplt.yticks(range(0, 22, 1))\nplt.grid()\n\n# Plot x against q(x)\nplt.plot(x,q(x), color='green')\n\nplt.show()",
"It's clear from the graph that q is a linear function that describes a slope in which distance increases at a constant rate over time. In other words, the cyclist is travelling at a constant speed.\nBut what speed?\nSpeed, or more technically, velocity is a measure of change - it measures how the distance travelled changes over time (which is why we typically express it as a unit of distance per a unit of time, like miles-per-hour or meters-per-second). So we're looking for a way to measure the change in the line created by the function.\nThe change in values along the line define its slope, which we know from a previous lesson is represented like this:\n\\begin{equation}m = \\frac{\\Delta{y}}{\\Delta{x}} \\end{equation}\nWe can calculate the slope of our function like this:\n\\begin{equation}m = \\frac{q(x){2} - q(x){1}}{x_{2} - x_{1}} \\end{equation}\nSo we just need two ordered pairs of x and q(x) values from our line to apply this equation.\n\nAfter 1 second, x is 1 and q(1) = 3.\nAfter 10 seconds, x is 10 and q(10) = 21.\n\nSo we can meassure the rate of change like this:\n\\begin{equation}m = \\frac{21 - 3}{10 - 1} \\end{equation}\nThis is the same as:\n\\begin{equation}m = \\frac{18}{9} \\end{equation}\nWhich simplifies to:\n\\begin{equation}m = \\frac{2}{1} \\end{equation}\nSo our rate of change is <sup>2</sup>/<sub>1</sub> or put another way, the cyclist is travelling at 2 meters-per-second.\nAverage Rate of Change\nOK, let's look at another function that calculates distance travelled for a given number of seconds:\n\\begin{equation}r(x) = x^{2} + x\\end{equation}\nLet's take a look at that using Python:",
"%matplotlib inline\n\ndef r(x):\n return x**2 + x\n\n# Plot the function\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from 0 to 10\nx = np.array(range(0, 11))\n\n# Set up the graph\nplt.xlabel('Seconds')\nplt.ylabel('Meters')\nplt.grid()\n\n# Plot x against r(x)\nplt.plot(x,r(x), color='green')\n\nplt.show()",
"This time, the function is not linear. It's actually a quadratic function, and the line from 0 seconds to 10 seconds shows an increasingly steep rise; in other words, the cyclist is accelerating.\nTechnically, acceleration itself is a measure of change in velocity over time; and velocity, as we've already discussed, is a measure of change in distance over time. So measuring acceleration is pretty complex, and requires differential calculus, which we're going to cover shortly. In fact, even just measuring the velocity at a single point in time requires differential calculus; but we can use algebraic methods to calculate an average rate of velocity for a given period shown in the graph.\nFirst, we need to define a secant line that joins two points on our curve to create a straight line. For example, a secant line for the entire 10 second time span would join the following two points:\n\n0, r(0)\n10, r(10)\n\nRun the following Python code to visualize this line:",
"%matplotlib inline\n\ndef r(x):\n return (x)**2 + x\n\n# Plot the function\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from 0 to 10\nx = np.array(range(0, 11))\n\n# Create an array for the secant line\ns = np.array([0,10])\n\n# Set up the graph\nplt.xlabel('Seconds')\nplt.ylabel('Meters')\nplt.grid()\n\n# Plot x against r(x)\nplt.plot(x,r(x), color='green')\n\n# Plot the secant line\nplt.plot(s,r(s), color='magenta')\n\nplt.show()",
"Now, because the secant line is straight, we can apply the slope formula we used for a linear function to calculate the average velocity for the 10 second period:\n\nAt 0 seconds, x is 0 and r(0) = 0.\nAt 10 seconds, x is 10 and r(10) = 110.\n\nSo we can measure the rate of change like this:\n\\begin{equation}m = \\frac{110 - 0}{10 - 0} \\end{equation}\nThis is the same as:\n\\begin{equation}m = \\frac{110}{10} \\end{equation}\nWhich simplifies to:\n\\begin{equation}m = \\frac{11}{1} \\end{equation}\nSo our rate of change is <sup>11</sup>/<sub>1</sub> or put another way, the cyclist is travelling at an average velocity of 11 meters-per-second over the 10-second period.\nOf course, we can measure the average velocity between any two points on the curve. Use the following Python code to show the secant line for the period between 2 and 7 seconds, and calculate the average velocity for that period.",
"%matplotlib inline\n\ndef r(x):\n return x**2 + x\n\n# Plot the function\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from 0 to 10\nx = np.array(range(0, 11))\n\n# Create an array for the secant line\ns = np.array([2,7])\n\n# Calculate rate of change\nx1 = s[0]\nx2 = s[-1]\ny1 = r(x1)\ny2 = r(x2)\na = (y2 - y1)/(x2 - x1)\n\n\n# Set up the graph\nplt.xlabel('Seconds')\nplt.ylabel('Meters')\nplt.grid()\n\n# Plot x against r(x)\nplt.plot(x,r(x), color='green')\n\n# Plot the secant line\nplt.plot(s,r(s), color='magenta')\n\nplt.annotate('Average Velocity =' + str(a) + ' m/s',((x2+x1)/2, (y2+y1)/2))\n\nplt.show()\n\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cshankm/rebound
|
ipython_examples/PoincareMap.ipynb
|
gpl-3.0
|
[
"Poincare Map\nThis example shows how to calculate a simple Poincare Map with REBOUND. A Poincare Map (sometimes called a Poincare Section) can be helpful to understand dynamical systems.",
"import rebound\nimport numpy as np",
"We first create the initial conditions for our map. The most interesting Poincare maps exist near resonance, so we have to find a system near a resonance. The easiest way to get planets into resonance is migration. So that's what we'll do. Initially we setup a simulation in which the planets are placed just outside the 2:1 mean motion resonance.",
"sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(m=1e-3,a=1,e=0.001)\nsim.add(m=0.,a=1.65)\nsim.move_to_com()",
"We then define a simple migration force that will act on the outer planet. We implement it in python. This is relatively slow, but we only need to migrate the planet for a short time.",
"def migrationForce(reb_sim):\n tau = 40000.\n ps[2].ax -= ps[2].vx/tau\n ps[2].ay -= ps[2].vy/tau\n ps[2].az -= ps[2].vz/tau",
"Next, we link the additional migration forces to our REBOUND simulation and get the pointer to the particle array.",
"sim.additional_forces = migrationForce\nps = sim.particles",
"Then, we just integrate the system for 3000 time units, about 500 years in units where $G=1$.",
"sim.integrate(3000.)",
"Then we save the simulation to a binary file. We'll be reusing it a lot later to create the initial conditions and it is faster to load it from file than to migrate the planets into resonance each time.",
"sim.save(\"resonant_system.bin\") ",
"To create the poincare map, we first define which hyper surface we want to look at. Here, we choose the pericenter of the outer planet.",
"def hyper(sim):\n ps = sim.particles\n dx = ps[2].x -ps[0].x\n dy = ps[2].y -ps[0].y\n dvx = ps[2].vx-ps[0].vx\n dvy = ps[2].vy-ps[0].vy\n return dx*dvx + dy*dvy",
"We will also need a helper function that ensures our resonant angle is in the range $[-\\pi:\\pi]$.",
"def mod2pi(x):\n if x>np.pi:\n return mod2pi(x-2.*np.pi)\n if x<-np.pi:\n return mod2pi(x+2.*np.pi)\n return x",
"The following function generates the Poincare Map for one set of initial conditions. \nWe first load the resonant system from the binary file we created earlier. \nWe then perturb the velocity of one of the particles by an amount that depends on the run number. If we perturb the velocity enough, the planets will not be in resonance anymore.\nWe also initialize shadow particles to calculate the MEGNO, a fast chaos indicator.",
"def runone(args):\n i = args # integer numbering the run\n N_points_max = 2000 # maximum number of point in our Poincare Section\n N_points = 0\n poincare_map = np.zeros((N_points_max,2))\n \n # setting up simulation from binary file\n sim = rebound.Simulation.from_file(\"resonant_system.bin\")\n vx = 0.97+0.06*(float(i)/float(Nsim))\n sim.particles[2].vx *= vx\n sim.t = 0. # reset time to 0\n sim.init_megno(1e-16) # add variational (shadow) particles and calculate MEGNO\n \n # Integrate simulation in small intervals\n # After each interval check if we crossed the \n # hypersurface. If so, bisect until we hit the \n # hypersurface exactly up to a precision\n # of dt_epsilon\n dt = 0.13\n dt_epsilon = 0.001\n sign = hyper(sim)\n while sim.t<15000. and N_points < N_points_max:\n oldt = sim.t\n olddt = sim.dt\n sim.integrate(oldt+dt)\n nsign = hyper(sim)\n if sign*nsign < 0.:\n # Hyper surface crossed.\n leftt = oldt\n rightt = sim.t\n sim.dt = -olddt\n while (rightt-leftt > dt_epsilon):\n # Bisection.\n midt = (leftt+rightt)/2.\n sim.integrate(midt, exact_finish_time=1)\n msign = hyper(sim)\n if msign*sign > 0.:\n leftt = midt\n sim.dt = 0.3*olddt\n else:\n rightt = midt\n sim.dt = -0.3*olddt\n # Hyper surface found up to precision of dt_epsilon.\n # Calculate orbital elements\n o = sim.calculate_orbits()\n # Check if we cross hypersurface in one direction or the other.\n if o[1].r<o[1].a:\n # Calculate resonant angle phi and its time derivative \n tp = np.pi*2.\n phi = mod2pi(o[0].l-2.*o[1].l+o[1].omega+o[1].Omega)\n phid = (tp/o[0].P-2.*tp/o[1].P)/(tp/o[0].P)\n # Store value for map\n poincare_map[N_points] = [phi,phid]\n N_points += 1\n sim.dt = olddt\n sim.integrate(oldt+dt)\n sign = nsign\n return (poincare_map, sim.calculate_megno(),vx)",
"For this example we'll run 10 initial conditions. Some of them will be in resonance, some others won't be. We run them in parallel using the InterruptiblePool that comes with REBOUND.",
"Nsim = 10\npool = rebound.InterruptiblePool()\nres = pool.map(runone,range(Nsim))",
"Now we can finally plot the Poincare Map. We color the points by the MEGNO value of the particular simulation. A value close to 2 corresponds to quasi-periodic orbits, a large value indicate chaotic motion.",
"%matplotlib inline \nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(14,8))\nax = plt.subplot(111)\nax.set_xlabel(\"$\\phi$\"); ax.set_ylabel(\"$\\dot{\\phi}$\")\nax.set_xlim([-np.pi,np.pi]); ax.set_ylim([-0.06,0.1])\ncm = plt.cm.get_cmap('brg')\nfor m, megno, vx in res:\n c = np.empty(len(m[:,0])); c.fill(megno)\n p = ax.scatter(m[:,0],m[:,1],marker=\".\",c=c, vmin=1.4, vmax=3, s=25,edgecolor='none', cmap=cm)\ncb = plt.colorbar(p, ax=ax)\ncb.set_label(\"MEGNO $<Y>$\")",
"The red orbits are periodic or quasi periodic, the green orbits are chaotic."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fja05680/pinkfish
|
examples/170.follow-trend/strategy.ipynb
|
mit
|
[
"follow-trend\n1. S&P 500 index closes above its 200 day moving average\n2. The stock closes above its upper band: buy\n\n1. S&P 500 index closes below its 200 day moving average\n2. The stock closes below its lower band: sell your long position.\n\n(Compare the result of applying the same strategy to multiple securities.)",
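As a hedged sketch of the rules above (the real implementation lives in the accompanying `strategy` module; the function and parameter names here are purely illustrative, and the bands/SMA are assumed to be precomputed):

```python
def signal(close, upper_band, lower_band, spx_close, spx_sma200):
    """Illustrative one-bar version of the follow-trend rules."""
    # Regime filter: only buy when the index is above its 200-day SMA
    if spx_close > spx_sma200 and close > upper_band:
        return 'buy'
    # Exit when the index is below its SMA and the stock breaks the lower band
    if spx_close < spx_sma200 and close < lower_band:
        return 'sell'
    return 'hold'

print(signal(close=12, upper_band=11, lower_band=9, spx_close=100, spx_sma200=95))
```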
"import datetime\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport pinkfish as pf\nimport strategy\n\n# Format price data\npd.options.display.float_format = '{:0.2f}'.format\n\n%matplotlib inline\n\n# Set size of inline plots\n'''note: rcParams can't be in same cell as import matplotlib\n or %matplotlib inline\n \n %matplotlib notebook: will lead to interactive plots embedded within\n the notebook, you can zoom and resize the figure\n \n %matplotlib inline: only draw static images in the notebook\n'''\nplt.rcParams[\"figure.figsize\"] = (10, 7)",
"Some global data",
"capital = 10000\nstart = datetime.datetime(2000, 1, 1)\nend = datetime.datetime.now()",
"Define symbols",
"SP500_Sectors = ['SPY', 'XLB', 'XLE', 'XLF', 'XLI', 'XLK', 'XLP', 'XLU', 'XLV', 'XLY']\n\nOther_Sectors = ['RSP', 'DIA', 'IWM', 'QQQ', 'DAX', 'EEM', 'TLT', 'GLD', 'XHB']\n\nElite_Stocks = ['ADP', 'BMY', 'BRK-B', 'BTI', 'BUD', 'CL', 'CLX', 'CMCSA', 'DIS', 'DOV']\nElite_Stocks += ['GIS', 'HD', 'HRL', 'HSY', 'INTC', 'JNJ', 'K', 'KMB', 'KMI', 'KO']\nElite_Stocks += ['LLY', 'LMT', 'MCD', 'MO', 'MRK', 'MSFT', 'NUE', 'PG', 'PM', 'RDS-B']\nElite_Stocks += ['SO', 'T', 'UL', 'V', 'VZ', 'XOM']\n\n# Pick one of the above\nsymbols = SP500_Sectors\n\noptions = {\n 'use_adj' : False,\n 'use_cache' : True,\n 'sma_period': 200,\n 'percent_band' : 0,\n 'use_regime_filter' : True\n}",
"Run Strategy",
"strategies = pd.Series(dtype=object)\nfor symbol in symbols:\n print(symbol, end=\" \")\n strategies[symbol] = strategy.Strategy(symbol, capital, start, end, options)\n strategies[symbol].run()",
"Summarize results",
"metrics = ('start',\n 'annual_return_rate',\n 'max_closed_out_drawdown',\n 'sharpe_ratio',\n 'sortino_ratio',\n 'monthly_std',\n 'annual_std',\n 'pct_time_in_market',\n 'total_num_trades',\n 'pct_profitable_trades',\n 'avg_points')\n\ndf = pf.optimizer_summary(strategies, metrics)\npd.set_option('display.max_columns', len(df.columns))\ndf\n\n# Averages\navg_annual_return_rate = df.loc['annual_return_rate'].mean()\navg_sharpe_ratio = df.loc['sharpe_ratio'].mean()\nprint('avg_annual_return_rate: {:.2f}'.format(avg_annual_return_rate))\nprint('avg_sharpe_ratio: {:.2f}'.format(avg_sharpe_ratio))\n\npf.plot_equity_curves(strategies)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
aboucaud/python-euclid2016
|
notebooks/02-Numpy.ipynb
|
bsd-3-clause
|
[
"Numpy\nNumPy is the fundamental package for scientific computing with Python. You can find more tutorials at http://wiki.scipy.org/Tentative_NumPy_Tutorial. Also check http://www.numpy.org for additional information.\n\nArray Creation\nAccessing Array Elements\nArray Operations\nMasked Array\n\nBroadcasting\n\n\nExercise time",
"# uncomment that line if you are using python 2\n# from __future__ import print_function, division \n\nimport numpy as np",
"Array Creation\nnumpy arrays are generally defined from lists\nUnlike python lists, numpy arrays contain objects of ONLY one type.",
"# 1D array\nnp.array([1, 2, 3, 4])\n\n# 2D array\nnp.array([[1, 2], [3, 4]])\n\na = np.array([[1, 2], [3, 4]])\nprint(a, type(a))",
"A numpy array is an object with many attributes",
"print('* length (a.size) : %i' % a.size )\nprint('* number of dimensions (a.ndim) : %i' % a.ndim )\nprint('* shape (a.shape) : %s' % str(a.shape) )\nprint('* data type (a.dtype) : %s' % a.dtype )",
"NOTE: Unlike python lists, numpy arrays contain objects of ONLY one data type. By default, the data type (dtype) is usually set to float, which corresponds to a 64-bit float, i.e. np.float64\nempty, zeros and ones\nTo quickly initialize a numpy array, there are a few convenience methods that only require a shape as input (and optionally a dtype)..",
"np.empty((3, 3))\n\nnp.zeros((2, 2), dtype=float)\n\nnp.ones((3, 5), dtype=int)",
".. or another numpy array.",
"np.empty_like(a)\n\nnp.zeros_like(a, dtype=complex)\n\nnp.ones_like(a, dtype=str)",
"arange\nA range can be quickly created with the arange method (like indgen in IDL)",
"np.arange(10)",
"Additional arguments allow setting the lower and upper bounds as well as the range step.",
"np.arange(2, 3, 0.25)",
"linspace and logspace\nSimilarly to arange, linspace and logspace are convenience methods to create ranges, but instead of specifying the element spacing, the constraint is on the total number of elements.",
"np.linspace(1, 10, 5)",
"For logspace, the boundaries should be specified in log.",
"np.logspace(np.log10(1), np.log10(10), 5)",
"NOTE: arange never includes the upper bound while linspace and logspace do.\nogrid, mgrid and indices\nFor multidimensional ranges, ogrid creates a column and a row",
"np.ogrid[0:3, 0:3]",
"while mgrid and indices create the full coordinate arrays",
"np.mgrid[0:3, 0:3]\n\nnp.indices((3, 3))",
"Accessing Array Elements\nIndexing in 1D\nnumpy arrays are accessed with the same slicing as for lists.\nReminder: [start:end:step]",
"b = np.arange(10)\nb[0:9:2]",
"Indexing in n-dimensions\nThe first index represents the row, the second represents the column. Dimensions need to be separated with commas ','.",
"# Square array from arange with reshape method\nc = np.arange(5 * 5).reshape(5, 5)\nprint(c)\nprint(c[2, 3]) # Third row, fourth column\nprint(c[-1, 1]) # Last row, second column",
"To specify \"all the elements along that particular dimension\", use the colon :",
"d = np.arange(27).reshape(3, 3, 3)\nprint(d)\n\nd[0, :, 0] # First slice, all rows, first column",
"If you have many dimensions, you can replace several colons by an ellipsis ...",
"d[..., 1] == d[:, :, 1]",
"Slicing",
"print(c)\nprint(\"c[0, :] = %s\"% str( c[0, :]) )# First row equivalent to c[0]\nprint(\"c[1, :] = %s\"% str( c[1, :]) )# Second row equivalent to c[1]\nprint(\"c[:, 0] = %s\"% str( c[:, 0]) )# First column\nprint(\"c[:, 1] = %s\"% str( c[:, 1]) )# Second column",
"More funky..",
"print(c[0:3, 0]) # rows 0 to 3 (excluded), first column\nprint(c[0:4:2, 1:5:2]) # rows 0 to 4 (excluded) every 2 elements, columns 1 to 5 (excluded) every 2 elements",
"Fancy indexing\nnumpy provides a way to give an array as index for a second array, an operation referred to as fancy indexing.\nThis should feel very familiar to IDL users.",
"size = 6\nx = np.random.randint(10, size=size)\nidx = np.arange(0, size, 2) # every second integer\nprint(x)\nprint(idx)\nprint(x[idx])",
"This can be useful to extract some information from an array without modifying it.\nwhere?\nThis is the magical IDL command, which is rendered almost useless in Python",
"# np.random is a submodule of numpy, dedicated to random generators\na = np.random.uniform(size=6)\nprint(a)",
"A simple test operation with a numpy array becomes an array of boolean values (element-wise test), which can be extremely convenient for creating masks.",
"mask = a > 0.5\nprint(mask)",
"These masks are then used as indices (fancy indexing) to only retrieve the elements that match the test criterion.",
"a[mask]",
"or that don't match the criterion",
"a[~mask]",
"And the tests can be combined all at once, either with plain boolean operators",
"a[(a > 0.4) & (a < 0.7)]",
"or with numpy dedicated methods",
"a[np.logical_or(a < 0.3, a > 0.66)]",
"However, to get the indices of the array elements satisfying certain conditions, np.where should be used",
"np.where(a > 0.5) # return a tuple with the indexes where the condition is True as first element\nprint(np.where(a > 0.5)[0]) # ... to access directly the indexes",
"but the best use case for np.where is to return an array programmatically: \"if condition is met, return x, otherwise return y\"",
"np.where(a > 0.5, a, np.nan)",
"Array operations\nBasic operations",
"c + 1\n\nc * 5 - 3 / 2",
"Also work inplace",
"c += 3\nc",
"ufuncs\nAll common arithmetic operations are optimally implemented in numpy and take advantage of the C machinery underneath. It means that squaring an array or computing its sum is much more efficient in numpy than just computing it from lists.\nThese small convenience methods are implemented as universal functions (ufunc - http://docs.scipy.org/doc/numpy/reference/ufuncs.html), like min, max, mean, std, etc., which can provide information on the whole array",
"c\n\nc.mean()\n\nc.std()",
"or in multidimensional data, can provide information along a given axis ( 0 = rows | 1 = columns )",
"c.sum(axis=1)\n\nc.max(axis=0)",
"This is a simpler way of applying the np.min() or np.std() functions to the array.\nMatrices and vectors\nOperations on vectors and matrices are handled easily, but upcasting (int => float) happens automatically.",
"a = np.array([[1, 1],[0, 0]], dtype=float)\nb = np.array([[1, 1],[1, 1]])\nc = np.array([[1, 2],[3, 4]])\n(a + 2) + (b * 4) * c",
"Between matrices, there is the element-wise product *",
"a * c",
"and the matrix product np.dot.",
"np.dot(a, c)",
"For more complex operations on vectors and matrices, there is a submodule in numpy dedicated to linear algebra called np.linalg for eigen vector decomposition, inverse operations, etc.\nMasked Array\nYou can associate a mask to an array to create a masked array",
"a = np.array([1,2,3,4])\nmask = np.array(a == 2)\nprint(a)\nprint(mask) # True when masked\n\nmasked_a = np.ma.array(a, mask=mask)\nmasked_a\n\nprint(a.sum())\nprint(masked_a.sum()) # The mask are taken into account\n\nmasked_b = np.ma.array(a, mask=(a ==1) )\nmasked_b\n\nmasked_a+masked_b # Will create a common mask",
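Two related conveniences of the standard np.ma interface: a masked array can be converted back to a plain ndarray, either by filling the masked slots with a value, or by dropping them altogether:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
masked_a = np.ma.array(a, mask=(a == 2))

print(masked_a.filled(0))     # plain ndarray, masked entry replaced by 0
print(masked_a.compressed())  # 1-D ndarray containing only the unmasked entries
```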
"Broadcasting\nOne of the major features of numpy is the use of array broadcasting. Broadcasting allows operations (such as addition, multiplication etc.) which are normally element-wise to be carried on arrays of different shapes. It is a virtual replication of the arrays along the missing dimensions. It can be seen as a generalization of operations involving an array and a scalar.",
"matrix = np.zeros((4, 5))\nmatrix",
"The addition of a scalar to a matrix can be seen as the addition of a matrix with identical elements (and the same dimensions)",
"matrix + 6",
"The addition of a row to a matrix can be seen as the addition of a matrix with replicated rows (the number of columns must match).",
"row = np.arange(5)\nrow\n\nmatrix + row",
"The addition of a column to a matrix can be seen as the addition of a matrix with replicated columns (the number of rows must match).",
"column = np.ones(4)\ncolumn\n\n# This would fail\n\n# matrix + column ",
"This one failed since the rightmost dimensions are different. So for columns, an additional dimension must be specified and added on the right, indexing the array with an additional np.newaxis or simply None.",
"column = np.arange(4)[:, None] # or np.ones(4)[:, np.newaxis]\ncolumn\n\nmatrix + column",
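A convenient way to check whether two shapes are compatible, and what the resulting shape would be, is np.broadcast (this is a quick verification sketch; the variable names match the examples above):

```python
import numpy as np

matrix = np.zeros((4, 5))
row = np.arange(5)
column = np.arange(4)[:, None]

# np.broadcast reports the shape the operands would broadcast to
print(np.broadcast(matrix, row).shape)     # (4, 5)
print(np.broadcast(matrix, column).shape)  # (4, 5)

# Incompatible shapes raise a ValueError, as matrix + np.ones(4) did above
try:
    np.broadcast(matrix, np.ones(4))
except ValueError:
    print('shapes (4, 5) and (4,) cannot be broadcast together')
```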
"NOTE: In the row case above, the shapes also did not match (4,5) for the matrix and (5,) for the row. The actual rule of broadcasting is that for arrays of different rank, dimensions of length 1 are prepended (added on the left of the array shape) until the two arrays have the same rank.\nFor this reason, arrays with the following shapes can be broadcasted together:\n* (1, 1, 1, 8) and (9, 1)\n* (4, 1, 9) and (3, 1)\nReading text files\nNumpy has basic capabilities to read text files",
"%%file numpy_data.txt\n1 2 3 \n4 5 6\n\ndata = np.loadtxt('numpy_data.txt')\ndata",
"The astropy.io.ascii module has a much more complete set of functions for reading ASCII files in different formats. Furthermore, scipy.io.readsav has the capability of reading IDL sav files into python objects. The python pickle and cPickle modules can also be used to dump and read data to disk.\nLooking for a particular method in numpy?\nSimply use the lookfor method..",
"np.lookfor('fourier transform')",
"Exercise time!\nThese exercises are extracted from Pierre Chanial's numpy lecture.\nWaving for loops goodbye\nCompute $\\pi$, as given by the Madhava of Sangamagrama approximation formula:\n$\\pi = \\sqrt{12}\\sum^\\infty_{k=0} \\frac{(-\\frac{1}{3})^{k}}{2k+1}$.\nThe $k$ indices ranging from 0 to (let’s say) 29 will be returned by the NumPy function arange and $\\pi$ will be computed by calling another NumPy function (sum), instead of using a for loop.",
"# Write your solution here (2 lines expected)\n\nN = 30\nk = 0 # to be replaced with an array of indices using N\npi = 3.14 # to be replaced with the mathematical expression above\n\nprint('Relative error:', np.abs(pi - np.pi) / np.pi)",
"Computation of $\\pi$ with Monte Carlo\nGiven the random variables X and Y following the uniform distribution between -1 and 1, the probability for the point (X, Y) to be inside the unit disk is the ratio of the surface of the unit disk and that of the square, i.e. $\\pi/4$. It is then possible to compute $\\pi$ by drawing realizations of X and Y and counting the fraction of points (X, Y) inside the unit disk.\nVectorize the following pure Python code, by using NumPy arrays and functions.",
"import math\nimport random\n\nNTOTAL = 1000000\n\nrandom.seed(0)\nninside = 0\nfor i in range(NTOTAL):\n x = random.uniform(-1, 1)\n y = random.uniform(-1, 1)\n ninside += math.sqrt(x**2 + y**2) < 1\npi = 4 * ninside / NTOTAL\n\nprint('Relative error:', np.abs(pi - np.pi) / np.pi)",
"Write the NumPy version below",
"NTOTAL = 1000000\n\nrandom.seed(0) # use NumPy random module instead\nx = random.uniform(-1, 1) # use NumPy random module instead\ny = random.uniform(-1, 1) # use NumPy random module instead\nninside = math.sqrt(x**2 + y**2) < 1 # to be replaced with NumPy version\npi = 4 * ninside / NTOTAL\n\nprint('Relative error:', np.abs(pi - np.pi) / np.pi)",
"Broadcasting theory\nTry to predict the output shape of the two examples given in the note on broadcasting, a paragraph above.\n\nVelocity histogram\nComplete the missing parts of the code below to do this exercise. Given a large number of particules of velocities $v_x$, $v_y$, $v_z$ distributed according to the standard normal distribution, plot the histogram of the speed in 1, 2 and 3 dimensions:\n$v_1 = |v_x| = \\sqrt{v_x^2}$\n$v_2 = \\sqrt{v_x^2+v_y^2}$\n$v_3 = \\sqrt{v_x^2+v_y^2+v_z^2}$\nand compare it to the theoretical Maxwell distributions:\n$f_n(v) = \\left(\\frac{\\pi}{2}\\right)^{-\\frac{|n-2|}{2}} v^{n-1} e^{-\\frac{1}{2}v^2}$\nwhere $n = 1, 2, 3$ is the number of dimensions.",
"\n# Replace the ... with your solution\n\nimport numpy as np\n\ndef velocity2speed(velocity, ndims):\n \"\"\" Return the ndims-dimensional speed of the particles. \"\"\"\n speed = np.sqrt(1) # replace 1 with an expression that sums the velocity depending on ndims\n return speed\n\n\ndef speed_distribution(speed, ndims):\n \"\"\"\n Return the probability distribution function of the ndims-dimensional\n speed of the particles.\n \"\"\"\n distrib = 0 # replace with mathematical expression of Maxwell distribution\n # normalize the distribution\n return distrib\n\n\nNPARTICULES = 1000000\n\nvelocity = np.random.standard_normal((NPARTICULES, 3))\n\n# Test your functions here before plotting the result\n\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfor ndims in (1, 2, 3):\n velocity = np.random.standard_normal((NPARTICULES, ndims))\n speed = velocity2speed(velocity, ndims)\n ax = plt.subplot(1, 3, ndims)\n n, bins, patches = ax.hist(speed, bins=100, density=True, alpha=0.7)\n ax.set_title('{}-d speed distribution'.format(ndims))\n ax.set_xlim(0, 5)\n ax.set_ylim(0, 1.0)\n ax.set_xlabel('speed')\n ax.plot(speed, speed_distribution(speed, ndims), 'ro')\n\nplt.show()\n\nfrom IPython.core.display import HTML\nHTML(open('../styles/notebook.css', 'r').read())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
karlstroetmann/Formal-Languages
|
ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open('../../style.css', 'r') as file:\n css = file.read()\nHTML(css)",
"Implementing an SLR-Table-Generator\nA Grammar for Grammars\nAs the goal is to generate an SLR-table-generator, we first need to implement a parser for context free grammars.\nThe file arith.g contains an example grammar that describes arithmetic expressions.",
"!type Examples\\arith.g\n\n!cat Examples/arith.g",
"We use <span style=\"font-variant:small-caps;\">Antlr</span> to develop a parser for context free grammars. The pure grammar used to parse context free grammars is stored in the file Pure.g4. It is similar to the grammar that we have already used to implement Earley's algorithm, but additionally allows the use of the operator |, so that all grammar rules that define a variable can be combined in one rule.",
"!type Pure.g4\n\n!cat Pure.g4",
"The annotated grammar is stored in the file Grammar.g4.\nThe parser will return a list of grammar rules, where each rule of the form\n$$ a \\rightarrow \\beta $$\nis stored as the tuple (a,) + 𝛽.",
"!type Grammar.g4\n\n!cat -n Grammar.g4",
"We start by generating both scanner and parser.",
"!antlr4 -Dlanguage=Python3 Grammar.g4\n\nfrom GrammarLexer import GrammarLexer\nfrom GrammarParser import GrammarParser\nimport antlr4",
"The Class GrammarRule\nThe class GrammarRule is used to store a single grammar rule. As we have to use objects of type GrammarRule as keys in a dictionary later, we have to provide the methods __eq__, __ne__, and __hash__.",
"class GrammarRule:\n def __init__(self, variable, body):\n self.mVariable = variable\n self.mBody = body\n \n def __eq__(self, other):\n return isinstance(other, GrammarRule) and \\\n self.mVariable == other.mVariable and \\\n self.mBody == other.mBody\n \n def __ne__(self, other):\n return not self.__eq__(other)\n \n def __hash__(self):\n return hash(self.__repr__())\n \n def __repr__(self):\n return f'{self.mVariable} → {\" \".join(self.mBody)}'",
"The function parse_grammar takes a string filename as its argument and returns the grammar that is stored in the specified file. The grammar is represented as list of rules. Each rule is represented as a tuple. The example below will clarify this structure.",
"def parse_grammar(filename):\n input_stream = antlr4.FileStream(filename, encoding=\"utf-8\")\n lexer = GrammarLexer(input_stream)\n token_stream = antlr4.CommonTokenStream(lexer)\n parser = GrammarParser(token_stream)\n grammar = parser.start()\n return [GrammarRule(head, tuple(body)) for head, *body in grammar.g]\n\ngrammar = parse_grammar('Examples/arith.g')\ngrammar",
"Given a string name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character \"'\".",
"def is_var(name):\n return name[0] != \"'\" and name.islower()",
"Fun Fact: The invocation of \"'return'\".islower() returns True. This is the reason that we have to test that\nname does not start with a \"'\" character because otherwise keywords like 'return' or 'while' appearing in a grammar would be mistaken for variables.",
"\"'return'\".islower()",
"Given a list Rules of GrammarRules, the function collect_variables(Rules) returns the set of all variables occurring in Rules.",
"def collect_variables(Rules):\n Variables = set()\n for rule in Rules:\n Variables.add(rule.mVariable)\n for item in rule.mBody:\n if is_var(item):\n Variables.add(item)\n return Variables",
"Given a set Rules of GrammarRules, the function collect_tokens(Rules) returns the set of all tokens and literals occurring in Rules.",
"def collect_tokens(Rules):\n Tokens = set()\n for rule in Rules:\n for item in rule.mBody:\n if not is_var(item):\n Tokens.add(item)\n return Tokens",
"Marked Rules\nThe class MarkedRule stores a single marked rule of the form\n$$ v \\rightarrow \\alpha \\bullet \\beta $$\nwhere the variable $v$ is stored in the member variable mVariable, while $\\alpha$ and $\\beta$ are stored in the variables mAlphaand mBeta respectively. These variables are assumed to contain tuples of grammar symbols. A grammar symbol is either\n- a variable,\n- a token, or\n- a literal, i.e. a string enclosed in single quotes.\nLater, we need to maintain sets of marked rules to represent states. Therefore, we have to define the methods __eq__, __ne__, and __hash__.",
"class MarkedRule():\n def __init__(self, variable, alpha, beta):\n self.mVariable = variable\n self.mAlpha = alpha\n self.mBeta = beta\n \n def __eq__(self, other):\n return isinstance(other, MarkedRule) and \\\n self.mVariable == other.mVariable and \\\n self.mAlpha == other.mAlpha and \\\n self.mBeta == other.mBeta\n \n def __ne__(self, other):\n return not self.__eq__(other)\n \n def __hash__(self):\n return hash(self.__repr__())\n \n def __repr__(self):\n alphaStr = ' '.join(self.mAlpha)\n betaStr = ' '.join(self.mBeta)\n return f'{self.mVariable} → {alphaStr} • {betaStr}'",
"Given a marked rule self, the function is_complete checks whether the marked rule self has the form\n$$ c \\rightarrow \\alpha\\; \\bullet,$$\ni.e. it checks whether the $\\bullet$ is at the end of the grammar rule.",
"def is_complete(self):\n return len(self.mBeta) == 0\n\nMarkedRule.is_complete = is_complete\ndel is_complete",
"Given a marked rule self of the form\n$$ c \\rightarrow \\alpha \\bullet X\\, \\delta, $$\nthe function symbol_after_dot returns the symbol $X$. If there is no symbol after the $\\bullet$, the method returns None.",
"def symbol_after_dot(self):\n if len(self.mBeta) > 0:\n return self.mBeta[0]\n return None\n\nMarkedRule.symbol_after_dot = symbol_after_dot\ndel symbol_after_dot",
"Given a marked rule of the form\n$$ c \\rightarrow \\alpha \\bullet b \\delta, $$\nthis function returns the variable $b$ following the dot. If there is no variable following the dot, the function returns None.",
"def next_var(self):\n if len(self.mBeta) > 0:\n var = self.mBeta[0]\n if is_var(var):\n return var\n return None\n\nMarkedRule.next_var = next_var\ndel next_var",
"The function move_dot(self) transforms a marked rule of the form \n$$ c \\rightarrow \\alpha \\bullet X\\, \\beta $$\ninto a marked rule of the form\n$$ c \\rightarrow \\alpha\\, X \\bullet \\beta, $$\ni.e. the $\\bullet$ is moved over the next symbol. Invocation of this method assumes that there is a symbol\nfollowing the $\\bullet$.",
"def move_dot(self):\n return MarkedRule(self.mVariable, \n self.mAlpha + (self.mBeta[0],), \n self.mBeta[1:])\n\nMarkedRule.move_dot = move_dot\ndel move_dot",
"The function to_rule(self) turns the marked rule self into a GrammarRule, i.e. the marked rule\n$$ c \\rightarrow \\alpha \\bullet \\beta $$\nis turned into the grammar rule\n$$ c \\rightarrow \\alpha\\, \\beta. $$",
"def to_rule(self):\n return GrammarRule(self.mVariable, self.mAlpha + self.mBeta)\n\nMarkedRule.to_rule = to_rule\ndel to_rule",
"SLR-Table-Generation\nThe class Grammar represents a context free grammar. It stores a list of the GrammarRules of the given grammar.\nEach grammar rule is of the form\n$$ a \\rightarrow \\beta $$\nwhere $\\beta$ is a tuple of variables, tokens, and literals.\nThe start symbol is assumed to be the variable on the left hand side of the first rule. The grammar is augmented with the rule\n$$ \\widehat{s} \\rightarrow s\\, \\$. $$\nHere $s$ is the start variable of the given grammar and $\\widehat{s}$ is a new variable that is the start variable of the augmented grammar. The symbol $ denotes the end of input. The non-obvious member variables of the class Grammar have the following interpretation\n- mStates is the set of all states of the SLR-parser. These states are sets of marked rules.\n- mStateNamesis a dictionary assigning names of the form s0, s1, $\\cdots$, sn to the states stored in \n mStates. The functions action and goto will be defined for state names, not for states, because \n otherwise the table representing these functions would become both huge and unreadable.\n- mConflicts is a Boolean variable that will be set to true if the table generation discovers \n shift/reduce conflicts or reduce/reduce conflicts.",
"class Grammar():\n def __init__(self, Rules):\n self.mRules = Rules\n self.mStart = Rules[0].mVariable\n self.mVariables = collect_variables(Rules)\n self.mTokens = collect_tokens(Rules)\n self.mStates = set()\n self.mStateNames = {}\n self.mConflicts = False\n self.mVariables.add('ŝ')\n self.mTokens.add('$')\n self.mRules.append(GrammarRule('ŝ', (self.mStart, '$'))) # augmenting\n self.compute_tables()",
"Given a set of Variables, the function initialize_dictionary returns a dictionary that assigns the empty set to all variables.\nThis function is needed to initialize the member variable mFirst and mFollow that are dictionaries storing the first-set and\nfollow-sets of the syntactical variables.",
"def initialize_dictionary(Variables):\n return { a: set() for a in Variables }",
"Given a Grammar, the function compute_tables computes\n- the sets First(v) and Follow(v) for every variable v,\n- the set of all states of the SLR-Parser,\n- the action table, and\n- the goto table. \nGiven a grammar g,\n- the set g.mFirst is a dictionary such that g.mFirst[a] = First[a] and\n- the set g.mFollow is a dictionary such that g.mFollow[a] = Follow[a] for all variables a.",
"def compute_tables(self):\n self.mFirst = initialize_dictionary(self.mVariables)\n self.mFollow = initialize_dictionary(self.mVariables)\n self.compute_first()\n self.compute_follow()\n self.compute_rule_names()\n self.all_states()\n self.compute_action_table()\n self.compute_goto_table()\n \nGrammar.compute_tables = compute_tables\ndel compute_tables",
"The function compute_rule_names assigns a unique name to each rule of the grammar. These names are used later\nto represent reduce actions in the action table.",
"def compute_rule_names(self):\n self.mRuleNames = {}\n counter = 0\n for rule in self.mRules:\n self.mRuleNames[rule] = 'r' + str(counter)\n counter += 1\n \nGrammar.compute_rule_names = compute_rule_names\ndel compute_rule_names",
"The function compute_first(self) computes the sets $\\texttt{First}(c)$ for all variables $c$ and stores them in the dictionary mFirst. Abstractly, given a variable $c$ the function $\\texttt{First}(c)$ is the set of all tokens that can start a string that is derived from $c$:\n$$\\texttt{First}(\\texttt{c}) := \n \\Bigl{ t \\in T \\Bigm| \\exists \\gamma \\in (V \\cup T)^: \\texttt{c} \\Rightarrow^ t\\,\\gamma \\Bigr}.\n$$\nThe definition of the function $\\texttt{First}()$ is extended to strings from $(V \\cup T)^$ as follows:\n- $\\texttt{FirstList}(\\varepsilon) = {}$.\n- $\\texttt{FirstList}(t \\beta) = { t }$ if $t \\in T$.\n- $\\texttt{FirstList}(\\texttt{a} \\beta) = \\left{\n \\begin{array}[c]{ll}\n \\texttt{First}(\\texttt{a}) \\cup \\texttt{FirstList}(\\beta) & \\mbox{if $\\texttt{a} \\Rightarrow^ \\varepsilon$;} \\\n \\texttt{First}(\\texttt{a}) & \\mbox{otherwise.}\n \\end{array}\n \\right.\n $ \nIf $\\texttt{a}$ is a variable of $G$ and the rules defining $\\texttt{a}$ are given as \n$$\\texttt{a} \\rightarrow \\alpha_1 \\mid \\cdots \\mid \\alpha_n, $$\nthen we have\n$$\\texttt{First}(\\texttt{a}) = \\bigcup\\limits_{i=1}^n \\texttt{FirstList}(\\alpha_i). $$\nThe dictionary mFirst that stores this function is computed via a fixed point iteration.",
"def compute_first(self):\n change = True\n while change:\n change = False\n for rule in self.mRules:\n a, body = rule.mVariable, rule.mBody\n first_body = self.first_list(body)\n if not (first_body <= self.mFirst[a]):\n change = True\n self.mFirst[a] |= first_body \n print('First sets:')\n for v in self.mVariables:\n print(f'First({v}) = {self.mFirst[v]}')\n \nGrammar.compute_first = compute_first\ndel compute_first",
"Given a tuple of variables and tokens alpha, the function first_list(alpha) computes the function $\\texttt{FirstList}(\\alpha)$ that has been defined above. If alpha is nullable, then the result will contain the empty string $\\varepsilon = \\texttt{''}$.",
"def first_list(self, alpha):\n if len(alpha) == 0:\n return { '' }\n elif is_var(alpha[0]): \n v, *r = alpha\n return eps_union(self.mFirst[v], self.first_list(r))\n else:\n t = alpha[0]\n return { t }\n \nGrammar.first_list = first_list\ndel first_list",
"The arguments S and T of eps_union are sets that contain tokens and, additionally, they might contain the empty string. The specification of eps_union is:\n$$ \\texttt{eps_union}(S, T) = \\left{ \\begin{array}{ll}\n S & \\mbox{if $\\varepsilon \\not\\in S$} \\\n S \\cup T & \\mbox{if $\\varepsilon \\in S \\wedge \\varepsilon \\in T$} \\\n S \\cup T - {\\varepsilon } & \\mbox{if $\\varepsilon \\in S \\wedge \\varepsilon \\not\\in T$}\n \\end{array}\n \\right.\n$$",
"def eps_union(S, T):\n if '' in S: \n if '' in T: \n return S | T\n return (S - { '' }) | T\n return S",
"Given an augmented grammar $G = \\langle V,T,R\\cup{\\widehat{s} \\rightarrow s\\,\\$}, \\widehat{s}\\rangle$ \nand a variable $a$, the set of tokens that might follow $a$ is defined as:\n$$\\texttt{Follow}(a) := \n \\bigl{ t \\in \\widehat{T} \\,\\bigm|\\, \\exists \\beta,\\gamma \\in (V \\cup \\widehat{T})^: \n \\widehat{s} \\Rightarrow^ \\beta \\,a\\, t\\, \\gamma \n \\bigr}.\n$$\nThe function compute_follow computes the sets $\\texttt{Follow}(a)$ for all variables $a$ via a fixed-point iteration.",
"def compute_follow(self):\n self.mFollow[self.mStart] = { '$' }\n change = True\n while change:\n change = False\n for rule in self.mRules:\n a, body = rule.mVariable, rule.mBody\n for i in range(len(body)):\n if is_var(body[i]):\n yi = body[i]\n Tail = self.first_list(body[i+1:])\n firstTail = eps_union(Tail, self.mFollow[a])\n if not (firstTail <= self.mFollow[yi]): \n change = True\n self.mFollow[yi] |= firstTail \n print('Follow sets (note that \"$\" denotes the end of file):');\n for v in self.mVariables:\n print(f'Follow({v}) = {self.mFollow[v]}')\n \nGrammar.compute_follow = compute_follow\ndel compute_follow",
"If $\\mathcal{M}$ is a set of marked rules, then the closure of $\\mathcal{M}$ is the smallest set $\\mathcal{K}$ such that\nwe have the following:\n- $\\mathcal{M} \\subseteq \\mathcal{K}$,\n- If $a \\rightarrow \\beta \\bullet c\\, \\delta$ is a marked rule from \n $\\mathcal{K}$, and $c$ is a variable and if, furthermore,\n $c \\rightarrow \\gamma$ is a grammar rule,\n then the marked rule $c \\rightarrow \\bullet \\gamma$\n is an element of $\\mathcal{K}$:\n $$(a \\rightarrow \\beta \\bullet c\\, \\delta) \\in \\mathcal{K} \n \\;\\wedge\\; \n (c \\rightarrow \\gamma) \\in R\n \\;\\Rightarrow\\; (c \\rightarrow \\bullet \\gamma) \\in \\mathcal{K}\n $$\nWe define $\\texttt{closure}(\\mathcal{M}) := \\mathcal{K}$. The function cmp_closure computes this closure for a given set of marked rules via a fixed-point iteration.",
"def cmp_closure(self, Marked_Rules):\n All_Rules = Marked_Rules\n New_Rules = Marked_Rules\n while True:\n More_Rules = set()\n for rule in New_Rules:\n c = rule.next_var()\n if c == None:\n continue\n for rule in self.mRules:\n head, alpha = rule.mVariable, rule.mBody\n if c == head:\n More_Rules |= { MarkedRule(head, (), alpha) }\n if More_Rules <= All_Rules:\n return frozenset(All_Rules)\n New_Rules = More_Rules - All_Rules\n All_Rules |= New_Rules\n\nGrammar.cmp_closure = cmp_closure\ndel cmp_closure",
"Given a set of marked rules $\\mathcal{M}$ and a grammar symbol $X$, the function $\\texttt{goto}(\\mathcal{M}, X)$ \nis defined as follows:\n$$\\texttt{goto}(\\mathcal{M}, X) := \\texttt{closure}\\Bigl( \\bigl{ \n a \\rightarrow \\beta\\, X \\bullet \\delta \\bigm| (a \\rightarrow \\beta \\bullet X\\, \\delta) \\in \\mathcal{M} \n \\bigr} \\Bigr).\n$$",
"def goto(self, Marked_Rules, x):\n Result = set()\n for mr in Marked_Rules:\n if mr.symbol_after_dot() == x:\n Result.add(mr.move_dot())\n return self.cmp_closure(Result)\n\nGrammar.goto = goto\ndel goto",
"The function all_states computes the set of all states of an SLR-parser. The function starts with the state\n$$ \\texttt{closure}\\bigl({ \\widehat{s} \\rightarrow \\bullet s \\, $}\\bigr) $$\nand then tries to compute new states by using the function goto. This computation proceeds via a \nfixed-point iteration. Once all states have been computed, the function assigns names to these states.\nThis association is stored in the dictionary mStateNames.",
"def all_states(self): \n start_state = self.cmp_closure({ MarkedRule('ŝ', (), (self.mStart, '$')) })\n self.mStates = { start_state }\n New_States = self.mStates\n while True:\n More_States = set()\n for Rule_Set in New_States:\n for mr in Rule_Set: \n if not mr.is_complete():\n x = mr.symbol_after_dot()\n if x != '$':\n More_States |= { self.goto(Rule_Set, x) }\n if More_States <= self.mStates:\n break\n New_States = More_States - self.mStates;\n self.mStates |= New_States\n print(\"All SLR-states:\")\n counter = 1\n self.mStateNames[start_state] = 's0'\n print(f's0 = {set(start_state)}')\n for state in self.mStates - { start_state }:\n self.mStateNames[state] = f's{counter}'\n print(f's{counter} = {set(state)}')\n counter += 1\n\nGrammar.all_states = all_states\ndel all_states",
"The following function computes the action table and is defined as follows:\n- If $\\mathcal{M}$ contains a marked rule of the form $a \\rightarrow \\beta \\bullet t\\, \\delta$\n then we have\n $$\\texttt{action}(\\mathcal{M},t) := \\langle \\texttt{shift}, \\texttt{goto}(\\mathcal{M},t) \\rangle.$$\n- If $\\mathcal{M}$ contains a marked rule of the form $a \\rightarrow \\beta\\, \\bullet$ and we have\n $t \\in \\texttt{Follow}(a)$, then we define\n $$\\texttt{action}(\\mathcal{M},t) := \\langle \\texttt{reduce}, a \\rightarrow \\beta \\rangle$$\n- If $\\mathcal{M}$ contains the marked rule $\\widehat{s} \\rightarrow s \\bullet \\$ $, then we define \n $$\\texttt{action}(\\mathcal{M},\\$) := \\texttt{accept}. $$\n- Otherwise, we have\n $$\\texttt{action}(\\mathcal{M},t) := \\texttt{error}. $$",
"def compute_action_table(self):\n self.mActionTable = {}\n print('\\nAction Table:')\n for state in self.mStates:\n stateName = self.mStateNames[state]\n actionTable = {}\n # compute shift actions\n for token in self.mTokens:\n if token != '$':\n newState = self.goto(state, token)\n if newState != set():\n newName = self.mStateNames[newState]\n actionTable[token] = ('shift', newName)\n self.mActionTable[stateName, token] = ('shift', newName)\n print(f'action(\"{stateName}\", {token}) = (\"shift\", {newName})')\n # compute reduce actions\n for mr in state:\n if mr.is_complete():\n for token in self.mFollow[mr.mVariable]:\n action1 = actionTable.get(token)\n action2 = ('reduce', mr.to_rule())\n if action1 == None:\n actionTable[token] = action2 \n r = self.mRuleNames[mr.to_rule()]\n self.mActionTable[stateName, token] = ('reduce', r)\n print(f'action(\"{stateName}\", {token}) = {action2}')\n elif action1 != action2: \n self.mConflicts = True\n print('')\n print(f'conflict in state {stateName}:')\n print(f'{stateName} = {state}')\n print(f'action(\"{stateName}\", {token}) = {action1}') \n print(f'action(\"{stateName}\", {token}) = {action2}')\n print('')\n for mr in state:\n if mr == MarkedRule('ŝ', (self.mStart,), ('$',)):\n actionTable['$'] = 'accept'\n self.mActionTable[stateName, '$'] = 'accept'\n print(f'action(\"{stateName}\", $) = accept')\n\nGrammar.compute_action_table = compute_action_table\ndel compute_action_table",
"The function compute_goto_table computes the goto table.",
"def compute_goto_table(self):\n self.mGotoTable = {}\n print('\\nGoto Table:')\n for state in self.mStates:\n for var in self.mVariables:\n newState = self.goto(state, var)\n if newState != set():\n stateName = self.mStateNames[state]\n newName = self.mStateNames[newState]\n self.mGotoTable[stateName, var] = newName\n print(f'goto({stateName}, {var}) = {newName}')\n\nGrammar.compute_goto_table = compute_goto_table\ndel compute_goto_table\n\ngrammar\n\n%%time\ng = Grammar(grammar)\n\ndef strip_quotes(t):\n if t[0] == \"'\" and t[-1] == \"'\":\n return t[1:-1]\n return t\n\ndef dump_parse_table(self, file):\n with open(file, 'w', encoding=\"utf-8\") as handle:\n handle.write('# Grammar rules:\\n')\n for rule in self.mRules:\n rule_name = self.mRuleNames[rule] \n handle.write(f'{rule_name} = (\"{rule.mVariable}\", {rule.mBody})\\n')\n handle.write('\\n# Action table:\\n')\n handle.write('actionTable = {}\\n')\n for s, t in self.mActionTable:\n action = self.mActionTable[s, t]\n t = strip_quotes(t)\n if action[0] == 'reduce':\n rule_name = action[1]\n handle.write(f\"actionTable['{s}', '{t}'] = ('reduce', {rule_name})\\n\")\n elif action == 'accept':\n handle.write(f\"actionTable['{s}', '{t}'] = 'accept'\\n\")\n else:\n handle.write(f\"actionTable['{s}', '{t}'] = {action}\\n\")\n handle.write('\\n# Goto table:\\n')\n handle.write('gotoTable = {}\\n')\n for s, v in self.mGotoTable:\n state = self.mGotoTable[s, v]\n handle.write(f\"gotoTable['{s}', '{v}'] = '{state}'\\n\")\n \nGrammar.dump_parse_table = dump_parse_table\ndel dump_parse_table\n\ng.dump_parse_table('parse-table.py')\n\n!type parse-table.py\n\n!cat parse-table.py\n\n!del GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp \n!rmdir /S /Q __pycache__\n\n!dir /B\n\n!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp \n!rm -r __pycache__\n\n!ls"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gdsfactory/gdsfactory
|
docs/notebooks/02_movement.ipynb
|
mit
|
[
"Movement\nYou can move, rotate and mirror ComponentReference as well as Port, Polygon, CellArray, Label, and Group",
"import gdsfactory as gf\n\n# Start with a blank Component\nc = gf.Component(\"demo_movement\")\n\n# Create some more shape Devices\nT = gf.components.text(\"hello\", size=10, layer=(1, 0))\nE = gf.components.ellipse(radii=(10, 5), layer=(2, 0))\nR = gf.components.rectangle(size=(10, 3), layer=(3, 0))\n\n# Add the shapes to D as references\ntext = c << T\nellipse = c << E\nrect1 = c << R\nrect2 = c << R\n\nc\n\nc = gf.Component(\"move_one_ellipse\")\ne1 = c << gf.components.ellipse(radii=(10, 5), layer=(2, 0))\ne2 = c << gf.components.ellipse(radii=(10, 5), layer=(2, 0))\ne1.movex(10)\nc\n\nc = gf.Component(\"move_one_ellipse\")\ne1 = c << gf.components.ellipse(radii=(10, 5), layer=(2, 0))\ne2 = c << gf.components.ellipse(radii=(10, 5), layer=(2, 0))\ne2.xmin = e1.xmax\nc",
"Now let's practice moving and rotating the objects:",
"c = gf.Component(\"ellipse\")\nE = gf.components.ellipse(radii=(10, 5), layer=(2, 0))\ne1 = c << E\ne2 = c << E\nc\n\nc = gf.Component(\"ellipse\")\ne = gf.components.ellipse(radii=(10, 5), layer=(2, 0))\ne1 = c << e\ne2 = c << e\ne2.move(origin=[5, 5], destination=[10, 10]) # Translate by dx = 5, dy = 5\nc\n\nc = gf.Component(\"ellipse\")\ne = gf.components.ellipse(radii=(10, 5), layer=(2, 0))\ne1 = c << e\ne2 = c << e\ne2.move([5, 5]) # Translate by dx = 5, dy = 5\nc\n\nc = gf.Component(\"rectangles\")\nr = gf.components.rectangle(size=(10, 5), layer=(2, 0))\nrect1 = c << r\nrect2 = c << r\n\nrect1.rotate(45) # Rotate the first straight by 45 degrees around (0,0)\nrect2.rotate(\n -30, center=[1, 1]\n) # Rotate the second straight by -30 degrees around (1,1)\nc\n\nc = gf.Component(\"mirror\")\ntext = c << gf.components.text(\"hello\")\n\ntext.mirror(p1=[1, 1], p2=[1, 3]) # Reflects across the line formed by p1 and p2\nc\n\nc = gf.Component(\"hello\")\ntext = c << gf.components.text(\"hello\")\nc",
"Each Component and ComponentReference object has several properties which can be\nused\nto learn information about the object (for instance where it's center coordinate\nis). Several of these properties can actually be used to move the geometry by\nassigning them new values.\nAvailable properties are:\n\nxmin / xmax: minimum and maximum x-values of all points within the object\nymin / ymax: minimum and maximum y-values of all points within the object\nx: centerpoint between minimum and maximum x-values of all points within the\nobject\ny: centerpoint between minimum and maximum y-values of all points within the\nobject\nbbox: bounding box (see note below) in format ((xmin,ymin),(xmax,ymax))\ncenter: center of bounding box",
"print(\"bounding box:\")\nprint(\n text.bbox\n) # Will print the bounding box of text in terms of [(xmin, ymin), (xmax, ymax)]\nprint(\"xsize and ysize:\")\nprint(text.xsize) # Will print the width of text in the x dimension\nprint(text.ysize) # Will print the height of text in the y dimension\nprint(\"center:\")\nprint(text.center) # Gives you the center coordinate of its bounding box\nprint(\"xmax\")\nprint(text.xmax) # Gives you the rightmost (+x) edge of the text bounding box",
"Let's use these properties to manipulate our shapes to arrange them a little\nbetter",
"import gdsfactory as gf\n\nc = gf.Component(\"canvas\")\ntext = c << gf.components.text(\"hello\")\nE = gf.components.ellipse(radii=(10, 5), layer=(3, 0))\nR = gf.components.rectangle(size=(10, 5), layer=(2, 0))\nrect1 = c << R\nrect2 = c << R\nellipse = c << E\n\nc\n\n# First let's center the ellipse\nellipse.center = [\n 0,\n 0,\n] # Move the ellipse such that the bounding box center is at (0,0)\n\n# Next, let's move the text to the left edge of the ellipse\ntext.y = (\n ellipse.y\n) # Move the text so that its y-center is equal to the y-center of the ellipse\ntext.xmax = ellipse.xmin # Moves the ellipse so its xmax == the ellipse's xmin\n\n# Align the right edge of the rectangles with the x=0 axis\nrect1.xmax = 0\nrect2.xmax = 0\n\n# Move the rectangles above and below the ellipse\nrect1.ymin = ellipse.ymax + 5\nrect2.ymax = ellipse.ymin - 5\n\nc",
"In addition to working with the properties of the references inside the\nComponent,\nwe can also manipulate the whole Component if we want. Let's try mirroring the\nwhole Component D:",
"print(c.xmax) # Prints out '10.0'\n\nc2 = c.mirror((0, 1)) # Mirror across line made by (0,0) and (0,1)\nc2",
"A bounding box is the smallest enclosing box which contains all points of the geometry.",
"# The gf.components.library has a handy bounding-box function\n# which takes a bounding box and returns the rectangle points for it\nimport gdsfactory as gf\n\nc = gf.Component()\ntext = c << gf.components.text(\"hi\")\nbbox = text.bbox\nc << gf.components.bbox(bbox=bbox, layer=(2, 0))\nc\n\n# gf.get_padding_points can also add a bbox with respect to the bounding box edges\nc = gf.Component(\"sample_padding\")\ntext = c << gf.components.text(\"bye\")\ndevice_bbox = text.bbox\nc.add_polygon(gf.get_padding_points(text, default=1), layer=(2, 0))\nc",
"When we query the properties of D, they will be calculated with respect to this\nbounding-rectangle. For instance:",
"print(\"Center of Component c:\")\nprint(c.center)\n\nprint(\"X-max of Component c:\")\nprint(c.xmax)\n\nD = gf.Component(\"rect\")\nR = gf.components.rectangle(size=(10, 3), layer=(2, 0))\nrect1 = D << R\nD",
"You can chain many of the movement/manipulation functions because they all return the object they manipulate.\nFor instance you can combine two expressions:",
"rect1.rotate(angle=37)\nrect1.move([10, 20])\nD",
"...into this single-line expression",
"D = gf.Component()\nR = gf.components.rectangle(size=(10, 3), layer=(2, 0))\nrect1 = D << R\nrect1.rotate(angle=37).move([10, 20])\nD"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/dwd/cmip6/models/sandbox-3/atmoschem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: DWD\nSource ID: SANDBOX-3\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:57\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gboeing/urban-data-science
|
modules/11-supervised-learning/lecture.ipynb
|
mit
|
[
"Supervised learning\nOverview of today's topics:\n - model training, evaluation, and tuning\n - binomial and multinomial logistic regression\n - decision trees and random forests\n - k-nearest neighbors\n - naive bayes\n - perceptrons, support vector machines, and the kernel trick\nIn machine learning, we feed data to an algorithm so that it can make predictions and extract information. Machine learning's broad categories:\n - supervised learning: train a model on observed (labeled) data to predict unobserved data\n - classification: predict categorical variable\n - regresssion: predict continuous variable\n - unsupervised learning: discover structure in and extract information from unlabeled data\n - clustering: assign observations to groups based on their features\n - dimensionality reduction: transform many features to a lower-dimension space\n - reinforcement learning: train model by rewarding it when it takes correct action\n - artificial neural networks and deep learning\nBasic machine learning tasks:\n - data collection and cleaning\n - feature selection: select a relevant subset of features to train your model\n - feature extraction: apply a function to a feature to create a new feature, dimensionality reduction, scaling\n - model choice: identify the right learning algorithm for the task\n - model training: train the model on a set of data (if the model is parametric, we often call this step \"parameter estimation\")\n - model evaluation: assess its performance on a set of testing data, did it over or underfit?\n - model selection and hyperparameter tuning: adjust the features and the hyperparameters the algorithm uses to fit the model for optimum performance\n - prediction: use the model to make predictions on unseen data\nProbability refresher\nProbability is the ratio of an event occuring to all possible events occurring, whereas the odds are the ratio of an event occuring to it not occurring. 
That is, the odds are the ratio of the probability of an event occurring to the probability of it not occurring: $\\text{odds}=\\frac{p}{1-p}$ and conversely $p=\\frac{\\text{odds}}{1 + \\text{odds}}$\nFor example, if there are 8 blue marbles and 2 red marbles in an urn, the probability of drawing a blue marble is $\\frac{8}{8+2}=0.8$, its odds are $\\frac{8}{2}=4$ (often expressed 4:1) which is equivalent to $\\frac{0.8}{1-0.8}=4$, and its log-odds are therefore $\\log(4)=1.386$.\nLog-odds (the logarithm of the odds) are useful because they take odds that are asymmetrically distributed around 1 and transform them symmetrically around 0, such that $\\log(4)=-\\log(\\frac{1}{4})$, and allow us to linearly combine odds ratios by simply adding and subtracting (because the log of a ratio is the log of the numerator minus the log of the denominator). In other words, the odds are the ratio of two probabilities, and an odds ratio is the ratio of two odds: useful when comparing the odds of a \"what if\" scenario to the odds of the base scenario, as we will see.",
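The marble arithmetic above is easy to check numerically. A minimal sketch (plain Python, standard library only, not part of the original lecture) of the probability, odds, and log-odds conversions:

```python
import math

def odds_from_prob(p):
    """Odds are the ratio of an event occurring to it not occurring."""
    return p / (1 - p)

def prob_from_odds(odds):
    """Invert the conversion: p = odds / (1 + odds)."""
    return odds / (1 + odds)

# 8 blue marbles and 2 red marbles in an urn
p_blue = 8 / (8 + 2)                 # probability = 0.8
odds_blue = odds_from_prob(p_blue)   # odds = 4, i.e. 4:1
log_odds_blue = math.log(odds_blue)  # log-odds of about 1.386

# log-odds are symmetric around 0: log(4) equals -log(1/4)
symmetric = math.isclose(math.log(4), -math.log(1 / 4))
```

The same round trip works for any probability strictly between 0 and 1.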
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression, Perceptron\nfrom sklearn.metrics import classification_report\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\n\nnp.random.seed(0)\n\n# load CA tract-level census variables\ndf = pd.read_csv('../../data/census_tracts_data_ca.csv', dtype={'GEOID10':str}).set_index('GEOID10')\ndf.shape\n\ndf.head()",
"1. Logistic Regression\nLogistic regression is a regression analysis technique used when the response is binary. Logistic regression is a classification method, but it uses the generalized linear model and the same formula as linear regression, and it generates a continuous probability value before converting it to a classification prediction. Logistic regression uses maximum likelihood estimation (with regularization by default in scikit-learn) to estimate the parameters of a logit model. It maximizes the (log) likelihood function, equivalent to minimizing the cost function via gradient descent.\nThe logit model of some probability $p$ represents its log-odds:\n$\\text{logit}(p) = \\log{\\frac{p}{1-p}} = \\beta_0 + \\beta_1 X_1 + \\ldots + \\beta_k X_k$\nThe logit function is the inverse of the logistic function. It takes a value $p$ that ranges from 0 to 1 and converts it to a value that ranges from $-\\infty$ to $+\\infty$, which is necessary for regression analysis. In our example, $p$ represents the probability of being assigned to one of the classes in our classification scheme.\nToday we will build some simple models to predict tract poverty status.",
"# classify tracts into high poverty vs not\ndf['poverty'] = (df['pct_below_poverty'] > 20).astype(int)\ndf['poverty'].value_counts().sort_index()\n\n# feature selection: which features are important for predicting our categories?\nresponse = 'poverty'\npredictors = ['median_age', 'pct_renting', 'pct_bachelors_degree', 'pct_english_only']\ndata = df[[response] + predictors].dropna()\ny = data[response]\nX = data[predictors]\n\n# feature scaling: important for optimal performance especially if algorithm\n# uses gradient descent or requires regularization\nX_std = StandardScaler().fit_transform(X)\n\n# split data into 70/30 training and test sets\nX_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size=0.3)\n\n# train model on training data then use it to make predictions with test data\nblr = LogisticRegression()\ny_pred = blr.fit(X_train, y_train).predict(X_test)\n\n# inspect the probabilities\nprobs = blr.predict_proba(X_test)\ndf_probs = pd.DataFrame(probs, columns=blr.classes_)\ndf_probs['pred'] = y_pred\ndf_probs['actual'] = y_test.values\ndf_probs.head()",
"Manually calculate the probability of observation $i$ belonging to class $1$ as $p = \\frac{e^{\\beta X_i}}{1 + e^{\\beta X_i}}$ where the decision function is $\\text{logit}(p) = \\beta X$",
"# calculate the logit (log-odds) for observation 0 and convert to probability\n# this is its probability of being assigned to class 1\nlog_odds = np.dot(blr.coef_, X_test[0]) + blr.intercept_\nodds = np.exp(log_odds)\nprob = odds / (1 + odds)\nprob\n\n# now it's your turn\n# what is the predicted probability of test case 9 being a high-poverty tract?\n",
"2. Classification into multiple categories\nBinomial logistic regression's predictions assign observations to one of two classes, but many real-world scenarios require three or more classes. The rest of today's examples will explore multi-class supervised learning classification by categorizing tracts into \"low\", \"mid\", or \"high\" poverty status. We will pretend like these class labels are not ordinal.",
"# create a poverty classification variable\n# by default, set all as mid poverty tracts\ndf['poverty'] = 'mid'\n\n# identify all low poverty tracts\nmask_low = df['pct_below_poverty'] <= 5\ndf.loc[mask_low, 'poverty'] = 'low'\n\n# identify all high poverty tracts\nmask_high = df['pct_below_poverty'] >= 25\ndf.loc[mask_high, 'poverty'] = 'high'\n\ndf['poverty'].value_counts().sort_index()\n\n# feature selection\nresponse = 'poverty'\npredictors = ['median_age', 'pct_renting', 'pct_bachelors_degree', 'pct_english_only']\ndata = df[[response] + predictors].dropna()\ny = data[response]\nX = data[predictors]\n\n# feature scaling\nX_std = StandardScaler().fit_transform(X)\n\n# split data into training and test sets\nX_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size=0.3)",
"3. Multinomial Logistic Regression\nMultinomial logistic regression generalizes binomial logistic regression to multiple classes. That is, it is a regression analysis technique used when the response is categorical and contains >2 classes. Multinomial logistic regression uses the softmax function to generalize the logistic function to multiple inputs: in probability theory, the softmax represents a probability distribution across a set of possible classes.",
"# train model on training data then use it to make predictions with test data\nmlr = LogisticRegression(multi_class='multinomial', C=1)\ny_pred = mlr.fit(X_train, y_train).predict(X_test)",
"Making sense of the probabilities\nLet's inspect the estimated probabilities of observation $i$ belonging to class $c$ given $\\beta$ and $X_i$, the estimated coefficients and $i$'s features.\nThen, manually calculate the logit, normalize via softmax, and compare.",
"probs = mlr.predict_proba(X_test)\ndf_probs = pd.DataFrame(probs, columns=mlr.classes_)\ndf_probs['pred'] = y_pred\ndf_probs['actual'] = y_test.values\ndf_probs.head()\n\n# calculate the logit (log-odds) then normalize with softmax function\ni = 0 # pick an observation\nlog_odds = np.dot(mlr.coef_, X_test[i]) + mlr.intercept_\nprob = np.exp(log_odds) / np.exp(log_odds).sum()\nprob",
"Making sense of the coefficients\nLogistic regression is a parametric method. We have estimated the parameters (coefficients) of a logit model. Each estimated coefficient is the log of the odds ratio (pay attention to the difference between \"log odds\" and \"log [of the] odds ratio\").\nAn odds ratio is the ratio of the odds of the outcome under one condition to the odds of it under another; in our case the two conditions differ by a 1-unit increase in the predictor. Thus the logit coefficient $\\beta_{c,k}=\\log\\frac{\\text{odds}(y=c | X_k+1)}{\\text{odds}(y=c | X_k)}$ is the ceteris paribus log of the odds of an observation being in class $c$ if $x_k$ is incremented by $1$ divided by the odds of it being in class $c$ if nothing changes. Conversely, the odds ratio is the exponentiated logit coefficient: $\\text{odds ratio} = \\frac{\\text{odds}(y=c | X_k+1)}{\\text{odds}(y=c | X_k)} = e^{\\beta_{c,k}}$.\nNote that scikit-learn is a machine learning package and treats logistic regression in the predictive ML paradigm. That is, it generally ignores inference and explanation in its parameter estimates. For a traditional statistical inference treatment of logistic regression, including hypothesis testing, use the statsmodels package instead.",
"# estimated coefficients on each variable, for each class\ndf_coeffs = pd.DataFrame(mlr.coef_, columns=X.columns, index=mlr.classes_)\ndf_coeffs\n\n# calculate the odds ratio for some class and some predictor\n# a 1-unit increase in predictor k increases the odds of class c by what %\nB_ck = df_coeffs.loc['low', 'pct_english_only']\nodds_ratio = np.exp(B_ck)\nodds_ratio",
"Given an odds ratio $\\rho$, the percent change $\\delta$ in the odds can be calculated as $\\delta = 100(\\rho - 1)$\nThat is, a 1-unit increase in the percent that speak English-only at home is associated with a $\\delta$% increase in the odds of being classified in the low poverty category. Note that we standardized our features, so we're working in units of standard deviations.",
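The percent-change conversion is a one-liner; a tiny sketch with made-up odds-ratio values (these numbers are purely illustrative, not taken from the fitted model):

```python
def pct_change_in_odds(odds_ratio):
    """Percent change in the odds implied by an odds ratio: 100 * (rho - 1)."""
    return 100 * (odds_ratio - 1)

# hypothetical odds ratios, purely for illustration
increase = pct_change_in_odds(1.25)   # a 1-unit increase raises the odds by 25%
decrease = pct_change_in_odds(0.80)   # a 1-unit increase lowers the odds by 20%
no_change = pct_change_in_odds(1.00)  # an odds ratio of 1 means no association
```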
"# manually calculate the odds ratio for some observation, class, and predictor\ni = 0 # observation in position 0 (you can pick any one)\nc = 1 # class in position 1 (ie, \"low\")\nk = 3 # predictor in position 3 (ie, \"pct_english_only\")\n\n# calculate the logit of class c if nothing changes, then convert to odds\nx0 = X_test[i]\nlog_odds0 = np.dot(mlr.coef_, X_test[i]) + mlr.intercept_\nodds0 = np.exp(log_odds0[c]) # convert log-odds to odds\n\n# calculate the logit of class c if we increase k by 1, then convert to odds\nx1 = x0.copy()\nx1[k] = x1[k] + 1\nlog_odds1 = np.dot(mlr.coef_, x1) + mlr.intercept_\nodds1 = np.exp(log_odds1[c]) # convert log-odds to odds\n\n# calculate the odds ratio\nodds_ratio = odds1 / odds0\nodds_ratio",
"4. Model Assessment\nThe model's performance can be assessed via several validation methods, including:\n\nholdout method: fit model to one subset of data, then test it on a different subset\nk-fold cross validation: divide data into k groups then, for each group, train the model on all the other groups, then test the model on the group and record its assessment score\nbootstrapping: sample with replacement from dataset to assess accuracy\n\nTypical assessments to report include:\n - misclassification error rate or, alternatively, accuracy\n - precision: what share of all true + false positives are true positives? That is, among everything predicted to be in this class, how many were right?\n - recall, aka sensitivity = true positive rate: what share of all true positives + false negatives are true positives? That is, among all the actual items in this class, how many were predicted correctly?\n - specificity = true negative rate: what share of all true negatives + false positives are true negatives?\n - $F_1$ score: an overall measure of accuracy: the harmonic mean of precision and recall\n - plot ROC curves: true positives vs false positives, and measure area under curve\nBias-variance tradeoff: you want a model that both 1) captures the nuanced patterns of the training data and 2) generalizes well to new data. However, you cannot improve both at the same time; you must trade them off. Overfitting means high variance: your model is too sensitive to noise in the training data and may need regularization, which reduces model complexity. Underfitting means high bias: your model is too smooth and misses important details in the training data. Judicious feature selection, dimensionality reduction, and larger training sample sizes can reduce variance (overfitting). Adding additional predictors can reduce bias (underfitting).\nFor example, with logistic regression, you can adjust regularization with the C parameter, which is the inverse of L2 regularization/shrinkage (i.e., lower C values give you higher regularization). If your model need not be specified according to specific theory, but rather just needs the greatest predictive accuracy, consider using something like stepwise selection for feature selection.",
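As a concrete reference for the metrics listed above, here is a plain-Python sketch computing them from hypothetical confusion-matrix counts for one class (the counts are invented for illustration):

```python
# hypothetical counts for one class: true/false positives, false/true negatives
tp, fp, fn, tn = 40, 10, 20, 130

precision = tp / (tp + fp)      # among predicted positives, share correct
recall = tp / (tp + fn)         # among actual positives, share found (sensitivity)
specificity = tn / (tn + fp)    # among actual negatives, share found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
accuracy = (tp + tn) / (tp + fp + fn + tn)
```

With these counts: precision 0.8, recall 2/3, so the F1 score of 8/11 sits between them, closer to the weaker one, which is exactly why it is a stricter summary than accuracy.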
"# calculate misclassification error rate and accuracy\nmisclassified = (y_test != y_pred).sum()\nerror_rate = misclassified / len(y_test)\naccuracy = 1 - error_rate\nerror_rate, accuracy\n\n# how did the classifier perform?\n# the report tells about the quality of its predictions\n# support means how many of each class it saw\nprint(classification_report(y_test, y_pred))\n\n# now it's your turn\n# adjust the C parameter of the logistic regression model\n# how does it impact our model's accuracy? why?\n\n\n# helper function to visualize the model's decision surface\n# fits model pairwise to just 2 features at a time and plots them\ndef plot_decision(X, y, feature_names, classifier):\n \n class_colors = {'high': 'r', 'mid': 'y', 'low': 'b'}\n class_ints = {'high': 0, 'mid': 1, 'low': 2}\n pairs = [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]\n fig, axes = plt.subplots(2, 3, figsize=(9, 6))\n for ax, pair in zip(axes.flat, pairs):\n \n # take the two corresponding features\n Xp = X[:, pair]\n x_min, x_max = Xp[:, 0].min() - 1, Xp[:, 0].max() + 1\n y_min, y_max = Xp[:, 1].min() - 1, Xp[:, 1].max() + 1\n xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),\n np.arange(y_min, y_max, 0.02))\n \n # fit model to the two features, predict for meshgrid points, then plot\n Z = classifier.fit(Xp, y).predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\n for cat, i in class_ints.items():\n Z[np.where(Z==cat)] = i\n cs = ax.contourf(xx, yy, Z, cmap=plt.cm.RdYlBu, alpha=0.7)\n \n # scatter plot each class in same color as corresponding contour\n for cat, color in class_colors.items():\n idx = np.where(y == cat)\n ax.scatter(Xp[idx, 0], Xp[idx, 1], c=color, label=cat, s=1)\n\n ax.set_xlabel(feature_names[pair[0]])\n ax.set_ylabel(feature_names[pair[1]])\n ax.figure.tight_layout() \n plt.legend()\n\n# plot the model's decision surface\n# fits model pairwise to just 2 features at a time and plots them\nplot_decision(X_train, y_train, X.columns, mlr)",
"Look at the figure above. Are our classes linearly separable? What are the implications?\nNext steps: to select the best model for your task, you should choose several algorithms and compare their performance against each other, evaluating and tuning them iteratively. Consider using a hyperparameter optimization technique.\n\nchoose an appropriate learning algorithm for your task\nchoose a key performance metric to evaluate\nchoose a classification algorithm and an optimization algorithm\ntrain the model and evaluate its performance\ntune its hyperparameters to improve performance then re-assess\n\nThe rest of the lecture will investigate other candidate algorithms for our task and consider their strengths and limitations.\n5. Decision Trees and Random Forests\nThe logistic regression models we saw earlier were parametric models, meaning they learn a classification function. A decision tree is a nonparametric model that partitions the feature space into boxes. It has a tendency to overfit data, but ensembles can help prevent it from getting stuck in a local minimum. Ensemble techniques, which build many individual models then combine them, can be broadly divided into bagging and boosting methods:\n\nbagging: train many models on different random bootstrap (ie, with replacement) samples of the data then average them (\"bootstrap aggregating\")\nboosting: build ensemble incrementally by emphasizing the previously misclassified data points as you train each new model\n\nA random forest is an ensemble learning method that constructs multiple decision trees then uses bagging. This helps correct for decision trees' overfitting the training data. Random forests tend to work well out of the box, handle nonlinearity well, and work well in very high dimension spaces. Note we don't have to standardize the data to train a decision tree or random forest.",
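The bootstrap-aggregating idea can be sketched without any library. This toy version (illustrative only, not scikit-learn's implementation) trains single-threshold "stump" classifiers on with-replacement resamples of made-up 1-d data, then classifies by majority vote:

```python
import random

random.seed(0)

# toy 1-d data: label is 1 when x > 5, with two deliberately mislabeled points
X = [1, 2, 3, 4, 6, 7, 8, 9, 5.2, 4.8]
y = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]

def train_stump(Xs, ys):
    """Pick the threshold (a training value) with the fewest training errors."""
    best_t, best_err = None, float('inf')
    for t in sorted(Xs):
        err = sum((x > t) != bool(label) for x, label in zip(Xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(x, stumps):
    """Majority vote across the ensemble."""
    votes = sum(x > t for t in stumps)
    return int(votes > len(stumps) / 2)

# bootstrap aggregating: each stump sees a with-replacement resample
stumps = []
for _ in range(25):
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))

pred_low = bagged_predict(2.0, stumps)   # well below the true boundary
pred_high = bagged_predict(8.0, stumps)  # well above the true boundary
```

Individual stumps land on slightly different thresholds depending on which noisy points their resample contains; the majority vote smooths those differences out, which is the intuition behind random forests.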
"# train model on training data then use it to make predictions with test data\n# set max_depth\ndt = DecisionTreeClassifier(max_depth=3)\ny_pred = dt.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))\n\n# plot the model's decision surface\nplot_decision(X_train, y_train, X.columns, dt)\n\n# now it's your turn\n# don't prune the tree: set max_depth to None. what happens? why?\n\n\n%%time\n# train model on training data then use it to make predictions with test data\n# use 1,000 decision trees and all available CPUs\nrf = RandomForestClassifier(n_estimators=1000, n_jobs=-1)\ny_pred = rf.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))\n\n# plot the model's decision surface\nplot_decision(X_train, y_train, X.columns, rf)",
"6. k-Nearest Neighbors\nkNN is a nonlinear, nonparametric, lazy-learning model and represents an example of instance-based learning.\nBy \"lazy\" we mean that it does not learn a classification function, but rather just memorizes the entire training dataset to subsequently find nearest neighbors. It can require a lot of memory but works well with a small number of dimensions, though less well with high dimensionality: nearest-neighbor search is hard in high-dimension feature spaces because of the curse of dimensionality.",
"# train model on training data then use it to make predictions with test data\nknn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')\ny_pred = knn.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))\n\n# plot the model's decision surface\nplot_decision(X_train, y_train, X.columns, knn)",
"7. Naïve Bayes\nNaïve Bayes is a high bias/low variance classifier that is less likely to overfit small training datasets than a low bias/high variance classifier is (such as kNN or logistic regression). It is a simple algorithm and converges quickly but strongly assumes independence (hence, naïve).",
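Under the hood, Gaussian naive Bayes just multiplies per-feature Gaussian likelihoods by the class prior and normalizes across classes. A minimal hand-rolled sketch on made-up one-feature data (illustrative only, not scikit-learn's code; with a single feature the independence assumption is vacuous, which keeps the sketch simple):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# made-up training data: one feature, two classes
train = {'low': [1.0, 1.5, 2.0, 2.5], 'high': [6.0, 7.0, 7.5, 8.0]}

# "fit": per-class prior, mean, and standard deviation
params = {}
n_total = sum(len(xs) for xs in train.values())
for c, xs in train.items():
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    params[c] = (len(xs) / n_total, mu, math.sqrt(var))

def predict(x):
    """Posterior is proportional to prior * likelihood; normalize across classes."""
    scores = {c: prior * gauss_pdf(x, mu, sd) for c, (prior, mu, sd) in params.items()}
    z = sum(scores.values())
    return max(scores, key=scores.get), {c: s / z for c, s in scores.items()}

label, posterior = predict(2.2)  # a point near the 'low' cluster
```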
"# train model on training data then use it to make predictions with test data\ngnb = GaussianNB(priors=None)\ny_pred = gnb.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))\n\n# plot the model's decision surface\nplot_decision(X_train, y_train, X.columns, gnb)",
"8. Perceptron\nA perceptron is a simple linear, binary, parametric classifier. It is a very simple single-layer neural network: too simple to be useful for many real-world tasks. The $\\eta$ hyperparameter here is the learning rate. If it's too small, it takes a long time to converge. If it's too large, it can overshoot the cost function minimum during gradient descent.",
"# train model on training data then use it to make predictions with test data\nppn = Perceptron(eta0=1)\ny_pred = ppn.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))\n\n# plot the model's decision surface\nplot_decision(X_train, y_train, X.columns, ppn)",
"9. Support Vector Machines\nSVMs are models that extend the perceptron to find an optimal hyperplane providing the largest margin (ie, separation) between the classes of training data (the training points nearest the boundary are the \"support vectors\"). In other words, SVM finds the hyperplane that maximizes the distance between it and the nearest data point on either side of it. SVMs can classify data linearly/parametrically or, using the kernel trick, nonlinearly/nonparametrically if the classes are not linearly separable.\nAn SVM with a linear kernel is very similar to logistic regression. But they might be a good choice instead of logistic regression if the problem is not linearly separable or has a high-dimensional feature space. Choosing the right kernel can be challenging and the results are not straightforwardly explainable. SVMs can be very inefficient to train, so not a good choice for large training data sets.\nTuning the SVM's hyperparameters is critical! Here we fit an untuned model as a quick demo, but you should run something like GridSearchCV for tuning.",
"# train model on training data then use it to make predictions with test data\n# train the linear SVM (namely, support vector classification)\nsvc = SVC(kernel='linear', C=1)\ny_pred = svc.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))",
"SVMs for nonlinear classification with a kernel function\nWe can turn a linear SVM model into a nonlinear model by using the kernel trick to operate in a higher-dimension feature space. In this example, we will use the radial basis function, aka Gaussian kernel.",
"# train model on training data then use it to make predictions with test data\nsvc_kt = SVC(kernel='rbf', gamma=0.2, C=1)\ny_pred = svc_kt.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))\n\n# plot the model's decision surface: this is slow\nplot_decision(X_train, y_train, X.columns, svc_kt)\n\n# now it's your turn\n# try a polynomial kernel. how does that impact model performance?\n",
"Higher gamma parameter values lead to tighter decision boundaries.",
"# train model on training data then use it to make predictions with test data\nsvc_kt2 = SVC(kernel='rbf', gamma=10, C=1)\ny_pred = svc_kt2.fit(X_train, y_train).predict(X_test)\n\n# how did the classifier perform?\nprint(classification_report(y_test, y_pred))\n\n# plot the model's decision surface: this is slow\nplot_decision(X_train, y_train, X.columns, svc_kt2)",
"Here we see poor generalization because the model was overfitted with that high gamma value: the training overemphasized small fluctuations in the training data.\nSelf-paced exercise\nScroll back up to the steps at the bottom of the \"model assessment\" section. Work through those tasks, considering how to improve your model's performance through better feature selection/extraction, hyperparameter optimization, training, and testing."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Hades1996/nummeth
|
finding roots/tercerPunto.ipynb
|
mit
|
[
"<h2>3. Approximating the roots of 230 x^4 + 18 x^3 + 9 x^2 - 221 x - 9</h2>\n\nGraphical methods show that there are two real roots, in the intervals (-0.5, 0) and (0.5, 1.5).\n<h3>Bisection method:</h3>",
"import math\n\ndef funcion(x):\n return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9\n\ndef biseccion(intA, intB, errorA, noMaxIter):\n if(funcion(intA)*funcion(intB)<0):\n noIter = 0\n errorTmp = 1\n intTmp = 0\n oldInt = intA\n while(noIter<noMaxIter and errorTmp>errorA and funcion(intTmp)!=0):\n \n intTmp = (intB+intA)/2\n if(funcion(intA)*funcion(intTmp)<0):\n intB = intTmp\n else:\n intA = intTmp\n noIter+=1\n errorTmp=abs((intTmp-oldInt)/intTmp)*100\n oldInt = intTmp\n #print('Error: ',errorTmp)\n print('The root is: ',intTmp)\n print('F(root) is:' ,funcion(intTmp))\n print('Error: ',errorTmp)\n print('No. of iterations performed: ',noIter)\n else:\n print('The function does not change sign in the given interval')\n print('There are no roots to find')\n\nprint('------------------------------------')\nprint('Initial values: -0.5 and 0')\nbiseccion(-0.5,0,math.pow(10,-6),1000)\nprint('------------------------------------')\nprint('Initial values: 0.5 and 1.5')\nbiseccion(0.5,1.5,math.pow(10,-6),1000)\nprint('------------------------------------')",
"<h3>False position (regula falsi) method:</h3>\n\n<p>Because of the function's behavior near the roots (it decreases and increases very rapidly), this method could not reach an approximation with 10^-6 precision, even when we narrowed the initial intervals based on the bisection results (which do satisfy that precision).</p>",
"import math\n\ndef funcion(x):\n return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9\n\ndef reglaFalsa(intA, intB, errorA, noMaxIter):\n if(funcion(intA)*funcion(intB)<0):\n noIter = 0\n errorTmp = 1\n intTmp = 0\n oldInt = intA\n while(noIter<noMaxIter and errorTmp>errorA and funcion(intTmp)!=0):\n # false position formula: c = b - f(b)*(a - b)/(f(a) - f(b))\n intTmp = intB-((funcion(intB)*(intA-intB))/(funcion(intA)-funcion(intB)))\n if(funcion(intA)*funcion(intTmp)<0):\n intB = intTmp\n else:\n intA = intTmp\n noIter+=1\n errorTmp=abs((intTmp-oldInt)/intTmp)*100\n oldInt = intTmp\n #print('Error: ',errorTmp)\n print('The root is: ',intTmp)\n print('F(root) is:' ,funcion(intTmp))\n print('Error: ',errorTmp)\n print('No. of iterations performed: ',noIter)\n else:\n print('The function does not change sign in the given interval')\n print('There are no roots to find')\n \nprint('------------------------------------')\nprint('Initial values: -0.5 and 0')\nreglaFalsa(-0.5,0,math.pow(10,-6),1000)\nprint('------------------------------------')\nprint('Initial values: 0.5 and 1.5')\nreglaFalsa(0.5,1.5,math.pow(10,-6),1000)\nprint('------------------------------------')\nprint('Initial values: -0.2 and 0')\nreglaFalsa(-0.2,0,math.pow(10,-6),1000)\nprint('------------------------------------')\nprint('Initial values: 0.8 and 1')\nreglaFalsa(0.8,1,math.pow(10,-6),1000)\nprint('------------------------------------')",
"<h3>Newton-Raphson method:</h3>",
"import math\n\ndef funcion(x):\n return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9\n\ndef funcionDeriv(x):\n return (920*math.pow(x,3))+(54*math.pow(x,2))+(18*x)-221\n\ndef newtonRaphson(val, errorA, noMaxIter):\n noIter = 0\n errorTmp = 1\n while(noIter<noMaxIter and errorTmp>errorA and funcion(val)!=0):\n valTmp = val-((funcion(val))/(funcionDeriv(val)))\n errorTmp=abs((valTmp-val)/valTmp)*100\n val = valTmp\n noIter+=1\n print('The root is: ',valTmp)\n print('F(root) is:' ,funcion(valTmp))\n print('Error: ',errorTmp)\n print('No. of iterations performed: ',noIter)\nprint('------------------------------------')\nprint('Initial value: -0.5')\nnewtonRaphson(-0.5,math.pow(10,-6),1000)\nprint('------------------------------------')\nprint('Initial value: 1.5')\nnewtonRaphson(1.5,math.pow(10,-6),1000)\nprint('------------------------------------')",
"<h3>Secant method:</h3>",
"import math\n\ndef funcion(x):\n return (230*math.pow(x,4))+(18*math.pow(x,3))+(9*math.pow(x,2))-(221*x)-9\n\ndef secante(primerVal, segundoVal, errorA, noMaxIter):\n noIter = 0\n errorTmp = 1\n while(noIter<noMaxIter and errorTmp>errorA and funcion(segundoVal)!=0):\n valTmp = segundoVal-((funcion(segundoVal)*(primerVal-segundoVal))/(funcion(primerVal)-funcion(segundoVal)))\n primerVal = segundoVal\n segundoVal = valTmp\n errorTmp=abs((segundoVal-primerVal)/segundoVal)*100\n #print('noIter: ',noIter, ' primerVal:', primerVal,' segundoVal:',segundoVal,' error:',errorTmp)\n noIter+=1\n print('The root is: ',valTmp)\n print('F(root) is:' ,funcion(valTmp))\n print('Error: ',errorTmp)\n print('No. of iterations performed: ',noIter)\n \nprint('------------------------------------')\nprint('Initial values: -0.5 and 0')\nsecante(-0.5,0,math.pow(10,-6),1000)\nprint('------------------------------------')\nprint('Initial values: 0.5 and 1.5')\nsecante(0.5,1.5,math.pow(10,-6),1000)\nprint('------------------------------------')",
"<h2>Conclusion</h2>\n<p>The results above confirm the superiority of the open methods at finding the function's two roots, especially Newton-Raphson, which approximated the second root in slightly fewer iterations.</p>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ga7g08/ga7g08.github.io
|
_notebooks/2015-02-09-Gibbs-sampler-with-a-bivariate-normal-distribution.ipynb
|
mit
|
[
"Gibbs sampler with the bivariate normal distribution\nThis is a short post on recreating the Gibbs sampler example from p277 of Gelman's\nBayesian Data Analysis\nAlternating conditional sampling\nTo remind my future self of how this works I am going to recreate the details of \nthe Gibbs sampler for a two-dimensional problem. Suppose the parameter vector\n$\\theta$ has two components $\\theta= [\\theta_1, \\theta_2]$, then each iteration\nof the Gibbs sampler cycles through each component and draws a new value conditional\non all the others. There are thus 2 steps for each iteration. We consider the example\nof the bivariate normal distribution with unknown mean $\\theta$, but known \ncovariance matrix \n$$\\left(\\begin{array}{cc}1 & \\rho \\\\ \\rho & 1 \\end{array}\\right).$$\nIf one observation $y=[y_1, y_2]$ is made and a uniform prior on $\\theta$\nis used, the posterior is given by \n$$ \\left(\\begin{array}{c} \\theta_1 \\\\ \\theta_2 \\end{array}\\right) \\biggr\\rvert y\n \\sim N\\left(\\left(\\begin{array}{c} y_1 \\\\ y_2 \\end{array}\\right), \n \\left(\\begin{array}{cc}1 & \\rho \\\\ \\rho & 1 \\end{array}\\right)\\right)\n$$\nIn order to illustrate the use of the Gibbs sampler we need the conditional \nposterior distributions, which from the properties of multivariate normal\ndistributions are given by \n$$ \\theta_1 | \\theta_2, y \\sim N\\left(y_1 + \\rho(\\theta_2 - y_2), 1-\\rho^{2}\\right) $$\n$$ \\theta_2 | \\theta_1, y \\sim N\\left(y_2 + \\rho(\\theta_1 - y_1), 1-\\rho^{2}\\right) $$\nThe Gibbs sampler proceeds by alternately sampling from these two normal \ndistributions. This is now coded in simple Python deliberately making the\nsteps obvious.",
"import numpy as np\nfrom numpy.random import normal \nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.rcParams['axes.labelsize'] = 22\n\ndef GibbsSampler(theta0, y, k, rho):\n \"\"\" Simple implementation of the Gibbs sampler for a bivariate normal\n distribution. \"\"\"\n \n # normal() takes a standard deviation and the conditional variance\n # is 1 - rho**2, so the scale argument is sqrt(1 - rho**2)\n scale = np.sqrt(1 - rho**2)\n theta = [theta0]\n for i in range(k):\n theta2 = theta[-1][1] # theta2 from previous iteration\n theta1 = normal(y[0] + rho*(theta2 - y[1]), scale)\n theta.append([theta1, theta2])\n theta2 = normal(y[1] + rho*(theta1 - y[0]), scale)\n theta.append([theta1, theta2])\n \n return np.array(theta)\n \n# Data as given by Gelman et al. \ny = [0, 0]\nrho = 0.8\nk = 500\n\n# Four chains starting from four points of a square\ntheta0_list = [[-2.5, -2.5], [2.5, -2.5], [-2.5, 2.5], [2.5, 2.5]]\ndata = []\nfor theta0 in theta0_list:\n data.append(GibbsSampler(theta0, y, k, rho))\n\ndata = np.array(data)\nprint(data.shape)",
"Here data is a $4 \\times (2k+1) \\times d$ numpy array. The first axis gives the four chains (started from four different initial conditions), the second gives the iteration number (of length $2k+1$ for each chain, because we save the data after each half-update and include the initial point), and the final axis is the number of dimensions (in this case 2). \nRecreate figure 11.2 a\nFirst let's just look at the first few steps of all four chains. What we are plotting\nhere is the location, in the $\\theta$ parameter space, of each chain.",
"nsteps = 10\nfor i in range(4):\n plt.plot(data[i, 0:nsteps, 0], data[i, 0:nsteps, 1], \"o-\")\n \nplt.xlabel(r\"$\\theta_1$\")\nplt.ylabel(r\"$\\theta_2$\", rotation=\"horizontal\")\n\nplt.show()",
"Recreating figure 11.2 b\nNow we increase the number of steps to the full range",
"for i in range(4):\n plt.plot(data[i, 0:, 0], data[i, 0:, 1], \"o-\")\n \nplt.xlabel(r\"$\\theta_1$\")\nplt.ylabel(r\"$\\theta_2$\", rotation=\"horizontal\")\n\nplt.show()",
"Removing the repeated data\nIn the example so far we purposefully left in data from the updates during\neach iteration. Before trying to do any analysis this should be removed.",
"data_reduced = data[:, ::2, :]\nprint(data_reduced.shape)",
"Illustration of the burn in period\nThe step which is not so obvious in Gelman BDA is why they remove the\nfirst half of data - the so-called burn-in period. To illustrate we\nplot the values of $\\theta_i$ against the iteration number. This is done\nfor all four chains.",
"fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True)\n\nfor j in range(4):\n ax1.plot(range(k+1), data_reduced[j, :, 0])\n ax2.plot(range(k+1), data_reduced[j, :, 1])\n\nax1.set_ylabel(r\"$\\theta_1$\", rotation=\"horizontal\")\nax2.set_ylabel(r\"$\\theta_2$\", rotation=\"horizontal\")\nplt.show()",
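Convergence in trace plots like these is usually judged by eye. As a rough numerical companion (not part of the original notebook), one can compare between- and within-chain variances via the potential scale reduction factor $\hat{R}$ described by Gelman et al.; values near 1 suggest the chains have mixed. A minimal sketch for a single scalar parameter:

```python
import numpy as np

def gelman_rubin(chains):
    """Rough potential scale reduction factor (R-hat) for one scalar parameter.

    chains: array of shape (m, n), i.e. m chains with n draws each."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled posterior variance estimate
    return np.sqrt(var_hat / W)

# Four well-mixed chains drawn from the same distribution
rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 500))
print(gelman_rubin(chains))  # close to 1
```

Applied per parameter to the post-burn-in Gibbs draws, this would quantify the visual impression that the chains have forgotten their starting points.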
"The takeaway here is that for at least the first 50 iterations\nthe chains retain some memory of their initial position. After\nthis they appear to have 'converged' to an approximation of\nthe posterior distribution. For this reason Gelman discards\nthe first half of the data.\nPlotting the marginal and joint posterior\nFigure 11.2 c is the joint posterior distribution $p(\\theta_1, \\theta_2 \\mid y)$. \nWe can now plot this along with the marginal distributions for each\nindividual parameter in a so-called triangle plot.",
"# Generate figure and axes\nfig = plt.figure(figsize=(10, 8))\nndim = 2\nax1 = plt.subplot2grid((ndim, ndim), (0, 0))\nax2 = plt.subplot2grid((ndim, ndim), (1, 0))\nax3 = plt.subplot2grid((ndim, ndim), (1, 1))\n\n# Remove labels\nax1.set_xticks([])\nax1.set_yticks([])\nax3.set_yticks([])\n\n# Get the final data\nburnin=50\ndata_burnt = data_reduced[:, burnin:, :] # Discard the first `burnin` points of each chain\ndata_final = data_burnt.reshape(4 * (k-burnin + 1), 2) # Flatten chain data\n\n# Plot the marginal distribution for theta1\nhist, bin_edges = np.histogram(data_final[:, 0], bins=40, density=True)\nbin_mids = bin_edges[:-1] + np.diff(bin_edges)/2\nax1.step(bin_mids, hist, color=\"k\")\n\n# Plot the joint distribution \nax2.hist2d(data_final[:, 0], data_final[:, 1], bins=30, cmap=plt.cm.Greys)\n\n# Plot the marginal distribution for theta2\nhist, bin_edges = np.histogram(data_final[:, 1], bins=40, density=True)\nbin_mids = bin_edges[:-1] + np.diff(bin_edges)/2\nax3.step(bin_mids, hist, color=\"k\")\n\n\nax2.set_xlabel(r\"$\\Theta_1$\")\nax2.set_ylabel(r\"$\\Theta_2$\", rotation=\"horizontal\", labelpad=10)\nax3.set_xlabel(r\"$\\Theta_2$\")\n\nplt.subplots_adjust(hspace=0, wspace=0)\nplt.show()",
"The lower left panel is the joint probability distribution as approximated\nby the Gibbs sampler (11.2 c). The other two plots show the marginal \ndistributions for each of the parameters. \nAs a note, a nice Python package written by Daniel Foreman-Mackey exists to automate this type of triangle plot."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ngovindaraj/Udacity_Projects
|
Inferential_Statistics/Inferential_Statistics.ipynb
|
mit
|
[
"Name: Navina Govindaraj\nDate: April 2017\nTest a Perceptual Phenomenon: The Stroop Effect\nThe Stroop dataset contains data from participants who were presented with a list of words, with each word displayed in a color of ink. The participant’s task was to say out loud the color of the ink in which the word was printed. The task had two conditions: a congruent words condition, and an incongruent words condition. \n- In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed. \n- In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed. In each case, the time it took to name the ink colors was measured in equally-sized lists.",
"import numpy as np\nimport pandas as pd\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('whitegrid')\n%pylab inline",
"Variables",
"stroop_data = pd.read_csv('./stroopdata.csv')\nstroop_data.head()",
"Independent variable: Treatment condition consisting of congruent and incongruent words\nDependent variable: Response time \nHypothesis\n$H_0 : \\mu_C = \\mu_I $ There is no difference in mean response time between the congruent and incongruent word conditions\n$H_a : \\mu_C \\neq \\mu_I $ There is a difference in mean response time between the congruent and incongruent word conditions\n$\\mu_C$ and $\\mu_I$ denote the population means for the congruent and incongruent groups respectively.\n\n\nStatistical test: Dependent t-test for paired samples is the statistical test that will be used. \n\n\nThis is a within-subject design, where the same subjects are being presented with two test conditions.\n\n\nThe reasons for choosing this test are as follows:\n1) The sample size is less than 30\n2) The population standard deviation is unknown\n3) It is assumed that the distributions are Gaussian\nData Exploration and Visualization",
"stroop_data.describe()\n\nprint(\"Median:\\n\", stroop_data.median())\nprint(\"\\nVariance:\\n\", stroop_data.var())\n\nfig, axs = plt.subplots(figsize=(18, 5), ncols = 3, sharey=True)\nplt.figure(figsize=(8, 6))\nsns.set_palette(\"Set2\")\n\n# Fig 1 - Congruent Words - Response Time\nsns.boxplot(y=\"Congruent\", data=stroop_data, \n            ax=axs[0]).set_title(\"Fig 1: Congruent Words - Response Time (in seconds)\")\n\n# Fig 2 - Incongruent Words - Response Time\nsns.boxplot(y=\"Incongruent\", data=stroop_data, color=\"coral\", \n            ax=axs[1]).set_title(\"Fig 2: Incongruent Words - Response Time (in seconds)\")\n\n# Fig 3 - Congruence vs. Incongruence\nsns.regplot(x=\"Congruent\", y=\"Incongruent\", data=stroop_data, color=\"m\", fit_reg=False,\n            ax=axs[2]).set_title(\"Fig 3: Congruence vs. Incongruence (in seconds)\")",
"The above visualizations clearly show that the response time for the congruent words condition is much lower in comparison to the incongruent words condition.\nEven if the two outliers present in Fig 2 are ignored, it is evident that not just the mean (14 seconds vs. 22 seconds), but the lower and upper bounds for both conditions are markedly different as well.\nFig 3 shows a scatter plot of response times from both treatment conditions. The plot shows that for every x value (time taken for congruent words) plotted, the y value (time taken for incongruent words) is higher.\n\nStatistical Test\nα: 0.05\nConfidence level: 95%\nt-critical value: ±2.069 (two-tailed, df = 23; the one-tailed value would be 1.714)",
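The paired t statistic computed below by scipy is simple to sketch by hand: form each subject's difference between conditions, then divide the mean difference by its standard error. A minimal illustration on hypothetical response times (made-up values, not the Stroop data):

```python
import numpy as np

def paired_t(a, b):
    """Paired-samples t statistic: mean of the differences over its standard error."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Hypothetical response times; condition b is consistently slower
a = [12.0, 14.5, 11.3, 13.2]
b = [19.8, 21.0, 18.4, 22.1]
print(paired_t(a, b))  # large negative t: b reliably larger than a
```

`scipy.stats.ttest_rel` applies the same statistic and also returns the two-sided p-value.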
"# Dependent t-test for paired samples\n\nstats.ttest_rel(stroop_data[\"Congruent\"], stroop_data[\"Incongruent\"])",
"We reject the null hypothesis since p-value < α level of 0.05.\nHence it can be concluded that there is a difference in mean response time between the congruent and incongruent word conditions.\nThe results match expectations because every one of the 24 samples in the dataset showed increased response time during the incongruent words condition.\n\n6. Optional: What do you think is responsible for the effects observed? Can you think of an alternative or similar task that would result in a similar effect? Some research about the problem will be helpful for thinking about these two questions!\n\nWhen we are presented with words, we are trained to process the meaning. When we are asked to process the color of the word instead of the word meaning, we are trying to do the opposite of what we are so used to doing. This interference causes a delay in information processing, which is why it takes longer to process incongruent words.\nA similar effect is produced in a \"Directional Stroop Effect\" experiment, where you are required to say the word location in a box, contrary to the actual direction the word states.\n\nReferences\nhttps://en.wikipedia.org/wiki/Stroop_effect\nhttps://faculty.washington.edu/chudler/java/readyd.html\nhttps://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.stats.ttest_rel.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
alexbrie/LearningDeepLearning
|
.ipynb_checkpoints/FirstNotebook-checkpoint.ipynb
|
mit
|
[
"Learning Python, Numpy and Deep Learning\nChapter 1: my first Jupyter Notebook\n\nAlex Brie, 29/06/2017\n\nThis is my first Jupyter notebook, an in-progress experiment of learning the ropes of Python and how to use it for data science and deep learning.\nUltimate goal: create new deep network architectures using Keras and use them in revolutionary mobile apps that will change the way we perceive reality and man's purpose on earth (I'm obviously joking here, capisce?)\nBut first, baby steps:\nFor the first self-taught lesson I just want to learn how to use Jupyter notebooks, how to open some dataset (csv) and how to apply some basic operations on it (filtering, basic numpy methods)\nFor the dataset I didn't want just any dataset but a government one. Therefore I'm using a csv that contains the number of vaccines administered to children in the first trimester of 2017, in Romanian cities data.gov.ro\nConclusion (later edit)\nThe good part: I'm able to use Jupyter, with autocomplete (after installing readline), look at documentation in a separate terminal using pydoc, create new cells, run them, open a csv, convert it into numpy array and then do basic operations on it such as filtering, etc.\nThe bad part: I didn't do anything that you can't do in 10 seconds by simply opening the aforementioned csv in Excel. Plus, my filtering probably sucks. But it's a start.",
"import pandas as pd\nimport numpy as np\n\ndatas = pd.read_csv('copii.csv', sep=';', names=[\"J\",\"L\",\"V\", \"N\", \"A\"])\n\nprint(datas[0:10])\n# print(datas.columns)\n\n\nnp_datas = np.array(datas)\n\njudete = np_datas[:,0]\norase = np_datas[:, 1]\nvaccinuri = np_datas[:, 2]\ncantitati = np_datas[:, 3]\nani = np_datas[:, 4]",
"Test printing a filter",
"print(orase[judete==\"Prahova\"])\nprint(judete[orase==\"Busteni\"])",
"Prepare filter columns that allow us to select any county/town combo, or county/town/vaccine name combo",
"jud_or = judete + \"_\"+ orase\njud_or_vac = judete + \"_\"+ orase + \"_\"+ vaccinuri\n\nprint(np.sum(cantitati[jud_or==\"Prahova_Busteni\"]))\nprint(np.sum(cantitati[jud_or_vac==\"Bucuresti_Sector 2_Hexacima\"]))",
"Demo for extracting the quantities and vaccines names for a given county_city combo",
"na = np.array([vaccinuri[jud_or==\"Prahova_Busteni\"], cantitati[jud_or==\"Prahova_Busteni\"]]).T\n\nprint(na)\n\nprint (np.sum(na[:,-1]))",
"Identify the most common/popular/abundant vaccine",
"max_vac = np.max(na[:,-1])\nvaccine_name = na[na[:,-1]==max_vac][0][0]\nprint(vaccine_name)",
"Now for the least common/popular/abundant vaccine",
"min_vac = np.min(na[:,-1])\nvaccine_name = na[na[:,-1]==min_vac][0][0]\nprint(vaccine_name)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
awadalaa/DataSciencePractice
|
kaggle/sf_crime_classification/scripts/SF_Crime_Classification_kNN.ipynb
|
mit
|
[
"San Francisco Crime Classification\nPredict the category of crimes that occurred in the city by the bay\nFrom 1934 to 1963, San Francisco was infamous for housing some of the world's most notorious criminals on the inescapable island of Alcatraz.\nToday, the city is known more for its tech scene than its criminal past. But, with rising wealth inequality, housing shortages, and a proliferation of expensive digital toys riding BART to work, there is no scarcity of crime in the city by the bay.\nFrom Sunset to SOMA, and Marina to Excelsior, this competition's dataset provides nearly 12 years of crime reports from across all of San Francisco's neighborhoods. Given time and location, you must predict the category of crime that occurred.\nWhat we will do here\n\nTraining a machine learning model with scikit-learn\nUse the K-nearest neighbors classification model\n\nHow does K-Nearest Neighbors (KNN) classification work?\n\nPick a value for K.\nSearch for the K observations in the training data that are \"nearest\" to the measurements of the crime category.\nUse the most popular response value from the K nearest neighbors as the predicted response value for the unknown crime category.\n\nResources\n\nNearest Neighbors (user guide), KNeighborsClassifier (class documentation)",
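For intuition, the three steps above can be sketched from scratch on toy points (illustrative only; the notebook itself uses scikit-learn's `KNeighborsClassifier` below, and the points and category labels here are made up):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_new, axis=1)  # step 2: distance to every training point
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)     # step 3: most popular response wins
    return votes.most_common(1)[0][0]

X_train = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y_train = np.array(['LARCENY', 'LARCENY', 'ASSAULT', 'ASSAULT'])
print(knn_predict(X_train, y_train, np.array([0.2, 0.4]), k=3))  # → LARCENY
```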
"# Step 1 - importing classes we plan to use\nimport csv as csv\nimport math\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.neighbors import KNeighborsClassifier\nimport seaborn as sns\n\n# show plots inline\n%matplotlib inline\n\n#\n# Preparing the data\n#\ndata = pd.read_csv('../input/train.csv',parse_dates=['Dates'], dtype={\"X\": np.float64,\"Y\": np.float64}, )\n\n# Add column containing day of week expressed in integer\ndow = {\n 'Monday':0,\n 'Tuesday':1,\n 'Wednesday':2,\n 'Thursday':3,\n 'Friday':4,\n 'Saturday':5,\n 'Sunday':6\n}\ndata['DOW'] = data.DayOfWeek.map(dow)\n\n# Add column containing time of day\ndata['Hour'] = pd.to_datetime(data.Dates).dt.hour\n# display the first 5 rows\ndata.head()",
"Time vs. Day by category",
"# Retrieve categories list\ncats = pd.Series(data.Category.values.ravel()).unique()\ncats.sort()\n#\n# First, take a look at the total of all categories\n#\n\nplt.figure(1,figsize=(8,4))\nplt.hist2d(\n    data.Hour.values,\n    data.DOW.values,\n    bins=[24,7],\n    range=[[-0.5,23.5],[-0.5,6.5]],\n    cmap=plt.cm.rainbow\n)\nplt.xticks(np.arange(0,24,6))\nplt.xlabel('Time of Day')\nplt.yticks(np.arange(0,7),['Mon','Tue','Wed','Thu','Fri','Sat','Sun'])\nplt.ylabel('Day of Week')\nplt.gca().invert_yaxis()\nplt.title('Occurrence by Time and Day - All Categories')\n\n#\n# Now look into each category\n#\n\nplt.figure(2,figsize=(16,9))\nplt.subplots_adjust(hspace=0.5)\nfor i in np.arange(1,cats.size + 1):\n    ax = plt.subplot(5,8,i)\n    ax.set_title(cats[i - 1],fontsize=10)\n    ax.axes.get_xaxis().set_visible(False)\n    ax.axes.get_yaxis().set_visible(False)\n    plt.hist2d(\n        data[data.Category==cats[i - 1]].Hour.values,\n        data[data.Category==cats[i - 1]].DOW.values, \n        bins=[24,7],\n        range=[[-0.5,23.5],[-0.5,6.5]],\n        cmap=plt.cm.rainbow\n    )\n    plt.gca().invert_yaxis()\n",
"scikit-learn 4-step modeling pattern\nStep 1: Import the class you plan to use\nStep 2: \"Instantiate\" the \"estimator\"\n* \"Estimator\" is scikit-learn's term for model\n* \"Instantiate\" means \"make an instance of\"\n* Name of the object does not matter\n* Can specify tuning parameters (aka \"hyperparameters\") during this step\n* All parameters not specified are set to their defaults\nStep 3: Fit the model with data (aka \"model training\")\n* Model is learning the relationship between X and y\n* Occurs in-place\nStep 4: Predict the response for a new observation\n* New observations are called \"out-of-sample\" data\n* Uses the information it learned during the model training process\nFitting the data\nOk, so what are we actually trying to do? Given location, you must predict the category of crime that occurred. \n* Store feature matrix in X - this will be the location inputs\n* Store response in y - this will be the category of crime, since that is what we are predicting",
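The four steps can be seen end-to-end on a toy example (hypothetical points and labels; assumes scikit-learn is installed):

```python
# Step 1: import the class you plan to use
from sklearn.neighbors import KNeighborsClassifier

# Step 2: instantiate the estimator, optionally setting tuning parameters
knn = KNeighborsClassifier(n_neighbors=1)

# Step 3: fit the model with data
X = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
y = ['LARCENY', 'LARCENY', 'ASSAULT']
knn.fit(X, y)

# Step 4: predict the response for out-of-sample observations
print(knn.predict([[0.1, 0.2], [4.8, 5.1]]))  # → ['LARCENY' 'ASSAULT']
```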
"# Separate test and train set out of original train set.\nmsk = np.random.rand(len(data)) < 0.8\nknn_train = data[msk]\nknn_test = data[~msk]\nn = len(knn_test)\nprint(\"Original size: %s\" % len(data))\nprint(\"Train set: %s\" % len(knn_train))\nprint(\"Test set: %s\" % len(knn_test))\n# Prepare data sets\nx = knn_train[['X', 'Y']]\ny = knn_train['Category'].astype('category')\nactual = knn_test['Category'].astype('category')\n\n# Fit\nimport scipy as sp\ndef llfun1(act, pred):\n    epsilon = 1e-15\n    pred = sp.maximum(epsilon, pred)\n    pred = sp.minimum(1-epsilon, pred)\n    ll = sum(act*sp.log(pred) + sp.subtract(1,act)*sp.log(sp.subtract(1,pred)))\n    ll = ll * -1.0/len(act)\n    return ll\ndef llfun(act, pred):\n    \"\"\" Logloss function for 1/0 probability\n    \"\"\"\n    return (-(~(act == pred)).astype(int) * math.log(1e-15)).sum() / len(act)\n\nlogloss = []\nfor i in range(1, 50, 1):\n    knn = KNeighborsClassifier(n_neighbors=i)\n    knn.fit(x, y)\n    \n    # Predict on test set\n    outcome = knn.predict(knn_test[['X', 'Y']])\n    \n    # Logloss\n    logloss.append(llfun(actual, outcome))\n",
"Logarithmic Loss\nhttps://www.kaggle.com/wiki/LogarithmicLoss\nThe logarithm of the likelihood function for a Bernoulli random variable.\nIn plain English, this error metric is used where contestants have to predict that something is true or false with a probability (likelihood) ranging from definitely true (1) to equally true (0.5) to definitely false (0).\nThe use of log on the error provides extreme punishments for being both confident and wrong. In the worst possible case, a single prediction that something is definitely true (1) when it is actually false will add infinity to your error score and make every other entry pointless. In Kaggle competitions, predictions are bounded away from the extremes by a small value in order to prevent this.\nLet's plot it as a function of k for our nearest neighbor",
"plt.plot(logloss)\nplt.savefig('n_neighbors_vs_logloss.png')",
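As an aside (not part of the original notebook), the clipping described above is easy to write down for the binary case; Kaggle's multiclass metric averages the same per-row term over all classes:

```python
import numpy as np

def log_loss(y_true, p_pred, eps=1e-15):
    """Binary log loss with predictions clipped away from 0 and 1,
    so a confident wrong answer is heavily (but finitely) punished."""
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

print(log_loss([1, 0], [0.9, 0.1]))  # confident and right: small loss (~0.105)
print(log_loss([1, 0], [0.0, 1.0]))  # confident and wrong: clipped to ~34.5, not infinity
```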
"Based on the log loss we can see that around 40 is optimal for k. Now let's predict using the test data",
"# Submit for K=40\nknn = KNeighborsClassifier(n_neighbors=40)\nknn.fit(x, y)\n\n# predict from our test set\ntest = pd.read_csv('../input/test.csv',parse_dates=['Dates'], dtype={\"X\": np.float64,\"Y\": np.float64}, )\nx_test = test[['X', 'Y']]\noutcomes = knn.predict(x_test)\n\nsubmit = pd.DataFrame({'Id': test.Id.tolist()})\nfor category in y.cat.categories:\n submit[category] = np.where(outcomes == category, 1, 0)\n \nsubmit.to_csv('k_nearest_neigbour.csv', index = False)",
"Can we do better?\nMaybe we can try with more features than just lat and lon\nBut first, our data is not very useful as text. So let's map the strings to ints so that we can use them later",
"# map pd district to int\nunique_pd_district = data[\"PdDistrict\"].unique()\npd_district_mapping = {}\ni=0\nfor c in unique_pd_district:\n pd_district_mapping[c] = i\n i += 1\ndata['PdDistrictId'] = data.PdDistrict.map(pd_district_mapping)\nprint(data.describe())\ndata.tail()\n\n# store feature matrix in \"X\"\nX = data[['Hour','DOW','X','Y','PdDistrictId']]\n\n# store response vector in \"y\"\ny = data['Category'].astype('category')\n\n# Submit for K=40\nknn = KNeighborsClassifier(n_neighbors=40)\nknn.fit(X, y)\n\n\ntest = pd.read_csv('../input/test.csv',parse_dates=['Dates'], dtype={\"X\": np.float64,\"Y\": np.float64}, )\n\n# clean up test set\ntest['DOW'] = test.DayOfWeek.map(dow)\ntest['Hour'] = pd.to_datetime(test.Dates).dt.hour\ntest['PdDistrictId'] = test.PdDistrict.map(pd_district_mapping)\ntest.tail()\n\n# Predictions for the test set\nX_test = test[['Hour','DOW','X','Y','PdDistrictId']]\noutcomes = knn.predict(X_test)\n\nsubmit = pd.DataFrame({'Id': test.Id.tolist()})\nfor category in y.cat.categories:\n submit[category] = np.where(outcomes == category, 1, 0)\n \nsubmit.to_csv('k_nearest_neigbour_2.csv', index = False)\n\n# lets see how much dow, hour and district correlate to category\nplt.figure()\nsns.pairplot(data=data[[\"Category\",\"DOW\",\"Hour\",\"PdDistrictId\"]],\n hue=\"Category\", dropna=True)\n\nplt.savefig(\"seaborn_pair_plot.png\")\n\nimport csv as csv\nimport math\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\n\nimport seaborn as sns\n\n# show plots inline\n\n# Add column containing day of week expressed in integer\ndow = {\n 'Monday':0,\n 'Tuesday':1,\n 'Wednesday':2,\n 'Thursday':3,\n 'Friday':4,\n 'Saturday':5,\n 'Sunday':6\n}\n\ndata = pd.read_csv('../input/train.csv',parse_dates=['Dates'], dtype={\"X\": np.float64,\"Y\": np.float64}, )\ndata['DOW'] = data.DayOfWeek.map(dow)\ndata['Hour'] = 
pd.to_datetime(data.Dates).dt.hour\nX = data[['Hour','DOW','X','Y']]\ny = data['Category'].astype('category')\nknn = KNeighborsClassifier(n_neighbors=39)\nknn.fit(X, y)\n\ntest = pd.read_csv('../input/test.csv',parse_dates=['Dates'], dtype={\"X\": np.float64,\"Y\": np.float64}, )\ntest['DOW'] = test.DayOfWeek.map(dow)\ntest['Hour'] = pd.to_datetime(test.Dates).dt.hour\nX_test = test[['Hour','DOW','X','Y']]\noutcomes = knn.predict(X_test)\nsubmit = pd.DataFrame({'Id': test.Id.tolist()})\nfor category in y.cat.categories:\n submit[category] = np.where(outcomes == category, 1, 0)\n \nsubmit.to_csv('k_nearest_neigbour3.csv', index = False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
atlury/deep-opencl
|
cs480/07 Linear Ridge Regression and Data Partitioning.ipynb
|
lgpl-3.0
|
[
"$\\newcommand{\\xv}{\\mathbf{x}}\n\\newcommand{\\Xv}{\\mathbf{X}}\n\\newcommand{\\yv}{\\mathbf{y}}\n\\newcommand{\\zv}{\\mathbf{z}}\n\\newcommand{\\av}{\\mathbf{a}}\n\\newcommand{\\Wv}{\\mathbf{W}}\n\\newcommand{\\wv}{\\mathbf{w}}\n\\newcommand{\\tv}{\\mathbf{t}}\n\\newcommand{\\Tv}{\\mathbf{T}}\n\\newcommand{\\Norm}{\\mathcal{N}}\n\\newcommand{\\muv}{\\boldsymbol{\\mu}}\n\\newcommand{\\sigmav}{\\boldsymbol{\\sigma}}\n\\newcommand{\\phiv}{\\boldsymbol{\\phi}}\n\\newcommand{\\Phiv}{\\boldsymbol{\\Phi}}\n\\newcommand{\\Sigmav}{\\boldsymbol{\\Sigma}}\n\\newcommand{\\Lambdav}{\\boldsymbol{\\Lambda}}\n\\newcommand{\\half}{\\frac{1}{2}}\n\\newcommand{\\argmax}[1]{\\underset{#1}{\\operatorname{argmax}}\\;}\n\\newcommand{\\argmin}[1]{\\underset{#1}{\\operatorname{argmin}}\\;}$\nLinear Ridge Regression\nRemember when we were discussing the complexity of models? The simplest model was a constant. A simple way to predict rainfall is to ignore all measurements and just predict the average rainfall. A linear model of the measurements may do no better. The degree to which it does do better can be expressed in the relative sum of squared errors (RSE):\n$$RSE = \\frac{\\sum_{i=1}^N (\\tv_i - \\xv_i^T \\wv)^2}{\\sum_{i=1}^N (\\tv_i - \\bar{\\Tv})^2}$$\nIf RSE is 1, then your linear model is no better than using the mean. The closer RSE is to 0, the better your linear model is.",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnp.any?\n\ndef makeMPGData(filename='auto-mpg.data'):\n def missingIsNan(s):\n return np.nan if s == b'?' else float(s)\n data = np.loadtxt(filename, usecols=range(8), converters={3: missingIsNan})\n print(\"Read\",data.shape[0],\"rows and\",data.shape[1],\"columns from\",filename)\n goodRowsMask = np.isnan(data).sum(axis=1) == 0\n data = data[goodRowsMask,:]\n print(\"After removing rows containing question marks, data has\",data.shape[0],\"rows and\",data.shape[1],\"columns.\")\n X = data[:,1:]\n T = data[:,0:1]\n Xnames = ['cylinders','displacement','horsepower','weight','acceleration','year','origin']\n Tname = 'mpg'\n return X,T,Xnames,Tname\n\nX,T,Xnames,Tname = makeMPGData()\n\nmeans = X.mean(0)\nstds = X.std(0)\nnRows = X.shape[0]\nXs1 = np.insert((X-means)/stds, 0, 1, axis=1) # insert column of ones in new 0th column\n# Xs1 = np.hstack(( np.ones((nRows,1)), (X-means)/stds ))\nw = np.linalg.lstsq(Xs1, T)[0]\nw\n\n# predict = Xs1.dot(w)\npredict = Xs1 @ w\nRSE = np.sum((T-predict)**2) / np.sum((T - T.mean(0))**2)\nRSE",
So, our linear model seems to be quite a bit better than using just the mean mpg.\nWell, maybe our model would do better if it was a little closer to just the mean, or, equivalently, just using the bias weight. This is the question that drives the derivation of \"ridge regression\". Let's add a term to our sum of squared error objective function that is the sum of the squared weight magnitudes, excluding the bias weight. Then, we not only minimize the sum of squared errors, we also minimize the sum of the squared weight magnitudes:\n$$ \\sum_{i=1}^N (\\tv_i - \\xv_i^T \\wv)^2 + \\lambda \\sum_{j=2}^D w_j^2$$\nwhere $D$ is the number of weights, so the penalty runs over every weight except the bias weight $w_1$. Notice that $\\lambda$ in there. With $\\lambda=0$ we have our usual linear regression objective function. With $\\lambda>0$, we are adding in a penalty for the weight magnitudes.\nHow does this change our solution for the best $\\wv$? You work it out. You should get\n$$ \\wv = (X^T X + \\lambda I)^{-1} X^T T $$\nexcept instead of using \n$$\n  \\lambda I =\n  \\begin{bmatrix}\n  \\lambda & 0 & \\dotsc & 0\\\\\n  0 & \\lambda & \\dotsc & 0\\\\\n  \\vdots \\\\\n  0 & 0 & \\dotsc & \\lambda\n  \\end{bmatrix}\n$$\nwe want\n$$\n  \\begin{bmatrix}\n  0 & 0 & \\dotsc & 0\\\\\n  0 & \\lambda & \\dotsc & 0\\\\\n  \\vdots \\\\\n  0 & 0 & \\dotsc & \\lambda\n  \\end{bmatrix}\n$$\nso we don't penalize the bias weight, the weight multiplying the constant 1 input component.\nNow, what value should $\\lambda$ be? We must determine it empirically, by calculating the sum of squared error on test data.\nActually, we should not find the best value of $\\lambda$ by comparing error on the test data. This will give us a too optimistic prediction of error on novel data, because the test data was used to pick the best $\\lambda$. We really must hold out another partition of data from the training set for this. This third partition is often called the model validation set. 
So, we partition our data into disjoint training, validation, and testing subsets, and\n\nFor each value of $\\lambda$\nSolve for $\\wv$ using the training set\nUse $\\wv$ to predict the output for the validation set and calculate the squared error.\nUse $\\wv$ to predict the output for the testing set and calculate the squared error.\nPick value of $\\lambda$ that produced the lowest validation set error, and report the testing set error obtained using that value of $\\lambda$.\n\nLet's do it!",
"lamb = 0.1\nD = Xs1.shape[1]\nlambdaDiag = np.eye(D) * lamb\nlambdaDiag[0,0] = 0\nlambdaDiag\n\ndef makeLambda(D,lamb=0):\n lambdaDiag = np.eye(D) * lamb\n lambdaDiag[0,0] = 0\n return lambdaDiag\nmakeLambda(3,0.2)\n\nw = np.linalg.lstsq(Xs1.T @ Xs1 + lambdaDiag, Xs1.T @ T)[0]\nw\n\nD = Xs1.shape[1]\nw1 = np.linalg.lstsq(Xs1.T @ Xs1 + makeLambda(D,0.1), Xs1.T @ T)[0]\nw2 = np.linalg.lstsq(Xs1.T @ Xs1 + makeLambda(D,0), Xs1.T @ T)[0]\nnp.hstack((w1,w2))\n\nfor lamb in [0,0.1,1,10,100]:\n w = np.linalg.lstsq(Xs1.T @ Xs1 + makeLambda(D,lamb), Xs1.T @ T)[0]\n plt.plot(w)\n\nlambdas = [0,0.1,1,10,100,1000]\nfor lamb in lambdas:\n w = np.linalg.lstsq(Xs1.T @ Xs1 + makeLambda(D,lamb), Xs1.T @ T)[0]\n plt.plot(Xs1[:30] @ w)\nplt.plot(T[:30],'ro',lw=5,alpha=0.8)\nplt.legend(lambdas,loc='best')",
"Which $\\lambda$ value is best? Careful. What is the best value of $\\lambda$ if just comparing error on training data?\nNow, careful again! Can't report expected error from testing data that is also used to pick best value of $\\lambda$. Error is likely to be better than what you will get on new data that was not used to train the model and also was not used to pick value of $\\lambda$.\nNeed a way to partition our data into training, validation and testing subsets. Let's write a function that makes these partitions randomly, given the data matrix and the fractions to be used in the three partitions.",
"def partition(X,T,trainFraction=0.8, validateFraction=0.1, testFraction=0.1):\n '''Usage: Xtrain,Ttrain,Xval,Tval,Xtest,Ttext = partition(X,T,0.8,0.2,0.2)'''\n if trainFraction + validateFraction + testFraction != 1:\n raise ValueError(\"Train, validate and test fractions must sum to 1. Given values sum to \" + str(trainFraction+validateFraction+testFraction))\n n = X.shape[0]\n nTrain = round(trainFraction * n)\n nValidate = round(validateFraction * n)\n nTest = round(testFraction * n)\n if nTrain + nValidate + nTest != n:\n nTest = n - nTrain - nValidate\n # Random order of data matrix row indices\n rowIndices = np.arange(X.shape[0])\n np.random.shuffle(rowIndices)\n # Build X and T matrices by selecting corresponding rows for each partition\n Xtrain = X[rowIndices[:nTrain],:]\n Ttrain = T[rowIndices[:nTrain],:]\n Xvalidate = X[rowIndices[nTrain:nTrain+nValidate],:]\n Tvalidate = T[rowIndices[nTrain:nTrain+nValidate],:]\n Xtest = X[rowIndices[nTrain+nValidate:nTrain+nValidate+nTest],:]\n Ttest = T[rowIndices[nTrain+nValidate:nTrain+nValidate+nTest],:]\n return Xtrain,Ttrain,Xvalidate,Tvalidate,Xtest,Ttest\n\nX = np.arange(20).reshape((10,2))\nX\n\nT = np.arange(10).reshape((-1,1))\nT\n\nX = np.arange(20).reshape((10,2))\nT = np.arange(10).reshape((-1,1))\nXtrain,Ttrain,Xval,Tval,Xtest,Ttest = partition(X,T,0.6,0.2,0.2)\n\nprint(\"Xtrain:\")\nprint(Xtrain)\nprint(\" Ttrain:\")\nprint(Ttrain)\nprint(\"\\n Xval:\")\nprint(Xval)\nprint(\" Tval:\")\nprint(Tval)\nprint(\"\\n Xtest:\")\nprint(Xtest)\nprint(\" Ttest:\")\nprint(Ttest)",
"Now we can train our model using several values of $\\lambda$ on Xtrain,Train and calculate the model error on Xval,Tval. Then pick best value of $\\lambda$ based on error on Xval,Tval. Finally, calculate error of model using best $\\lambda$ on Xtest,Ttest as our estimate of error on new data.\nHowever, these errors will be affected by the random partitioning of the data. Repeating with new partitions may result in a different $\\lambda$ being best. We should repeat the above procedure many times and average over the results. How many repetitions do we need?\nAnother approach, commonly followed in machine learning, is to first partition the data into $k$ subsets, called \"folds\". Pick one fold to be the test partition, another fold to be the validate partition, and collect the remaining folds to be the train partition. We can do this $k\\,(k-1)$ ways. (Why?) So, with $k=5$ we get 20 repetitions performed in a way that most distributes samples among the partitions.\nFirst, a little note on the yield statement in python. The yield statement is like return except that execution pauses at this point in the function, after returning the values. When the function is called again, it continues from that paused point.",
"def count(n):\n for a in range(n):\n yield a\n\ncount(4)\n\nlist(count(4))\n\nfor i in count(5):\n print(i)\n\nzip?\n\ndef partitionKFolds(X,T,nFolds,shuffle=False,nPartitions=3):\n '''Usage: for Xtrain,Ttrain,Xval,Tval,Xtest,Ttext in partitionKFolds(X,T,5):'''\n # Randomly arrange row indices\n rowIndices = np.arange(X.shape[0])\n if shuffle:\n np.random.shuffle(rowIndices)\n # Calculate number of samples in each of the nFolds folds\n nSamples = X.shape[0]\n nEach = int(nSamples / nFolds)\n if nEach == 0:\n raise ValueError(\"partitionKFolds: Number of samples in each fold is 0.\")\n # Calculate the starting and stopping row index for each fold.\n # Store in startsStops as list of (start,stop) pairs\n starts = np.arange(0,nEach*nFolds,nEach)\n stops = starts + nEach\n stops[-1] = nSamples\n startsStops = list(zip(starts,stops))\n # Repeat with testFold taking each single fold, one at a time\n for testFold in range(nFolds):\n if nPartitions == 3:\n # Repeat with validateFold taking each single fold, except for the testFold\n for validateFold in range(nFolds):\n if testFold == validateFold:\n continue\n # trainFolds are all remaining folds, after selecting test and validate folds\n trainFolds = np.setdiff1d(range(nFolds), [testFold,validateFold])\n # Construct Xtrain and Ttrain by collecting rows for all trainFolds\n rows = []\n for tf in trainFolds:\n a,b = startsStops[tf] \n rows += rowIndices[a:b].tolist()\n Xtrain = X[rows,:]\n Ttrain = T[rows,:]\n # Construct Xvalidate and Tvalidate\n a,b = startsStops[validateFold]\n rows = rowIndices[a:b]\n Xvalidate = X[rows,:]\n Tvalidate = T[rows,:]\n # Construct Xtest and Ttest\n a,b = startsStops[testFold]\n rows = rowIndices[a:b]\n Xtest = X[rows,:]\n Ttest = T[rows,:]\n # Return partition matrices, then suspend until called again.\n yield Xtrain,Ttrain,Xvalidate,Tvalidate,Xtest,Ttest,testFold\n else:\n # trainFolds are all remaining folds, after selecting test and validate folds\n trainFolds = 
np.setdiff1d(range(nFolds), [testFold])\n # Construct Xtrain and Ttrain by collecting rows for all trainFolds\n rows = []\n for tf in trainFolds:\n a,b = startsStops[tf] \n rows += rowIndices[a:b].tolist()\n Xtrain = X[rows,:]\n Ttrain = T[rows,:]\n # Construct Xtest and Ttest\n a,b = startsStops[testFold]\n rows = rowIndices[a:b]\n Xtest = X[rows,:]\n Ttest = T[rows,:]\n # Return partition matrices, then suspend until called again.\n yield Xtrain,Ttrain,Xtest,Ttest,testFold\n\nX = np.arange(20).reshape((10,2))\nT = np.arange(10).reshape((-1,1))\nk = 0\nfor Xtrain,Ttrain,Xval,Tval,Xtest,Ttest,testFold in partitionKFolds(X,T,5):\n k += 1\n print(\"Fold\",k)\n print(\" Xtrain:\")\n print(Xtrain)\n print(\" Ttrain:\")\n print(Ttrain)\n print(\"\\n Xval:\")\n print(Xval)\n print(\" Tval:\")\n print(Tval)\n print(\"\\n Xtest:\")\n print(Xtest)\n print(\" Ttest:\")\n print(Ttest)",
"Typical use of these partitions is shown below. It is most handy to just collect all results in a matrix and calculate averages afterwards, rather than accumulating each result and dividing by the number of repetitions at the end.",
"def train(X,T,lamb):\n means = X.mean(0)\n stds = X.std(0)\n n,d = X.shape\n Xs1 = np.insert( (X - means)/stds, 0, 1, axis=1)\n lambDiag = np.eye(d+1) * lamb\n lambDiag[0,0] = 0\n w = np.linalg.lstsq( Xs1.T @ Xs1 + lambDiag, Xs1.T @ T)[0]\n return {'w': w, 'means':means, 'stds':stds}\n\ndef use(X,model):\n Xs1 = np.insert((X-model['means'])/model['stds'], 0, 1, axis=1)\n return Xs1 @ model['w']\n\ndef rmse(A,B):\n return np.sqrt(np.mean( (A-B)**2 ))\n\nlambdas = [0,1,5,10,20]\nresults = []\nfor Xtrain,Ttrain,Xval,Tval,Xtest,Ttest,_ in partitionKFolds(X,T,5):\n for lamb in lambdas:\n model = train(Xtrain,Ttrain,lamb)\n predict = use(Xval,model)\n results.append([lamb,\n rmse(use(Xtrain,model),Ttrain),\n rmse(use(Xval,model),Tval),\n rmse(use(Xtest,model),Ttest)])\nresults = np.array(results)\nprint(results)\nprint(results.shape)\n# print(results)\navgresults = []\nfor lam in lambdas:\n print(lam)\n print(results[results[:,0]==lam,1:])\n avgresults.append( [lam] + np.mean(results[results[:,0]==lam,1:],axis=0).tolist())\navgresults = np.array(avgresults)\nprint(avgresults)\n\n\nplt.plot(avgresults[:,0],avgresults[:,1:],'o-')\nplt.xlabel('$\\lambda$')\nplt.ylabel('RMSE')\nplt.legend(('Train','Validate','Test'),loc='best');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
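The fold bookkeeping in `partitionKFolds` above is easy to get wrong; a minimal, self-contained sketch of the same contiguous-fold index logic (simplified: no shuffle, no validation fold — the helper name `kfold_indices` is illustrative, not from the notebook) is:

```python
import numpy as np

def kfold_indices(n_samples, n_folds):
    """Yield (train_idx, test_idx) pairs over contiguous folds."""
    n_each = n_samples // n_folds
    if n_each == 0:
        raise ValueError("fewer samples than folds")
    starts = np.arange(0, n_each * n_folds, n_each)
    stops = starts + n_each
    stops[-1] = n_samples  # last fold absorbs any remainder
    for k in range(n_folds):
        test = np.arange(starts[k], stops[k])
        train = np.setdiff1d(np.arange(n_samples), test)
        yield train, test

# every sample lands in exactly one test fold
all_test = np.concatenate([test for _, test in kfold_indices(10, 3)])
```

With 10 samples and 3 folds the test folds have sizes 3, 3 and 4, matching how the last entry of `startsStops` is stretched to cover the remainder in the generator above.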
cliburn/sta-663-2017
|
notebook/06_Graphics.ipynb
|
mit
|
[
    "Graphics in Python\nThe foundational package for most graphics in Python is matplotlib, and the seaborn package builds on this to provide more statistical graphing options. We will focus on these two packages, but there are many others if these don't meet your needs.\nThere are also several specialized packages that might come in useful:\n\nggplot is a port of the R ggplot2 library to Python\nvispy and lightning for interactive visualization of large data sets\nbokeh for web-friendly interactive graphics\npyxley and spyre for R shiny fans\nand several others.\n\nResources\n\nMatplotlib tutorial\nMatplotlib gallery\nSeaborn gallery",
"import warnings\nwarnings.filterwarnings(\"ignore\")",
"Matplotlib\nMatplotlib has a \"functional\" interface similar to Matlab via the pyplot module for simple interactive use, as well as an object-oriented interface that is useful for more complex graphic creations.\nTypes of plots",
"plt.hist(np.random.randn(1000), bins=np.linspace(-4,4,11))\npass\n\nplt.boxplot(np.random.random((6,10)))\npass\n\nplt.scatter(*np.random.uniform(0.1, 0.9, (2,100)),\n s=np.random.randint(10, 200, 100), \n c=np.random.random(100))\npass\n\nplt.stem(np.random.random(8))\nplt.margins(0.05)\npass\n\nx = np.linspace(0, 2*np.pi, 100)\ny = np.sin(x)\n\nplt.plot(x, y)\nplt.axis([0, 2*np.pi, -1.05, 1.05,])\npass",
"Colors",
"plt.scatter(*np.random.uniform(0.1, 0.9, (2,100)),\n s=np.random.randint(10, 200, 100), \n c=np.random.random(100))\npass\n\nplt.scatter(*np.random.uniform(0.1, 0.9, (2,100)),\n s=np.random.randint(10, 200, 100), \n c=np.random.random(100), cmap='summer')\npass\n\nplt.scatter(*np.random.uniform(0.1, 0.9, (2,100)),\n s=np.random.randint(10, 200, 100), \n c=np.random.random(100), cmap='hsv')\npass",
    "Getting a list of colors from a colormap\nGiving an argument of 0.0 < x < 1.0 to a colormap gives the appropriate interpolated color.",
"# find the bottom, middle and top colors of the winter colormap\ncolors = plt.cm.winter(np.linspace(0, 1, 3))\ncolors\n\nplt.scatter(*np.random.uniform(0.1, 0.9, (2,100)),\n s=np.random.randint(10, 200, 100), \n c=colors)\npass",
"Styles",
"plt.style.available\n\nwith plt.style.context('classic'):\n plt.plot(x, y)\n plt.axis([0, 2*np.pi, -1.05, 1.05,])\n\nwith plt.style.context('fivethirtyeight'):\n plt.plot(x, y)\n plt.axis([0, 2*np.pi, -1.05, 1.05,])\n\nwith plt.style.context('ggplot'):\n plt.plot(x, y)\n plt.axis([0, 2*np.pi, -1.05, 1.05,])\n\nwith plt.xkcd():\n plt.plot(x, y)\n plt.axis([0, 2*np.pi, -1.05, 1.05,])",
    "Creating your own style\nMany, many options can be configured.",
"plt.rcParams\n\n%%file foo.mplstyle\naxes.grid: True\naxes.titlesize : 24\naxes.labelsize : 20\nlines.linewidth : 3\nlines.markersize : 10\nxtick.labelsize : 16\nytick.labelsize : 16\n\nwith plt.style.context('foo.mplstyle'):\n plt.plot(x, y)\n plt.axis([0, 2*np.pi, -1.05, 1.05,])",
"Customizing plots",
"plt.rcParams.update({'font.size': 22})\n\nfig = plt.figure(figsize=(8,6))\nax = plt.subplot(1,1,1)\nplt.plot(x, y, color='red', linewidth=2, linestyle='dashed', label='sine curve')\nplt.plot(x, np.cos(x), 'b-', label='cosine curve')\nplt.legend(loc='best', fontsize=14)\nplt.axis([0, 2*np.pi, -1.05, 1.05,])\nplt.xlabel('x')\nplt.ylabel('sin(x)')\nplt.xticks([0,0.5*np.pi,np.pi,1.5*np.pi,2*np.pi], \n [0, r'$\\frac{\\pi}{2}$', r'$\\pi$', r'$\\frac{3\\pi}{2}$', r'$2\\pi$'])\nplt.title('Sine and Cosine Plots')\nplt.text(0.45, 0.9, 'Empty space', transform=ax.transAxes, ha='left', va='top')\npass",
"Plot layouts",
"fig, axes = plt.subplots(2,2,figsize=(8,8))\naxes[0,0].plot(x,y, 'r')\naxes[0,1].plot(x,y, 'g')\naxes[1,0].plot(x,y, 'b')\naxes[1,1].plot(x,y, 'k')\nfor ax in axes.ravel():\n ax.margins(0.05)\npass\n\nax1 = plt.subplot2grid((3,3), (0,0), colspan=3)\nax2 = plt.subplot2grid((3,3), (1,0), colspan=2)\nax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)\nax4 = plt.subplot2grid((3,3), (2,0), colspan=2)\naxes = [ax1, ax2, ax3, ax4]\ncolors = ['r', 'g', 'b', 'k']\nfor ax, c in zip(axes, colors):\n ax.plot(x, y, c)\n ax.margins(0.05)\nplt.tight_layout()",
"Seaborn",
"sns.set_context(\"notebook\", font_scale=1.5, rc={\"lines.linewidth\": 2.5})\n\nimport numpy.random as rng",
"Density plots",
"xs = rng.normal(0,1,100)\n\nfig, axes = plt.subplots(1, 2, figsize=(8,4))\nsns.distplot(xs, hist=False, rug=True, ax=axes[0]);\nsns.distplot(xs, hist=True, ax=axes[1])\npass",
"Kernel density estimate",
"sns.kdeplot(np.r_[rng.normal(0,1,50), rng.normal(4,0.8,100)])\npass\n\niris = sns.load_dataset('iris')\n\niris.head()",
"Joint distribution plot",
    "sns.jointplot(x='petal_length', y='petal_width', data=iris, kind='kde')\npass",
"Box and violin plots",
"fig, axes = plt.subplots(1, 2, figsize=(8,4))\n\nsns.boxplot(x='species', y='petal_length', data=iris, ax=axes[0])\nsns.violinplot(x='species', y='petal_length', data=iris, ax=axes[1])\npass",
"Composite plots",
"url = 'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv'\ntitanic = pd.read_csv(url)\n\ntitanic.head()\n\nsns.set_context('notebook', font_scale=1.5)\n\nsns.lmplot(x='fare', y='survived', col='alone', row='sex', data=titanic, logistic=True)\npass\n\ng = sns.PairGrid(titanic,\n y_vars=['fare', 'age'],\n x_vars=['sex', 'class', 'embark_town' ],\n aspect=1, size=5.5)\ng.map(sns.stripplot, jitter=True, palette=\"bright\")\npass",
    "Using ggplot as an alternative to seaborn.\nThe ggplot module is a port of R's ggplot2 - usage is very similar except for the following minor differences:\n\nPass in a pandas dataframe\naesthetics come before data in the argument list to ggplot\nGive column names and other arguments (e.g. the function to call) as strings\nYou need to use the line continuation character \\ to extend over multiple lines\n\nOnly the most elementary examples are shown below. The ggplot module is extremely rich and sophisticated with a steep learning curve if you're not already familiar with it from R. Please see the documentation for details.",
"from ggplot import *",
"Interacting with R",
"%load_ext rpy2.ipython",
"Note that we are exporting the R mtcars dataframe to Python (converts to pandas DataFrame)",
"%R -o mtcars\n\nmtcars.head()\n\nggplot(aes(x='wt', y='mpg'), data=mtcars,) + geom_point()\n\nggplot(aes(x='wt', y='mpg'), data=mtcars) + geom_point() + geom_smooth(method='loess')\n\nggplot(aes(x='wt', y='mpg'), data=mtcars) + geom_point() + geom_line()\n\nggplot(aes(x='mpg'), data=mtcars) + geom_histogram(binwidth=2)\n\nggplot(aes(x='mpg'), mtcars) + \\\ngeom_line(stat=\"density\") + \\\nxlim(2.97, 41.33) + \\\nlabs(title=\"Density plot\")",
"Use ggplot in R directly with %R magic",
"cars = mtcars",
    "Note that we pass in Python variables with the -i option and use the %%R cell magic",
"%%R -i cars\nlibrary('ggplot2')\nggplot(cars, aes(x=mpg, y=am)) +\ngeom_point(position=position_jitter(width=.3, height=.08), shape=21, alpha=0.6, size=3) +\nstat_smooth(method=glm, method.args=list(family=\"binomial\"), color=\"red\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
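The colormap-sampling trick from the graphics notebook above (`plt.cm.winter(np.linspace(0, 1, 3))`) can be exercised without any display; a sketch assuming matplotlib is installed, forced onto the headless Agg backend:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: no display required
import matplotlib.pyplot as plt
import numpy as np

# sample three evenly spaced RGBA colors from the 'winter' colormap:
# an argument 0.0 <= x <= 1.0 returns the interpolated color at that point
colors = plt.cm.winter(np.linspace(0, 1, 3))
print(colors.shape)  # (3, 4): three colors, each (R, G, B, A)
```

Any registered colormap name works the same way, so the `'summer'` and `'hsv'` scatter examples above could reuse this to pull explicit color lists.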
Leguark/GeMpy
|
Prototype Notebook/Sandstone Project_legacy.ipynb
|
mit
|
[
"# Importing\nimport theano.tensor as T\nimport sys, os\nsys.path.append(\"/home/bl3/PycharmProjects/GeMpy/GeMpy\")\nsys.path.append(\"/home/bl3/PycharmProjects/pygeomod/pygeomod\")\nsys.path.append(\"/home/miguel/PycharmProjects/GeMpy/GeMpy\")\nimport GeoMig\n\nimport importlib\nimportlib.reload(GeoMig)\nimport numpy as np\nimport pandas as pn\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\n\nos.environ['CUDA_LAUNCH_BLOCKING'] = '1'\nnp.set_printoptions(precision = 6, linewidth= 130, suppress = True)\n\n#%matplotlib inline\n%matplotlib notebook",
    "Sandstone Model\nFirst we make a GeMpy instance with most of the parameters at their defaults (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on loading precompiled functions, though in our case it is not a big deal).\nGeneral note: so far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I need to look more closely at what this parameter actually means.",
"# Setting extend, grid and compile\n# Setting the extent\nsandstone = GeoMig.Interpolator(696000,747000,6863000,6950000,-20000, 2000,\n range_var = np.float32(110000),\n u_grade = 9) # Range used in geomodeller\n\n# Setting resolution of the grid\nsandstone.set_resolutions(40,40,80)\nsandstone.create_regular_grid_3D()\n\n# Compiling\nsandstone.theano_compilation_3D()",
    "Loading data from geomodeller\nThere are 3 series: 2 with a single layer each, and 1 with 2 layers. Therefore we need 3 potential fields, so let's begin.",
"sandstone.load_data_csv(\"foliations\", os.pardir+\"/input_data/a_Foliations.csv\")\nsandstone.load_data_csv(\"interfaces\", os.pardir+\"/input_data/a_Points.csv\")\npn.set_option('display.max_rows', 25)\nsandstone.Foliations;\n\nsandstone.Foliations",
"Defining Series",
"sandstone.set_series({\"EarlyGranite_Series\":sandstone.formations[-1], \n \"BIF_Series\":(sandstone.formations[0], sandstone.formations[1]),\n \"SimpleMafic_Series\":sandstone.formations[2]}, \n order = [\"EarlyGranite_Series\",\n \"BIF_Series\",\n \"SimpleMafic_Series\"]) \nsandstone.series",
"Early granite",
"sandstone.compute_potential_field(\"EarlyGranite_Series\", verbose = 1)\nsandstone.plot_potential_field_2D(direction = \"y\", cell_pos = 13, figsize=(7,6), contour_lines = 20, \n potential_field = True)\n\nsandstone.potential_interfaces;\n\n%matplotlib qt4\nblock = np.ones_like(sandstone.Z_x)\nblock[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0\nblock[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1\nblock = block.reshape(40,40,80)\n#block = np.swapaxes(block, 0, 1)\nplt.imshow(block[:,8,:].T, origin = \"bottom\", aspect = \"equal\", extent = (sandstone.xmin, sandstone.xmax,\n sandstone.zmin, sandstone.zmax),\n interpolation = \"none\")",
"BIF Series",
"sandstone.compute_potential_field(\"BIF_Series\", verbose=1)\nsandstone.plot_potential_field_2D(direction = \"y\", cell_pos = 12, figsize=(7,6), contour_lines = 100,\n potential_field = True)\n\nsandstone.potential_interfaces, sandstone.layers[0].shape;\n\n%matplotlib qt4\nblock = np.ones_like(sandstone.Z_x)\nblock[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0\nblock[(sandstone.Z_x<sandstone.potential_interfaces[0]) * (sandstone.Z_x>sandstone.potential_interfaces[-1])] = 1\nblock[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 2\nblock = block.reshape(40,40,80)\nplt.imshow(block[:,13,:].T, origin = \"bottom\", aspect = \"equal\", extent = (sandstone.xmin, sandstone.xmax,\n sandstone.zmin, sandstone.zmax),\n interpolation = \"none\")",
    "Simple mafic",
"sandstone.compute_potential_field(\"SimpleMafic_Series\", verbose = 1)\nsandstone.plot_potential_field_2D(direction = \"y\", cell_pos = 15, figsize=(7,6), contour_lines = 20,\n potential_field = True)\n\nsandstone.potential_interfaces, sandstone.layers[0].shape;\n\n%matplotlib qt4\nblock = np.ones_like(sandstone.Z_x)\nblock[sandstone.Z_x>sandstone.potential_interfaces[0]] = 0\nblock[sandstone.Z_x<sandstone.potential_interfaces[-1]] = 1\nblock = block.reshape(40,40,80)\n#block = np.swapaxes(block, 0, 1)\nplt.imshow(block[:,13,:].T, origin = \"bottom\", aspect = \"equal\", extent = (sandstone.xmin, sandstone.xmax,\n sandstone.zmin, sandstone.zmax))",
    "Optimizing the export of lithologies\nHere I am going to try to return the internal type of the result (in this case DK, I think) from the Theano interpolate function, so that I can write another Python function to decide which potential field to calculate at every grid_pos.",
"# Reset the block\nsandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))\n\n# Compute the block\nsandstone.compute_block_model([0,1,2], verbose = 1)\n\n%matplotlib qt4\n\nplot_block = sandstone.block.get_value().reshape(40,40,80)\nplt.imshow(plot_block[:,13,:].T, origin = \"bottom\", aspect = \"equal\",\n extent = (sandstone.xmin, sandstone.xmax, sandstone.zmin, sandstone.zmax), interpolation = \"none\")",
"Export vtk",
    "\"\"\"Export model to VTK\n\nExport the geology blocks to VTK for visualisation of the entire 3-D model in an\nexternal VTK viewer, e.g. Paraview.\n\n..Note:: Requires pyevtk, available for free on: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk\n\n**Optional keywords**:\n - *vtk_filename* = string : filename of VTK file (default: output_name)\n - *data* = np.array : data array to export to VTK (default: entire block model)\n\"\"\"\nvtk_filename = \"noddyFunct2\"\n\nextent_x = 10\nextent_y = 10\nextent_z = 10\n\ndelx = 0.2\ndely = 0.2\ndelz = 0.2\nfrom pyevtk.hl import gridToVTK\n# Coordinates\nx = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64')\ny = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64')\nz = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64')\n\n# self.block = np.swapaxes(self.block, 0, 2)\ngridToVTK(vtk_filename, x, y, z, cellData = {\"geology\" : sol})",
"Performance Analysis\nCPU",
"%%timeit\nsol = interpolator.geoMigueller(dips,dips_angles,azimuths,polarity, rest, ref)[0]\n\nsandstone.block_export.profile.summary()",
"GPU",
"%%timeit\n# Reset the block\nsandstone.block.set_value(np.zeros_like(sandstone.grid[:,0]))\n\n# Compute the block\nsandstone.compute_block_model([0,1,2], verbose = 0)\n\nsandstone.block_export.profile.summary()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
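The lithology-block cells above repeatedly apply the same thresholding pattern (`block[Z_x > iface] = id`); a standalone numpy sketch of that pattern, with made-up potential values and interface levels (the function name and inputs are illustrative, not GeMpy API):

```python
import numpy as np

def block_from_potential(Z, interfaces, ids):
    """Assign a formation id per cell by thresholding the potential field Z.

    interfaces must be sorted descending; len(ids) == len(interfaces) + 1.
    """
    block = np.full_like(Z, ids[-1], dtype=int)   # below the lowest interface
    block[Z > interfaces[0]] = ids[0]             # above the highest interface
    for i in range(1, len(interfaces)):
        mask = (Z <= interfaces[i - 1]) & (Z > interfaces[i])
        block[mask] = ids[i]                      # in-between bands
    return block

Z = np.array([0.9, 0.5, 0.1])
print(block_from_potential(Z, interfaces=[0.7, 0.3], ids=[0, 1, 2]))
```

With the real model you would pass `sandstone.Z_x` and `sandstone.potential_interfaces` instead of the toy arrays, then reshape to the grid resolution for plotting.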
jeroarenas/MLBigData
|
0_Introduction/Intro_PySpark_2.ipynb
|
mit
|
[
    "Analysis of a Web Server Log \nThe log files that we use for this assignment are in the Apache Common Log Format (CLF). The log file entries produced in CLF will look something like this:\n127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] \"GET /images/launch-logo.gif HTTP/1.0\" 200 1839\nEach part of this log entry is described below.\n\n\n127.0.0.1: IP address (host name, if available) of the client which made the request to the server.\n\n\n-: User identity from remote machine, not available.\n\n\n-: User identity from local logon, not available.\n\n\n[01/Aug/1995:00:00:01 -0400]: Date and time that the server finished processing the request.\n\n\nGET: The request method (e.g., GET, POST, etc.)\n\n\n/images/launch-logo.gif: The endpoint (a Uniform Resource Identifier, URI)\n\n\nHTTP/1.0: The client protocol version\n\n\n200: Status code that the server sends back to the client.\n\n\n1839: Size of the object returned to the client.\n\n\nNASA-HTTP Web Server Log: We will use a data set from the NASA Kennedy Space Center WWW server in Florida. The full data set is freely available (http://ita.ee.lbl.gov/html/contrib/NASA-HTTP.html) and contains two months of HTTP requests. We are using a subset here.\n RDD Creation \nCreate the corresponding RDD using sc.textFile(logFile) to convert each line of the file into an element in an RDD. The data is stored in the file 'data/apachelog.txt'.",
"RDDlogs = (sc\n .textFile('data/apachelog.txt') \n )\n\nNlines = RDDlogs.<COMPLETAR>()\nprint \"The log file has %d lines\" % Nlines\nprint \" \"\n\nfirst_10 = RDDlogs.<COMPLETAR>(10)\nprint \"The first 10 elements are:\"\nprint \" \"\nfor x in first_10:\n print x",
    "Let us define a function that transforms every string into a list of separated fields.\nEXERCISE: Use this function to transform the RDD so that every element is a list instead of a string. Compare the results with the previous section. We cache the RDD in memory for faster access to the data in the next sections.\nThe answer should be:\n<pre><code>\nThe first 10 elements are:\n\n['\"in24.inetnebr.com', '-', '-', '1995-08-01 00:00:01', 'GET', '/shuttle/missions/sts-68/news/sts-68-mcc-05.txt', 'HTTP/1.0', '200', '1839\"']\n['\"uplherc.upl.com', '-', '-', '1995-08-01 00:00:07', 'GET', '/', 'HTTP/1.0', '304', '0\"']\n['\"uplherc.upl.com', '-', '-', '1995-08-01 00:00:08', 'GET', '/images/ksclogo-medium.gif', 'HTTP/1.0', '304', '0\"']\n['\"uplherc.upl.com', '-', '-', '1995-08-01 00:00:08', 'GET', '/images/MOSAIC-logosmall.gif', 'HTTP/1.0', '304', '0\"']\n['\"uplherc.upl.com', '-', '-', '1995-08-01 00:00:08', 'GET', '/images/USA-logosmall.gif', 'HTTP/1.0', '304', '0\"']\n['\"ix-esc-ca2-07.ix.netcom.com', '-', '-', '1995-08-01 00:00:09', 'GET', '/images/launch-logo.gif', 'HTTP/1.0', '200', '1713\"']\n['\"uplherc.upl.com', '-', '-', '1995-08-01 00:00:10', 'GET', '/images/WORLD-logosmall.gif', 'HTTP/1.0', '304', '0\"']\n['\"slppp6.intermind.net', '-', '-', '1995-08-01 00:00:10', 'GET', '/history/skylab/skylab.html', 'HTTP/1.0', '200', '1687\"']\n['\"piweba4y.prodigy.com', '-', '-', '1995-08-01 00:00:10', 'GET', '/images/launchmedium.gif', 'HTTP/1.0', '200', '11853\"']\n['\"slppp6.intermind.net', '-', '-', '1995-08-01 00:00:11', 'GET', '/history/skylab/skylab-small.gif', 'HTTP/1.0', '200', '9202\"']\n</code></pre>",
"def str2list(x):\n return <COMPLETAR>\n\nRDDlist = (RDDlogs\n .map(lambda x: str2list(x))\n .cache()\n )\n\nfirst_10 = RDDlist.take(10)\nprint \"The first 10 elements are:\"\nprint \" \"\nfor x in first_10:\n print x",
"Unfortunately, some preprocessing errors have occurred and some of the RDD elements do not have 9 fields. We need to fix that before we can continue.\nEXERCISE: Obtain the size of every RDD element as a Python List and identify the unique values.\nThe answer should be:\n<pre><code>\nWe have extracted 977769 sizes, and the unique values are:\n[9, 10, 11, 1]\n</code></pre>",
"Element_sizes = (RDDlist\n .map(lambda x: <COMPLETAR>)\n .collect()\n )\n\nNsizes = len(Element_sizes)\nUnique_values = list(set(Element_sizes))\n\nprint \"We have extracted %d sizes, and the unique values are:\" % Nsizes\nprint Unique_values",
    "However, the code above is suboptimal. We have extracted a full list of elements and processed them in batch mode. If the list is too large, it might not even fit into memory. Let's do the same computation using only Spark commands.\nEXERCISE: Count how many lines there are of each size.\nThe answer should be:\n<pre><code>\nLengths and counts:\n[(9, 964781), (1, 12986), (10, 1), (11, 1)]\n</code></pre>",
"pairs_count = (RDDlist\n .map(lambda x: <COMPLETAR>)\n .reduceByKey(<COMPLETAR>)\n .collect()\n )\n\nprint \"Lengths and counts:\"\nprint pairs_count\n",
"Filtering the dataset\nWe want to retain only log lines with 9 fields, the other cases can be considered as storage errors.\nEXERCISE: Obtain a cleanRDD where all elements have exactly 9 fields. The size of the resulting RDD must be equal to the number of '9' values obtained in the previous section.\nThe answer should be:\n<pre><code>\nThe size of the cleaned RDD is 964781\n</code></pre>\n\nWe cache the RDD in memory for faster access to the data in the next sections.",
"cleanRDD = (RDDlist\n .filter(lambda x: <COMPLETAR>)\n .cache()\n )\n\nsize_cleanRDD = cleanRDD.count()\n\nprint \"The size of the cleaned RDD is %d\" % size_cleanRDD",
"Basic counts\nWe want to compute some counts on the log file, to answer some simple questions:\nEXERCISE: Obtain the unique values of field 4, and how many times they appear in the dataset.\nThe answer should be:\n<pre><code>\n[('HEAD', 2395), ('GET', 962318), ('POST', 68)]\n</code></pre>",
"unique_field4 = (cleanRDD\n .map(lambda x: <COMPLETAR>)\n .reduceByKey(lambda x, y: x + y)\n .collect()\n )\n\nprint unique_field4\n",
"EXERCISE: Repeat the same analysis for the status code, but print the results ordered by number of occurrences, largest first. Use the takeOrdered function. Note that by default, the takeOrdered function directly sorts the tuples. Try to pass as argument a lambda function with the sorting criterion.\nThe answer should be:\n<pre><code>\n[('200', 881815), ('304', 74774), ('404', 5835), ('302', 2293), ('403', 47), ('501', 15), ('500', 2)]\n</code></pre>",
"unique_field7 = (cleanRDD\n .<COMPLETAR>\n .cache()\n )\n\nN_elements = len(unique_field7.collect())\n \nordered_elements = unique_field7.takeOrdered(N_elements, lambda x: -x[1])\n\nprint ordered_elements\n",
"We want now to characterize the size of the contents returned (field 8).\nEXERCISE: compute the average, minimum and maximum size of the contents. You may need to print some values, to check if the format of the retrieved values is correct. Use the mean(), min() and max() functions. Also compute the mean value using reduce() and a count() instead of mean().\nThe answer should be:\n<pre><code>\nThese are the first 10 content values:\n[1839, 0, 0, 0, 0, 1713, 0, 1687, 11853, 9202]\n\nThis is the minimum value:\n0\n\nThis is the maximum value:\n3421948\n\nThis is the average value using the 'mean' function:\n17840.7790929\n\nThis is the average value WITHOUT using the 'mean' function:\n17840.7790929</code></pre>",
"content_sizesRDD = (cleanRDD\n .map(lambda x: int(x[8].replace('\"','')))\n .cache()\n )\n\nprint \"These are the first 10 content values:\"\nprint content_sizesRDD.<COMPLETAR>(10)\nprint \" \"\n\nprint \"This is the minimum value:\"\nprint content_sizesRDD.<COMPLETAR>()\nprint \" \"\n\nprint \"This is the maximum value:\"\nprint content_sizesRDD.<COMPLETAR>()\nprint \" \"\n\nprint \"This is the average value using the 'mean' function:\"\nprint content_sizesRDD.<COMPLETAR>()\nprint \" \"\n\nprint \"This is the average value WITHOUT using the 'mean' function:\"\nprint <COMPLETAR>\nprint \" \"\n",
    "Let's now obtain a list of the 10 hosts that have been accessed the most times.\nEXERCISE: Build a new filtered_host_RDD containing only the 10 hosts that have been accessed the most times. Verify the numbers by computing the frequencies in the new RDD.\nThe answer should be:\n<pre><code>\nThis is the sorted count of the 10 most frequent hosts:\n\nedams.ksc.nasa.gov : 3737\npiweba5y.prodigy.com : 3067\npiweba4y.prodigy.com : 2690\npiweba3y.prodigy.com : 2658\nwww-d1.proxy.aol.com : 2591\nnews.ti.com : 2358\n163.206.89.4 : 2317\nwww-b2.proxy.aol.com : 2289\nwww-b3.proxy.aol.com : 2254\nwww-d2.proxy.aol.com : 2229\n</code></pre>",
"hostcountRDD = (cleanRDD\n .map(lambda x: <COMPLETAR>)\n .reduceByKey(lambda x, y: x + y)\n .cache()\n )\n\n\npairs_of_ten_most_frequent_hosts = hostcountRDD.<COMPLETAR>\n\nlist_of_ten_most_frequent_hosts = [x[0].replace('\"','') for x in pairs_of_ten_most_frequent_hosts]\n\nfiltered_host_RDD = (cleanRDD\n .filter(lambda x: x[0].replace('\"','') in list_of_ten_most_frequent_hosts)\n )\n\nhost_freq_RDD = (filtered_host_RDD\n .map(lambda x: <COMPLETAR>)\n .reduceByKey(<COMPLETAR>)\n .cache()\n )\n\nten_most_frequent_hosts_count = (host_freq_RDD\n .takeOrdered(20, lambda x: -x[1])\n )\n\nprint \"These are the counts for the 10 most frequent hosts:\"\nprint \" \"\nfor x in ten_most_frequent_hosts_count:\n print x[0].replace('\"','') + \" : \" + str(x[1])\nprint \" \"",
    "EXERCISE: Determine the number of unique hosts on a daily basis (days of the month). Obtain a list of tuples (day of the month, number of unique hosts on that day of the month). Explore the use of the function groupByKey. You may also want to define a function that takes a date field and returns the day of the month. We will plot the result at the end; do not forget to obtain the results sorted by day, or the plot will be a mess. Can you identify the weekends in the plot?\nThe answer should be:\n<pre><code>\n[(1, 33668), (3, 40828), (4, 58822), (5, 31446), (6, 31957), (7, 56672), (8, 59367), (9, 59708), (10, 60458), (11, 60503), (12, 37331), (13, 35840), (14, 59091), (15, 58029), (16, 55965), (17, 58182), (18, 55508), (19, 31615), (20, 32546), (21, 47245)]\n</code></pre>",
"def get_day(date_string):\n day = <COMPLETAR>\n return day\n\npairs_day_host_RDD = (cleanRDD\n .map(lambda x: <COMPLETAR>)\n )\n\npairs_day_host_RDD.take(3)\n\nuniquehosts = (pairs_day_host_RDD\n .<COMPLETAR>\n .map(lambda x: <COMPLETAR>)\n .takeOrdered(30, lambda x: x[0])\n )\n\nprint uniquehosts\n\ndays = <COMPLETAR>\nhosts = <COMPLETAR>\n\nfrom matplotlib import pyplot as plt\nfig = plt.figure(figsize=(8,4.5), facecolor='white', edgecolor='white')\nplt.axis([min(days), max(days), 0, max(hosts)+500])\nplt.grid(b=True, which='major', axis='y')\nplt.xlabel('Day')\nplt.ylabel('Hosts')\nplt.plot(days, hosts)\npass\n\n#print get_day(pairs_day_host_RDD.take(1)[0][0])\n",
    "EXERCISE: Obtain the top ten 404 Response URIs, that is, the contents that most often failed with a 404 error.\nThe answer should be:\n<pre><code>\nThese are the URIs that produced more 404 errors:\n\n588 /pub/winvn/readme.txt\n457 /pub/winvn/release.txt\n411 /shuttle/missions/STS-69/mission-STS-69.html\n319 /images/nasa-logo.gif\n168 /elv/DELTA/uncons.htm\n147 /shuttle/missions/sts-68/ksc-upclose.gif\n144 /history/apollo/sa-1/sa-1-patch-small.gif\n116 /images/crawlerway-logo.gif\n114 /://spacelink.msfc.nasa.gov\n91 /history/apollo/a-001/a-001-patch-small.gif\n</code></pre>",
"URIs_404 = (cleanRDD\n .filter(<COMPLETAR>)\n .map(lambda x: <COMPLETAR>)\n .reduceByKey(lambda x, y: x + y)\n .takeOrdered(10, lambda x:<COMPLETAR>)\n )\n\nprint \"These are the URIs that produced more 404 errors:\"\nprint \" \"\nfor u in URIs_404:\n print u[1], 2*'\\t', u[0]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
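The exercises above intentionally leave `<COMPLETAR>` blanks; to illustrate the underlying logic without giving the Spark answers away, here is the same filter / map / reduceByKey counting pattern expressed in plain Python (a local sketch over a toy log, not Spark code):

```python
from collections import Counter
from functools import reduce

log_lines = [
    ["host1", "-", "-", "date", "GET", "/a", "HTTP/1.0", "200", "10"],
    ["host2", "-", "-", "date", "GET", "/b", "HTTP/1.0", "404", "0"],
    ["host1", "-", "-", "date", "GET", "/a", "HTTP/1.0", "200", "5"],
]

# equivalent of .filter(lambda x: len(x) == 9) for cleaning
clean = [x for x in log_lines if len(x) == 9]

# equivalent of .map(lambda x: (x[7], 1)).reduceByKey(lambda a, b: a + b)
status_counts = Counter(x[7] for x in clean)

# mean content size without a mean() primitive: reduce + count
total = reduce(lambda a, b: a + b, (int(x[8]) for x in clean))
mean_size = total / len(clean)
```

In Spark the same shapes apply to RDDs, with `collect`/`take` pulling results back to the driver only at the end.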
t-vi/pytorch-tvmisc
|
german_lm/German_LM_prepare.ipynb
|
mit
|
[
    "Preparing German Wikipedia to train a fast.ai (ULMFiT) model for German\n(should work with most other languages, too)\nThomas Viehmann tv@lernapparat.de\nThe core idea of Howard and Ruder's ULMFiT paper, see also https://nlp.fast.ai/, is to pretrain a language model on some corpus.\nNaturally, we also want such a thing for German. And happily I just launched MathInf, a great mathematical modelling, machine learning and actuarial consulting company, that allows me to do this type of research and make it public.\nI have very raw info (and hope to add more description soon) on my blog. I'm making this available early at public request and hope it is useful to you to build great things; it is not as clean or well-commented as I would love it to be, yet.\nI would love to hear from you if you make good use of it!\nSo we take a wikipedia dump (de_wikipedia_extracted dewiki-latest-pages-articles.xml.bz2 downloaded from dumps.wikipedia.org and preprocessed by wikiextractor/WikiExtractor.py -s --json -o de_wikipedia_extracted dewiki-latest-pages-articles.xml.bz2) and make token files out of them.\nNote that the German Wikipedia contains more tokens (i.e. words) than the recommended 100M to train the language model.\nI don't cut off much here, but only do this later when loading the tokens to start the training. That is a bit wasteful and follows a \"keep as much data as long as you can\" approach.\nCredit for all the good things in the Notebook likely belongs to Sylvain Gugger (see his notebook) and Jeremy Howard (see the original imdb notebook from his great course), whose work I built on; all errors are my own.\nEnough talk, here is the data preparation.",
"%matplotlib inline\n%reload_ext autoreload\n%autoreload 2\n\nfrom fastai.text import *\nimport html\nfrom matplotlib import pyplot\nimport numpy\nimport time\n\nBOS = 'xbos' # beginning-of-sentence tag\nFLD = 'xfld' # data field tag\n\nLANG='de'\ndatasetpath = Path('/home/datasets/nlp/wiki/')\n# I ran this: wikiextractor/WikiExtractor.py -s --json -o de_wikipedia_extracted dewiki-latest-pages-articles.xml.bz2 \nwork_path = Path('~/data/nlp/german_lm/data/de_wiki/tmp/').expanduser()\nwork_path.mkdir(exist_ok=True)",
    "Standardize format\nYou can skip this entire section if you like the results. In this case continue at Tokenize.",
"LANG_FILENAMES = [str(f) for f in datasetpath.rglob(\"de_wikipedia_extracted/*/*\")]\n\nlen(LANG_FILENAMES), LANG_FILENAMES[:5]\n\nLANG_TEXT = []\nfor fn in tqdm(LANG_FILENAMES):\n for line in open(fn, encoding='utf8'):\n LANG_TEXT.append(json.loads(line))\n \nLANG_TEXT = pd.DataFrame(LANG_TEXT)\n\nLANG_TEXT.head()\n\n# Getting rid of the title name in the text field\ndef split_title_from_text(text):\n words = text.split(\"\\n\\n\", 1)\n if len(words) == 2:\n return words[1]\n else:\n return words[0]\n \nLANG_TEXT['text'] = LANG_TEXT['text'].apply(lambda x: split_title_from_text(x))\n\nLANG_TEXT.head()",
    "Determine article lengths; keep at most the largest million articles, and only those with at least 2000 characters",
"LANG_TEXT['label'] = 0 # dummy\nLANG_TEXT['length'] = LANG_TEXT['text'].str.len()\n\nMAX_ARTICLES = 1_000_000\n# keep at most 1 million articles and only those of more than 2000 characters\nMIN_LENGTH_CHARS = max(2000, int(numpy.percentile(LANG_TEXT['length'], 100-min(100*MAX_ARTICLES/len(LANG_TEXT), 100))))\nLANG_TEXT = LANG_TEXT[LANG_TEXT['length'] >= MIN_LENGTH_CHARS] # Chars not words...\n\n\nLANG_TEXT.to_csv(datasetpath/'wiki_de.csv', header=True, index=False) # I must say, I think the header is good! If in doubt, you should listen to Jeremy though.\n\nLANG_TEXT = pd.read_csv(datasetpath/'wiki_de.csv')\n\npercentages = range(0,110,10)\nprint ('Article length percentiles' , ', '.join(['{}%: {}'.format(p, int(q)) for p,q in zip(percentages, numpy.percentile(LANG_TEXT['length'], percentages))]))\nprint ('Number of articles', len(LANG_TEXT))\n\n#LANG_TEXT = LANG_TEXT.sort_values(by=['length'], ascending=False)\nLANG_TEXT.head()",
"Splitting 10% for validation.",
"df_trn,df_val = sklearn.model_selection.train_test_split(LANG_TEXT.pipe(lambda x: x[['label', 'text']]), test_size=0.1)\n\ndf_trn.to_csv(work_path/'train.csv', header=False, index=False)\ndf_val.to_csv(work_path/'valid.csv', header=False, index=False)",
"I'm always trying to produce notebooks that you can run through in one go, so here is my attempt at getting rid of old stuff.",
"del LANG_TEXT\nimport gc\ngc.collect()",
"Tokenize\nNote: be sure to care for your memory. I had all my memory allocated (for having several wikipedia copies in memory) and was swapping massively with the multiprocessing tokenization. My fix was to restart the notebook after after I had finished the above.",
"chunksize = 4000\nN_CPUS = num_cpus() # I like to use all cores here, needs a patch to fast ai\n\nre1 = re.compile(r' +')\n\ndef fixup(x):\n x = x.replace('#39;', \"'\").replace('amp;', '&').replace('#146;', \"'\").replace(\n 'nbsp;', ' ').replace('#36;', '$').replace('\\\\n', \"\\n\").replace('quot;', \"'\").replace(\n '<br />', \"\\n\").replace('\\\\\"', '\"').replace('<unk>','u_n').replace(' @.@ ','.').replace(\n ' @-@ ','-').replace('\\\\', ' \\\\ ')\n return re1.sub(' ', html.unescape(x))\n\ndf_trn = pd.read_csv(work_path/'train.csv', header=None, chunksize=chunksize)\ndf_val = pd.read_csv(work_path/'valid.csv', header=None, chunksize=chunksize)\n\ndef get_texts(df, n_lbls=1):\n labels = df.iloc[:,range(n_lbls)].values.astype(np.int64)\n texts = f'\\n{BOS} {FLD} 1 ' + df[n_lbls].astype(str)\n for i in range(n_lbls+1, len(df.columns)): texts += f' {FLD} {i-n_lbls} ' + df[i].astype(str)\n texts = texts.apply(fixup).values.astype(str)\n #tok = Tokenizer.proc_all(texts, lang=LANG) # use this if you have memory trouble\n tok = Tokenizer.proc_all_mp(partition(texts, (len(texts)+N_CPUS-1)//N_CPUS), lang=LANG, ncpus=N_CPUS)\n return tok, list(labels)\n\ndef get_all(df, name, n_lbls=1):\n time_start = time.time()\n for i, r in enumerate(df):\n print(\"\\r\", i, end=\" \")\n if i > 0:\n print ('time per chunk {}s'.format(int((time.time() - time_start) / i)), end=\"\")\n tok_, labels_ = get_texts(r, n_lbls)\n #save the partial tokens instead of regrouping them in one big array.\n np.save(work_path/f'{name}_tok{i}.npy', tok_)\n\n\nget_all(df_trn,'trn',1)\n\nget_all(df_val,'val',1)",
"Numericalize\nGet the Counter object from all the splitted files.",
"def count_them_all(names):\n cnt = Counter()\n for name in names:\n for file in work_path.glob(f'{name}_tok*'):\n tok = np.load(file)\n cnt_tok = Counter(word for sent in tok for word in sent)\n cnt += cnt_tok\n return cnt\n\ncnt = count_them_all(['trn'])\n\ncnt.most_common(25)\n\nmax_vocab = 60000\nmin_freq = 5\n\nitos = [o for o,c in cnt.most_common(max_vocab) if c > min_freq]\nitos.insert(0,'_pad_')\nitos.insert(0,'_unk_')\n\nlen(itos)\npickle.dump(itos, open(work_path/'itos.pkl', 'wb'))\n\nstoi = collections.defaultdict(int,{s:i for (i,s) in enumerate(itos)})",
"Numericalize each partial file.",
"def numericalize(name):\n results = []\n for file in tqdm(work_path.glob(f'{name}_tok*')):\n tok = np.load(file)\n results.append(np.array([[stoi[word] for word in sent] for sent in tok]))\n return np.concatenate(results)\n\ntrn_ids = numericalize('trn')\nnp.save(work_path/'trn_ids.npy', trn_ids)\n\nval_ids = numericalize('val')\nnp.save(work_path/'val_ids.npy', val_ids)",
"So now you have gread dumps to use with the training program I published on my blog.\nAs always, I would be honored by your feedback at tv@lernapparat.de. I read and appreciate every mail.\nThomas"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
asydorchuk/ml
|
classes/cs231n/assignment2/BatchNormalization.ipynb
|
mit
|
[
"Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. 
A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.",
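The training-time computation just described can be sketched in a few lines of NumPy. This is only an illustrative sketch: the assignment's `batchnorm_forward` must additionally maintain running averages of the mean and variance for use at test time, which are omitted here.

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization sketch: normalize each feature
    using minibatch statistics, then apply the learnable scale and shift."""
    mu = x.mean(axis=0)                    # per-feature minibatch mean
    var = x.var(axis=0)                    # per-feature minibatch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learnable scale/shift

# With gamma=1 and beta=0 the outputs have ~zero mean and ~unit std.
x = 5.0 * np.random.randn(200, 3) + 12.0
out = batchnorm_forward_sketch(x, np.ones(3), np.zeros(3))
```

Note that the eps term only guards against division by zero for features with vanishing variance.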
"# As usual, a bit of setup\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape",
"Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.",
"# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint 'Before batch normalization:'\nprint ' means: ', a.mean(axis=0)\nprint ' stds: ', a.std(axis=0)\n\n# Means should be close to zero and stds close to one\nprint 'After batch normalization (gamma=1, beta=0)'\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint ' mean: ', a_norm.mean(axis=0)\nprint ' std: ', a_norm.std(axis=0)\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint 'After batch normalization (nontrivial gamma, beta)'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in xrange(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint 'After batch normalization (test-time):'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)\n\nfrom cs231n.layers import 
mean_forward, mean_backward\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ndout = np.random.randn(N, D) + 3\n\nout, _ = mean_forward(x)\nprint 'Mean: ', out.mean(axis=0)\n\nfx = lambda x: mean_forward(x)[0]\ndx_num = eval_numerical_gradient_array(fx, x, dout)\ndx = mean_backward(dout, None)\nprint 'dx error: ', rel_error(dx_num, dx)\n\nfrom cs231n.layers import var_forward, var_backward\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D)\nx -= x.mean(axis=0)\ndout = np.random.randn(N, D)\n\nout, cache = var_forward(x, 1e-5)\nprint 'Var: ', out.var(axis=0)\n\nfx = lambda x: var_forward(x, 1e-5)[0]\ndx_num = eval_numerical_gradient_array(fx, x, dout)\ndx = var_backward(dout, cache)\nprint 'dx error: ', rel_error(dx_num, dx)",
"Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.",
"# Gradient check batchnorm backward pass\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dgamma error: ', rel_error(da_num, dgamma)\nprint 'dbeta error: ', rel_error(db_num, dbeta)",
"Batch Normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.",
"N, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint 'dx difference: ', rel_error(dx1, dx2)\nprint 'dgamma difference: ', rel_error(dgamma1, dgamma2)\nprint 'dbeta difference: ', rel_error(dbeta1, dbeta2)\nprint 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))",
"Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.",
"N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n if reg == 0: print",
"Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.",
"# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()",
"Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.",
"plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.",
"# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', 
label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()",
"Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:\nBatch normalization makes the network to converge faster disregarding the initial initialization of weights, which is expected. Without batch normalization, one needs to be extremely cautions while choosing initial weights, otherwise the NN might not converge at all."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
longjon/caffe
|
examples/02-brewing-logreg.ipynb
|
bsd-2-clause
|
[
"Brewing Logistic Regression then Going Deeper\nWhile Caffe is made for deep networks it can likewise represent \"shallow\" models like logistic regression for classification. We'll do simple logistic regression on synthetic data that we'll generate and save to HDF5 to feed vectors to Caffe. Once that model is done, we'll add layers to improve accuracy. That's what Caffe is about: define a model, experiment, and then deploy.",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport os\nos.chdir('..')\n\nimport sys\nsys.path.insert(0, './python')\nimport caffe\n\n\nimport os\nimport h5py\nimport shutil\nimport tempfile\n\nimport sklearn\nimport sklearn.datasets\nimport sklearn.linear_model\n\nimport pandas as pd",
"Synthesize a dataset of 10,000 4-vectors for binary classification with 2 informative features and 2 noise features.",
"X, y = sklearn.datasets.make_classification(\n n_samples=10000, n_features=4, n_redundant=0, n_informative=2, \n n_clusters_per_class=2, hypercube=False, random_state=0\n)\n\n# Split into train and test\nX, Xt, y, yt = sklearn.cross_validation.train_test_split(X, y)\n\n# Visualize sample of the data\nind = np.random.permutation(X.shape[0])[:1000]\ndf = pd.DataFrame(X[ind])\n_ = pd.scatter_matrix(df, figsize=(9, 9), diagonal='kde', marker='o', s=40, alpha=.4, c=y[ind])",
"Learn and evaluate scikit-learn's logistic regression with stochastic gradient descent (SGD) training. Time and check the classifier's accuracy.",
"%%timeit\n# Train and test the scikit-learn SGD logistic regression.\nclf = sklearn.linear_model.SGDClassifier(\n loss='log', n_iter=1000, penalty='l2', alpha=1e-3, class_weight='auto')\n\nclf.fit(X, y)\nyt_pred = clf.predict(Xt)\nprint('Accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(yt, yt_pred)))",
"Save the dataset to HDF5 for loading in Caffe.",
"# Write out the data to HDF5 files in a temp directory.\n# This file is assumed to be caffe_root/examples/hdf5_classification.ipynb\ndirname = os.path.abspath('./examples/hdf5_classification/data')\nif not os.path.exists(dirname):\n os.makedirs(dirname)\n\ntrain_filename = os.path.join(dirname, 'train.h5')\ntest_filename = os.path.join(dirname, 'test.h5')\n\n# HDF5DataLayer source should be a file containing a list of HDF5 filenames.\n# To show this off, we'll list the same data file twice.\nwith h5py.File(train_filename, 'w') as f:\n f['data'] = X\n f['label'] = y.astype(np.float32)\nwith open(os.path.join(dirname, 'train.txt'), 'w') as f:\n f.write(train_filename + '\\n')\n f.write(train_filename + '\\n')\n \n# HDF5 is pretty efficient, but can be further compressed.\ncomp_kwargs = {'compression': 'gzip', 'compression_opts': 1}\nwith h5py.File(test_filename, 'w') as f:\n f.create_dataset('data', data=Xt, **comp_kwargs)\n f.create_dataset('label', data=yt.astype(np.float32), **comp_kwargs)\nwith open(os.path.join(dirname, 'test.txt'), 'w') as f:\n f.write(test_filename + '\\n')",
"Let's define logistic regression in Caffe through Python net specification. This is a quick and natural way to define nets that sidesteps manually editing the protobuf model.",
"from caffe import layers as L\nfrom caffe import params as P\n\ndef logreg(hdf5, batch_size):\n # logistic regression: data, matrix multiplication, and 2-class softmax loss\n n = caffe.NetSpec()\n n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)\n n.ip1 = L.InnerProduct(n.data, num_output=2, weight_filler=dict(type='xavier'))\n n.accuracy = L.Accuracy(n.ip1, n.label)\n n.loss = L.SoftmaxWithLoss(n.ip1, n.label)\n return n.to_proto()\n \nwith open('examples/hdf5_classification/logreg_auto_train.prototxt', 'w') as f:\n f.write(str(logreg('examples/hdf5_classification/data/train.txt', 10)))\n \nwith open('examples/hdf5_classification/logreg_auto_test.prototxt', 'w') as f:\n f.write(str(logreg('examples/hdf5_classification/data/test.txt', 10)))",
"Time to learn and evaluate our Caffeinated logistic regression in Python.",
"%%timeit\ncaffe.set_mode_cpu()\nsolver = caffe.get_solver('examples/hdf5_classification/solver.prototxt')\nsolver.solve()\n\naccuracy = 0\nbatch_size = solver.test_nets[0].blobs['data'].num\ntest_iters = int(len(Xt) / batch_size)\nfor i in range(test_iters):\n solver.test_nets[0].forward()\n accuracy += solver.test_nets[0].blobs['accuracy'].data\naccuracy /= test_iters\n\nprint(\"Accuracy: {:.3f}\".format(accuracy))",
"Do the same through the command line interface for detailed output on the model and solving.",
"!./build/tools/caffe train -solver examples/hdf5_classification/solver.prototxt",
"If you look at output or the logreg_auto_train.prototxt, you'll see that the model is simple logistic regression.\nWe can make it a little more advanced by introducing a non-linearity between weights that take the input and weights that give the output -- now we have a two-layer network.\nThat network is given in nonlinear_auto_train.prototxt, and that's the only change made in nonlinear_solver.prototxt which we will now use.\nThe final accuracy of the new network should be higher than logistic regression!",
"from caffe import layers as L\nfrom caffe import params as P\n\ndef nonlinear_net(hdf5, batch_size):\n # one small nonlinearity, one leap for model kind\n n = caffe.NetSpec()\n n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)\n # define a hidden layer of dimension 40\n n.ip1 = L.InnerProduct(n.data, num_output=40, weight_filler=dict(type='xavier'))\n # transform the output through the ReLU (rectified linear) non-linearity\n n.relu1 = L.ReLU(n.ip1, in_place=True)\n # score the (now non-linear) features\n n.ip2 = L.InnerProduct(n.ip1, num_output=2, weight_filler=dict(type='xavier'))\n # same accuracy and loss as before\n n.accuracy = L.Accuracy(n.ip2, n.label)\n n.loss = L.SoftmaxWithLoss(n.ip2, n.label)\n return n.to_proto()\n \nwith open('examples/hdf5_classification/nonlinear_auto_train.prototxt', 'w') as f:\n f.write(str(nonlinear_net('examples/hdf5_classification/data/train.txt', 10)))\n \nwith open('examples/hdf5_classification/nonlinear_auto_test.prototxt', 'w') as f:\n f.write(str(nonlinear_net('examples/hdf5_classification/data/test.txt', 10)))\n\n%%timeit\ncaffe.set_mode_cpu()\nsolver = caffe.get_solver('examples/hdf5_classification/nonlinear_solver.prototxt')\nsolver.solve()\n\naccuracy = 0\nbatch_size = solver.test_nets[0].blobs['data'].num\ntest_iters = int(len(Xt) / batch_size)\nfor i in range(test_iters):\n solver.test_nets[0].forward()\n accuracy += solver.test_nets[0].blobs['accuracy'].data\naccuracy /= test_iters\n\nprint(\"Accuracy: {:.3f}\".format(accuracy))",
"Do the same through the command line interface for detailed output on the model and solving.",
"!./build/tools/caffe train -solver examples/hdf5_classification/nonlinear_solver.prototxt\n\n# Clean up (comment this out if you want to examine the hdf5_classification/data directory).\nshutil.rmtree(dirname)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
widdowquinn/notebooks
|
Biopython_get_GenBank_genomes.ipynb
|
mit
|
[
"Retrieving all GenBank genome sequences for a bacterial genus\nSome tasks we are interested in need to be repeated for a set of genomes, whenever new isolate sequences become available. This notebook will take us through the process of collecting all genomes from GenBank that belong to a named genus.\nImporting Biopython\nWe will use Biopython's interface to Entrez, the interface to NCBI's online resources. There are some guidelines to its use that need to be followed, so you are not banned by NCBI:\n\nFor any series of more than 100 requests, conduct your work outwith US peak times.\nMake no more than three queries each second (enforced by Biopython).\nSet your Entrez.email parameter correctly.\nSet your Entrez.tool parameter correctly.\nUse session histories where appropriate.\n\nWe start by importing packages, including Biopython's Entrez module, and setting the appropriate parameters.",
"import os\nfrom Bio import Entrez, SeqIO\n\nEntrez.email = \"\" # Use your own real email\nEntrez.tool = \"Biopython_get_GenBank_genomes.ipynb\"",
"Searching the database\nWe're using Bio.Entrez.esearch() with the genome database to look for our search term. In this case, it's Pectobacterium, a genus of plant pathogenic bacteria.\nWe know we can look in the genome database, because we checked by hand at http://www.ncbi.nlm.nih.gov/genome, and because it's one of the databases named if we look with Entrez.einfo().",
"genus = \"Pectobacterium\"\nquery_text = \"{0} AND bacteria[Organism]\".format(genus)\nhandle = Entrez.esearch(db='genome', term=query_text)\nrecord = Entrez.read(handle)",
"We can get the number of returned records by looking at record[\"Count\"]:",
"record[\"Count\"]",
"But what are our records? We can see their GenBank identifiers by looking at record[\"IdList\"]:",
"record[\"IdList\"]",
"But this isn't immediately informative. We're going to have to look at the assemblies associated with these identifiers in GenBank. We do this with Entrez.elink(), searching for associations between the genome database and the assembly database, compiling all the resulting Link UIDs in a single list.\nWhat are the links we're allowed? Well, there's a big list at http://www.ncbi.nlm.nih.gov/corehtml/query/static/entrezlinks.html, but we can also inspect NCBI's web interface directly to see that http://www.ncbi.nlm.nih.gov/assembly is the likely prefix/database we're looking for.",
"asm_links = []\nfor uid in record[\"IdList\"]:\n links = Entrez.read(Entrez.elink(dbfrom=\"genome\", db=\"assembly\", retmode=\"text\", from_uid=uid))\n [asm_links.append(d.values()[0]) for d in links[0]['LinkSetDb'][0]['Link']]\nprint(\"We find {0} genome entries: {1}\".format(len(asm_links), asm_links))",
"Now we can recover links to the nucleotide database for each of these UIDs. There may be several such links, but as we are looking for the full assembly, we care only about the assembly_nuccore_insdc sequences, which are the contigs.\nWe collect these into a dictionary of contig UIDs, keyed by assembly UID, called sequid_links:",
"sequid_links = {}\nfor uid in asm_links:\n links = Entrez.read(Entrez.elink(dbfrom=\"assembly\", db=\"nucleotide\", retmode=\"gb\", from_uid=uid))\n contigs = [l for l in links[0]['LinkSetDb'] if l['LinkName'] == 'assembly_nuccore_insdc'][0]\n sequid_links[uid] = [e['Id'] for e in contigs['Link']]\nexpected_contigs = {}\nprint(\"There are {0} genomes identified for {1}:\".format(len(sequid_links), genus))\nfor k, v in sorted(sequid_links.items()):\n print(\"Assembly UID {0}: {1} contigs\".format(k, len(v)))\n expected_contigs[k] = len(v)",
"Once we have these nucleotide database identifiers, we can grab all the sequences and write them out as multi-FASTA files, with Entrez.efetch(). The assembly records themselves though, have to be obtained with Entrez.esummary(), and then a byzantine set of keywords navigated, to get the information we're interested in.\nWe use the assembly UID without version number as the filename, and write a labels.txt file suitable for use with pyani.",
"# Make sure there's a relevant output directory\nif not os.path.exists(genus):\n    os.mkdir(genus)\nif not os.path.exists(\"failures\"):\n    os.mkdir(\"failures\")\n\n# Write output\nwith open(os.path.join(genus, 'labels.txt'), 'w') as lfh:\n    with open(os.path.join(genus, 'classes.txt'), 'w') as cfh:\n        for asm_uid, contigs in sorted(sequid_links.items()):\n            # Get assembly record information\n            asm_record = Entrez.read(Entrez.esummary(db='assembly', id=asm_uid, rettype='text'))\n            asm_organism = asm_record['DocumentSummarySet']['DocumentSummary'][0]['SpeciesName']\n            try:\n                asm_strain = asm_record['DocumentSummarySet']['DocumentSummary'][0]['Biosource']['InfraspeciesList'][0]['Sub_value']\n            except:\n                asm_strain = \"\"\n            gname = asm_record['DocumentSummarySet']['DocumentSummary'][0]['AssemblyAccession'].split('.')[0]\n            filestem = os.path.join(genus, gname)\n            # Write a labels.txt and a classes.txt file suitable for pyani\n            glab, species = asm_organism.split(' ', 1)\n            glab = glab[0]\n            labelstr = \"{0}\\t{1}. {2} {3}\".format(gname, glab, species, asm_strain)\n            print >> lfh, labelstr\n            print >> cfh, \"{0}\\t{1}\".format(gname, asm_organism)\n            print(labelstr)\n            # Get FASTA records for each of the contigs (we could do this with GenBank instead,\n            # but sometimes these are not formatted correctly with sequences)\n            query_uids = ','.join(contigs)\n            tries, success = 0, False\n            while success == False and tries < 20:\n                # Also check for total sequence length errors?\n                try:\n                    print(\"UID:{0} download attempt {1}\".format(asm_uid, tries + 1))\n                    records = list(SeqIO.parse(Entrez.efetch(db='nucleotide', id=query_uids,\n                                                             rettype=\"fasta\", retmode='text'),\n                                               'fasta'))\n                    if len(records) == expected_contigs[asm_uid]:  # No exceptions, num records = expected\n                        success = True\n                    else:  # No exceptions, but not all contigs\n                        print('{0} records downloaded, expected {1}'.format(len(records),\n                                                                            expected_contigs[asm_uid]))\n                        SeqIO.write(records, os.path.join(\"failures\",\n                                                          \"{0}_{1}_failed.fasta\".format(asm_uid, tries)),\n                                    'fasta')\n                        tries += 1\n                except:  # Catch any errors, incl. from SeqIO.parse and Entrez.efetch\n                    tries += 1\n            if tries >= 10:\n                print(\"Download failed for {0}\\n\".format(labelstr))\n            print(\"UID:{0} has {1} records, total length {2}\\n\".format(asm_uid, len(records),\n                                                                       sum([len(r) for r in records])))\n            SeqIO.write(records, \"{0}.fasta\".format(filestem), 'fasta')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
liumengjun/cn-deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
[
"图像分类\n在此项目中,你将对 CIFAR-10 数据集 中的图片进行分类。该数据集包含飞机、猫狗和其他物体。你需要预处理这些图片,然后用所有样本训练一个卷积神经网络。图片需要标准化(normalized),标签需要采用 one-hot 编码。你需要应用所学的知识构建卷积的、最大池化(max pooling)、丢弃(dropout)和完全连接(fully connected)的层。最后,你需要在样本图片上看到神经网络的预测结果。\n获取数据\n请运行以下单元,以下载 CIFAR-10 数据集(Python版)。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"探索数据\n该数据集分成了几部分/批次(batches),以免你的机器在计算时内存不足。CIFAR-10 数据集包含 5 个部分,名称分别为 data_batch_1、data_batch_2,以此类推。每个部分都包含以下某个类别的标签和图片:\n\n飞机\n汽车\n鸟类\n猫\n鹿\n狗\n青蛙\n马\n船只\n卡车\n\n了解数据集也是对数据进行预测的必经步骤。你可以通过更改 batch_id 和 sample_id 探索下面的代码单元。batch_id 是数据集一个部分的 ID(1 到 5)。sample_id 是该部分中图片和标签对(label pair)的 ID。\n问问你自己:“可能的标签有哪些?”、“图片数据的值范围是多少?”、“标签是按顺序排列,还是随机排列的?”。思考类似的问题,有助于你预处理数据,并使预测结果更准确。",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"实现预处理函数\n标准化\n在下面的单元中,实现 normalize 函数,传入图片数据 x,并返回标准化 Numpy 数组。值应该在 0 到 1 的范围内(含 0 和 1)。返回对象应该和 x 的形状一样。",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n return x/255\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot 编码\n和之前的代码单元一样,你将为预处理实现一个函数。这次,你将实现 one_hot_encode 函数。输入,也就是 x,是一个标签列表。实现该函数,以返回为 one_hot 编码的 Numpy 数组的标签列表。标签的可能值为 0 到 9。每次调用 one_hot_encode 时,对于每个值,one_hot 编码函数应该返回相同的编码。确保将编码映射保存到该函数外面。\n提示:不要重复发明轮子。",
"def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n return np.eye(10)[x]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"随机化数据\n之前探索数据时,你已经了解到,样本的顺序是随机的。再随机化一次也不会有什么关系,但是对于这个数据集没有必要。\n预处理所有数据并保存\n运行下方的代码单元,将预处理所有 CIFAR-10 数据,并保存到文件中。下面的代码还使用了 10% 的训练数据,用来验证。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"检查点\n这是你的第一个检查点。如果你什么时候决定再回到该记事本,或需要重新启动该记事本,你可以从这里开始。预处理的数据已保存到本地。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"构建网络\n对于该神经网络,你需要将每层都构建为一个函数。你看到的大部分代码都位于函数外面。要更全面地测试你的代码,我们需要你将每层放入一个函数中。这样使我们能够提供更好的反馈,并使用我们的统一测试检测简单的错误,然后再提交项目。\n\n注意:如果你觉得每周很难抽出足够的时间学习这门课程,我们为此项目提供了一个小捷径。对于接下来的几个问题,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 程序包中的类来构建每个层级,但是“卷积和最大池化层级”部分的层级除外。TF Layers 和 Keras 及 TFLearn 层级类似,因此很容易学会。\n但是,如果你想充分利用这门课程,请尝试自己解决所有问题,不使用 TF Layers 程序包中的任何类。你依然可以使用其他程序包中的类,这些类和你在 TF Layers 中的类名称是一样的!例如,你可以使用 TF Neural Network 版本的 conv2d 类 tf.nn.conv2d,而不是 TF Layers 版本的 conv2d 类 tf.layers.conv2d。\n\n我们开始吧!\n输入\n神经网络需要读取图片数据、one-hot 编码标签和丢弃保留概率(dropout keep probability)。请实现以下函数:\n\n实现 neural_net_image_input\n返回 TF Placeholder\n使用 image_shape 设置形状,部分大小设为 None\n使用 TF Placeholder 中的 TensorFlow name 参数对 TensorFlow 占位符 \"x\" 命名\n实现 neural_net_label_input\n返回 TF Placeholder\n使用 n_classes 设置形状,部分大小设为 None\n使用 TF Placeholder 中的 TensorFlow name 参数对 TensorFlow 占位符 \"y\" 命名\n实现 neural_net_keep_prob_input\n返回 TF Placeholder,用于丢弃保留概率\n使用 TF Placeholder 中的 TensorFlow name 参数对 TensorFlow 占位符 \"keep_prob\" 命名\n\n这些名称将在项目结束时,用于加载保存的模型。\n注意:TensorFlow 中的 None 表示形状可以是动态大小。",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, (None, *image_shape), name='x')\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, (None, n_classes), name='y')\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, name='keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"卷积和最大池化层\n卷积层级适合处理图片。对于此代码单元,你应该实现函数 conv2d_maxpool 以便应用卷积然后进行最大池化:\n\n使用 conv_ksize、conv_num_outputs 和 x_tensor 的形状创建权重(weight)和偏置(bias)。\n使用权重和 conv_strides 对 x_tensor 应用卷积。\n建议使用我们建议的间距(padding),当然也可以使用任何其他间距。\n添加偏置\n向卷积中添加非线性激活(nonlinear activation)\n使用 pool_ksize 和 pool_strides 应用最大池化\n建议使用我们建议的间距(padding),当然也可以使用任何其他间距。\n\n注意:对于此层,请勿使用 TensorFlow Layers 或 TensorFlow Layers (contrib),但是仍然可以使用 TensorFlow 的 Neural Network 包。对于所有其他层,你依然可以使用快捷方法。",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n # variables\n input_dense = x_tensor.shape[-1].value\n weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], input_dense, conv_num_outputs],stddev=0.1))\n bias = tf.Variable(tf.zeros([conv_num_outputs]))\n # conv2d\n conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n #conv_layer = tf.nn.relu(conv_layer) # move it after maxpool\n # maxpool\n conv_layer = tf.nn.max_pool(\n conv_layer,\n ksize=[1, pool_ksize[0], pool_ksize[1], 1],\n strides=[1, pool_strides[0], pool_strides[1], 1],\n padding='SAME')\n return tf.nn.relu(conv_layer)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"扁平化层\n实现 flatten 函数,将 x_tensor 的维度从四维张量(4-D tensor)变成二维张量。输出应该是形状(部分大小(Batch Size),扁平化图片大小(Flattened Image Size))。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。",
"def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.flatten(x_tensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"全连接层\n实现 fully_conn 函数,以向 x_tensor 应用完全连接的层级,形状为(部分大小(Batch Size),num_outputs)。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。",
"def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.fully_connected(x_tensor, num_outputs)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"输出层\n实现 output 函数,向 x_tensor 应用完全连接的层级,形状为(部分大小(Batch Size),num_outputs)。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。\n注意:该层级不应应用 Activation、softmax 或交叉熵(cross entropy)。",
"def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"创建卷积模型\n实现函数 conv_net, 创建卷积神经网络模型。该函数传入一批图片 x,并输出对数(logits)。使用你在上方创建的层创建此模型:\n\n应用 1、2 或 3 个卷积和最大池化层(Convolution and Max Pool layers)\n应用一个扁平层(Flatten Layer)\n应用 1、2 或 3 个完全连接层(Fully Connected Layers)\n应用一个输出层(Output Layer)\n返回输出\n使用 keep_prob 向模型中的一个或多个层应用 TensorFlow 的 Dropout",
"def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_layer = conv2d_maxpool(x, 32, (3, 3), (1, 1), (2, 2), (2, 2))\n conv_layer = conv2d_maxpool(x, 64, (3, 3), (1, 1), (2, 2), (2, 2))\n conv_layer = conv2d_maxpool(x, 128, (3, 3), (3, 3), (2, 2), (2, 2))\n conv_layer = tf.nn.dropout(conv_layer, keep_prob)\n \n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n flat_layer = flatten(conv_layer)\n \n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n fc_layer = fully_conn(flat_layer, 512)\n fc_layer = tf.nn.dropout(fc_layer, keep_prob)\n \n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n output_layer = output(fc_layer, 10)\n \n \n # TODO: return output\n return output_layer\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = 
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"训练神经网络\n单次优化\n实现函数 train_neural_network 以进行单次优化(single optimization)。该优化应该使用 optimizer 优化 session,其中 feed_dict 具有以下参数:\n\nx 表示图片输入\ny 表示标签\nkeep_prob 表示丢弃的保留率\n\n每个部分都会调用该函数,所以 tf.global_variables_initializer() 已经被调用。\n注意:不需要返回任何内容。该函数只是用来优化神经网络。",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n session.run(optimizer, feed_dict={\n x: feature_batch,\n y: label_batch,\n keep_prob: keep_probability})\n pass\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"显示数据\n实现函数 print_stats 以输出损失和验证准确率。使用全局变量 valid_features 和 valid_labels 计算验证准确率。使用保留率 1.0 计算损失和验证准确率(loss and validation accuracy)。",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n loss = session.run(cost, feed_dict={\n x: feature_batch,\n y: label_batch,\n keep_prob: 1.})\n valid_acc = session.run(accuracy, feed_dict={\n x: valid_features,\n y: valid_labels,\n keep_prob: 1.})\n\n print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))\n pass",
"超参数\n调试以下超参数:\n* 设置 epochs 表示神经网络停止学习或开始过拟合的迭代次数\n* 设置 batch_size,表示机器内存允许的部分最大体积。大部分人设为以下常见内存大小:\n\n64\n128\n256\n...\n设置 keep_probability 表示使用丢弃时保留节点的概率",
"# TODO: Tune Parameters\nepochs = 30\nbatch_size = 256\nkeep_probability = 0.7",
"在单个 CIFAR-10 部分上训练\n我们先用单个部分,而不是用所有的 CIFAR-10 批次训练神经网络。这样可以节省时间,并对模型进行迭代,以提高准确率。最终验证准确率达到 50% 或以上之后,在下一部分对所有数据运行模型。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"完全训练模型\n现在,单个 CIFAR-10 部分的准确率已经不错了,试试所有五个部分吧。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"检查点\n模型已保存到本地。\n测试模型\n利用测试数据集测试你的模型。这将是最终的准确率。你的准确率应该高于 50%。如果没达到,请继续调整模型结构和参数。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, 
random_test_predictions)\n\n\ntest_model()",
"为何准确率只有50-80%?\n你可能想问,为何准确率不能更高了?首先,对于简单的 CNN 网络来说,50% 已经不低了。纯粹猜测的准确率为10%。但是,你可能注意到有人的准确率远远超过 80%。这是因为我们还没有介绍所有的神经网络知识。我们还需要掌握一些其他技巧。\n提交项目\n提交项目时,确保先运行所有单元,然后再保存记事本。将 notebook 文件另存为“dlnd_image_classification.ipynb”,再在目录 \"File\" -> \"Download as\" 另存为 HTML 格式。请在提交的项目中包含 “helper.py” 和 “problem_unittests.py” 文件。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tclaudioe/Scientific-Computing
|
SC1/10_GMRes.ipynb
|
bsd-3-clause
|
[
"<center>\n <h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>\n <h2> Generalized Minimal Residual Method </h2>\n <h2> <a href=\"#acknowledgements\"> [S]cientific [C]omputing [T]eam </a> </h2>\n <h2> Version: 1.21</h2>\n</center>\nTable of Contents\n\nIntroduction\nShort reminder about Least Squares\nGMRes\nTheoretical Problems\nPractical Problems\nAcknowledgements",
"import numpy as np\nimport scipy as sp\nfrom scipy import linalg as la\nimport matplotlib.pyplot as plt\nimport scipy.sparse.linalg\n%matplotlib inline\n#%load_ext memory_profiler\nimport matplotlib as mpl\nmpl.rcParams['font.size'] = 14\nmpl.rcParams['axes.labelsize'] = 20\nmpl.rcParams['xtick.labelsize'] = 14\nmpl.rcParams['ytick.labelsize'] = 14\nM=8",
"<div id='intro' />\n\nIntroduction\nWelcome to another edition of our Jupyter Notebooks. A few notebooks back, we saw that the Conjugate Gradient Method, an iterative method, was very useful to solve $A\\,\\mathbf{x}=\\mathbf{b}$ but it only worked when $A$ was positive definite and symmetric. So now we need an iterative method that works with nonsymmetric linear systems of equations, and for that we have the Generalized Minimum Residual Method (GMRes). It works really well for finding the solution of large, sparse (and dense as well), nonsymmetric linear systems of equations. Of course, it will also have trouble with ill-conditioned linear systems of equations. But it is really easy to add a left, right, or two-sided preconditioner!\n<div id='LS' />\n\nA quick review on Least Squares\nLeast Squares is used to solve overdetermined linear systems of equations $A\\,\\mathbf{x} = \\mathbf{b}$, that is, for example, a linear system of equations where there are more equations than unknowns. It finds the best $\\overline{\\mathbf{x}}$ in the sense that it minimizes the Euclidean length of $\\mathbf{r} = \\mathbf{b} - A\\,\\mathbf{x}$.\nSo, you might be wondering, what does Least Squares have to do with GMRes? WELL, since you're dying to know, I'll tell you: the backward error of the system in GMRes is minimized at each iteration step using a Least Squares formulation.\n<div id='GMR' />\n\nGMRes\nGMRes is a member of the family of Krylov methods. It finds an approximation of $\\mathbf{x}$ restricted to live on the Krylov sub-space $\\mathcal{K}_k$, where $\\mathcal{K}_k=\\text{span}\\{\\mathbf{r}_0, A\\,\\mathbf{r}_0, A^2\\,\\mathbf{r}_0, \\cdots, A^{k-1}\\,\\mathbf{r}_0\\}$ and $\\mathbf{r}_0 = \\mathbf{b} - A\\,\\mathbf{x}_0$ is the residual vector of the initial guess.\nThe idea behind this method is to look for improvements to the initial guess $\\mathbf{x}_0$ in the Krylov space. At the $k$-th iteration, we enlarge the Krylov space by adding $A^k\\,\\mathbf{r}_0$, reorthogonalize the basis, and then use least squares to find the best improvement to add to $\\mathbf{x}_0$.\nThe algorithm is as follows:\nGeneralized Minimum Residual Method\n$\\mathbf{x}_0$ = initial guess<br>\n$\\mathbf{r} = \\mathbf{b} - A\\,\\mathbf{x}_0$<br>\n$\\mathbf{q}_1 = \\mathbf{r} / \\|\\mathbf{r}\\|_2$<br>\nfor $k = 1, ..., m$<br>\n$\\qquad \\ \\ \\mathbf{y} = A\\,\\mathbf{q}_k$<br>\n$\\qquad$ for $j = 1,2,...,k$ <br>\n$\\qquad \\qquad h_{jk} = \\mathbf{q}_j^*\\,\\mathbf{y}$<br>\n$\\qquad \\qquad \\mathbf{y} = \\mathbf{y} - h_{jk}\\, \\mathbf{q}_j$<br>\n$\\qquad$ end<br>\n$\\qquad \\ h_{k+1,k} = \\|\\mathbf{y}\\|_2 \\qquad$ (If $h_{k+1,k} = 0$, skip next line and terminate at bottom.) <br>\n$\\qquad \\ \\mathbf{q}_{k+1} = \\mathbf{y}/h_{k+1,k}$ <br>\n$\\qquad$ Minimize $\\left\\|\\widehat{H}_k\\, \\mathbf{c}_k - [\\|\\mathbf{r}\\|_2 \\ 0 \\ 0 \\ ... \\ 0]^T \\right\\|_2$ for $\\mathbf{c}_k$ <br>\n$\\qquad \\mathbf{x}_k = Q_k \\, \\mathbf{c}_k + \\mathbf{x}_0$ <br>\nend\nNow we have to implement it.",
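The least-squares step at the heart of each iteration can be tried in isolation with np.linalg.lstsq. This sketch minimizes the norm of $H\mathbf{c} - \beta\,\mathbf{e}_1$ for a small upper-Hessenberg $H$; the numeric values are made up for illustration:

```python
import numpy as np

# Small (k+1) x k upper-Hessenberg system, like the one GMRes minimizes each step
H = np.array([[2.0, 1.0],
              [0.5, 3.0],
              [0.0, 0.2]])
beta = 4.0                 # plays the role of ||r0||_2
rhs = np.zeros(3)
rhs[0] = beta              # beta * e_1

# Solve min_c || H c - beta e_1 ||_2
c, *_ = np.linalg.lstsq(H, rhs, rcond=None)

# Least-squares optimality: the residual is orthogonal to the columns of H
r = rhs - H @ c
print(c, np.linalg.norm(H.T @ r))
```

The same orthogonality property is what makes the small residual norm in the implementation below equal to the full residual norm.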
"# This is a very instructive implementation of GMRes.\ndef GMRes(A, b, x0=np.array([0.0]), m=10, flag_display=True, threshold=1e-12):\n n = len(b)\n if len(x0)==1:\n x0=np.zeros(n)\n r0 = b - np.dot(A, x0)\n nr0=np.linalg.norm(r0)\n out_res=np.array(nr0)\n Q = np.zeros((n,n))\n H = np.zeros((n,n))\n Q[:,0] = r0 / nr0\n flag_break=False\n for k in np.arange(np.min((m,n))):\n y = np.dot(A, Q[:,k])\n if flag_display:\n print('||y||=',np.linalg.norm(y))\n for j in np.arange(k+1):\n H[j][k] = np.dot(Q[:,j], y)\n if flag_display:\n print('H[',j,'][',k,']=',H[j][k])\n y = y - np.dot(H[j][k],Q[:,j])\n if flag_display:\n print('||y||=',np.linalg.norm(y))\n # All but the last equation are treated equally. Why?\n if k+1<n:\n H[k+1][k] = np.linalg.norm(y)\n if flag_display:\n print('H[',k+1,'][',k,']=',H[k+1][k])\n if (np.abs(H[k+1][k]) > 1e-16):\n Q[:,k+1] = y/H[k+1][k]\n else:\n print('flag_break has been activated')\n flag_break=True\n # Do you remember e_1? The canonical vector.\n e1 = np.zeros((k+1)+1) \n e1[0]=1\n H_tilde=H[0:(k+1)+1,0:k+1]\n else:\n H_tilde=H[0:k+1,0:k+1]\n # Solving the 'SMALL' least square problem. \n # This could be improved with Givens rotations!\n ck = np.linalg.lstsq(H_tilde, nr0*e1)[0] \n if k+1<n:\n x = x0 + np.dot(Q[:,0:(k+1)], ck)\n else:\n x = x0 + np.dot(Q, ck)\n # Why is 'norm_small' equal to 'norm_full'?\n norm_small=np.linalg.norm(np.dot(H_tilde,ck)-nr0*e1)\n out_res = np.append(out_res,norm_small)\n if flag_display:\n norm_full=np.linalg.norm(b-np.dot(A,x))\n print('..........||b-A\\,x_k||=',norm_full)\n print('..........||H_k\\,c_k-nr0*e1||',norm_small);\n if flag_break:\n if flag_display: \n print('EXIT: flag_break=True')\n break\n if norm_small<threshold:\n if flag_display:\n print('EXIT: norm_small<threshold')\n break\n return x,out_res",
"A very simple example",
"A = np.array([[1,1,0],[0,1,0],[0,1,1]])\nb = np.array([1,2,3])\nx0 = np.zeros(3)\n\n# scipy gmres\nx_scipy = scipy.sparse.linalg.gmres(A,b,x0)[0]\n# our gmres\nx_our, _ = GMRes(A, b)\n# numpy solve\nx_np= np.linalg.solve(A,b)\n\n# Showing the solutions\nprint('--------------------------------')\nprint('x_scipy',x_scipy)\nprint('x_our',x_our)\nprint('x_np',x_np)",
"Another example, how may iteration does it need to converge?",
"A = np.array([[0,0,0,1],[1,0,0,0],[0,1,0,0],[0,0,1,0]])\nb = np.array([1,0,1,0])\nx_our, _ = GMRes(A, b, m=10)\nnorm_full=np.linalg.norm(b-np.dot(A,x_our))\nprint(norm_full)\n\nA = np.random.rand(10,10)+10*np.eye(10)\nb = np.random.rand(10)\nx_our, out_res = GMRes(A, b, m=10,flag_display=True)\nnorm_full=np.linalg.norm(b-np.dot(A,x_our))\nprint(norm_full)",
"Plotting the residual over the iterations",
"plt.figure(figsize=(M,M))\nplt.semilogy(out_res,'.k',markersize=20,label='residual')\nplt.grid(True)\nplt.xlabel(r'$k$')\nplt.ylabel(r'$\\|\\mathbf{b}-A\\,\\mathbf{x}_k\\|_2$')\nplt.grid(True)\nplt.show()",
"<div id='TP' />\n\nTheoretical Problems\n\nProve that in the GMRES method, the backward error $\\|b - A\\,x_k\\|$ decreases monotonically with $k$.\nWhat would happen if we pass a singular matrix $A$ to the previous implementation of GMRes?\nProve that for\n\\begin{equation}\nA=\n\\left[\n\\begin{array}{c|c}\nI & C \\\\\n\\hline\n0 & I\n\\end{array}\n\\right]\n\\end{equation}\nand any $x_0$ and $b$, GMRES converges to the exact solution after two steps. Here $C$ is an $m_1 \\times m_2$ submatrix, $0$ denotes the $m_2 \\times m_1$ matrix of zeros, and $I$ denotes the appropriate-sized identity matrix.\n\n<div id='PP' />\n\nPractical Problems\n\nA possible improvement to the present algorithm consists of taking the least squares computations out of the loop, since the Krylov subspace spanned by $Q_k$ doesn't depend on previous least squares calculations.\nVerify the truth of the above statement.\nVerify if it is really an improvement.\nImplement it.\nTest both implementations using %timeit\n\n\nThe GMRES method is meant for huge $n\\times n$ sparse matrices $A$. In most cases, the goal is to run the method for $k$ steps (with $k << n$), reducing the complexity of the subproblems (least squares). Nevertheless, for $k$ values that are too small, the solution $x_k$ may not be as good as needed. So, to keep the values of $k$ small and avoid bad solutions, there exists a variation of the algorithm known as Restarted GMRES: if not enough progress is made toward the solution after $k$ iterations, discard $Q_k$ and start GMRES from the beginning, using the current best guess $x_k$ as the new $x_0$.\nImplement the Restarted GMRES method. Introduce a tolerance parameter to stop restarting.\nCompare the asymptotic operation count and storage requirements of GMRES and Restarted GMRES, for fixed $k$ and increasing $n$.\nExecute it on a huge linear system $A x = b$, and compare the solution with the solution of standard GMRES. Keep the value of $k$ small, and count how many times Restarted GMRES has to restart. Perform benchmarks using %timeit and %memit and verify the results.\nDescribe an example in which Restarted GMRES can be expected to fail to converge, whereas GMRES succeeds.\n\n\n\n<div id='acknowledgements' />\n\nAcknowledgements\n\nMaterial created by professor Claudio Torres (ctorres@inf.utfsm.cl) and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. April 2016.\nMaterial updated by professor Claudio Torres (ctorres@inf.utfsm.cl). DI UTFSM. June 2017.\nUpdate July 2020 - v1.21 - C.Torres : Fixing formatting issues."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NEONScience/NEON-Data-Skills
|
tutorials-in-development/Python/neon_api/neon_api_02_downloading_observation_py.ipynb
|
agpl-3.0
|
[
"syncID: \ntitle: \"Downlaoding NEON Observation Data with Python\"\ndescription: \"\"\ndateCreated: 2020-04-24\nauthors: Maxwell J. Burner\ncontributors: Donal O'Leary\nestimatedTime: \npackagesLibraries: requests, json, pandas\ntopics: api, data management\nlanguagesTool: python\ndataProduct: DP1.10003.001\ncode1: \ntutorialSeries: python-neon-api-series\nurlTitle: python-neon-api-02-downloading-observational\n\nIn this tutorial we will learn to download Observational Sampling (OS) data from the NEON API into the Python environment.\n<div id=\"ds-objectives\" markdown=\"1\">\n\n### Objectives\nAfter completing this tutorial, you will be able to:\n\n* Navigate a NEON API request from the *data/* endpoint\n* Describe the naming conventions of NEON OS data files\n* Understand how to download NEON observational data using the Python Pandas library\n* Describe the basic components of a Pandas dataframe\n\n\n### Install Python Packages\n\n* **requests**\n* **json** \n* **numpy**\n* **pandas**\n\nWe will not actually use the NumPy package in this tutorial; it is listed here because the Pandas package is built on top of NumPy, and requires that the latter be present.\n\n</div>\n\nIn this tutorial we will learn how to download specific NEON data files into Python. We will specifically look at how to use the Pandas package to read in CSV files of observational data.\nIn the previous tutorial, we saw some of the data files containing information on land bird breeding counts. These are an example of NEON observational data. NEON has three basic types of data: Observational Sampling (OS), Instrumentation Sampling (IS), and Remote Sensing or Airborne Observation Platform data (AOP). 
The process for requesting data is about the same for all three, but downloading and navigating the data tends to be very different depending on which specific data product we are using.\nHere we will discuss downloading observational data, as it tends to be the simplest to handle.\nLibraries Downloaded\nIn addition to used requests and json packages again, we will use the Pandas package to read in the data. Pandas is a library that adds data frame objects to Python, based on the data frames of the R programming language; these offer a great way to store and manipulate tabular data.",
"import requests\nimport json\nimport pandas as pd\n\nSERVER = 'http://data.neonscience.org/api/v0/'\nSITECODE = 'TEAK'\nPRODUCTCODE = 'DP1.10003.001'",
"Look up Data Files\nWe already know from the last tutorial that landbird breeding counts (DP1.10003.001) are available at the Lower Teakettle site for 2018-06. We can again make a request to see what files in particular are available.",
"#Make Request\ndata_request = requests.get(SERVER+'data/'+PRODUCTCODE+'/'+SITECODE+'/'+'2018-06')\ndata_json = data_request.json()\n\n#View names of files\nfor file in data_json['data']['files']:\n print(file['name'])",
"Let's take a closer look at a file name.",
"print(data_json['data']['files'][6]['name'])",
"The format for most NEON data product file names is:\nNEON.D[two-digit domain number].[site code].[data product ID].[file-specific name].[date of file creation].[file extension]\nSo the file whose name we singled out is domain 17, Lower Teakettle Site, Breeding Landbird point counts (DP1.10003.001), brd_perpoint.2018-06.basic, created 2019-11-07 at 15:32:35 Universal Time. The file name brd_perpoint.2018-06.basic indicates that this is the 'basic' version of bird counts by point, observed in June 2018.\nBird counts and other observational data are usually kept in CSV files in the NEON database. Often the data for a particular month-site combination will be available in through two different .csv files that represent two different 'download packages'; a 'basic' package storing only the main measurements, and an 'expanded' package that also lists the uncertainties involved in each measurement. Let's save the URL for the basic count data CSV file.",
"#Print names and URLs of files with birdcount data\nfor file in data_json['data']['files']:\n if('countdata' in file['name']): #Show all files with 'countdata' in their name\n print(file['name'],file['url'])\n if('basic' in file['name']):\n bird_count_url = file['url'] #save url of file with basic bird count data\n",
"Read file into Pandas Dataframe\nThere are a couple options for reading CSV files into Python. For files read directly from NEON's data repository, one popular option is the 'read_csv' function from the Pandas package. This function converts the contents of the target file into a pandas dataframe object, and has the added advantage of being able to read data files accessed through the web (Python has its own built-in package for reading CSV files, but this package can only read files present on your machine).",
"#Read bird count CSV data into a Pandas Dataframe\ndf_bird = pd.read_csv(bird_count_url)",
"Pandas is a popular Python package for data analysis and data manipulation. The package implements dataframe objects based on the dataframes used in the R programming language, and uses these objects for storing and manipulating tabular data.\nA dataframe is a two-dimensional table of data, a grid built of rows and columns of values. Generally the columns correspond to the different variables being measured, while the rows correspond to each entry or measurement taken (in this case, each bird counted). Dataframes also have a header containing labels for each column, and an index containing labels for each row; both are 'index' objects stored as attributes of the dataframe object. Other attributes contain metadata such as the overall shape of the dataframe and the data type of each column.\nYou can find more about Pandas at their official site, which includes a tutorials page here.",
"#View the column names\ndf_bird.columns\n\n#Print out dimensions of the new dataframe\nprint('Number of columns: ',df_bird.shape[1])\nprint('Number of Rows: ',df_bird.shape[0])\n\n#Print out names and data types of dataframe columns\nprint(df_bird.dtypes)",
"Pandas dataframes classify data as integer, floating point (decimal numbers), or object; the last category usually indicates data stored as strings, such as text labels or date-time data.",
"#View first five rows of dataframe using the 'head' method\ndf_bird.head(5)",
"We can now manipulate this dataframe using the various methods and functions of the Pandas library.\nVariable Information\nLook again at the list of files available, specifically those that are NOT count data.",
"#View names of files\nfor file in data_json['data']['files']:\n if( (not('countdata' in file['name'])) & (not('perpoint' in file['name'])) ):\n print(file['name'])",
"While the .zip files are packages containing multiple bird count data tables, the remaining files mostly serve to provide context for the data. The variables CSV file in particular contains a dataset with information on the variables used in the count data tables. This provides useful information such as units and definitions for each variable.",
"#Get variables information as pandas dataframe\nfor file in data_json['data']['files']:\n if('variables' in file['name']):\n df_variables = pd.read_csv(file['url'])\n\n#View metadata and first few rows\n\nprint('Number of rows: ', df_variables.shape[0])\nprint('Number of columns: ',df_variables.shape[1])\n\nprint('Data Columns:\\n')\nprint(df_variables.dtypes)\n\ndf_variables.head(5)",
"The table includes a column called 'table' indicating in which file a variable appears. We want to see information on the variables for the basic bird count table, since that is the table we downloaded. We can do this using comparisons and subsetting.",
"#Subset to view only variables in the basic countdata table\ndf_variables[(df_variables['table'] == 'brd_countdata')&(df_variables['downloadPkg'] == 'basic')]",
"Challenge\nThe Pandas concat function takes multiple dataframes that have the same column names and attributes, but different rows, and combines the rows from all of the input dataframes into one output dataframe. Get basic bird count data for other months at Lower Teakettle, and combine the resulting dataframes into one with concat."
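The concat pattern from the challenge can be sketched with tiny stand-in dataframes (hypothetical data, not real NEON downloads):

```python
import pandas as pd

# Two stand-in monthly tables with identical columns.
df_june = pd.DataFrame({'siteID': ['TEAK', 'TEAK'], 'clusterSize': [3, 5]})
df_july = pd.DataFrame({'siteID': ['TEAK', 'TEAK'], 'clusterSize': [2, 4]})

# Stack the rows of both months into one dataframe; ignore_index
# renumbers the row index so it stays unique across months.
df_all = pd.concat([df_june, df_july], ignore_index=True)
print(df_all.shape)  # (4, 2)
```

With real NEON data, each monthly dataframe would come from `pd.read_csv` on the corresponding file URL, as shown earlier in this tutorial.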
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
metpy/MetPy
|
v0.6/_downloads/Natural_Neighbor_Verification.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Natural Neighbor Verification\nWalks through the steps of Natural Neighbor interpolation to validate that the algorithmic\napproach taken in MetPy is correct.\nFind natural neighbors visual test\nA triangle is a natural neighbor for a point if the\ncircumscribed circle <https://en.wikipedia.org/wiki/Circumscribed_circle>_ of the\ntriangle contains that point. It is important that we identify the correct triangles\nfor each point before proceeding with the interpolation.\nAlgorithmically:\n\n\nWe place all of the grid points in a KDTree. These provide worst-case O(n) time\n complexity for spatial searches.\n\n\nWe generate a Delaunay Triangulation <https://docs.scipy.org/doc/scipy/\n reference/tutorial/spatial.html#delaunay-triangulations>_\n using the locations of the provided observations.\n\n\nFor each triangle, we calculate its circumcenter and circumradius. Using\n KDTree, we then assign each grid point every triangle that has a circumcenter within a\n circumradius of the grid point's location.\n\n\nThe resulting dictionary uses the grid index as a key, and a set of natural\n neighbor triangles (triangle codes from the Delaunay triangulation) as the value.\n This dictionary is then iterated through to calculate interpolation values.\n\n\nWe then traverse the ordered natural neighbor edge vertices for a particular\n grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate\n proportional polygon areas.\n\n\nCircumcenter of (n - 1), n, grid_location\n Circumcenter of (n + 1), n, grid_location\nDetermine what existing circumcenters (i.e., Delaunay circumcenters) are associated\n with vertex n, and add those as polygon vertices. Calculate the area of this polygon.\n\n\nIncrement the current edges to be checked, i.e.:\n n - 1 = n, n = n + 1, n + 1 = n + 2\n\n\nRepeat steps 5 & 6 until all of the edge combinations of 3 have been visited.\n\n\nRepeat steps 4 through 7 for each grid cell.",
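Step 3 above relies on circumcenters and circumradii. As a quick illustration, the circumcenter of a triangle can be computed directly from its vertex coordinates. This is a self-contained sketch using the standard determinant formula, not MetPy's own implementation (which lives in `metpy.gridding.triangles`):

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle abc via the standard determinant formula.

    A sketch of step 3 above, not MetPy's internal code.
    """
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Right triangle with legs on the axes: the circumcenter is the midpoint
# of the hypotenuse, and the circumradius is half its length.
cc = circumcenter((0, 0), (2, 0), (0, 2))
r = math.hypot(cc[0] - 0, cc[1] - 0)  # distance to any vertex
print(cc, r)
```

In the verification below, SciPy's `Delaunay` provides the triangles and MetPy's `triangles.find_natural_neighbors` performs this kind of computation for every triangle at once.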
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d\nfrom scipy.spatial.distance import euclidean\n\nfrom metpy.gridding import polygons, triangles\nfrom metpy.gridding.interpolation import nn_point\n\nplt.rcParams['figure.figsize'] = (15, 10)",
"For a test case, we generate 10 random points and observations, where the\nobservation values are just the x coordinate value squared, divided by 1000.\nWe then create two test points (grid 0 & grid 1) at which we want to\nestimate a value using natural neighbor interpolation.\nThe locations of these observations are then used to generate a Delaunay triangulation.",
"np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = (pts[:, 0] * pts[:, 0]) / 1000\n\ntri = Delaunay(pts)\ndelaunay_plot_2d(tri)\n\nfor i, zval in enumerate(zp):\n plt.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))\n\nsim_gridx = [30., 60.]\nsim_gridy = [30., 60.]\n\nplt.plot(sim_gridx, sim_gridy, '+', markersize=10)\nplt.axes().set_aspect('equal', 'datalim')\nplt.title('Triangulation of observations and test grid cell '\n 'natural neighbor interpolation values')\n\nmembers, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nval = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)\nplt.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))\n\nval = nn_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1], tri_info)\nplt.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1]))",
"Using the circumcenter and circumcircle radius information from\n:func:metpy.gridding.triangles.find_natural_neighbors, we can visually\nexamine the results to see if they are correct.",
"def draw_circle(x, y, r, m, label):\n nx = x + r * np.cos(np.deg2rad(list(range(360))))\n ny = y + r * np.sin(np.deg2rad(list(range(360))))\n plt.plot(nx, ny, m, label=label)\n\n\nmembers, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\ndelaunay_plot_2d(tri)\nplt.plot(sim_gridx, sim_gridy, 'ks', markersize=10)\n\nfor i, info in tri_info.items():\n x_t = info['cc'][0]\n y_t = info['cc'][1]\n\n if i in members[1] and i in members[0]:\n draw_circle(x_t, y_t, info['r'], 'm-', str(i) + ': grid 1 & 2')\n plt.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[0]:\n draw_circle(x_t, y_t, info['r'], 'r-', str(i) + ': grid 0')\n plt.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[1]:\n draw_circle(x_t, y_t, info['r'], 'b-', str(i) + ': grid 1')\n plt.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n else:\n draw_circle(x_t, y_t, info['r'], 'k:', str(i) + ': no match')\n plt.annotate(str(i), xy=(x_t, y_t), fontsize=9)\n\nplt.axes().set_aspect('equal', 'datalim')\nplt.legend()",
"What? The circle from triangle 8 looks pretty darn close. Why isn't\ngrid 0 included in that circle?",
"x_t, y_t = tri_info[8]['cc']\nr = tri_info[8]['r']\n\nprint('Distance between grid0 and Triangle 8 circumcenter:',\n euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))\nprint('Triangle 8 circumradius:', r)",
"Let's do a manual check of the above interpolation value for grid 0 (the southernmost grid point).\nGrab the circumcenters and radii for the natural neighbors.",
"cc = np.array([tri_info[m]['cc'] for m in members[0]])\nr = np.array([tri_info[m]['r'] for m in members[0]])\n\nprint('circumcenters:\\n', cc)\nprint('radii\\n', r)",
"Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram\n<https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams>_\nwhich serves as a complementary (but not necessary)\nspatial data structure that we use here simply to show areal ratios.\nNotice that the two natural neighbor triangle circumcenters are also vertices\nin the Voronoi plot (green dots), and the observations are in the polygons (blue dots).",
"vor = Voronoi(list(zip(xp, yp)))\nvoronoi_plot_2d(vor)\n\nnn_ind = np.array([0, 5, 7, 8])\nz_0 = zp[nn_ind]\nx_0 = xp[nn_ind]\ny_0 = yp[nn_ind]\n\nfor x, y, z in zip(x_0, y_0, z_0):\n plt.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))\n\nplt.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)\nplt.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))\nplt.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',\n label='natural neighbor\\ncircumcenters')\n\nfor center in cc:\n plt.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),\n xy=(center[0] + 1, center[1] + 1))\n\ntris = tri.points[tri.simplices[members[0]]]\nfor triangle in tris:\n x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]\n y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]]\n plt.plot(x, y, ':', linewidth=2)\n\nplt.legend()\nplt.axes().set_aspect('equal', 'datalim')\n\n\ndef draw_polygon_with_info(polygon, off_x=0, off_y=0):\n \"\"\"Draw one of the natural neighbor polygons with some information.\"\"\"\n pts = np.array(polygon)[ConvexHull(polygon).vertices]\n for i, pt in enumerate(pts):\n plt.plot([pt[0], pts[(i + 1) % len(pts)][0]],\n [pt[1], pts[(i + 1) % len(pts)][1]], 'k-')\n\n avex, avey = np.mean(pts, axis=0)\n plt.annotate('area: {:.3f}'.format(polygons.area(pts)), xy=(avex + off_x, avey + off_y),\n fontsize=12)\n\n\ncc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info([cc[0], cc1, cc2])\n\ncc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info([cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)\n\ncc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info([cc[1], cc1, cc2], off_x=-15)\n\ncc1 = triangles.circumcenter((8, 24), (34, 24), (30, 
30))\ncc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info([cc[0], cc[1], cc1, cc2])",
"Put all of the generated polygon areas and their affiliated values in arrays.\nCalculate the total area of all of the generated polygons.",
"areas = np.array([60.434, 448.296, 25.916, 70.647])\nvalues = np.array([0.064, 1.156, 2.809, 0.225])\ntotal_area = np.sum(areas)\nprint(total_area)",
"For each polygon area, calculate its percent of total area.",
"proportions = areas / total_area\nprint(proportions)",
"Multiply the percent of total area by the respective values.",
"contributions = proportions * values\nprint(contributions)",
"The sum of this array is the interpolation value!",
"interpolation_value = np.sum(contributions)\nfunction_output = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)\n\nprint(interpolation_value, function_output)",
"The values are slightly different due to truncating the area values in\nthe above visual example to the 3rd decimal place."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nproctor/phys202-2015-work
|
assignments/assignment03/NumpyEx03.ipynb
|
mit
|
[
"Numpy Exercise 3\nImports",
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va",
"Geometric Brownian motion\nHere is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.",
"def brownian(maxt, n):\n \"\"\"Return one realization of a Brownian (Wiener) process with n steps and a max time of t.\"\"\"\n t = np.linspace(0.0,maxt,n)\n h = t[1]-t[0]\n Z = np.random.normal(0.0,1.0,n-1)\n dW = np.sqrt(h)*Z\n W = np.zeros(n)\n W[1:] = dW.cumsum()\n return t, W",
"Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.",
"# call brownian once so t and W come from the same realization\nt, W = brownian(1.0, 1000)\n\nnp.savez('brownian.npz', t, W)\n\nassert isinstance(t, np.ndarray)\nassert isinstance(W, np.ndarray)\nassert t.dtype==np.dtype(float)\nassert W.dtype==np.dtype(float)\nassert len(t)==len(W)==1000",
"Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.",
"plt.plot(t, W)\nplt.ylabel(\"Wiener Process\")\nplt.xlabel(\"Time\")\n\nassert True # this is for grading",
"Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.",
"dW = np.diff(W)\nprint(dW.mean())\nprint(dW.std())\n\nassert len(dW)==len(W)-1\nassert dW.dtype==np.dtype(float)",
"Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:\n$$\nX(t) = X_0 e^{((\\mu - \\sigma^2/2)t + \\sigma W(t))}\n$$\nUse Numpy ufuncs and no loops in your function.",
"\ndef geo_brownian(t, W, X0, mu, sigma):\n    \"\"\"Return (t, X(t)) for geometric Brownian motion with drift mu and volatility sigma.\"\"\"\n    X = X0 * np.exp((mu - sigma**2/2)*t + sigma*W)\n    return t, X\n\n\nassert True # leave this for grading",
 "Use your function to simulate geometric Brownian motion, $X(t)$ for $X_0=1.0$, $\\mu=0.5$ and $\\sigma=0.3$ with the Wiener process you computed above.\nVisualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.",
 "t, X = geo_brownian(t, W, 1.0, 0.5, 0.3)\n\nplt.plot(t,X)\nplt.xlabel(\"Time\")\nplt.ylabel(\"Geometric Brownian Motion\")\n\nassert True # leave this for grading"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
IBMDecisionOptimization/docplex-examples
|
examples/mp/jupyter/Benders_decomposition.ipynb
|
apache-2.0
|
[
"Benders decomposition with decision optimization\nThis tutorial includes everything you need to set up decision optimization engines, build a mathematical programming model, then use Benders decomposition on it.\nWhen you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.\n\nThis notebook is part of Prescriptive Analytics for Python\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO addon in Watson Studio Premium for the full edition\n\nTable of contents:\n\nDescribe the business problem\nHow decision optimization (prescriptive analytics) can help\nUse decision optimization\nStep 1: Import the library\nStep 2: Set up the prescriptive model\nStep 3: Solve the problem with default CPLEX algorithm\nStep 4: Apply a Benders strategy\nStep 5: Use the CPLEX annotations to guide CPLEX in your Benders decomposition\n\n\nSummary\n\n\nBenders decomposition is an approach to solve mathematical programming problems with a decomposable structure.\nStarting with 12.7, CPLEX can decompose the model into a single master and (possibly multiple) subproblems. \nTo do so, CPLEX can make use of annotations that you supply for your model, or can do the decomposition automatically. \nThis approach can be applied to mixed-integer linear programs (MILP). 
For certain types of problems, this approach can offer significant performance improvements.\nNote:\nIf your problem does not admit such a decomposition, CPLEX will raise an error at solve time.\nCPLEX will produce the error CPXERR_BAD_DECOMPOSITION if the annotated decomposition does not yield disjoint subproblems.\nLearn more about Benders decomposition\nThe following sources provide more background on the Benders algorithm.\nThe popular acceptance of the original paper suggesting a decomposition or partitioning of a model to support solution of mixed integer programs gave rise to \"Benders algorithm\" as the name.\n\nJ. Benders. <i>Partitioning procedures for solving mixed-variables programming problems in Numerische Mathematik, volume 4, issue 1, pages 238–252, 1962</i>\n\nOther researchers developed the theory of cut-generating linear programs (CGLP) to further this practice.\n* M. Fischetti, D. Salvagnin, A. Zanette. <i>A note on the selection of Benders’ cuts in Mathematical Programming, series B, volume 124, pages 175-182, 2010</i>\nStill others applied the practice to practical operations research. This technical report describes Benders algorithm in \"modern\" terms and offers implementation hints.\n* M. Fischetti, I. Ljubic, M. Sinnl. <i>Benders decomposition without separability: a computational study for capacitated facility location problems in Technical Report University of Padova, 2016</i>\nHow decision optimization can help\n\n\nPrescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. \n\n\nPrescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. 
\n\n\nPrescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.\n<br/>\n\n\n<u>With prescriptive analytics, you can:</u> \n\nAutomate the complex decisions and trade-offs to better manage your limited resources.\nTake advantage of a future opportunity or mitigate a future risk.\nProactively update recommendations based on changing events.\nMeet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.\n\nUse decision optimization\nStep 1: Import the library\nRun the following code to import Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.",
"import sys\ntry:\n import docplex.mp\nexcept:\n raise Exception('Please install docplex. See https://pypi.org/project/docplex/')",
"A restart of the kernel might be needed.\nStep 2: Set up the prescriptive model\nWe will write a toy model just in order to show how to use the annotation API.\nThis model is not important: it simply matches a Benders decomposition, so that CPLEX can apply its new algorithm without any error.\nThe aim of this notebook is to discover and learn how to successfully apply Benders, not to see huge performance differences between a standard solve and a Benders-based solve.",
"d1 = 25\nd2 = 35\n\nCosts = [[20, 15, 22, 27, 13, 4, 15, 6, 15, 22, 25, 13, 7, 28, 14, 5, 8, 1, 17, 3, 19, 17, 22, 12, 14],\n [2, 15, 16, 16, 10, 13, 4, 2, 6, 29, 10, 8, 20, 11, 8, 11, 28, 17, 10, 29, 3, 24, 12, 11, 11],\n [13, 14, 6, 17, 14, 13, 8, 29, 19, 26, 22, 0, 8, 29, 15, 20, 5, 20, 26, 17, 24, 10, 24, 9, 1],\n [7, 27, 24, 3, 4, 23, 11, 9, 18, 1, 29, 24, 16, 9, 8, 3, 18, 24, 10, 12, 1, 3, 15, 29, 3],\n [25, 26, 29, 6, 24, 8, 2, 10, 17, 0, 4, 7, 2, 17, 2, 27, 24, 20, 18, 5, 5, 2, 21, 26, 20],\n [29, 5, 15, 5, 4, 26, 18, 8, 2, 14, 13, 6, 14, 28, 16, 28, 23, 8, 5, 8, 10, 28, 17, 0, 23],\n [12, 16, 10, 16, 17, 10, 29, 11, 28, 22, 25, 8, 27, 12, 10, 28, 7, 5, 3, 9, 18, 10, 15, 16, 2],\n [12, 9, 14, 23, 26, 4, 3, 3, 22, 12, 11, 9, 19, 5, 6, 16, 1, 1, 9, 20, 23, 23, 27, 4, 11],\n [18, 13, 28, 29, 3, 28, 16, 11, 9, 2, 7, 20, 13, 23, 6, 10, 3, 16, 14, 2, 15, 17, 1, 19, 27],\n [29, 17, 17, 14, 21, 18, 8, 21, 9, 20, 14, 6, 29, 24, 24, 4, 18, 16, 21, 24, 26, 0, 26, 9, 5],\n [27, 24, 21, 28, 17, 18, 10, 10, 26, 25, 13, 18, 2, 9, 16, 26, 10, 22, 5, 17, 15, 0, 9, 0, 16],\n [13, 15, 17, 21, 25, 9, 22, 13, 20, 15, 1, 17, 18, 10, 2, 27, 19, 21, 14, 26, 29, 13, 28, 28, 15],\n [16, 12, 2, 2, 9, 27, 11, 14, 12, 2, 14, 29, 3, 12, 18, 6, 7, 9, 1, 5, 19, 14, 11, 29, 4],\n [1, 15, 27, 29, 16, 17, 10, 10, 17, 19, 6, 10, 20, 20, 19, 10, 19, 26, 15, 7, 20, 19, 13, 3, 22],\n [22, 14, 12, 3, 22, 6, 15, 3, 6, 10, 9, 13, 11, 21, 6, 19, 29, 4, 5, 21, 7, 12, 13, 11, 22],\n [9, 27, 22, 29, 11, 14, 1, 19, 21, 2, 4, 13, 17, 9, 10, 17, 13, 8, 24, 13, 26, 27, 23, 4, 21],\n [3, 14, 26, 18, 17, 3, 1, 11, 13, 8, 22, 3, 18, 26, 17, 15, 22, 10, 19, 23, 13, 14, 17, 18, 27],\n [21, 14, 1, 28, 28, 0, 0, 29, 12, 23, 22, 17, 19, 2, 10, 19, 4, 18, 28, 13, 27, 12, 9, 29, 22],\n [29, 3, 20, 5, 5, 23, 28, 16, 1, 8, 26, 23, 11, 11, 21, 17, 13, 21, 3, 8, 6, 15, 18, 6, 24],\n [14, 20, 26, 10, 17, 20, 5, 9, 25, 20, 14, 22, 5, 12, 0, 18, 7, 0, 8, 15, 21, 12, 26, 7, 21],\n [7, 7, 1, 9, 24, 29, 0, 3, 29, 24, 1, 6, 
14, 0, 11, 5, 21, 12, 15, 1, 25, 4, 7, 17, 16],\n [8, 18, 15, 6, 1, 22, 26, 13, 19, 20, 12, 15, 19, 27, 13, 3, 22, 22, 22, 20, 0, 4, 24, 13, 25],\n [14, 6, 29, 23, 8, 5, 4, 18, 21, 29, 18, 2, 2, 3, 7, 13, 12, 9, 2, 18, 26, 3, 18, 7, 7],\n [5, 8, 4, 8, 25, 4, 6, 20, 14, 21, 18, 16, 15, 11, 7, 8, 20, 27, 22, 7, 5, 8, 24, 11, 8],\n [0, 8, 29, 25, 29, 0, 12, 25, 19, 9, 19, 25, 27, 21, 2, 23, 2, 25, 17, 6, 0, 6, 15, 2, 15],\n [23, 24, 10, 26, 7, 5, 5, 26, 1, 16, 22, 8, 24, 9, 16, 17, 1, 26, 20, 23, 18, 20, 23, 2, 19],\n [16, 3, 9, 21, 15, 29, 8, 26, 20, 12, 18, 27, 29, 15, 24, 9, 17, 24, 3, 5, 21, 28, 7, 1, 12],\n [1, 11, 21, 1, 13, 14, 16, 14, 17, 25, 18, 9, 19, 26, 1, 13, 15, 6, 14, 10, 12, 19, 0, 15, 7],\n [20, 14, 7, 5, 8, 16, 12, 0, 5, 14, 18, 16, 24, 27, 20, 7, 11, 3, 16, 8, 2, 2, 4, 0, 3],\n [26, 19, 27, 29, 8, 9, 8, 10, 18, 4, 6, 0, 5, 17, 12, 18, 17, 17, 13, 0, 16, 12, 18, 19, 16],\n [3, 12, 11, 28, 3, 2, 14, 14, 17, 29, 18, 14, 19, 24, 9, 27, 4, 19, 6, 24, 19, 3, 28, 20, 4],\n [2, 0, 21, 14, 21, 12, 27, 6, 20, 29, 13, 21, 23, 0, 20, 4, 11, 27, 3, 11, 21, 11, 21, 4, 17],\n [20, 26, 5, 8, 18, 14, 12, 12, 24, 3, 8, 0, 25, 16, 19, 21, 7, 4, 23, 21, 20, 28, 6, 21, 19],\n [16, 18, 9, 1, 9, 7, 14, 6, 28, 26, 3, 14, 27, 4, 9, 9, 1, 9, 24, 3, 14, 13, 18, 3, 27],\n [1, 19, 7, 20, 26, 27, 0, 7, 4, 0, 13, 8, 10, 17, 14, 19, 21, 21, 14, 15, 22, 14, 5, 27, 0]];\n\nR1 = range(1,d1)\nR2 = range(1,d2);\n\ndim = range(1,d1*d2+1)",
"Create one model instance, with a name. We set the log output to true such that we can see when CPLEX enables the Benders algorithm.",
"# first import the Model class from docplex.mp\nfrom docplex.mp.model import Model\n\nm = Model(name='benders', log_output=True)\n\nX = m.continuous_var_dict([(i,j) for i in R2 for j in R1])\nY = m.integer_var_dict(R1, 0, 1)\n\n\nbendersPartition = {(i,j) : i for i in R2 for j in R1}\n\nm.minimize( m.sum( Costs[i][j]*X[i,j] for i in R2 for j in R1) + sum(Y[i] for i in R1) )\n\n\nm.add_constraints( m.sum( X[i,j] for j in R1) ==1 for i in R2)\n \nm.add_constraints( X[i,j] - Y[j] <= 0 for i in R2 for j in R1)",
"Solve with Decision Optimization\nIf you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation. On IBM Cloud Pak for Data as a Service, you need to switch the jupyter environment to Python 3.x + DO.\nYou will get the best solution found after n seconds, thanks to a time limit parameter.",
"m.print_information()",
"Step 3: Solve the problem with default CPLEX algorithm",
"msol = m.solve()\nassert msol is not None, \"model can't solve\"\nm.report()",
"Inspect the CPLEX Log.\nIf you inspect the CPLEX log, you will see that it is a very standard log.\nCPLEX needed 63 iterations to solve the problem.",
"obj1 = m.objective_value",
"Step 4: Apply a Benders strategy\nCPLEX implements a default Benders decomposition in certain situations.\nIf you want CPLEX to apply a Benders strategy as it solves your problem, but you do not specify cpxBendersPartition annotations yourself, CPLEX puts all integer variables in the master and all continuous variables into subproblems. \nIf there are no integer variables in your model, or if there are no continuous variables in your model, CPLEX raises an error stating that it cannot automatically decompose the model to apply a Benders strategy.\nYou just need to set the Benders strategy parameter.\nCPLEX supports 5 values for this parameter, from -1 to 3:\n* OFF (default value) will ignore Benders.\n* AUTO, USER, WORKERS, FULL will enable Benders.\nRefer to the CPLEX documentation to understand the differences between the 4 values that trigger it.",
"m.parameters.benders.strategy = 3\n\nm.print_information()",
"We call cplex solve, but with the <i>clean_before_solve</i> flag because we want it to forget everything about previous solve and solution.",
"msol = m.solve(clean_before_solve=True)\nassert msol is not None, \"model can't solve\"\nm.report()",
"Inspect the CPLEX Log.\nInspect the CPLEX log: you can now see that the logs are different, and you can see the message\n<code>\nBenders cuts applied: 3\n</code>\nwhich proves CPLEX applied Benders successfully.\nYou can see that CPLEX needed only 61 cumulative iterations, while it needed 63 previously.",
"obj2 = m.objective_value",
"Step 5: Use the CPLEX annotations to guide CPLEX in your Benders decomposition",
"m.parameters.benders.strategy = 1",
"Setting Benders annotations in docplex is very simple.\nYou just need to use the <i>benders_annotation</i> property available on variables and constraints to state which worker they belong to.",
"for i in R2:\n for j in R1:\n X[i,j].benders_annotation = i%2\n\nm.print_information()\n\nmsol = m.solve(clean_before_solve=True)\nassert msol is not None, \"model can't solve\"\nm.report()",
"Inspect the CPLEX Log.\nInspect the CPLEX log: you can see that you now need only 57 cumulative iterations instead of 61 with default Benders and 63 with no Benders.\nIf you look at the <i>Best Bound</i> column, you will also see that the listed subproblems are not the same, as CPLEX applied the decomposition provided by the annotations.",
"obj3 = m.objective_value\n\nassert (obj1 == obj2) and (obj2 == obj3)",
"Summary\nYou learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model and apply a Benders decomposition.\nReferences\n\nDecision Optimization CPLEX Modeling for Python documentation\nIBM Decision Optimization\nNeed help with DOcplex or to report a bug? Please go here\nContact us at dofeedback@wwpdl.vnet.ibm.com\"\n\nCopyright © 2017-2019 IBM. Sample Materials."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
neoscreenager/JupyterNotebookWhirlwindTourOfPython
|
whirlwindDataScienceTools.ipynb
|
gpl-3.0
|
[
"numpy scipy pandas matplotlib scikit-learn\nNumPy: Numerical Python\nNumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python. The important features of NumPy are:\n* It provides an ndarray structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.\n* It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.",
"import numpy as np\nx = np.arange(1, 10)\nx\n\nx ** 2",
"Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional. For example, here we will reshape our x array into a 3x3 array:",
"M = x.reshape((3, 3))\nM\n\n",
"A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using .T:",
"M.T",
"Pandas: Labeled Column-oriented Data\nPandas is a much newer package than NumPy, and is in fact built on top of it. What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages. DataFrames in Pandas look something like this:",
"import pandas as pd\ndf = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],\n 'value': [1, 2, 3, 4, 5, 6]})\ndf\n\ndf['label']\n\ndf['value'].sum()\n\ndf.groupby('label').sum()",
"Bookmark: Matplotlib to be started: http://nbviewer.jupyter.org/github/jakevdp/WhirlwindTourOfPython/blob/master/15-Preview-of-Data-Science-Tools.ipynb"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sofroniewn/2P-RAM-pipeline
|
notebooks/sources-view.ipynb
|
mit
|
[
"Setup",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\npathToImage = '/Users/nick/Downloads/neurofinder.02.00/images_summary/mean.tif'\npathToSources = '/Users/nick/Downloads/neurofinder.02.00/regions/regions3.json'\npathToSourcesGT = '/Users/nick/Downloads/neurofinder.02.00/regions/regionsRand.json'\n\nimport extraction as ex\n\nmodel = ex.load(pathToSources)\n\nfrom skimage.io import imsave, imread\n\nim = imread(pathToImage)\n\nfrom showit import image",
"View region overlay\nCompute overlay",
"from numpy import tile, maximum\n\nbase = tile((im.astype(float)/600).clip(0,1),(3,1,1)).transpose(1,2,0)\nmasks = model.regions.mask((512,512), background='black', fill=[0.8, 0, 0], stroke='white')\nblend = maximum(base, masks)\n\nfig = plt.figure(figsize=[10,10])\nax = plt.axes()\nimage(blend, ax=ax)\nplt.xlim([0, blend.shape[1]]);\nplt.ylim([blend.shape[0], 0]);\n#for s in range(randR.regions.count):\n# plt.annotate(s=str(s), xy=[model.regions.center[s][1],model.regions.center[s][0]], color='w', size = 10);\n\nimsave(pathToImage[:-4] + '-sources.tif', (255*blend).astype('uint8'), plugin='tifffile', photometric='rgb')",
"Compare two regions\nPoint to two different regions files - plot one in red, one in blue, hits in green, within distance threshold (default is inf)\nSave results of region comparison (including neurofinder like summaries)",
"import neurofinder as nf\n\nthreshold = 5\n\nmatches = nf.match(model.regions, modelRand.regions, threshold)\n\nrecall, precision = nf.centers(model.regions, modelRand.regions, threshold)\n\ninclusion, exclusion = nf.shapes(model.regions, modelRand.regions, threshold)\n\nd = {'recall':recall, 'precision':precision, 'inclusion':inclusion, 'exclusion':exclusion, 'threshold':threshold}\n\nimport json\nfrom os.path import join\n\noutput = '/Users/nick/Downloads/neurofinder.02.00/regions'\nwith open(join(output, 'results.json'), 'w') as f:\n f.write(json.dumps(d, indent=2))\n\nfrom regional import many\n\nfrom numpy import where, isnan, nan, full\n\nmatchesRR = full(modelRand.regions.count,nan)\nfor ii in where(~isnan(matches))[0]:\n matchesRR[matches[ii]] = ii\n\nmatchedA = ex.model.ExtractionModel([model.regions[i] for i in where(~isnan(matches))[0]])\nmatchedB = ex.model.ExtractionModel([model.regions[i] for i in where(isnan(matches))[0]])\nmatchedC = ex.model.ExtractionModel([modelRand.regions[i] for i in where(~isnan(matchesRR))[0]])\nmatchedD = ex.model.ExtractionModel([modelRand.regions[i] for i in where(isnan(matchesRR))[0]])\n\nbase = tile((im.astype(float)/600).clip(0,1),(3,1,1)).transpose(1,2,0)\nmasksA = matchedA.regions.mask((512,512), background='black', fill='green', stroke='black')\nmasksB = matchedB.regions.mask((512,512), background='black', fill='red', stroke='black')\nmasksC = matchedC.regions.mask((512,512), background='black', fill='green', stroke='white')\nmasksD = matchedD.regions.mask((512,512), background='black', fill=[.7, 0, 0], stroke='white')\nX = maximum(maximum(maximum(masksA, masksB), masksC), masksD)\nblend = maximum(base, X)\n\nfig = plt.figure(figsize=[10,10])\nax = plt.axes()\nimage(blend, ax=ax)\nplt.xlim([0, blend.shape[1]]);\nplt.ylim([blend.shape[0], 0]);\n#for s in range(randR.regions.count):\n# plt.annotate(s=str(s), xy=[model.regions.center[s][1],model.regions.center[s][0]], color='w', size = 10);",
"Make random regions",
"from regional import one, many\nfrom showit import image\nfrom numpy import zeros, random, asarray, round, where, ones\nfrom scipy.ndimage.morphology import binary_closing, binary_opening, binary_fill_holes, binary_dilation\n\ndims = [512,512]\nmargin = 20\nn = 100\n\ndef topoly(c):\n tmp = zeros(dims)\n coords = asarray([c[0] + random.randn(32) * 3, c[1] + random.randn(32) * 3]).astype('int')\n tmp[coords.tolist()] = 1\n tmp = binary_dilation(tmp, ones((3, 3)))\n tmp = binary_closing(tmp, ones((7, 7)))\n return asarray(where(tmp)).T\n\nxcenters = (dims[0] - margin) * random.random_sample(n) + margin/2\nycenters = (dims[1] - margin) * random.random_sample(n) + margin/2\ncenters = zip(xcenters, ycenters)\n\nregions = many([one(topoly(c)) for c in centers])\n\nmodelRand = ex.model.ExtractionModel(regions)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tralpha/Computer-Vision-Gatech
|
.ipynb_checkpoints/Lesson 2A-L1-checkpoint.ipynb
|
apache-2.0
|
[
"This notebook covers:\nLesson 2A-L1 Images as Functions - Lesson 2C-L3 Aliasing\nImport Libraries and Dependencies",
"import cv2\nimport os\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nIMG = 'imgs/'",
"Some Helper Functions",
"def show_img(img):\n \"\"\"\n Function takes an image, and shows the image using pyplot.\n The image is shown in RGB\n \"\"\"\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n plt.imshow(img)\n",
"Load and Display an Image",
"img_path = os.path.join(IMG, 'sea.jpeg')\nimg = cv2.imread(img_path)\n# What is the size of the Image?\nprint img.shape\n# What is the class of the image?\nprint img.dtype\nshow_img(img)",
"Inspect Image Values\nMy image has three channels: Red, Green, and Blue. That's why when selecting the 50th row and 100th column, three values are returned.",
"img[50,100]\n\nplt.plot(img[1500,:,2])",
"TODO: Extract a 2D slice between rows 101 to 103 and columns 201 to 203 (inclusive)",
"extraction = img[101:103,201:203]",
"Crop Image",
"cropped = img[1700:,300:,:]\n\nshow_img(cropped)\n\n# Size of Cropped Image\ncropped.shape\n",
"Color Planes",
"img_green = img[:,:,1]\n\nplt.imshow(img_green, cmap='gray')",
"Adding Pixels",
"# Load the Dog Image\ndog_path = os.path.join(IMG, 'dog.jpeg')\ndog = cv2.imread(dog_path)\n#dog = cv2.resize\ndog = cv2.resize(dog, (img.shape[1], img.shape[0]))\nshow_img(dog)\n\nsummed = dog + img"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
david4096/bioapi-examples
|
python_notebooks/1kg_read_service.ipynb
|
apache-2.0
|
[
"GA4GH 1000 Genomes Reads Service Example\nThis example illustrates how to access alignment data made available using a GA4GH interface.\nInitialize the client\nIn this step we create a client object which will be used to communicate with the server. It is initialized using the URL.",
"from ga4gh.client import client\nc = client.HttpClient(\"http://1kgenomes.ga4gh.org\")\n\n#Obtain dataSet id REF: -> `1kg_metadata_service`\ndataset = c.search_datasets().next() \n\n#Obtain reference set id REF:-> `1kg_reference_service`\nreference_set = c.search_reference_sets().next()\nreference = c.search_references(reference_set_id=reference_set.id).next()\n",
"Search read group sets\nRead group sets are logical containers for read groups similar to BAM.\nWe can obtain read group sets via a search_read_group_sets request. Observe that this request takes as it's main parameter dataset_id, which was obtained using the example in 1kg_metadata_service using a search_datasets request.",
"counter = 0\nfor read_group_set in c.search_read_group_sets(dataset_id=dataset.id):\n counter += 1\n if counter < 4:\n print \"Read Group Set: {}\".format(read_group_set.name)\n print \"id: {}\".format(read_group_set.id)\n print \"dataset_id: {}\".format(read_group_set.dataset_id)\n print \"Aligned Read Count: {}\".format(read_group_set.stats.aligned_read_count)\n print \"Unaligned Read Count: {}\\n\".format(read_group_set.stats.unaligned_read_count)\n if read_group_set.name == \"NA19675\":\n rgSet = read_group_set\n for read_group in read_group_set.read_groups:\n print \" Read group:\"\n print \" id: {}\".format(read_group.id)\n print \" Name: {}\".format(read_group.name)\n print \" Description: {}\".format(read_group.description)\n print \" Biosample Id: {}\\n\".format(read_group.bio_sample_id)\n else: \n break",
"Note: only a small subset of elements is being illustrated, the data returned by the servers is richer, that is, it contains other informational fields which may be of interest.\nGet read group set\nSimilarly, we can obtain a specific Read Group Set by providing a specific identifier.",
"read_group_set = c.get_read_group_set(read_group_set_id=rgSet.id)\nprint \"Read Group Set: {}\".format(read_group_set.name)\nprint \"id: {}\".format(read_group_set.id)\nprint \"dataset_id: {}\".format(read_group_set.dataset_id)\nprint \"Aligned Read Count: {}\".format(read_group_set.stats.aligned_read_count)\nprint \"Unaligned Read Count: {}\\n\".format(read_group_set.stats.unaligned_read_count)\nfor read_group in read_group_set.read_groups:\n print \" Read Group: {}\".format(read_group.name)\n print \" id: {}\".format(read_group.bio_sample_id)\n print \" bio_sample_id: {}\\n\".format(read_group.bio_sample_id)",
"Note, like in the previous example. Only a selected amount of parameters are selected for illustration, the data returned by the server is far richer, this format is only to have a more aesthetic presentation.\nSearch reads\nThis request returns reads were the read group set names we obtained above. The reference ID provided corresponds to chromosome 1 as obtained from the 1kg_reference_service examples. A search_reads request searches for read alignments in a region using start and end coordinates.",
"for read_group in read_group_set.read_groups:\n print \"Alignment from {}\\n\".format(read_group.name)\n alignment = c.search_reads(read_group_ids=[read_group.id], start=0, end=1000000, reference_id=reference.id).next()\n print \" id: {}\".format(alignment.id)\n print \" fragment_name: {}\".format(alignment.fragment_name)\n print \" aligned_sequence: {}\\n\".format(alignment.aligned_sequence)",
"For documentation on the service, and more information go to.\nhttps://ga4gh-schemas.readthedocs.io/en/latest/schemas/read_service.proto.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dh7/ML-Tutorial-Notebooks
|
tf-image-generation.ipynb
|
bsd-2-clause
|
[
"Tensor Flow to create a useless images\nTo learn how to encode a simple image and a GIF\nImport needed for Tensorflow",
"import numpy as np\nimport tensorflow as tf",
"Import needed for Jupiter",
"%matplotlib notebook\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nfrom IPython.display import Image",
"A function to save a picture",
"def write_png(tensor, name):\n casted_to_uint8 = tf.cast(tensor, tf.uint8)\n converted_to_png = tf.image.encode_png(casted_to_uint8)\n f = open(name, \"wb+\")\n f.write(converted_to_png.eval())\n f.close() ",
"A function to draw the cost function in Jupyter",
"class CostTrace:\n \"\"\"A simple example class\"\"\"\n def __init__(self):\n self.cost_array = []\n def log(self, cost):\n self.cost_array.append(cost)\n def draw(self):\n plt.figure(figsize=(12,5))\n plt.plot(range(len(self.cost_array)), self.cost_array, label='cost')\n plt.legend()\n plt.yscale('log')\n plt.show()\n",
"Create some random pictures\nEncode the input (a number)\nThis example convert the number to a binary representation",
"# Init size\nwidth = 100\nheight = 100\nRGB = 3\nshape = [height,width, RGB]\n\n# Create the generated tensor as a variable\nrand_uniform = tf.random_uniform(shape, minval=0, maxval=255, dtype=tf.float32)\ngenerated = tf.Variable(rand_uniform)\n\n#define the cost function\nc_mean = tf.reduce_mean(tf.pow(generated,2)) # we want a low mean\nc_max = tf.reduce_max(generated) # we want a low max\nc_min = -tf.reduce_min(generated) # we want a high mix\n\nc_diff = 0\nfor i in range(0,height-1, 1):\n line1 = tf.gather(generated, i,)\n line2 = tf.gather(generated, i+1)\n c_diff += tf.reduce_mean(tf.pow(line1-line2-30, 2)) # to force a gradient\n\n\ncost = c_mean + c_max + c_min + c_diff\n#cost = c_mean + c_diff\nprint ('cost defined')\n\ntrain_op = tf.train.GradientDescentOptimizer(0.5).minimize(cost, var_list=[generated])\nprint ('train_op defined')\n\n# Initializing the variables\ninit = tf.initialize_all_variables()\nprint ('variables initialiazed defined') \n\n# Launch the graph\nwith tf.Session() as sess:\n sess.run(init)\n print ('init done')\n cost_trace = CostTrace()\n for epoch in range(0,10000):\n sess.run(train_op)\n if (epoch % 100 == 0):\n c = cost.eval()\n print ('epoch', epoch,'cost' ,c, c_mean.eval(), c_min.eval(), c_max.eval(), c_diff.eval())\n cost_trace.log(c)\n write_png(generated, \"generated{:06}.png\".format(epoch))\nprint ('all done') \n\ncost_trace.draw()\n\nImage(\"generated000000.png\")",
"To create a GIF",
"from PIL import Image, ImageSequence\nimport glob, sys, os\nos.chdir(\".\")\nframes = []\nfor file in glob.glob(\"gene*.png\"):\n print(file)\n im = Image.open(file)\n frames.append(im)\n\nfrom images2gif import writeGif\nwriteGif(\"generated.gif\", frames, duration=0.1)",
"Feedback wellcome @dh7net"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
saga-survey/erik
|
ipython_notebooks/saga_artpop.ipynb
|
gpl-2.0
|
[
"import functools\nimport operator\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom astropy import units as u\nfrom astropy.coordinates import Distance\nfrom astropy import table\n\nfrom tqdm.notebook import tqdm # might require a `pip install tqdm ipywidgets` if you don't have it already\n\nfrom astropy.visualization import make_lupton_rgb\n%matplotlib inline\n\nimport artpop\n\nphot_system = 'SDSSugriz' # assume SDSS photometry for all images\nimage_size = (301, 301)\npixel_scale = 0.396 * u.arcsec/u.pixel #SDSS imaging camera\n\nsdss_imager = artpop.ArtImager(\n phot_system = phot_system, # photometric system\n diameter = 2.5 * u.m, # effective aperture diameter\n read_noise = 5, # read noise in electrons\n)",
"Sagittarius",
"stellar_mass_sgr = 2.1e7 * u.Msun #McConnachie et al. 2012\nRh_sgr = 630*u.pc # extrapolatiopn from the stellar mass following Fig 7 of Tollerud et al 2010\nfeh_sgr = -1\n\ndistance = 30*u.Mpc\n\n# the mag below which to treat as \"integrated light\" - lowers computational time\nmag_limit = 6 + Distance(distance).distmod.value\nmag_limit_band = 'SDSS_r'\n\nweisz14table6_url = 'https://cfn-live-content-bucket-iop-org.s3.amazonaws.com/journals/0004-637X/789/2/147/1/apj495810t6_mrt.txt?AWSAccessKeyId=AKIAYDKQL6LTV7YY2HIK&Expires=1638892261&Signature=8TJGc2BOt7w7ntfRoG%2FD9HEw8ho%3D'\nweisz14table6 = table.Table.read(weisz14table6_url, format='ascii', cache=True)\nweisz14table6",
"Extract out the cumulative SFH info from the information in the above table:",
"fracsfh_colnames = weisz14table6.colnames[6::5]\nfracsfh = np.array([float(colnm[1:]) for colnm in fracsfh_colnames])\n\n\nsgr_idx = list(weisz14table6['ID']).index('Sagittarius')\n\nfracsfh_sgr = np.array([weisz14table6[sgr_idx][colnm] for colnm in fracsfh_colnames])",
"And convert it into bins of rates, since that's what we need for the SSP fractions",
"sfr_bins = np.concatenate([[fracsfh[0]], (fracsfh[1:] + fracsfh[:-1])/2])\nsfr_sgr = np.concatenate([[fracsfh_sgr[0]], np.diff(fracsfh_sgr)])\n\nplt.plot(sfr_bins, sfr_sgr)\nplt.xlabel('log(lookback/yr)')\nplt.ylabel('SFR')\nplt.title('Sagittarius')",
"With a SFH in hand we can now build a composite stellar population. Assume the metallicity is constant, which is fairly representative of older pops in the core a la Mucciarelli 2017. That's certainly wrong in detail, but probably won't influence the qualitative picture.\nWe also cut off the individuial star sampling at mag_limit ($M_r = 5$), because otherwise it takes forever and you might run out of memory",
"ssps = []\nfor log_age, sfr in tqdm(zip(sfr_bins, sfr_sgr), total=len(sfr_bins)):\n if sfr > 0:\n ssp = artpop.MISTSSP(log_age, feh_sgr, phot_system,\n total_mass=stellar_mass_sgr*sfr,\n distance=distance,\n mag_limit=mag_limit, mag_limit_band=mag_limit_band,\n add_remnants=False) # not sure if Sgr M* reports include this, but probably not\n ssps.append(ssp)\ncsp = functools.reduce(operator.add, ssps) # this is equivalent to csp = ssps[0] + ssps[1] + ... + ssps[-1]\ncsp",
"As a sanity-check lets compare the portions of the population that are not sampled vs the stars that actually appear in the ArtPop model:",
"csp.integrated_abs_mags\n\n{k:-2.5*np.log10(np.sum(10**(Ms/-2.5))) for k, Ms in csp.abs_mags.items()}",
"OK good, so only a few percent of the light is off the bottom of the mag limit.",
"xys = artpop.plummer_xy(csp.num_stars, distance, image_size, pixel_scale, Rh_sgr)\nsrc = artpop.Source(xys, csp.mag_table, image_size, pixel_scale=pixel_scale)\n\nhalf_extent = image_size[0]*u.pixel*pixel_scale/2\nhalf_extent\n\n# assuming median seeing/sky brightness for SDSS https://www.sdss.org/dr12/imaging/other_info/\n\npsf = artpop.moffat_psf(fwhm=1.43*u.arcsec)\nobs_g = sdss_imager.observe(src, 'SDSS_g', 54 * u.second, sky_sb=21.86, psf=psf)\nobs_r = sdss_imager.observe(src, 'SDSS_r', 54 * u.second, sky_sb=20.86, psf=psf)\nobs_i = sdss_imager.observe(src, 'SDSS_i', 54 * u.second, sky_sb=19.86, psf=psf)\n\nrgb = make_lupton_rgb(obs_i.image, obs_r.image, obs_g.image, stretch=5)\n\nplt.figure(figsize=(8, 8))\nhalf_extents = np.array(image_size)*pixel_scale.to(u.arcmin/u.pixel).value/2\nplt.imshow(rgb, extent = (-half_extents[0], half_extents[0], -half_extents[1], half_extents[1]))\nplt.xlabel('$\\Delta x$ [arcmin]')\nplt.ylabel('$\\Delta y$ [arcmin]')\nplt.title(f'Sgr dSph at {distance}')\nplt.savefig('sgr_30.png')",
"Timing tests\nFor stellar masses above $\\sim 10^7$ you might have memory/timing issues The below demonstrate some of the knobs that can be turned to fix this",
"%%time \n\n# this is a baseline that's basically the \"constant\" term\nssp = artpop.MISTSSP(10.1, -1, phot_system, num_stars=10, distance=10*u.Mpc, add_remnants=True,\n mag_limit=30, mag_limit_band='SDSS_r')\n\n%%time \n\nssp = artpop.MISTSSP(10.1, -1, phot_system, total_mass=3e6, distance=10*u.Mpc, add_remnants=False)\n\n%%time \n\nssp = artpop.MISTSSP(10.1, -1, phot_system, total_mass=3e6, distance=10*u.Mpc, add_remnants=True)\n\n%%time \n\nssp = artpop.MISTSSP(10.1, -1, phot_system, total_mass=3e6, distance=10*u.Mpc, add_remnants=True,\n mag_limit=30, mag_limit_band='SDSS_r')\n\n%%time \n\nssp = artpop.MISTSSP(10.1, -1, phot_system, total_mass=3e7, distance=10*u.Mpc, add_remnants=True,\n mag_limit=35, mag_limit_band='SDSS_r')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/defd59f40a19378fba659a70b6f1ec76/plot_sensors_decoding.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"===============\nDecoding (MVPA)\n===============\n :depth: 3\n.. include:: ../../links.inc\nDesign philosophy\nDecoding (a.k.a. MVPA) in MNE largely follows the machine\nlearning API of the scikit-learn package.\nEach estimator implements fit, transform, fit_transform, and\n(optionally) inverse_transform methods. For more details on this design,\nvisit scikit-learn_. For additional theoretical insights into the decoding\nframework in MNE, see [1]_.\nFor ease of comprehension, we will denote instantiations of the class using\nthe same name as the class but in small caps instead of camel cases.\nLet's start by loading data for a simple two-class problem:",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,\n cross_val_multiscore, LinearModel, get_coef,\n Vectorizer, CSP)\n\ndata_path = sample.data_path()\n\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\ntmin, tmax = -0.200, 0.500\nevent_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\n\n# The subsequent decoding analyses only capture evoked responses, so we can\n# low-pass the MEG data. Usually a value more like 40 Hz would be used,\n# but here low-pass at 20 so we can more heavily decimate, and allow\n# the examlpe to run faster. The 2 Hz high-pass helps improve CSP.\nraw.filter(2, 20)\nevents = mne.find_events(raw, 'STI 014')\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=('grad', 'eog'), baseline=(None, 0.), preload=True,\n reject=dict(grad=4000e-13, eog=150e-6), decim=10)\nepochs.pick_types(meg=True, exclude='bads') # remove stim and EOG\n\nX = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times\ny = epochs.events[:, 2] # target: Audio left or right",
"Transformation classes\nScaler\n^^^^^^\nThe :class:mne.decoding.Scaler will standardize the data based on channel\nscales. In the simplest modes scalings=None or scalings=dict(...),\neach data channel type (e.g., mag, grad, eeg) is treated separately and\nscaled by a constant. This is the approach used by e.g.,\n:func:mne.compute_covariance to standardize channel scales.\nIf scalings='mean' or scalings='median', each channel is scaled using\nempirical measures. Each channel is scaled independently by the mean and\nstandand deviation, or median and interquartile range, respectively, across\nall epochs and time points during :class:~mne.decoding.Scaler.fit\n(during training). The :meth:~mne.decoding.Scaler.transform method is\ncalled to transform data (training or test set) by scaling all time points\nand epochs on a channel-by-channel basis. To perform both the fit and\ntransform operations in a single call, the\n:meth:~mne.decoding.Scaler.fit_transform method may be used. To invert the\ntransform, :meth:~mne.decoding.Scaler.inverse_transform can be used. For\nscalings='median', scikit-learn_ version 0.17+ is required.\n<div class=\"alert alert-info\"><h4>Note</h4><p>Using this class is different from directly applying\n :class:`sklearn.preprocessing.StandardScaler` or\n :class:`sklearn.preprocessing.RobustScaler` offered by\n scikit-learn_. These scale each *classification feature*, e.g.\n each time point for each channel, with mean and standard\n deviation computed across epochs, whereas\n :class:`mne.decoding.Scaler` scales each *channel* using mean and\n standard deviation computed across all of its time points\n and epochs.</p></div>\n\nVectorizer\n^^^^^^^^^^\nScikit-learn API provides functionality to chain transformers and estimators\nby using :class:sklearn.pipeline.Pipeline. We can construct decoding\npipelines and perform cross-validation and grid-search. 
However scikit-learn\ntransformers and estimators generally expect 2D data\n(n_samples * n_features), whereas MNE transformers typically output data\nwith a higher dimensionality\n(e.g. n_samples * n_channels * n_frequencies * n_times). A Vectorizer\ntherefore needs to be applied between the MNE and the scikit-learn steps\nlike:",
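The Note above about `Scaler` vs. scikit-learn's `StandardScaler` can be made concrete in plain NumPy: channel-wise scaling pools statistics over all epochs and time points of each channel, whereas feature-wise scaling computes one mean/std per (channel, time) feature across epochs. A minimal sketch (array shapes are illustrative; this is not MNE's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(30, 4, 50))  # (n_epochs, n_channels, n_times)

# Channel-wise (Scaler-style): one mean/std per channel,
# computed across all epochs and time points
ch_mean = X.mean(axis=(0, 2), keepdims=True)
ch_std = X.std(axis=(0, 2), keepdims=True)
X_channel = (X - ch_mean) / ch_std

# Feature-wise (StandardScaler-style): one mean/std per
# (channel, time) feature, computed across epochs only
ft_mean = X.mean(axis=0, keepdims=True)
ft_std = X.std(axis=0, keepdims=True)
X_feature = (X - ft_mean) / ft_std
```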
"# Uses all MEG sensors and time points as separate classification\n# features, so the resulting filters used are spatio-temporal\nclf = make_pipeline(Scaler(epochs.info),\n Vectorizer(),\n LogisticRegression(solver='lbfgs'))\n\nscores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)\n\n# Mean scores across cross-validation splits\nscore = np.mean(scores, axis=0)\nprint('Spatio-temporal: %0.1f%%' % (100 * score,))",
"PSDEstimator\n^^^^^^^^^^^^\nThe :class:mne.decoding.PSDEstimator\ncomputes the power spectral density (PSD) using the multitaper\nmethod. It takes a 3D array as input, converts it into 2D and computes the\nPSD.\nFilterEstimator\n^^^^^^^^^^^^^^^\nThe :class:mne.decoding.FilterEstimator filters the 3D epochs data.\nSpatial filters\nJust like temporal filters, spatial filters provide weights to modify the\ndata along the sensor dimension. They are popular in the BCI community\nbecause of their simplicity and ability to distinguish spatially-separated\nneural activity.\nCommon spatial pattern\n^^^^^^^^^^^^^^^^^^^^^^\n:class:mne.decoding.CSP is a technique to analyze multichannel data based\non recordings from two classes [2]_ (see also\nhttps://en.wikipedia.org/wiki/Common_spatial_pattern).\nLet $X \\in R^{C\\times T}$ be a segment of data with\n$C$ channels and $T$ time points. The data at a single time point\nis denoted by $x(t)$ such that $X=[x(t), x(t+1), ..., x(t+T-1)]$.\nCommon spatial pattern (CSP) finds a decomposition that projects the signal\nin the original sensor space to CSP space using the following transformation:\n\\begin{align}x_{CSP}(t) = W^{T}x(t)\n :label: csp\\end{align}\nwhere each column of $W \\in R^{C\\times C}$ is a spatial filter and each\nrow of $x_{CSP}$ is a CSP component. The matrix $W$ is also\ncalled the de-mixing matrix in other contexts. 
Let\n$\\Sigma^{+} \\in R^{C\\times C}$ and $\\Sigma^{-} \\in R^{C\\times C}$\nbe the estimates of the covariance matrices of the two conditions.\nCSP analysis is given by the simultaneous diagonalization of the two\ncovariance matrices\n\\begin{align}W^{T}\\Sigma^{+}W = \\lambda^{+}\n :label: diagonalize_p\\end{align}\n\\begin{align}W^{T}\\Sigma^{-}W = \\lambda^{-}\n :label: diagonalize_n\\end{align}\nwhere $\\lambda^{C}$ is a diagonal matrix whose entries are the\neigenvalues of the following generalized eigenvalue problem\n\\begin{align}\\Sigma^{+}w = \\lambda \\Sigma^{-}w\n :label: eigen_problem\\end{align}\nLarge entries in the diagonal matrix corresponds to a spatial filter which\ngives high variance in one class but low variance in the other. Thus, the\nfilter facilitates discrimination between the two classes.\n.. topic:: Examples\n* `sphx_glr_auto_examples_decoding_plot_decoding_csp_eeg.py`\n* `sphx_glr_auto_examples_decoding_plot_decoding_csp_timefreq.py`\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>The winning entry of the Grasp-and-lift EEG competition in Kaggle used\n the :class:`~mne.decoding.CSP` implementation in MNE and was featured as\n a `script of the week <sotw_>`_.</p></div>\n\nWe can use CSP with these data with:",
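The generalized eigenvalue problem in Eq. (eigen_problem), $\Sigma^{+} w = \lambda \Sigma^{-} w$, can be solved directly with `scipy.linalg.eigh`. A minimal sketch with two synthetic covariance matrices (this is not MNE's CSP implementation, which adds regularization and component selection; the matrices here are made up):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def random_cov(n):
    # symmetric positive-definite matrix to stand in for a covariance
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

n_channels = 5
sigma_pos = random_cov(n_channels)  # covariance of condition "+"
sigma_neg = random_cov(n_channels)  # covariance of condition "-"

# Solve sigma_pos @ w = lam * sigma_neg @ w
eigvals, W = eigh(sigma_pos, sigma_neg)

# The eigenvector columns simultaneously diagonalize both covariances,
# as in Eqs. (diagonalize_p) and (diagonalize_n)
D_pos = W.T @ sigma_pos @ W
D_neg = W.T @ sigma_neg @ W
```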
"csp = CSP(n_components=3, norm_trace=False)\nclf = make_pipeline(csp, LogisticRegression(solver='lbfgs'))\nscores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=1)\nprint('CSP: %0.1f%%' % (100 * scores.mean(),))",
"Source power comodulation (SPoC)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSource Power Comodulation (:class:mne.decoding.SPoC) [3]_\nidentifies the composition of\northogonal spatial filters that maximally correlate with a continuous target.\nSPoC can be seen as an extension of the CSP where the target is driven by a\ncontinuous variable rather than a discrete variable. Typical applications\ninclude extraction of motor patterns using EMG power or audio patterns using\nsound envelope.\n.. topic:: Examples\n* `sphx_glr_auto_examples_decoding_plot_decoding_spoc_CMC.py`\n\nxDAWN\n^^^^^\n:class:mne.preprocessing.Xdawn is a spatial filtering method designed to\nimprove the signal to signal + noise ratio (SSNR) of the ERP responses [4]_.\nXdawn was originally\ndesigned for P300 evoked potential by enhancing the target response with\nrespect to the non-target response. The implementation in MNE-Python is a\ngeneralization to any type of ERP.\n.. topic:: Examples\n* `sphx_glr_auto_examples_preprocessing_plot_xdawn_denoising.py`\n* `sphx_glr_auto_examples_decoding_plot_decoding_xdawn_eeg.py`\n\nEffect-matched spatial filtering\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nThe result of :class:mne.decoding.EMS is a spatial filter at each time\npoint and a corresponding time course [5]_.\nIntuitively, the result gives the similarity between the filter at\neach time point and the data vector (sensors) at that time point.\n.. topic:: Examples\n* `sphx_glr_auto_examples_decoding_plot_ems_filtering.py`\n\nPatterns vs. filters\n^^^^^^^^^^^^^^^^^^^^\nWhen interpreting the components of the CSP (or spatial filters in general),\nit is often more intuitive to think about how $x(t)$ is composed of\nthe different CSP components $x_{CSP}(t)$. In other words, we can\nrewrite Equation :eq:csp as follows:\n\\begin{align}x(t) = (W^{-1})^{T}x_{CSP}(t)\n :label: patterns\\end{align}\nThe columns of the matrix $(W^{-1})^T$ are called spatial patterns.\nThis is also called the mixing matrix. 
The example\nsphx_glr_auto_examples_decoding_plot_linear_model_patterns.py\ndiscusses the difference between patterns and filters.\nThese can be plotted with:",
"# Fit CSP on full data and plot\ncsp.fit(X, y)\ncsp.plot_patterns(epochs.info)\ncsp.plot_filters(epochs.info, scalings=1e-9)",
"Decoding over time\nThis strategy consists in fitting a multivariate predictive model on each\ntime instant and evaluating its performance at the same instant on new\nepochs. The :class:mne.decoding.SlidingEstimator will take as input a\npair of features $X$ and targets $y$, where $X$ has\nmore than 2 dimensions. For decoding over time the data $X$\nis the epochs data of shape n_epochs x n_channels x n_times. As the\nlast dimension of $X$ is the time, an estimator will be fit\non every time instant.\nThis approach is analogous to SlidingEstimator-based approaches in fMRI,\nwhere here we are interested in when one can discriminate experimental\nconditions and therefore figure out when the effect of interest happens.\nWhen working with linear models as estimators, this approach boils\ndown to estimating a discriminative spatial filter for each time instant.\nTemporal decoding\n^^^^^^^^^^^^^^^^^\nWe'll use a Logistic Regression for a binary classification as machine\nlearning model.",
"# We will train the classifier on all left visual vs auditory trials on MEG\n\nclf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))\n\ntime_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)\nscores = cross_val_multiscore(time_decod, X, y, cv=5, n_jobs=1)\n\n# Mean scores across cross-validation splits\nscores = np.mean(scores, axis=0)\n\n# Plot\nfig, ax = plt.subplots()\nax.plot(epochs.times, scores, label='score')\nax.axhline(.5, color='k', linestyle='--', label='chance')\nax.set_xlabel('Times')\nax.set_ylabel('AUC') # Area Under the Curve\nax.legend()\nax.axvline(.0, color='k', linestyle='-')\nax.set_title('Sensor space decoding')",
"You can retrieve the spatial filters and spatial patterns if you explicitly\nuse a LinearModel",
"clf = make_pipeline(StandardScaler(),\n LinearModel(LogisticRegression(solver='lbfgs')))\ntime_decod = SlidingEstimator(clf, n_jobs=1, scoring='roc_auc', verbose=True)\ntime_decod.fit(X, y)\n\ncoef = get_coef(time_decod, 'patterns_', inverse_transform=True)\nevoked = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])\njoint_kwargs = dict(ts_args=dict(time_unit='s'),\n topomap_args=dict(time_unit='s'))\nevoked.plot_joint(times=np.arange(0., .500, .100), title='patterns',\n **joint_kwargs)",
"Temporal generalization\n^^^^^^^^^^^^^^^^^^^^^^^\nTemporal generalization is an extension of the decoding over time approach.\nIt consists in evaluating whether the model estimated at a particular\ntime instant accurately predicts any other time instant. It is analogous to\ntransferring a trained model to a distinct learning problem, where the\nproblems correspond to decoding the patterns of brain activity recorded at\ndistinct time instants.\nThe object to for Temporal generalization is\n:class:mne.decoding.GeneralizingEstimator. It expects as input $X$\nand $y$ (similarly to :class:~mne.decoding.SlidingEstimator) but\ngenerates predictions from each model for all time instants. The class\n:class:~mne.decoding.GeneralizingEstimator is generic and will treat the\nlast dimension as the one to be used for generalization testing. For\nconvenience, here, we refer to it as different tasks. If $X$\ncorresponds to epochs data then the last dimension is time.\nThis runs the analysis used in [6] and further detailed in [7]:",
"# define the Temporal generalization object\ntime_gen = GeneralizingEstimator(clf, n_jobs=1, scoring='roc_auc',\n verbose=True)\n\nscores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1)\n\n# Mean scores across cross-validation splits\nscores = np.mean(scores, axis=0)\n\n# Plot the diagonal (it's exactly the same as the time-by-time decoding above)\nfig, ax = plt.subplots()\nax.plot(epochs.times, np.diag(scores), label='score')\nax.axhline(.5, color='k', linestyle='--', label='chance')\nax.set_xlabel('Times')\nax.set_ylabel('AUC')\nax.legend()\nax.axvline(.0, color='k', linestyle='-')\nax.set_title('Decoding MEG sensors over time')",
"Plot the full (generalization) matrix:",
"fig, ax = plt.subplots(1, 1)\nim = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',\n extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)\nax.set_xlabel('Testing Time (s)')\nax.set_ylabel('Training Time (s)')\nax.set_title('Temporal generalization')\nax.axvline(0, color='k')\nax.axhline(0, color='k')\nplt.colorbar(im, ax=ax)",
"Source-space decoding\nSource space decoding is also possible, but because the number of features\ncan be much larger than in the sensor space, univariate feature selection\nusing ANOVA f-test (or some other metric) can be done to reduce the feature\ndimension. Interpreting decoding results might be easier in source space as\ncompared to sensor space.\n.. topic:: Examples\n* `tut_dec_st_source`\n\nExercise\n\nExplore other datasets from MNE (e.g. Face dataset from SPM to predict\n Face vs. Scrambled)\n\nReferences\n.. [1] Jean-Rémi King et al. (2018) \"Encoding and Decoding Neuronal Dynamics:\n Methodological Framework to Uncover the Algorithms of Cognition\",\n 2018. The Cognitive Neurosciences VI.\n https://hal.archives-ouvertes.fr/hal-01848442/\n.. [2] Zoltan J. Koles. The quantitative extraction and topographic mapping\n of the abnormal components in the clinical EEG. Electroencephalography\n and Clinical Neurophysiology, 79(6):440--447, December 1991.\n.. [3] Dahne, S., Meinecke, F. C., Haufe, S., Hohne, J., Tangermann, M.,\n Muller, K. R., & Nikulin, V. V. (2014). SPoC: a novel framework for\n relating the amplitude of neuronal oscillations to behaviorally\n relevant parameters. NeuroImage, 86, 111-122.\n.. [4] Rivet, B., Souloumiac, A., Attina, V., & Gibert, G. (2009). xDAWN\n algorithm to enhance evoked potentials: application to\n brain-computer interface. Biomedical Engineering, IEEE Transactions\n on, 56(8), 2035-2043.\n.. [5] Aaron Schurger, Sebastien Marti, and Stanislas Dehaene, \"Reducing\n multi-sensor data to a single time course that reveals experimental\n effects\", BMC Neuroscience 2013, 14:122\n.. [6] Jean-Remi King, Alexandre Gramfort, Aaron Schurger, Lionel Naccache\n and Stanislas Dehaene, \"Two distinct dynamic modes subtend the\n detection of unexpected sounds\", PLOS ONE, 2013,\n https://www.ncbi.nlm.nih.gov/pubmed/24475052\n.. 
[7] King & Dehaene (2014) 'Characterizing the dynamics of mental\n representations: the temporal generalization method', Trends In\n Cognitive Sciences, 18(4), 203-210.\n https://www.ncbi.nlm.nih.gov/pubmed/24593982"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pfschus/fission_bicorrelation
|
methods/nn_sum_and_br_subtraction.ipynb
|
mit
|
[
"%%javascript\n$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')",
"<div id=\"toc\"></div>\n\nN-N sum and background subtraction\nP. Schuster\nUniversity of Michigan\nJanuary 2018 \nRevisiting this in July 2018 for energy-based bhm.\nGoal: Calculate the number of events in a specific time or energy range in the $nn$ cloud on the bicorrelation distribution. \nStart by loading some data. I will use the data from the combined data sets from Cf072115-Cf072215b.",
"import os\nimport sys\nimport matplotlib.pyplot as plt\nimport matplotlib.colors\nimport numpy as np\nimport os\nimport scipy.io as sio\nimport sys\nimport pandas as pd\nfrom tqdm import *\n\n# Plot entire array\nnp.set_printoptions(threshold=np.nan)\n\nimport seaborn as sns\nsns.set_style(style='white')\n\nsys.path.append('../scripts/')\nimport bicorr as bicorr\nimport bicorr_plot as bicorr_plot\nimport bicorr_e as bicorr_e\nimport bicorr_sums as bicorr_sums\n\n%load_ext autoreload\n%autoreload 2\n\ndet_df = bicorr.load_det_df()\n\ndict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)",
"NEW METHOD: energy-based\nLoad bhm_e",
"os.listdir('../analysis/Cf072115_to_Cf072215b/datap')\n\nbhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap')\n\nbhm_e.shape\n\nbicorr_e.build_bhp_e?",
"Make bhp_e",
"bhp_e = bicorr_e.build_bhp_e(bhm_e, e_bin_edges)[0]\n\nbicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, show_flag = True)",
"Calculate sum",
"bicorr_sums.calc_nn_sum_e(bhp_e, e_bin_edges)",
"I'm going to add this to det_df.",
"det_df.head()",
"Produce bhp for all detector indices.",
"bhp_e = np.zeros((len(det_df),len(e_bin_edges)-1,len(e_bin_edges)-1))\nbhp_e.shape\n\nfor index in det_df.index.values: # index is same as in `bhm`\n \n bhp_e[index,:,:] = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,pair_is=[index])[0]\n\ndet_df['Ce'] = np.nan\ndet_df['Ce_err'] = np.nan\n\ne_min = 0.62\ne_max = 12\n\nfor index in det_df.index.values:\n det_df.loc[index,'Ce'], det_df.loc[index,'Ce_err'] = bicorr_sums.calc_nn_sum_e(bhp_e[index,:,:],e_bin_edges, e_min, e_max)\ndet_df.head()\n\nplt.errorbar(det_df['angle'], det_df['Ce'], yerr=det_df['Ce_err'], fmt='.')\nplt.xlabel('Angle')\nplt.ylabel('Doubles counts')\nplt.title('Doubles counts vs. angle')\nplt.show()",
"OLD METHOD: $nn$ background subtraction\nBuild two bicorr_hist_master versions\n\nOne in the positive time range from 0 to 200 ns, bhm_pos\nOne in the negative time range form -200 to 0 ns, bhm_neg\n\nThis is built into the default flux data processing script in bicorr.py as follows:\nprint('********* Build bhm: Positive time range **********')\nbuild_bhm(folder_start,folder_end,dt_bin_edges = np.arange(0.0,200.25,0.25))\nprint('********* Build bhm: Negative time range **********')\nbuild_bhm(folder_start,folder_end,dt_bin_edges = np.arange(-200.0,0.25,0.25),sparse_filename = 'sparse_bhm_neg')\n\nHere I will load those arrays and perform the background subtraction, then generalize the technique in a new function. I will coarsen the time binning from 0.25 ns spacing to 2.5 ns spacing for both arrays.",
"sparse_bhm_pos, dt_bin_edges_pos = bicorr.load_sparse_bhm(filepath = r'../analysis/Cf072115_to_Cf072215b/datap/')[0:2]\n\nbhm_pos = bicorr.revive_sparse_bhm(sparse_bhm_pos, det_df, dt_bin_edges_pos)\n\nprint('ORIGINAL ARRAYS')\nprint(bhm_pos.shape)\nprint(dt_bin_edges_pos.shape)\nprint(dt_bin_edges_pos[-10:])\n\nbhm_pos, dt_bin_edges_pos = bicorr.coarsen_bhm(bhm_pos, dt_bin_edges_pos, 4)\n\nprint('COARSE ARRAYS')\nprint(bhm_pos.shape)\nprint(dt_bin_edges_pos.shape)\nprint(dt_bin_edges_pos[-10:])\n\nsparse_bhm_neg, dt_bin_edges_neg = bicorr.load_sparse_bhm(filepath = r'../analysis/Cf072115_to_Cf072215b/datap/',filename='sparse_bhm_neg.npz')[0:2]\n\nbhm_neg = bicorr.revive_sparse_bhm(sparse_bhm_neg, det_df, dt_bin_edges_neg)\n\nbhm_neg, dt_bin_edges_neg = bicorr.coarsen_bhm(bhm_neg, dt_bin_edges_neg, 4)",
"Store bhp for $nn$ interactions",
"bhp_nn_pos = bicorr.build_bhp(bhm_pos,dt_bin_edges_pos,type_is=[0])[0]\nbhp_nn_neg = bicorr.build_bhp(bhm_neg,dt_bin_edges_neg,type_is=[0])[0]",
"Take a quick look at plots:",
"bicorr_plot.bhp_plot(bhp_nn_pos,dt_bin_edges_pos,title='Positive time range',show_flag=True)\n\nbicorr_plot.bhp_plot(bhp_nn_neg,dt_bin_edges_neg,title='Negative time range',show_flag=True)",
"Translate negative data to positive time range\nI need to flip the data around. What is the most pythonic way of accomplishing this?\nI'm going to start with a test array to make sure I have the correct method.",
"x = np.arange(-10,1,1)\ny = x.T\n\ntest_array_neg = np.zeros([10,10])\ntest_array_neg[0,0] = 3\ntest_array_neg[2,9] = 6\ntest_array_neg[9,5] = 10\n\nx = dt_test_neg\ny = dt_test_neg.T\nplt.pcolormesh(x,y,test_array_neg.T,cmap='viridis')\nplt.xlabel('Dimension 0')\nplt.ylabel('Dimension 1')\nplt.colorbar()\nplt.show()",
"Flip the array. The result should look as follows:\n* The x and y axes should go from 0 to +10\n* Blue box in upper right\n* Yellow box on left edge\n* Green box on lower edge near right",
"dt_test_pos = np.arange(0,11,1)\ntest_array_pos = test_array_neg[::-1,::-1]\n\nplt.pcolormesh(dt_test_pos,dt_test_pos,test_array_pos.T,cmap=\"viridis\")\nplt.xlabel('Dimension 0')\nplt.ylabel('Dimension 1')\nplt.colorbar()\nplt.show()",
"Try it for a smaller size.",
"x = np.arange(-3,1,1)\ny = x.T\n\ntest_array_neg = np.zeros([3,3])\ntest_array_neg[1,1] = 6\ntest_array_neg[2,2] = 1\ntest_array_neg[2,0] = 12\ntest_array_pos = test_array_neg[::-1,::-1]\n\nplt.pcolormesh(x,y,test_array_neg.T,cmap='viridis')\nplt.xlabel('Dimension 0')\nplt.ylabel('Dimension 1')\nplt.colorbar()\nplt.show()\n\nplt.pcolormesh(x,y,test_array_pos.T,cmap='viridis')\nplt.xlabel('Dimension 0')\nplt.ylabel('Dimension 1')\nplt.colorbar()\nplt.show()\n\ntens = 10*np.ones([3,3])\nprint(tens)\n\nplt.pcolormesh(x,y,tens.T,cmap='viridis')\nplt.xlabel('Dimension 0')\nplt.ylabel('Dimension 1')\nplt.colorbar()\nplt.show()\n\nplt.pcolormesh(x,y,tens.T-test_array_pos.T,cmap='viridis')\nplt.xlabel('Dimension 0')\nplt.ylabel('Dimension 1')\nplt.colorbar()\nplt.show()",
"This subtraction looks good. Now back to the data. \nDo this now for the negative time range.",
"bicorr_plot.bhp_plot(bhp_nn_neg[::-1,::-1],dt_bin_edges_pos,title='Negative data flipped to positive time range',show_flag=True)",
"Look more closely at a corner, positive and negative.",
"bicorr_plot.bhp_plot(bhp_nn_neg[:10,:10],dt_bin_edges_neg[:11],title='Negative time range',show_flag=True)\n\nbicorr_plot.bhp_plot(bhp_nn_neg[::-1,::-1][-10:,-10:],dt_bin_edges_pos[-11:],title='Negative data flipped to positive time range',show_flag=True)",
"It looks like the flipping is happening correctly. So continue on. \nSubtract negative flipped from positive\nAt this point, the data is stored as unsigned floats, but I will encounter instances where the background-subtracted counts are negative, so I need to convert the datatype to floats.",
"bhp_nn_diff = np.subtract(bhp_nn_pos.astype(np.int32),bhp_nn_neg[::-1,::-1].astype(np.int32))\nbhp_nn_diff.shape\n\nbhp_nn_diff.dtype\n\nbicorr_plot.bhp_plot(bhp_nn_diff,dt_bin_edges_pos,title='Background-subtracted bhp',show_flag=True)",
"This looks great. I have to keep the positive and negative matrices for error propagation when I calculate the $nn$ count rate sum.\nCalculate $nn$ sum\nI want to provide energy or timing windows as input parameters and calculate the number of events in that range. I will have to provide a normalization factor in case I have already performed the normalization. \nQuestion... should I go from the bhm or bhp? bhm is in terms of the number of counts, while bhp may already be normalized.\nI think I should work from bhp and provide norm_factor as an optional input parameter. It is returned from build_bhp so I will have to store it. \nConvert energy to time\nFollowing Matthew's code...",
"emin = 0.62\nemax = 12\n\ntmin = bicorr.convert_energy_to_time(emax)\ntmax = bicorr.convert_energy_to_time(emin)\n\nprint(tmin,tmax)",
"Find the corresponding time bins",
"dt_bin_edges_pos\n\ni_min = np.min(np.argwhere(tmin<dt_bin_edges_pos))\ni_max = np.min(np.argwhere(tmax<dt_bin_edges_pos))\n\nprint(i_min,i_max)\n\ndt_bin_edges_pos[i_min:i_max]\n\nnp.sum(bhp_nn_pos[i_min:i_max,i_min:i_max])",
"Functionalized to bicorr.calc_nn_sum.",
"bicorr.calc_nn_sum(bhp_nn_pos, dt_bin_edges_pos)",
"I am going to add some logic into the script to return the real emin and emax values that correspond to the bin edges:\n# What are the energy bin limits that correspond to the bins?\nemin_real = convert_time_to_energy[i_min]\nemax_real = convert_time_to_energy[i_max]\nenergies_real = [emin_real,emax_real]\n\nCalculate absolute sums for pos, neg, and diff regions\nLook at the algebra for this. \nI have measured the following:\n\nNumber of positive counts, $C_P$\nNumber of negative counts, $C_N$\nNumber of br-subtracted counts $C_D = C_P-C_N$ (diff)\nnorm_factor",
"Cp = bicorr.calc_nn_sum(bhp_nn_pos, dt_bin_edges_pos)\nCn = bicorr.calc_nn_sum(bhp_nn_neg[::-1,::-1], dt_bin_edges_pos)\nCd = Cp-Cn\nprint('BR subtraction removes ', Cn/Cp*100, ' % of counts')",
"The background subtraction makes about a 1% correction.\nThe errors in the counts follow counting statistics:\n\n$\\sigma_{C_P} = \\sqrt{C_P}$\n$\\sigma_{C_N} = \\sqrt{C_N}$\n$\\sigma_{C_D} = \\sqrt{C_D} = \\sqrt{\\sigma_{C_P}^2 + \\sigma_{C_N}^2} = \\sqrt{C_P + C_N}$",
"err_Cd = np.sqrt(Cp + Cn)\nprint('BR subtracted counts = ',Cd,' +/- ',err_Cd)\n\nprint('Relative error = ', err_Cd/Cd)",
"Calculate normalized sums for pos, neg, and diff regions\nThe bicorr_hist_master is not normalized, but when I calculate bhp with a norm_factor then it is normalized, so I am actually working with the following:\n\nNormalized number of positive counts, $N_P = C_P/F$\nNormalized number of negative counts, $N_N = C_N/F$\nNormalized number of br-subtracted counts, $N_D = C_D/F$\nwhere $F$ is the normalization factor\n\nCreate new bhp arrays that are normalized.",
"num_fissions = 2194651200.00\n\nbhp_nn_pos, norm_factor = bicorr.build_bhp(bhm_pos,dt_bin_edges_pos,type_is=[0], num_fissions = num_fissions)\nbhp_nn_neg = bicorr.build_bhp(bhm_neg,dt_bin_edges_neg,type_is=[0], num_fissions = num_fissions)[0]",
"Assuming there is no uncertainty in $F$, the errors in the relative counts are:\n\n$\\sigma_{N_P} = \\sigma_{C_P}/F = \\sqrt{C_P}/F$\n$\\sigma_{N_N} = \\sigma_{C_N}/F = \\sqrt{C_N}/F$\n$\\sigma_{N_D} = \\sigma_{C_D}/F = \\sqrt{C_P+C_N}/F$",
"Np = bicorr.calc_nn_sum(bhp_nn_pos,dt_bin_edges_pos)\nNn = bicorr.calc_nn_sum(bhp_nn_neg[::-1,::-1],dt_bin_edges_pos)\nNd = Np-Nn\n\nprint('BR subtraction removes ', Nn/Np*100, ' % of counts')\n\nerr_Np = np.sqrt(Np/norm_factor)\nerr_Nn = np.sqrt(Nn/norm_factor)\nerr_Nd = np.sqrt((Np+Nn)/norm_factor)\n\nprint('The BR_subtracted counts is ',Nd,' +/- ',err_Nd)\n\nprint('Relative error = ', err_Nd/Nd)",
"Functionalize calc_nn_sum_br\nFunctionalize a method for calculating the number of counts after background subtraction.\nRun it to confirm function matches previous calculations. Run commands above to produce the correct bhp.\nAbsolute counts:",
"bhp_nn_pos = bicorr.build_bhp(bhm_pos,dt_bin_edges_pos,type_is=[0])[0]\nbhp_nn_neg = bicorr.build_bhp(bhm_neg,dt_bin_edges_neg,type_is=[0])[0]\n\nCp, Cn, Cd, Cd_err = bicorr.calc_nn_sum_br(bhp_nn_pos,bhp_nn_neg,dt_bin_edges_pos)\nprint(Cp, Cn, Cd, Cd_err)",
"Relative counts:",
"bhp_nn_pos = bicorr.build_bhp(bhm_pos,dt_bin_edges_pos,type_is=[0], num_fissions = num_fissions)[0]\nbhp_nn_neg = bicorr.build_bhp(bhm_neg,dt_bin_edges_neg,type_is=[0], num_fissions = num_fissions)[0]\n\nNp, Nn, Nd, err_Nd = bicorr.calc_nn_sum_br(bhp_nn_pos,bhp_nn_neg,dt_bin_edges_pos,norm_factor=norm_factor)\nprint(Np, Nn, Nd, err_Nd)",
"Calculate for all detector pairs\nIn the previous examples, I produced bhp for all pairs together. Now I will loop through each pair separately and calculate the sum. Work with absolute counts.\nFirst create positive and negative bhp for all pairs.",
"bhp_nn_pos = np.zeros((len(det_df),len(dt_bin_edges_pos)-1,len(dt_bin_edges_pos)-1))\nbhp_nn_neg = np.zeros((len(det_df),len(dt_bin_edges_neg)-1,len(dt_bin_edges_neg)-1))\n\nfor index in det_df.index.values: # index is same as in `bhm`\n bhp_nn_pos[index,:,:] = bicorr.build_bhp(bhm_pos,dt_bin_edges_pos,type_is=[0],pair_is=[index])[0]\n bhp_nn_neg[index,:,:] = bicorr.build_bhp(bhm_neg,dt_bin_edges_neg,type_is=[0],pair_is=[index])[0]",
"Save these to file for future use. \nI will call the files bhp_nn_all_pairs_1ns.npz. I have already saved something similar: bhp_nn_by_pair_1ns.npz, but that one has already cut out the fission chamber neighbors. I'll leave it in for this one. \nI want to save:\n* bhp_nn_pos\n* bhp_nn_neg\n* dt_bin_edges_pos\n* num_fissions\n* note: 'Calculated in methods/nn_sum_and_br_subtraction, all pairs included'",
"os.listdir('../analysis/Cf072115_to_Cf072215b/datap/')\n\nnp.savez('../analysis/Cf072115_to_Cf072215b/datap/bhp_nn_all_pairs_1ns',\n bhp_nn_pos = bhp_nn_pos,\n bhp_nn_neg = bhp_nn_neg,\n dt_bin_edges_pos = dt_bin_edges_pos)\n\nwhos",
"Save these to file for future use. \nThen, calculate sums for each pair by looping back through bhp\nThen store sums to a dataframe\nSave dataframe to file with emin, max?\nMake it easy to calculate sums and fill dataframe for new emin, emax values so that I can calculate anisotropy vs. Ethresh."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bashtage/statsmodels
|
examples/notebooks/interactions_anova.ipynb
|
bsd-3-clause
|
[
"Interactions and ANOVA\nNote: This script is based heavily on Jonathan Taylor's class notes https://web.stanford.edu/class/stats191/notebooks/Interactions.html\nDownload and format data:",
"%matplotlib inline\n\nfrom urllib.request import urlopen\nimport numpy as np\n\nnp.set_printoptions(precision=4, suppress=True)\n\nimport pandas as pd\n\npd.set_option(\"display.width\", 100)\nimport matplotlib.pyplot as plt\nfrom statsmodels.formula.api import ols\nfrom statsmodels.graphics.api import interaction_plot, abline_plot\nfrom statsmodels.stats.anova import anova_lm\n\ntry:\n salary_table = pd.read_csv(\"salary.table\")\nexcept: # recent pandas can read URL without urlopen\n url = \"http://stats191.stanford.edu/data/salary.table\"\n fh = urlopen(url)\n salary_table = pd.read_table(fh)\n salary_table.to_csv(\"salary.table\")\n\nE = salary_table.E\nM = salary_table.M\nX = salary_table.X\nS = salary_table.S",
"Take a look at the data:",
"plt.figure(figsize=(6, 6))\nsymbols = [\"D\", \"^\"]\ncolors = [\"r\", \"g\", \"blue\"]\nfactor_groups = salary_table.groupby([\"E\", \"M\"])\nfor values, group in factor_groups:\n i, j = values\n plt.scatter(group[\"X\"], group[\"S\"], marker=symbols[j], color=colors[i - 1], s=144)\nplt.xlabel(\"Experience\")\nplt.ylabel(\"Salary\")",
"Fit a linear model:",
"formula = \"S ~ C(E) + C(M) + X\"\nlm = ols(formula, salary_table).fit()\nprint(lm.summary())",
"Have a look at the created design matrix:",
"lm.model.exog[:5]",
"Or since we initially passed in a DataFrame, we have a DataFrame available in",
"lm.model.data.orig_exog[:5]",
"We keep a reference to the original untouched data in",
"lm.model.data.frame[:5]",
"Influence statistics",
"infl = lm.get_influence()\nprint(infl.summary_table())",
"or get a dataframe",
"df_infl = infl.summary_frame()\n\ndf_infl[:5]",
"Now plot the residuals within the groups separately:",
"resid = lm.resid\nplt.figure(figsize=(6, 6))\nfor values, group in factor_groups:\n i, j = values\n group_num = i * 2 + j - 1 # for plotting purposes\n x = [group_num] * len(group)\n plt.scatter(\n x,\n resid[group.index],\n marker=symbols[j],\n color=colors[i - 1],\n s=144,\n edgecolors=\"black\",\n )\nplt.xlabel(\"Group\")\nplt.ylabel(\"Residuals\")",
"Now we will test some interactions using anova or f_test",
"interX_lm = ols(\"S ~ C(E) * X + C(M)\", salary_table).fit()\nprint(interX_lm.summary())",
"Do an ANOVA check",
"from statsmodels.stats.api import anova_lm\n\ntable1 = anova_lm(lm, interX_lm)\nprint(table1)\n\ninterM_lm = ols(\"S ~ X + C(E)*C(M)\", data=salary_table).fit()\nprint(interM_lm.summary())\n\ntable2 = anova_lm(lm, interM_lm)\nprint(table2)",
"The design matrix as a DataFrame",
"interM_lm.model.data.orig_exog[:5]",
"The design matrix as an ndarray",
"interM_lm.model.exog\ninterM_lm.model.exog_names\n\ninfl = interM_lm.get_influence()\nresid = infl.resid_studentized_internal\nplt.figure(figsize=(6, 6))\nfor values, group in factor_groups:\n i, j = values\n idx = group.index\n plt.scatter(\n X[idx],\n resid[idx],\n marker=symbols[j],\n color=colors[i - 1],\n s=144,\n edgecolors=\"black\",\n )\nplt.xlabel(\"X\")\nplt.ylabel(\"standardized resids\")",
"Looks like one observation is an outlier.",
"drop_idx = abs(resid).argmax()\nprint(drop_idx) # zero-based index\nidx = salary_table.index.drop(drop_idx)\n\nlm32 = ols(\"S ~ C(E) + X + C(M)\", data=salary_table, subset=idx).fit()\n\nprint(lm32.summary())\nprint(\"\\n\")\n\ninterX_lm32 = ols(\"S ~ C(E) * X + C(M)\", data=salary_table, subset=idx).fit()\n\nprint(interX_lm32.summary())\nprint(\"\\n\")\n\n\ntable3 = anova_lm(lm32, interX_lm32)\nprint(table3)\nprint(\"\\n\")\n\n\ninterM_lm32 = ols(\"S ~ X + C(E) * C(M)\", data=salary_table, subset=idx).fit()\n\ntable4 = anova_lm(lm32, interM_lm32)\nprint(table4)\nprint(\"\\n\")",
"Replot the residuals",
"resid = interM_lm32.get_influence().summary_frame()[\"standard_resid\"]\n\nplt.figure(figsize=(6, 6))\nresid = resid.reindex(X.index)\nfor values, group in factor_groups:\n i, j = values\n idx = group.index\n plt.scatter(\n X.loc[idx],\n resid.loc[idx],\n marker=symbols[j],\n color=colors[i - 1],\n s=144,\n edgecolors=\"black\",\n )\nplt.xlabel(\"X[~[32]]\")\nplt.ylabel(\"standardized resids\")",
"Plot the fitted values",
"lm_final = ols(\"S ~ X + C(E)*C(M)\", data=salary_table.drop([drop_idx])).fit()\nmf = lm_final.model.data.orig_exog\nlstyle = [\"-\", \"--\"]\n\nplt.figure(figsize=(6, 6))\nfor values, group in factor_groups:\n i, j = values\n idx = group.index\n plt.scatter(\n X[idx],\n S[idx],\n marker=symbols[j],\n color=colors[i - 1],\n s=144,\n edgecolors=\"black\",\n )\n # drop NA because there is no idx 32 in the final model\n fv = lm_final.fittedvalues.reindex(idx).dropna()\n x = mf.X.reindex(idx).dropna()\n plt.plot(x, fv, ls=lstyle[j], color=colors[i - 1])\nplt.xlabel(\"Experience\")\nplt.ylabel(\"Salary\")",
"From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.",
"U = S - X * interX_lm32.params[\"X\"]\n\nplt.figure(figsize=(6, 6))\ninteraction_plot(\n E, M, U, colors=[\"red\", \"blue\"], markers=[\"^\", \"D\"], markersize=10, ax=plt.gca()\n)",
"Minority Employment Data",
"try:\n jobtest_table = pd.read_table(\"jobtest.table\")\nexcept: # do not have data already\n url = \"http://stats191.stanford.edu/data/jobtest.table\"\n jobtest_table = pd.read_table(url)\n\nfactor_group = jobtest_table.groupby([\"MINORITY\"])\n\nfig, ax = plt.subplots(figsize=(6, 6))\ncolors = [\"purple\", \"green\"]\nmarkers = [\"o\", \"v\"]\nfor factor, group in factor_group:\n ax.scatter(\n group[\"TEST\"],\n group[\"JPERF\"],\n color=colors[factor],\n marker=markers[factor],\n s=12 ** 2,\n )\nax.set_xlabel(\"TEST\")\nax.set_ylabel(\"JPERF\")\n\nmin_lm = ols(\"JPERF ~ TEST\", data=jobtest_table).fit()\nprint(min_lm.summary())\n\nfig, ax = plt.subplots(figsize=(6, 6))\nfor factor, group in factor_group:\n ax.scatter(\n group[\"TEST\"],\n group[\"JPERF\"],\n color=colors[factor],\n marker=markers[factor],\n s=12 ** 2,\n )\n\nax.set_xlabel(\"TEST\")\nax.set_ylabel(\"JPERF\")\nfig = abline_plot(model_results=min_lm, ax=ax)\n\nmin_lm2 = ols(\"JPERF ~ TEST + TEST:MINORITY\", data=jobtest_table).fit()\n\nprint(min_lm2.summary())\n\nfig, ax = plt.subplots(figsize=(6, 6))\nfor factor, group in factor_group:\n ax.scatter(\n group[\"TEST\"],\n group[\"JPERF\"],\n color=colors[factor],\n marker=markers[factor],\n s=12 ** 2,\n )\n\nfig = abline_plot(\n intercept=min_lm2.params[\"Intercept\"],\n slope=min_lm2.params[\"TEST\"],\n ax=ax,\n color=\"purple\",\n)\nfig = abline_plot(\n intercept=min_lm2.params[\"Intercept\"],\n slope=min_lm2.params[\"TEST\"] + min_lm2.params[\"TEST:MINORITY\"],\n ax=ax,\n color=\"green\",\n)\n\nmin_lm3 = ols(\"JPERF ~ TEST + MINORITY\", data=jobtest_table).fit()\nprint(min_lm3.summary())\n\nfig, ax = plt.subplots(figsize=(6, 6))\nfor factor, group in factor_group:\n ax.scatter(\n group[\"TEST\"],\n group[\"JPERF\"],\n color=colors[factor],\n marker=markers[factor],\n s=12 ** 2,\n )\n\nfig = abline_plot(\n intercept=min_lm3.params[\"Intercept\"],\n slope=min_lm3.params[\"TEST\"],\n ax=ax,\n color=\"purple\",\n)\nfig = abline_plot(\n 
intercept=min_lm3.params[\"Intercept\"] + min_lm3.params[\"MINORITY\"],\n slope=min_lm3.params[\"TEST\"],\n ax=ax,\n color=\"green\",\n)\n\nmin_lm4 = ols(\"JPERF ~ TEST * MINORITY\", data=jobtest_table).fit()\nprint(min_lm4.summary())\n\nfig, ax = plt.subplots(figsize=(8, 6))\nfor factor, group in factor_group:\n ax.scatter(\n group[\"TEST\"],\n group[\"JPERF\"],\n color=colors[factor],\n marker=markers[factor],\n s=12 ** 2,\n )\n\nfig = abline_plot(\n intercept=min_lm4.params[\"Intercept\"],\n slope=min_lm4.params[\"TEST\"],\n ax=ax,\n color=\"purple\",\n)\nfig = abline_plot(\n intercept=min_lm4.params[\"Intercept\"] + min_lm4.params[\"MINORITY\"],\n slope=min_lm4.params[\"TEST\"] + min_lm4.params[\"TEST:MINORITY\"],\n ax=ax,\n color=\"green\",\n)\n\n# is there any effect of MINORITY on slope or intercept?\ntable5 = anova_lm(min_lm, min_lm4)\nprint(table5)\n\n# is there any effect of MINORITY on intercept\ntable6 = anova_lm(min_lm, min_lm3)\nprint(table6)\n\n# is there any effect of MINORITY on slope\ntable7 = anova_lm(min_lm, min_lm2)\nprint(table7)\n\n# is it just the slope or both?\ntable8 = anova_lm(min_lm2, min_lm4)\nprint(table8)",
"One-way ANOVA",
"try:\n rehab_table = pd.read_csv(\"rehab.table\")\nexcept:\n url = \"http://stats191.stanford.edu/data/rehab.csv\"\n rehab_table = pd.read_table(url, delimiter=\",\")\n rehab_table.to_csv(\"rehab.table\")\n\nfig, ax = plt.subplots(figsize=(8, 6))\nfig = rehab_table.boxplot(\"Time\", \"Fitness\", ax=ax, grid=False)\n\nrehab_lm = ols(\"Time ~ C(Fitness)\", data=rehab_table).fit()\ntable9 = anova_lm(rehab_lm)\nprint(table9)\n\nprint(rehab_lm.model.data.orig_exog)\n\nprint(rehab_lm.summary())",
"Two-way ANOVA",
"try:\n kidney_table = pd.read_table(\"./kidney.table\")\nexcept:\n url = \"http://stats191.stanford.edu/data/kidney.table\"\n kidney_table = pd.read_csv(url, delim_whitespace=True)",
"Explore the dataset",
"kidney_table.head(10)",
"Balanced panel",
"kt = kidney_table\nplt.figure(figsize=(8, 6))\nfig = interaction_plot(\n kt[\"Weight\"],\n kt[\"Duration\"],\n np.log(kt[\"Days\"] + 1),\n colors=[\"red\", \"blue\"],\n markers=[\"D\", \"^\"],\n ms=10,\n ax=plt.gca(),\n)",
"You have things available in the calling namespace available in the formula evaluation namespace",
"kidney_lm = ols(\"np.log(Days+1) ~ C(Duration) * C(Weight)\", data=kt).fit()\n\ntable10 = anova_lm(kidney_lm)\n\nprint(\n anova_lm(ols(\"np.log(Days+1) ~ C(Duration) + C(Weight)\", data=kt).fit(), kidney_lm)\n)\nprint(\n anova_lm(\n ols(\"np.log(Days+1) ~ C(Duration)\", data=kt).fit(),\n ols(\"np.log(Days+1) ~ C(Duration) + C(Weight, Sum)\", data=kt).fit(),\n )\n)\nprint(\n anova_lm(\n ols(\"np.log(Days+1) ~ C(Weight)\", data=kt).fit(),\n ols(\"np.log(Days+1) ~ C(Duration) + C(Weight, Sum)\", data=kt).fit(),\n )\n)",
"Sum of squares\nIllustrates the use of different types of sums of squares (I,II,II)\n and how the Sum contrast can be used to produce the same output between\n the 3.\nTypes I and II are equivalent under a balanced design.\nDo not use Type III with non-orthogonal contrast - ie., Treatment",
"sum_lm = ols(\"np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)\", data=kt).fit()\n\nprint(anova_lm(sum_lm))\nprint(anova_lm(sum_lm, typ=2))\nprint(anova_lm(sum_lm, typ=3))\n\nnosum_lm = ols(\n \"np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)\", data=kt\n).fit()\nprint(anova_lm(nosum_lm))\nprint(anova_lm(nosum_lm, typ=2))\nprint(anova_lm(nosum_lm, typ=3))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google-research/google-research
|
aptamers_mlpd/figures/Figure_4_Machine_learning_guided_aptamer_discovery_(submission).ipynb
|
apache-2.0
|
[
"Copyright 2021 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nOverview\nThis notebook loads in all the simulated truncations (with model scores) as well as the experimentally tested subset of truncations. It then creates a plot showing the distribution of model scores for each tested truncation.",
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"Experimental PD affinity estimate\nThese affinity estimates are fairly small so we directly embed them below.",
"# Tuple of form: (Seq, Stringency, Kd)\n\nvalidated_core_seqs = [ ('TTTGGTGGATAGTAA', 1, '< 512 nM'),\n ('AGAGGATTTGGTGGATAGT', 0, '> 512nM'),\n ('AGAGGATTTGGTGGATAGTAAAT', 3, '< 32 nM'),\n ('GAGGATTTGGTGGATAGTAAATC', 4, '< 8 nM'),\n ('GAGGATTTGGTGGATAGTAAATCTTTG', 4, '< 8 nM'),\n ('AAGAGGATTTGGTGGATAGTAAATCTT', 4, '< 8 nM'),\n ('CAAGAGGATTTGGTGGATAGTAAATCTTTGC', 4, '< 8 nM'),\n ('GATAGTAAATCTTTGCCTATCCA', 0, '> 512nM'),\n ('GTGGATAGTAAATCTTTGCCTATCCAG', 0, '> 512nM'),\n ('TTTGGTGGATAGTAAATCTTTGC', 0, '> 512nM'),\n ('CAAGAGGATTTGGTGGATAGTAAATCTTTGCCTAT', 3, '< 32 nM'),\n ('CAAGAGGATTTGGTGGATAGTAAATCTTTGCCTATCCAG', 3, '< 32 nM'),\n ('GTTTTTGGTGGATAG', 0, '> 512nM'),\n ('GTTTTTGGTGGATAGCAAA', 3, '< 32 nM'),\n ('ACGTTTTTGGTGGATAGCAAATG', 3, '< 32 nM'),\n ('ACGTTTTTGGTGGATAGCAAATGCCAG', 3, '< 32 nM'),\n ('ACGTTTTTGGTGGATAGCAAATGCCAGGGCC', 3, '< 32 nM'),\n ('ACGTTTTTGGTGGATAGCAAATGCCAGGGCCCTTT', 3, '< 32 nM'),\n ('ACGTTTTTGGTGGATAGCAAATGCCAGGGCCCTTTTTTG', 3, '< 32 nM'),\n ('GGACTGGTGGATAGT', 0, '> 512nM'),\n ('CGGACTGGTGGATAGTAGA', 1, '< 512 nM'),\n ('CGGACTGGTGGATAGTAGAGCTG', 0, '> 512nM'),\n ('CACGGACTGGTGGATAGTAGAGC', 1, '< 512 nM'),\n ('CACGGACTGGTGGATAGTAGAGCTGTG', 3, '< 32 nM'),\n ('GCACGGACTGGTGGATAGTAGAGCTGTGTGA', 2, '< 128 nM'),\n ('CACGGACTGGTGGATAGTAGAGCTGTGTGAGGTCG', 0, '> 512nM'),\n ('CGCACGGACTGGTGGATAGTAGAGCTGTGTGAGGT', 2, '< 128 nM'),\n ('GTCGCACGGACTGGTGGATAGTAGAGCTGTGTGAGGTCG', 3, '< 32 nM'),\n ('GATGGTGGCTGGATAGTCA', 3, '< 32 nM'),\n ('GATGGTGGCTGGATAGTCACCTAGTGTCTGG', 3, '< 32 nM')]\nvalidated_core_seq_df = pd.DataFrame(validated_core_seqs, columns=['core_seq', 'stringency_level', 'Kd'])",
"Load in Data",
"# All ML score truncations for G12 and G13 in all screened sequence contexts.\n# Upload truncation_option_seed_scores_manuscript.csv\nfrom google.colab import files\n\nuploaded = files.upload()\n\nfor fn in uploaded.keys():\n print('User uploaded file \"{name}\" with length {length} bytes'.format(\n name=fn, length=len(uploaded[fn])))\n\nwith open('truncation_option_seed_scores_manuscript.csv') as f:\n truncation_scores_df = pd.read_csv(f)",
"Merge experimental and model score data",
"trunc_validated_df = truncation_scores_df.merge(validated_core_seq_df, how='inner', on='core_seq')",
"Examine distribution of scores for sequences",
"trunc_validated_df['core_seq_len'] = trunc_validated_df['core_seq'].apply(len)\n\ntrunc_validated_df_median = trunc_validated_df[\n [u'core_seq', u'seq', u'seed_seq', \n u'seed_label', u'model_score',\n u'model_delta', 'core_seq_len', \n 'inferer_name', 'Kd',\n 'stringency_level']].groupby(\n [u'core_seq', u'seed_seq', 'core_seq_len',\n u'seed_label', 'Kd',\n u'inferer_name']).median().reset_index()\n\ntrunc_validated_df_var = trunc_validated_df[[u'core_seq', u'seq', u'seed_seq', \n u'seed_label', u'model_score',\n u'model_delta', 'core_seq_len', \n 'inferer_name', 'Kd',\n 'stringency_level']].groupby(\n [u'core_seq', u'seed_seq', 'core_seq_len', \n u'seed_label', 'Kd',\n u'inferer_name']).var().reset_index()\n\n\n\n\n# Join Median and Var into one table to Summarize\ntrunc_validated_df_median_var = trunc_validated_df_median.merge(\n trunc_validated_df_var, \n on=['core_seq', 'inferer_name', 'seed_label', 'seed_seq', 'core_seq_len', 'Kd'], \n suffixes=('_median', '_var'))\n\ntrunc_validated_df_median_var[trunc_validated_df_median_var.inferer_name == 'SuperBin'].sort_values(\n by=['seed_label', 'core_seq_len'],\n ascending=False)[['seed_label', 'core_seq', 'core_seq_len', 'Kd', 'inferer_name', \n 'model_score_median', 'model_score_var']]",
"Generate Figure Plots",
"def plot_swarm_and_box_plots(median_df, full_df, inferer_name, seed_label):\n \"\"\"Plots swarm and boxplots for truncated sequences.\n\n Args:\n median_df: (pd.DataFrame) Median model scores for truncated sequence.\n full_df: (pd.DataFrame) All model scores evaluated for each truncation.\n inferer_name: (str) Name of model to plot data from (e.g. SuperBin)\n seed_label: (str) Seed sequence for which to plot truncations.\n \"\"\"\n # Subset out seed and model for inference\n seed_median_df = median_df[(median_df.inferer_name == inferer_name) & \n (median_df.seed_label == seed_label)].copy()\n seed_full_df = full_df[(full_df.inferer_name == inferer_name) &\n (full_df.seed_label == seed_label)].copy()\n\n # Use the median df to sort the data by relative model scores\n seed_median_df = seed_median_df.sort_values('model_delta', ascending=False)\n\n # Create an offset to enable spacing between values of the same core seq len\n core_seq_offset_dict = {}\n for core_seq_len in seed_median_df.core_seq_len.unique():\n for i, core_seq in enumerate(seed_median_df[seed_median_df.core_seq_len == core_seq_len].core_seq):\n core_seq_offset_dict[core_seq] = i\n\n # Apply these offsets back to the full set of evaluated points as well as medians\n seed_full_df['seq_len_offset'] = seed_full_df['core_seq'].apply(\n lambda x: core_seq_offset_dict[x])\n seed_full_df['seq_len_mod'] = seed_full_df['core_seq_len'] + seed_full_df['core_seq'].apply(\n lambda x: float(5 * core_seq_offset_dict[x]) / 10.)\n seed_full_df['seq_len_mod2'] = -1 * seed_full_df['seq_len_mod']\n\n # Create a categorical to enable ordering of colors\n seed_full_df['$K_D$'] = pd.Categorical(seed_full_df['Kd'],\n categories=['> 512nM', '< 512 nM', '< 128 nM', '< 32 nM', '< 8 nM'],\n ordered=True)\n\n # Create an ordering to enable spacing of points.\n boxplot_order = [-40. , -39.5, -39. , \n -35.5, -35. , -31.5,\n -31. , -30.5, -28, -27.5, -27., \n -25. , -24.5, -24. , -23.5, -23. , \n -19.5, -19. , -15.5,\n -15., -14.5, ]\n\n # Only label the sizes at which we actually evaluated core sequences.\n boxplot_order_strs = [40 , '', 39 , \n '', 35 , '', \n 31 , '', '', '', 27, \n '', '', '', '', 23 , \n '', 19 , \n '', 15, '']\n\n\n # First render points via swarm\n boxplot_order_strs = list(map(str, boxplot_order_strs))\n plt.figure(figsize=(10, 5))\n ax = sns.swarmplot(data=seed_full_df, x='seq_len_mod2', y='model_score', \n edgecolor='k',\n palette='Greens', hue='$K_D$', order=boxplot_order, \n dodge=False, size=3, zorder=0, linewidth=.2)\n\n for artist in ax.artists:\n artist.set_edgecolor('k')\n\n # Render boxplot on top\n ax = sns.boxplot(data=seed_full_df, x='seq_len_mod2', y='model_score', \n color='white', hue='$K_D$', order=boxplot_order, \n dodge=False, linewidth=3, boxprops={'facecolor':'None'},\n showcaps=False, showfliers=False)\n \n # Formatting of Figure\n for l in ax.lines:\n # set median line style\n l.set_linestyle('-')\n l.set_color('k')\n l.set_linewidth(2)\n l.set_solid_capstyle('butt')\n l.set_alpha(0.5)\n\n xloc, xlab = plt.xticks()\n xloc_filt = []\n boxplot_order_strs_filt = []\n for i in range(len(boxplot_order_strs)):\n if boxplot_order_strs[i] != '':\n boxplot_order_strs_filt.append(boxplot_order_strs[i])\n xloc_filt.append(xloc[i])\n plt.xticks(xloc_filt, boxplot_order_strs_filt)\n plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n plt.show()\n\n\n# Figure 4A\nplot_swarm_and_box_plots(trunc_validated_df_median, trunc_validated_df,\n 'SuperBin', 'G12')\n\n\n# Figure 4A\nplot_swarm_and_box_plots(trunc_validated_df_median, trunc_validated_df,\n 'SuperBin', 'G13')\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mit-crpg/openmc
|
examples/jupyter/cad-based-geometry.ipynb
|
mit
|
[
"Using CAD-Based Geometries\nIn this notebook we'll be exploring how to use CAD-based geometries in OpenMC via the DagMC toolkit. The models we'll be using in this notebook have already been created using Trelis and faceted into a surface mesh represented as .h5m files in the Mesh Oriented DatABase format. We'll be retrieving these files using the function below.",
"import urllib.request\n\nfuel_pin_url = 'https://tinyurl.com/y3ugwz6w' # 1.2 MB\nteapot_url = 'https://tinyurl.com/y4mcmc3u' # 29 MB\n\ndef download(url):\n \"\"\"\n Helper function for retrieving dagmc models\n \"\"\"\n u = urllib.request.urlopen(url)\n \n if u.status != 200:\n raise RuntimeError(\"Failed to download file.\")\n \n # save file as dagmc.h5m\n with open(\"dagmc.h5m\", 'wb') as f:\n f.write(u.read())",
"This notebook is intended to demonstrate how DagMC problems are run in OpenMC. For more information on how DagMC models are created, please refer to the DagMC User's Guide.",
"%matplotlib inline\nfrom IPython.display import Image\nimport openmc",
"To start, we'll be using a simple U235 fuel pin surrounded by a water moderator, so let's create those materials.",
" # materials\nu235 = openmc.Material(name=\"fuel\")\nu235.add_nuclide('U235', 1.0, 'ao')\nu235.set_density('g/cc', 11)\nu235.id = 40\n\nwater = openmc.Material(name=\"water\")\nwater.add_nuclide('H1', 2.0, 'ao')\nwater.add_nuclide('O16', 1.0, 'ao')\nwater.set_density('g/cc', 1.0)\nwater.add_s_alpha_beta('c_H_in_H2O')\nwater.id = 41\n\nmats = openmc.Materials([u235, water])\nmats.export_to_xml()",
"Now let's get our DAGMC geometry. We'll be using prefabricated models in this notebook. For information on how to create your own DAGMC models, you can refer to the instructions here.\nLet's download the DAGMC model. These models come in the form of triangle surface meshes stored using the Mesh Oriented datABase (MOAB) in an HDF5 file with the extension .h5m. An example of a coarse triangle mesh looks like:",
"Image(\"./images/cylinder_mesh.png\", width=350)",
"First we'll need to grab some pre-made DagMC models.",
"download(fuel_pin_url)",
"OpenMC expects that the model has the name \"dagmc.h5m\" so we'll name the file that and indicate to OpenMC that a DAGMC geometry is being used by setting the settings.dagmc attribute to True.",
"settings = openmc.Settings()\nsettings.dagmc = True\nsettings.batches = 10\nsettings.inactive = 2\nsettings.particles = 5000\nsettings.export_to_xml()",
"Unlike conventional geometries in OpenMC, we really have no way of knowing what our model looks like at this point. Thankfully DagMC geometries can be plotted just like any other OpenMC geometry to give us an idea of what we're now working with.\nNote that material assignments have already been applied to this model. Materials can be assigned either using ids or names of materials in the materials.xml file. It is recommended that material names are used for assignment for readability.",
"p = openmc.Plot()\np.width = (25.0, 25.0)\np.pixels = (400, 400)\np.color_by = 'material'\np.colors = {u235: 'yellow', water: 'blue'}\nopenmc.plot_inline(p)",
"Now that we've had a chance to examine the model a bit, we can finish applying our settings and add a source.",
"settings.source = openmc.Source(space=openmc.stats.Box([-4., -4., -4.],\n [ 4., 4., 4.]))\nsettings.export_to_xml()",
"Tallies work in the same way when using DAGMC geometries too. We'll add a tally on the fuel cell here.",
"tally = openmc.Tally()\ntally.scores = ['total']\ntally.filters = [openmc.CellFilter(1)]\ntallies = openmc.Tallies([tally])\ntallies.export_to_xml()",
"Note: Applying tally filters in DagMC models requires prior knowledge of the model. Here, we know that the fuel cell's volume ID in the CAD software is 1. To identify cells without use of CAD software, load them into the OpenMC plotter, where cell, material, and volume IDs can be identified for both native OpenMC and DagMC geometries.\nNow we're ready to run the simulation just like any other OpenMC run.",
"openmc.run()",
"More Complicated Geometry\nNeat! But this pincell is something we could've done with CSG. Let's take a look at something more complex. We'll download a pre-built model of the Utah teapot and use it here.",
"download(teapot_url)\n\nImage(\"./images/teapot.jpg\", width=600)",
"Our teapot is made out of iron, so we'll want to create that material and make sure it is in our materials.xml file.",
"iron = openmc.Material(name=\"iron\")\niron.add_nuclide(\"Fe54\", 0.0564555822608)\niron.add_nuclide(\"Fe56\", 0.919015287728)\niron.add_nuclide(\"Fe57\", 0.0216036861685)\niron.add_nuclide(\"Fe58\", 0.00292544384231)\niron.set_density(\"g/cm3\", 7.874)\nmats = openmc.Materials([iron, water])\nmats.export_to_xml()",
"To make sure we've updated the file correctly, let's make a plot of the teapot.",
"p = openmc.Plot()\np.basis = 'xz'\np.origin = (0.0, 0.0, 0.0)\np.width = (30.0, 20.0)\np.pixels = (450, 300)\np.color_by = 'material'\np.colors = {iron: 'gray', water: 'blue'}\nopenmc.plot_inline(p)",
"Here we start to see some of the advantages CAD geometries provide. This particular file was pulled from GrabCAD and pushed through the DAGMC workflow without modification (other than the addition of material assignments). It would take a considerable amount of time to create a model like this using CSG!",
"p.width = (18.0, 6.0)\np.basis = 'xz'\np.origin = (10.0, 0.0, 5.0)\np.pixels = (600, 200)\np.color_by = 'material'\nopenmc.plot_inline(p)",
"Now let's brew some tea! ... using a very hot neutron source. We'll use some well-placed point sources distributed throughout the model.",
"settings = openmc.Settings()\nsettings.dagmc = True\nsettings.batches = 10\nsettings.particles = 5000\nsettings.run_mode = \"fixed source\"\n\nsrc_locations = ((-4.0, 0.0, -2.0),\n ( 4.0, 0.0, -2.0),\n ( 4.0, 0.0, -6.0),\n (-4.0, 0.0, -6.0),\n (10.0, 0.0, -4.0),\n (-8.0, 0.0, -4.0))\n\n# we'll use the same energy for each source\nsrc_e = openmc.stats.Discrete(x=[12.0,], p=[1.0,])\n\n# create source for each location\nsources = []\nfor loc in src_locations:\n src_pnt = openmc.stats.Point(xyz=loc)\n src = openmc.Source(space=src_pnt, energy=src_e)\n sources.append(src)\n\nsrc_str = 1.0 / len(sources)\nfor source in sources:\n source.strength = src_str\n\nsettings.source = sources\nsettings.export_to_xml()",
"...and set up a couple of mesh tallies: one for the kettle, and one for the water inside.",
"mesh = openmc.RegularMesh()\nmesh.dimension = (120, 1, 40)\nmesh.lower_left = (-20.0, 0.0, -10.0)\nmesh.upper_right = (20.0, 1.0, 4.0)\n\nmesh_filter = openmc.MeshFilter(mesh)\n\npot_filter = openmc.CellFilter([1])\npot_tally = openmc.Tally()\npot_tally.filters = [mesh_filter, pot_filter]\npot_tally.scores = ['flux']\n\nwater_filter = openmc.CellFilter([5])\nwater_tally = openmc.Tally()\nwater_tally.filters = [mesh_filter, water_filter]\nwater_tally.scores = ['flux']\n\n\ntallies = openmc.Tallies([pot_tally, water_tally])\ntallies.export_to_xml()\n\nopenmc.run()",
"Note that the performance is significantly lower than our pincell model due to the increased complexity of the model, but it allows us to examine tally results like these:",
"sp = openmc.StatePoint(\"statepoint.10.h5\")\n\nwater_tally = sp.get_tally(scores=['flux'], id=water_tally.id)\nwater_flux = water_tally.mean\nwater_flux.shape = (40, 120)\nwater_flux = water_flux[::-1, :]\n\npot_tally = sp.get_tally(scores=['flux'], id=pot_tally.id)\npot_flux = pot_tally.mean\npot_flux.shape = (40, 120)\npot_flux = pot_flux[::-1, :]\n\ndel sp\n\nfrom matplotlib import pyplot as plt\nfig = plt.figure(figsize=(18, 16))\n\nsub_plot1 = plt.subplot(121, title=\"Kettle Flux\")\nsub_plot1.imshow(pot_flux)\n\nsub_plot2 = plt.subplot(122, title=\"Water Flux\")\nsub_plot2.imshow(water_flux)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/bnu/cmip6/models/bnu-esm-1-1/ocnbgchem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: BNU\nSource ID: BNU-ESM-1-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:41\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe transport scheme if different from that of the ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are specified from boundary conditions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are specified from the explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDissolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a bacteria representation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculating the sinking speed of particles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tylere/ee-jupyter-examples
|
2 - EE 101.ipynb
|
apache-2.0
|
[
"Earth Engine 101\nThis workbook is an introduction to Earth Engine analysis in an IPython Notebook, using the Python API. The content is similar to what is covered in the Introduction to the Earth Engine API workshop using the Earth Engine Javascript \"Playground\". \nLet's get started by importing a few modules used in this tutorial.",
"from IPython.display import Image",
"Hello, World\nTo get used to using IPython Notebooks, let's print some simple output back to the notebook. Click on the box below, and then press the play (run) button from the toolbar above.",
"print(\"Hello, world!\")",
"That works, but we can also first store the content in a variable, and then print out the variable.",
"string = \"Hello, world!\"\nprint(string)",
"Hello, Images\nLet's work with something more interesting... a dataset provided by Earth Engine.\nAssuming that this server has been setup with access to the Earth Engine Python API, we should be able to import and initialise the Earth Engine Python module (named 'ee'). If the module loads successfully, nothing will be returned when you run the following code.",
"import ee\nee.Initialize()",
"Next, let's locate a dataset to display. Start by going to the Earth Engine Public Data Catalog (https://earthengine.google.org/#index).",
"Image('http://www.google.com/earth/outreach/images/tutorials_eeintro_05_data_catalog.png')",
"Type in the term SRTM in the search box, click the search button, and then select the dataset SRTM Digital Elevation Data Version 4 from the list of results. This will bring up the data description page for that dataset. The data description page provides a short description of the dataset and links to the data provider, but the key piece of information that we need for working with the dataset in Earth Engine is the Image ID, which for this dataset is CGIAR/SRTM90_V4. Let's use the Image ID to store a reference to this image dataset:",
"srtm = ee.Image(\"CGIAR/SRTM90_V4\")",
"And now, we can print out information about the dataset, using the .getInfo() method.",
"info = srtm.getInfo()\nprint(info)",
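The call above returns a plain Python dictionary. As a hedged sketch of parsing such a result — the exact keys and nesting vary by dataset, so the hand-written dictionary below is illustrative, not a guaranteed schema:

```python
# Illustrative only: a hand-written dictionary shaped roughly like the result
# of srtm.getInfo(); the real keys and nesting may differ.
info = {
    'type': 'Image',
    'id': 'CGIAR/SRTM90_V4',
    'bands': [{'id': 'elevation', 'data_type': {'type': 'PixelType'}}],
}

# Pull out the band names, e.g. to decide which bands to visualize later.
band_names = [band['id'] for band in info['bands']]
print(info['id'], band_names)
```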
"What is returned by the .getInfo() command is a Python dictionary. If needed, we could parse out this information and make use of it in our analysis.\nAdd an Image to the Map\nIPython Notebooks can be used to display an image, using the Image module:",
"from IPython.display import Image\n\nImage(url=srtm.getThumbUrl())",
"Ok, we can see the outlines of the continents, but there is not a lot of contrast between different elevation areas. So let's improve upon that by adding some visualization parameters.",
"Image(url=srtm.getThumbUrl({'min':0, 'max':3000}))",
"By default, the .getThumbUrl() method returns the entire extent of the image, which in this case is global. We can also specify a region, to show a smaller area.",
"point = ee.Geometry.Point(-122.0918, 37.422)\nregion_bay_area = point.buffer(50000).bounds().getInfo()['coordinates']\nImage(url=srtm.getThumbUrl({'min':0, 'max':1000, 'region':region_bay_area}))",
"Load and Filter an Image Collection\nSo far we have been working with a single image, but there are also interesting datasets that are distributed as a series of images (such as images collected by satellite). Head back to the Earth Engine Public Data Catalog, search for landsat 8 toa, and load up the data description page for the USGS Landsat 8 TOA Reflectance (Orthorectified) dataset. The ID for this Image Collection is LANDSAT/LC8_L1T_TOA.",
"# Create a reference to the image collection\nl8 = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')\n# Filter the collection down to a two week period\nfiltered = l8.filterDate('2013-05-01', '2013-05-15');\n# Use the mosaic reducer, to select the most recent pixel in areas of overlap\nl8_image = filtered.mosaic()\n# Define a region roughly covering California\npoint = ee.Geometry.Point(-118, 37)\nregion_california = point.buffer(500000).bounds().getInfo()['coordinates']\n# And finally display the image.\nImage(url=l8_image.getThumbUrl({'region':region_california}))",
"Playing with Image Bands\nUsing the default image visualization parameters, that doesn't look like much. So we add some visualization parameters to display a true color image.",
"Image(url=l8_image.getThumbUrl({\n 'region':region_california,\n 'bands':'B4,B3,B2',\n 'min':0,\n 'max':0.3\n}))",
"And by changing the bands displayed, we can also display a false color image.",
"Image(url=l8_image.getThumbUrl({\n 'region':region_california,\n 'bands':'B5,B4,B3',\n 'min':0,\n 'max':0.3\n}))",
"Play with Reducing Image Collections\nNext expand the date range to cover an entire year, so that there are many overlapping images. We will continue to use the .mosaic() reducer, which retains the last (most recent) pixels in areas of image overlap. Clouds are readily apparent.",
"filtered = l8.filterDate('2013-01-01', '2014-01-01')\n",
"ImageCollection.mosaic Reducer",
"l8_image = filtered.mosaic()\nImage(url=l8_image.getThumbUrl({\n 'region':region_california,\n 'bands':'B4,B3,B2',\n 'min':0,\n 'max':0.3\n}))",
"ImageCollection.median Reducer",
"l8_image = filtered.median()\nImage(url=l8_image.getThumbUrl({\n 'region':region_california,\n 'bands':'B4,B3,B2',\n 'min':0,\n 'max':0.3\n}))",
"ImageCollection.min Reducer",
"l8_image = filtered.min()\nImage(url=l8_image.getThumbUrl({\n 'region':region_california,\n 'bands':'B4,B3,B2',\n 'min':0,\n 'max':0.3\n}))",
"ImageCollection.max Reducer",
"l8_image = filtered.max()\nImage(url=l8_image.getThumbUrl({\n 'region':region_california,\n 'bands':'B4,B3,B2',\n 'min':0,\n 'max':0.3\n}))"
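The difference between these reducers can be illustrated without the Earth Engine API at all. Below is a plain-Python sketch with made-up pixel values: per-pixel reduction over a stack of three tiny one-row images, where the "cloudy" outlier in frame 2 survives the max reducer but is suppressed by the median:

```python
from statistics import median

# Made-up data: three one-row, two-pixel images. Frame 2 contains a bright
# cloudy outlier at pixel index 1.
stack = [
    [0.10, 0.12],  # frame 1
    [0.11, 0.95],  # frame 2 (cloud at pixel 1)
    [0.09, 0.13],  # frame 3 (most recent)
]

per_pixel = list(zip(*stack))                 # group the values of each pixel
mosaic_img = stack[-1]                        # most recent frame wins
median_img = [median(p) for p in per_pixel]   # robust against the outlier
max_img = [max(p) for p in per_pixel]         # keeps the cloud
```

This is why the median composite over a year of imagery looks largely cloud-free, while the mosaic and max composites do not.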
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
quantumlib/ReCirq
|
docs/ftbbl.ipynb
|
apache-2.0
|
[
"Feature Testbed Benchmark Library\nThe Feature Testbed and Benchmarking Library (FTb/BL) consists of a series of workflow tools in cirq_google.workflow and a library of application-inspired algorithms conforming to those specifications living in ReCirq.\nComputer code can serve as the intermediary language between algorithms researchers, software developers, and providers of quantum hardware -- each of whom have their own unique domain expertise. Higher-order abstraction in Quantum Executables can support higher-order functionality in our Quantum Runtime.\nBy architecting features, tools, and techniques to be part of a well specified runtime we can:\n - Easily run experiments (\"A/B testing\") on the impact of features by toggling them in the runtime and running against a full library of applications.\n - Create a clear specification for how to leverage a feature, tool, or technique in our library of applications.\n - Mock out runtime features for ease of testing without consuming the scarce resource of quantum computer time.\n\nAlgorithmic Benchmark Library\nThe algorithmic benchmark library is a collection of quantum executables that probe different aspects of a quantum computer's performance. Each benchmark is based off of an algorithm of interest and operates on > 2 qubits (in contrast to traditional 1- and 2-qubit fidelity metrics).\nWe use the library card catalog in recirq.algorithmic_benchmark_library to get a description of each algorithmic benchmark.",
"try:\n import recirq\nexcept ImportError:\n !pip install --quiet git+https://github.com/quantumlib/ReCirq\n\nimport recirq.algorithmic_benchmark_library as algos\nfrom IPython.display import display\n\nfor algo in algos.BENCHMARKS:\n display(algo)",
"Select an example benchmark using executable_family\nEach benchmark has a name (e.g. \"loschmidt.tilted_square_lattice\") and a domain (e.g. this benchmark is inspired by the OTOC Experiment so it is given the domain of \"recirq.otoc\"). We combine these two properties to give the executable_family string, which serves as a globally-unique identifier for each benchmark. In ReCirq, the executable_family is the Python module path where the relevant code can be found.\nWe'll select the \"recirq.otoc.loschmidt.tilted_square_lattice\" benchmark by querying the card catalog using this unique executable_family identifier.",
"algo = algos.get_algo_benchmark_by_executable_family(\n executable_family='recirq.otoc.loschmidt.tilted_square_lattice')\nprint(type(algo))\nalgo\n\nfrom pathlib import Path\n\n# In ReCirq, the `executable_family` is the Python module path where\n# the relevant code can be found.\nalgo_src_dir = Path('..') / algo.executable_family.replace('.', '/')\nalgo_src_dir",
"Each AlgorithmicBenchmark has a collection of BenchmarkConfigs\nAn algorithmic benchmark defines a class of quantum executables. Often, the specific size, shape, depth, or other properties are left as parameters. For each benchmark, we have a collection of BenchmarkConfigs that fully specify what to run and can be run repeatedly for controlled comparison over time or between processors.",
"for config in algo.configs:\n print(config.full_name)",
"We'll select the small-cz-v1 configuration, as described below.",
"config = algo.get_config_by_full_name('loschmidt.tilted_square_lattice.small-cz-v1')\nprint(config.description)",
"A benchmark has three steps:\n 1. Executable generation\n 2. Execution\n 3. Analysis\nUsually, these steps are done in order but independently and with differing frequencies. For a robust benchmark, executable generation should likely be done once and the serialized QuantumExecutableGroup cached and re-used for subsequent executions. Execution should happen on a regular cadence for historical data or as part of an A/B test for trialing different runtime configuration options. Analysis can happen at any moment and may incorporate the latest data or a collection of datasets across time, processors, or runtime configurations.",
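The "generate once, cache, re-use" pattern for executable generation can be sketched in a few lines. The file name and JSON payload below are illustrative stand-ins, not ReCirq's actual serialization format:

```python
import json
import os
import tempfile

# Hedged sketch of caching generated executables to disk; the real workflow
# serializes a QuantumExecutableGroup, which we stand in for with plain dicts.
def get_or_generate(cache_path, generate):
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)           # re-use the earlier generation
    data = generate()
    with open(cache_path, 'w') as f:
        json.dump(data, f)                # cache for subsequent executions
    return data

cache = os.path.join(tempfile.mkdtemp(), 'executables.json')
calls = []

def gen():
    calls.append(1)                       # count how often we really generate
    return [{'spec': i} for i in range(3)]

first = get_or_generate(cache, gen)       # generates and caches
second = get_or_generate(cache, gen)      # served from the cache
```

Because generation runs only once, every later execution sees byte-for-byte the same executables, which is what makes comparisons over time or between processors controlled.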
"config",
"Step 1: Executable Generation\nHere, we generate a QuantumExecutableGroup for a given range of parameters. This step is usually done once for each BenchmarkConfig and the serialized result is saved and re-used for execution. We use a short python file more like a configuration file than a script to commit the parameters for a particular config. The filename can be found as config.gen_script.",
"# Helper function to show scripts from the ReCirq repository for comparison\nfrom IPython.display import Code, HTML\n\ndef show_python_script(path: Path):\n with path.open() as f:\n contents = f.read()\n\n display(HTML(f\"<u>The contents of {path}:</u>\"))\n display(Code(contents[contents.find('import'):], language='python'))\n\nshow_python_script(algo_src_dir / config.gen_script)",
"We've copied the important bit into the cell below so you can execute it within this notebook.",
"import numpy as np\nfrom recirq.otoc.loschmidt.tilted_square_lattice import get_all_tilted_square_lattice_executables\n\nexes = get_all_tilted_square_lattice_executables(\n min_side_length=2, max_side_length=3, side_length_step=1,\n n_instances=3,\n macrocycle_depths=np.arange(0, 4 + 1, 1),\n twoq_gate_name='cz',\n seed=52,\n)\nlen(exes)",
"Step 2: Execution\nThe QuantumExecutableGroup for our benchmark defines what to run. Now we configure how to run it. This is done with a QuantumRuntimeConfiguration.\nWe specify which processor to use (here: a simulated one. Try changing to EngineProcessorRecord to run on a real device), how to map the \"logical\" qubit identities in the problem to physical qubits (here: randomly), and how to set the random number generator's seed.",
"from cirq_google.workflow import (\n QuantumRuntimeConfiguration, \n SimulatedProcessorWithLocalDeviceRecord,\n EngineProcessorRecord,\n RandomDevicePlacer\n)\n\nrt_config = QuantumRuntimeConfiguration(\n processor_record=SimulatedProcessorWithLocalDeviceRecord('rainbow', noise_strength=0.005),\n qubit_placer=RandomDevicePlacer(),\n random_seed=52,\n)\nrt_config",
"Again, we use a short Python file in the repository to commit the configuration options for a particular config.",
"show_python_script(algo_src_dir / 'run-simulator.py')",
"Usually, we're very careful about saving everything in a structured way relative to a base_data_dir. Since this notebook is run interactively, we'll make a temporary directory to serve as our base_data_dir.",
"import tempfile\nbase_data_dir = tempfile.mkdtemp()\nbase_data_dir",
"Actual execution is as simple as calling execute once the runtime configuration and executables are created.",
"from cirq_google.workflow import execute\nraw_results = execute(rt_config, exes, base_data_dir=base_data_dir)",
"Since we didn't input our own, a random run_id is generated for us. The run_ids must be unique within a data directory.",
"run_id = raw_results.shared_runtime_info.run_id\nrun_id",
"Step 3: Analysis and Plotting\nFinally, we can analyze one or more datasets and generate plots. Since we've decoupled problem generation and execution from this step, you can slice and dice your data any way you want. Usually, analysis routines will use the accompanying analysis module for helper functions and do much of the pd.DataFrame and matplotlib munging interactively in a Jupyter notebook. One of the plots from plots.ipynb is reproduced here.",
"import recirq.otoc.loschmidt.tilted_square_lattice.analysis as analysis\nimport cirq_google as cg\nimport pandas as pd\n\nraw_results = cg.ExecutableGroupResultFilesystemRecord.from_json(run_id=run_id, base_data_dir=base_data_dir)\\\n .load(base_data_dir=base_data_dir)\ndf = analysis.loschmidt_results_to_dataframe(raw_results)\ndf.head()\n\nvs_depth_df, vs_depth_gb_cols = analysis.agg_vs_macrocycle_depth(df)\nfit_df, exp_ansatz = analysis.fit_vs_macrocycle_depth(df)\ntotal_df = pd.merge(vs_depth_df, fit_df, on=vs_depth_gb_cols)\n\nfrom matplotlib import pyplot as plt\n\ncolors = plt.get_cmap('tab10')\n\nfor i, row in total_df.iterrows():\n plt.errorbar(\n x=row['macrocycle_depth'],\n y=row['success_probability_mean'],\n yerr=row['success_probability_std'],\n marker='o', capsize=5, ls='',\n color=colors(i),\n label=f'{row[\"width\"]}x{row[\"height\"]} ({row[\"n_qubits\"]}q) {row[\"processor_str\"]}; f={row[\"f\"]:.3f}'\n )\n \n xx = np.linspace(np.min(row['macrocycle_depth']), np.max(row['macrocycle_depth']))\n yy = exp_ansatz(xx, a=row['a'], f=row['f'])\n plt.plot(xx, yy, ls='--', color=colors(i))\n \nplt.legend(loc='best')\nplt.yscale('log')\nplt.xlabel('Macrocycle Depth')\nplt.ylabel('Success Probability')\nplt.tight_layout()",
"Appendix: QuantumExecutable and ExecutableSpec\n\nA QuantumExecutable contains a complete description of what to run. Think of it as a souped-up version of a cirq.Circuit. \nAn ExecutableSpec is a problem-specific dataclass minimally capturing the salient independent variables, usually with plain-old-datatypes suitable for databasing and plotting.\n\nEach benchmark provides a problem-specific subclass of ExecutableSpec and a function that turns those specs into problem-agnostic, fully-specified QuantumExecutables.\n\nQuantumExecutable\nEach QuantumExecutable fully specifies the quantum program to be run, but as high-level as possible. The most familiar part is circuit: cirq.Circuit, but the executable also includes measurement (i.e. repetitions) information, sweep parameters, and other data. We'll look at the fields on one of our executables below:",
"# Pick one `QuantumExecutable` from the `QuantumExecutableGroup` \nexe = exes.executables[0]\n\nimport dataclasses\nprint('exe fields:')\nprint([f.name for f in dataclasses.fields(exe)])",
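The spec → executable pattern described in the appendix can be sketched with stand-in classes. These are hypothetical toys, NOT the real cirq_google or ReCirq types — a plain-old-data spec plus a function that expands it into a fully-specified executable:

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical stand-ins for ExecutableSpec / QuantumExecutable.
@dataclass(frozen=True)
class ToySpec:
    width: int
    height: int
    n_repetitions: int

@dataclass(frozen=True)
class ToyExecutable:
    spec: ToySpec
    circuit: Tuple[str, ...]   # placeholder for a cirq.Circuit
    measurement: int

def toy_spec_to_exe(spec: ToySpec) -> ToyExecutable:
    # A real generator would build a circuit from the spec's parameters.
    ops = tuple(f'op_{i}' for i in range(spec.width * spec.height))
    return ToyExecutable(spec=spec, circuit=ops, measurement=spec.n_repetitions)

exe = toy_spec_to_exe(ToySpec(width=2, height=2, n_repetitions=1000))
```

Because the spec is a frozen dataclass of plain datatypes, it is cheap to database and plot against, and regenerating the executable from the same spec yields an equal object — the property demonstrated with the real types at the end of this notebook.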
"QuantumExecutable.spec is a reference to the ExecutableSpec used to create this executable. Here, it is a TiltedSquareLatticeLoschmidtSpec, which derives from the ExecutableSpec base class. Each AlgorithmicBenchmark has its own class derived from ExecutableSpec. This correspondence is recorded as AlgorithmicBenchmark.spec_class.",
"print(algo.spec_class)\nprint(exe.spec.__class__)",
"ExecutableSpec\nThe ExecutableSpec is a problem-specific dataclass containing the relevant independent variables for a benchmark. Since each benchmark has its own subclass of ExecutableSpec, we'll continue using our example loschmidt benchmark and create an example TiltedSquareLatticeLoschmidtSpec:",
"import cirq\nfrom recirq.otoc.loschmidt.tilted_square_lattice import TiltedSquareLatticeLoschmidtSpec\n\nspec = TiltedSquareLatticeLoschmidtSpec(\n topology=cirq.TiltedSquareLattice(width=2, height=2),\n macrocycle_depth=0,\n instance_i=0,\n n_repetitions=1_000,\n twoq_gate_name='cz'\n)",
"I've chosen the parameters corresponding to our example executable, exe = exes.executables[0].",
"exe.spec == spec",
"Below, we re-create the executable using just the spec.",
"from recirq.otoc.loschmidt.tilted_square_lattice import tilted_square_lattice_spec_to_exe\n\nexe2 = tilted_square_lattice_spec_to_exe(exe.spec, rs=np.random.RandomState(52))\nexe == exe2"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Danghor/Algorithms
|
Python/Chapter-09/Union-Find-OO.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)",
"An Object-Oriented Implementation of the Union-Find Algorithm\nThe class UnionFind maintains two member variables:\n - mParent is a dictionary that assigns each node to its parent node.\n   Initially, all nodes point to themselves.\n - mHeight is a dictionary that stores the height of the trees. If $x$ is a node, then\n   $\\texttt{mHeight}[x]$ is the height of the tree rooted at $x$.\n   Initially, all trees contain but a single node and therefore have the height $1$.",
"class UnionFind:\n def __init__(self, M):\n self.mParent = { x: x for x in M }\n self.mHeight = { x: 1 for x in M }",
"Given an element $x$ from the set $M$, the function $\\texttt{self}.\\texttt{find}(x)$ \nreturns the ancestor of $x$ that is at the root of the tree containing $x$.",
"def find(self, x):\n p = self.mParent[x]\n if p == x:\n return x\n return self.find(p)\n\nUnionFind.find = find\ndel find",
"Given two elements $x$ and $y$ and an object $o$ of type UnionFind, the call $o.\\texttt{union}(x, y)$ changes the unionFind object $o$ so that afterwards the equation\n$$ o.\\texttt{find}(x) = o.\\texttt{find}(y) $$\nholds.",
"def union(self, x, y):\n root_x = self.find(x)\n root_y = self.find(y)\n if root_x != root_y:\n if self.mHeight[root_x] < self.mHeight[root_y]:\n self.mParent[root_x] = root_y\n elif self.mHeight[root_x] > self.mHeight[root_y]:\n self.mParent[root_y] = root_x\n else:\n self.mParent[root_y] = root_x\n self.mHeight[root_x] += 1\n \nUnionFind.union = union\n\ndef partition(M, R):\n UF = UnionFind(M)\n for x, y in R:\n UF.union(x, y)\n Roots = { x for x in M if UF.find(x) == x }\n return [{y for y in M if UF.find(y) == r} for r in Roots]\n\ndef demo():\n M = set(range(1, 10))\n R = { (1, 4), (7, 9), (3, 5), (2, 6), (5, 8), (1, 9), (4, 7) }\n P = partition(M, R)\n return P\n\nP = demo()\nP"
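A common refinement of find, not used in the notebook above, is path compression: every node visited on the way to the root is re-attached directly to the root, flattening the tree for later lookups. A sketch on a plain parent dictionary:

```python
# Iterative find with path compression (a variant, not the notebook's method).
def find_compress(parent, x):
    root = x
    while parent[root] != root:      # first pass: locate the root
        root = parent[root]
    while parent[x] != root:         # second pass: re-point nodes at the root
        parent[x], x = root, parent[x]
    return root

parent = {1: 1, 2: 1, 3: 2, 4: 3}    # chain 4 -> 3 -> 2 -> 1
r = find_compress(parent, 4)         # afterwards, 3 and 4 point straight at 1
```

Note that compression invalidates the stored heights, which is why implementations using it typically track rank (an upper bound on height) instead of exact height.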
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/jax-md
|
notebooks/flocking.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/flocking.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n https://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.",
"#@title Imports & Utils\n\n# Imports\n\n!pip install -q git+https://www.github.com/google/jax-md\n\nimport numpy as onp\n\nfrom jax.config import config ; config.update('jax_enable_x64', True)\nimport jax.numpy as np\nfrom jax import random\nfrom jax import jit\nfrom jax import vmap\nfrom jax import lax\nvectorize = np.vectorize\n\nfrom functools import partial\n\nfrom collections import namedtuple\nimport base64\n\nimport IPython\nfrom google.colab import output\n\nimport os\n\nfrom jax_md import space, smap, energy, minimize, quantity, simulate, partition, util\nfrom jax_md.util import f32\n\n# Plotting\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n \nsns.set_style(style='white')\n\ndark_color = [56 / 256] * 3\nlight_color = [213 / 256] * 3\naxis_color = 'white'\n\ndef format_plot(x='', y='', grid=True): \n ax = plt.gca()\n \n ax.spines['bottom'].set_color(axis_color)\n ax.spines['top'].set_color(axis_color) \n ax.spines['right'].set_color(axis_color)\n ax.spines['left'].set_color(axis_color)\n \n ax.tick_params(axis='x', colors=axis_color)\n ax.tick_params(axis='y', colors=axis_color)\n ax.yaxis.label.set_color(axis_color)\n ax.xaxis.label.set_color(axis_color)\n ax.set_facecolor(dark_color)\n \n plt.grid(grid)\n plt.xlabel(x, fontsize=20)\n plt.ylabel(y, fontsize=20)\n \ndef finalize_plot(shape=(1, 1)):\n plt.gcf().patch.set_facecolor(dark_color)\n plt.gcf().set_size_inches(\n shape[0] * 1.5 * plt.gcf().get_size_inches()[1], \n shape[1] * 1.5 * plt.gcf().get_size_inches()[1])\n plt.tight_layout()\n\n# Progress Bars\n\nfrom IPython.display import HTML, display\nimport time\n\n\ndef ProgressIter(iter_fun, iter_len=0):\n if not iter_len:\n iter_len = len(iter_fun)\n out = display(progress(0, iter_len), display_id=True)\n for i, it in enumerate(iter_fun):\n yield it\n out.update(progress(i + 1, iter_len))\n\ndef progress(value, max):\n return HTML(\"\"\"\n <progress\n value='{value}'\n max='{max}',\n style='width: 45%'\n >\n {value}\n </progress>\n 
\"\"\".format(value=value, max=max))\n\nnormalize = lambda v: v / np.linalg.norm(v, axis=1, keepdims=True)\n\n# Rendering\n\nrenderer_code = IPython.display.HTML('''\n<canvas id=\"canvas\"></canvas>\n<script>\n Rg = null;\n Ng = null;\n\n var current_scene = {\n R: null,\n N: null,\n is_loaded: false,\n frame: 0,\n frame_count: 0,\n boid_vertex_count: 0,\n boid_buffer: [],\n predator_vertex_count: 0,\n predator_buffer: [],\n disk_vertex_count: 0,\n disk_buffer: null,\n box_size: 0\n };\n\n google.colab.output.setIframeHeight(0, true, {maxHeight: 5000});\n\n async function load_simulation() {\n buffer_size = 400;\n max_frame = 800;\n\n result = await google.colab.kernel.invokeFunction(\n 'notebook.GetObstacles', [], {});\n data = result.data['application/json'];\n\n if(data.hasOwnProperty('Disk')) {\n current_scene = put_obstacle_disk(current_scene, data.Disk);\n }\n\n for (var i = 0 ; i < max_frame ; i += buffer_size) {\n console.log(i);\n result = await google.colab.kernel.invokeFunction(\n 'notebook.GetBoidStates', [i, i + buffer_size], {}); \n \n data = result.data['application/json'];\n current_scene = put_boids(current_scene, data);\n }\n current_scene.is_loaded = true;\n\n result = await google.colab.kernel.invokeFunction(\n 'notebook.GetPredators', [], {}); \n data = result.data['application/json'];\n if (data.hasOwnProperty('R'))\n current_scene = put_predators(current_scene, data);\n\n result = await google.colab.kernel.invokeFunction(\n 'notebook.GetSimulationInfo', [], {});\n current_scene.box_size = result.data['application/json'].box_size;\n }\n\n function initialize_gl() {\n const canvas = document.getElementById(\"canvas\");\n canvas.width = 640;\n canvas.height = 640;\n\n const gl = canvas.getContext(\"webgl2\");\n\n if (!gl) {\n alert('Unable to initialize WebGL.');\n return;\n }\n\n gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);\n gl.clearColor(0.2, 0.2, 0.2, 1.0);\n gl.enable(gl.DEPTH_TEST);\n\n const shader_program = 
initialize_shader(\n gl, VERTEX_SHADER_SOURCE_2D, FRAGMENT_SHADER_SOURCE_2D);\n const shader = {\n program: shader_program,\n attribute: {\n vertex_position: gl.getAttribLocation(shader_program, 'vertex_position'),\n },\n uniform: {\n screen_position: gl.getUniformLocation(shader_program, 'screen_position'),\n screen_size: gl.getUniformLocation(shader_program, 'screen_size'),\n color: gl.getUniformLocation(shader_program, 'color'),\n },\n };\n gl.useProgram(shader_program);\n\n const half_width = 200.0;\n\n gl.uniform2f(shader.uniform.screen_position, half_width, half_width);\n gl.uniform2f(shader.uniform.screen_size, half_width, half_width);\n gl.uniform4f(shader.uniform.color, 0.9, 0.9, 1.0, 1.0);\n\n return {gl: gl, shader: shader};\n }\n\n var loops = 0;\n\n function update_frame() {\n gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);\n\n if (!current_scene.is_loaded) {\n window.requestAnimationFrame(update_frame);\n return;\n }\n\n var half_width = current_scene.box_size / 2.;\n gl.uniform2f(shader.uniform.screen_position, half_width, half_width);\n gl.uniform2f(shader.uniform.screen_size, half_width, half_width);\n\n if (current_scene.frame >= current_scene.frame_count) {\n if (!current_scene.is_loaded) {\n window.requestAnimationFrame(update_frame);\n return;\n }\n loops++;\n current_scene.frame = 0;\n }\n\n gl.enableVertexAttribArray(shader.attribute.vertex_position);\n\n gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.boid_buffer[current_scene.frame]);\n gl.uniform4f(shader.uniform.color, 0.0, 0.35, 1.0, 1.0);\n gl.vertexAttribPointer(\n shader.attribute.vertex_position,\n 2,\n gl.FLOAT,\n false,\n 0,\n 0\n );\n gl.drawArrays(gl.TRIANGLES, 0, current_scene.boid_vertex_count);\n\n if(current_scene.predator_buffer.length > 0) {\n gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.predator_buffer[current_scene.frame]);\n gl.uniform4f(shader.uniform.color, 1.0, 0.35, 0.35, 1.0);\n gl.vertexAttribPointer(\n shader.attribute.vertex_position,\n 2,\n gl.FLOAT,\n 
false,\n 0,\n 0\n );\n gl.drawArrays(gl.TRIANGLES, 0, current_scene.predator_vertex_count);\n }\n \n if(current_scene.disk_buffer) {\n gl.bindBuffer(gl.ARRAY_BUFFER, current_scene.disk_buffer);\n gl.uniform4f(shader.uniform.color, 0.9, 0.9, 1.0, 1.0);\n gl.vertexAttribPointer(\n shader.attribute.vertex_position,\n 2,\n gl.FLOAT,\n false,\n 0,\n 0\n );\n gl.drawArrays(gl.TRIANGLES, 0, current_scene.disk_vertex_count);\n }\n\n current_scene.frame++;\n if ((current_scene.frame_count > 1 && loops < 5) || \n (current_scene.frame_count == 1 && loops < 240))\n window.requestAnimationFrame(update_frame);\n \n if (current_scene.frame_count > 1 && loops == 5 && current_scene.frame < current_scene.frame_count - 1)\n window.requestAnimationFrame(update_frame);\n }\n\n function put_boids(scene, boids) {\n const R = decode(boids['R']);\n const R_shape = boids['R_shape'];\n const theta = decode(boids['theta']);\n const theta_shape = boids['theta_shape'];\n\n function index(i, b, xy) {\n return i * R_shape[1] * R_shape[2] + b * R_shape[2] + xy; \n }\n\n var steps = R_shape[0];\n var boids = R_shape[1];\n var dimensions = R_shape[2];\n\n if(dimensions != 2) {\n alert('Can only deal with two-dimensional data.')\n }\n\n // First flatten the data.\n var buffer_data = new Float32Array(boids * 6);\n var size = 8.0;\n for (var i = 0 ; i < steps ; i++) {\n var buffer = gl.createBuffer();\n for (var b = 0 ; b < boids ; b++) {\n var xi = index(i, b, 0);\n var yi = index(i, b, 1);\n var ti = i * boids + b;\n var Nx = size * Math.cos(theta[ti]); //N[xi];\n var Ny = size * Math.sin(theta[ti]); //N[yi];\n buffer_data.set([\n R[xi] + Nx, R[yi] + Ny,\n R[xi] - Nx - 0.5 * Ny, R[yi] - Ny + 0.5 * Nx,\n R[xi] - Nx + 0.5 * Ny, R[yi] - Ny - 0.5 * Nx, \n ], b * 6);\n }\n gl.bindBuffer(gl.ARRAY_BUFFER, buffer);\n gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);\n\n scene.boid_buffer.push(buffer);\n }\n scene.boid_vertex_count = boids * 3;\n scene.frame_count += steps;\n return scene;\n }\n\n 
function put_predators(scene, boids) {\n // TODO: Unify this with the put_boids function.\n const R = decode(boids['R']);\n const R_shape = boids['R_shape'];\n const theta = decode(boids['theta']);\n const theta_shape = boids['theta_shape'];\n\n function index(i, b, xy) {\n return i * R_shape[1] * R_shape[2] + b * R_shape[2] + xy; \n }\n\n var steps = R_shape[0];\n var boids = R_shape[1];\n var dimensions = R_shape[2];\n\n if(dimensions != 2) {\n alert('Can only deal with two-dimensional data.')\n }\n\n // First flatten the data.\n var buffer_data = new Float32Array(boids * 6);\n var size = 18.0;\n for (var i = 0 ; i < steps ; i++) {\n var buffer = gl.createBuffer();\n for (var b = 0 ; b < boids ; b++) {\n var xi = index(i, b, 0);\n var yi = index(i, b, 1);\n var ti = theta_shape[1] * i + b;\n var Nx = size * Math.cos(theta[ti]);\n var Ny = size * Math.sin(theta[ti]);\n buffer_data.set([\n R[xi] + Nx, R[yi] + Ny,\n R[xi] - Nx - 0.5 * Ny, R[yi] - Ny + 0.5 * Nx,\n R[xi] - Nx + 0.5 * Ny, R[yi] - Ny - 0.5 * Nx, \n ], b * 6);\n }\n gl.bindBuffer(gl.ARRAY_BUFFER, buffer);\n gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);\n\n scene.predator_buffer.push(buffer);\n }\n scene.predator_vertex_count = boids * 3;\n return scene;\n }\n\n function put_obstacle_disk(scene, disk) {\n const R = decode(disk.R);\n const R_shape = disk.R_shape;\n const radius = decode(disk.D);\n const radius_shape = disk.D_shape;\n\n const disk_count = R_shape[0];\n const dimensions = R_shape[1];\n if (dimensions != 2) {\n alert('Can only handle two-dimensional data.');\n }\n if (radius_shape[0] != disk_count) {\n alert('Inconsistent disk radius count found.');\n }\n const segments = 32;\n\n function index(o, xy) {\n return o * R_shape[1] + xy;\n }\n\n // TODO(schsam): Use index buffers here.\n var buffer_data = new Float32Array(disk_count * segments * 6);\n for (var i = 0 ; i < disk_count ; i++) {\n var xi = index(i, 0);\n var yi = index(i, 1);\n for (var s = 0 ; s < segments ; s++) {\n 
const th = 2 * s / segments * Math.PI;\n const th_p = 2 * (s + 1) / segments * Math.PI;\n const rad = radius[i] * 0.8;\n buffer_data.set([\n R[xi], R[yi],\n R[xi] + rad * Math.cos(th), R[yi] + rad * Math.sin(th),\n R[xi] + rad * Math.cos(th_p), R[yi] + rad * Math.sin(th_p),\n ], i * segments * 6 + s * 6);\n }\n }\n var buffer = gl.createBuffer();\n gl.bindBuffer(gl.ARRAY_BUFFER, buffer);\n gl.bufferData(gl.ARRAY_BUFFER, buffer_data, gl.STATIC_DRAW);\n scene.disk_vertex_count = disk_count * segments * 3;\n scene.disk_buffer = buffer;\n return scene;\n }\n\n // SHADER CODE\n\n const VERTEX_SHADER_SOURCE_2D = `\n // Vertex Shader Program.\n attribute vec2 vertex_position;\n \n uniform vec2 screen_position;\n uniform vec2 screen_size;\n\n void main() {\n vec2 v = (vertex_position - screen_position) / screen_size;\n gl_Position = vec4(v, 0.0, 1.0);\n }\n `;\n\n const FRAGMENT_SHADER_SOURCE_2D = `\n precision mediump float;\n\n uniform vec4 color;\n\n void main() {\n gl_FragColor = color;\n }\n `;\n\n function initialize_shader(\n gl, vertex_shader_source, fragment_shader_source) {\n\n const vertex_shader = compile_shader(\n gl, gl.VERTEX_SHADER, vertex_shader_source);\n const fragment_shader = compile_shader(\n gl, gl.FRAGMENT_SHADER, fragment_shader_source);\n\n const shader_program = gl.createProgram();\n gl.attachShader(shader_program, vertex_shader);\n gl.attachShader(shader_program, fragment_shader);\n gl.linkProgram(shader_program);\n\n if (!gl.getProgramParameter(shader_program, gl.LINK_STATUS)) {\n alert(\n 'Unable to initialize shader program: ' + \n gl.getProgramInfoLog(shader_program)\n );\n return null;\n }\n return shader_program;\n }\n\n function compile_shader(gl, type, source) {\n const shader = gl.createShader(type);\n gl.shaderSource(shader, source);\n gl.compileShader(shader);\n\n if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {\n alert('An error occured compiling shader: ' + gl.getShaderInfoLog(shader));\n gl.deleteShader(shader);\n return 
null;\n }\n\n return shader;\n }\n\n // SERIALIZATION UTILITIES\n function decode(sBase64, nBlocksSize) {\n var chrs = atob(atob(sBase64));\n var array = new Uint8Array(new ArrayBuffer(chrs.length));\n\n for(var i = 0 ; i < chrs.length ; i++) {\n array[i] = chrs.charCodeAt(i);\n }\n\n return new Float32Array(array.buffer);\n }\n\n // RUN CELL\n\n load_simulation();\n gl_and_shader = initialize_gl();\n var gl = gl_and_shader.gl;\n var shader = gl_and_shader.shader;\n update_frame();\n</script>\n''')\n\ndef encode(R):\n return base64.b64encode(onp.array(R, onp.float32).tobytes())\n\ndef render(box_size, states, obstacles=None, predators=None):\n if isinstance(states, Boids):\n R = np.reshape(states.R, (1,) + states.R.shape)\n theta = np.reshape(states.theta, (1,) + states.theta.shape)\n elif isinstance(states, list):\n if all([isinstance(x, Boids) for x in states]):\n R, theta = zip(*states)\n R = onp.stack(R)\n theta = onp.stack(theta) \n \n if isinstance(predators, list):\n R_predators, theta_predators, *_ = zip(*predators)\n R_predators = onp.stack(R_predators)\n theta_predators = onp.stack(theta_predators)\n\n def get_boid_states(start, end):\n R_, theta_ = R[start:end], theta[start:end]\n return IPython.display.JSON(data={\n \"R_shape\": R_.shape,\n \"R\": encode(R_), \n \"theta_shape\": theta_.shape,\n \"theta\": encode(theta_)\n })\n output.register_callback('notebook.GetBoidStates', get_boid_states)\n\n def get_obstacles():\n if obstacles is None:\n return IPython.display.JSON(data={})\n else:\n return IPython.display.JSON(data={\n 'Disk': {\n 'R': encode(obstacles.R),\n 'R_shape': obstacles.R.shape,\n 'D': encode(obstacles.D),\n 'D_shape': obstacles.D.shape\n }\n })\n output.register_callback('notebook.GetObstacles', get_obstacles)\n\n def get_predators():\n if predators is None:\n return IPython.display.JSON(data={})\n else:\n return IPython.display.JSON(data={\n 'R': encode(R_predators),\n 'R_shape': R_predators.shape,\n 'theta': encode(theta_predators),\n 
'theta_shape': theta_predators.shape\n })\n output.register_callback('notebook.GetPredators', get_predators)\n\n def get_simulation_info():\n return IPython.display.JSON(data={\n 'frames': R.shape[0],\n 'box_size': box_size\n })\n output.register_callback('notebook.GetSimulationInfo', get_simulation_info)\n\n return renderer_code",
"Warning: After running the simulations in this notebook, you have to wait a moment (5 - 30 seconds) for rendering.\nFlocks, Herds, and Schools: A Distributed Behavioral Model\nWe will go over the paper \"Flocks, Herds, and Schools: A Distributed Behavioral Model\", published by C. W. Reynolds in SIGGRAPH 1987. The paper itself is fantastic and, as far as a description of flocking is concerned, there is little that we can offer. Therefore, rather than go through the paper directly, we will use JAX and JAX, MD to interactively build a simulation similar to Reynolds' in Colab. To simplify our discussion, we will build a two-dimensional version of Reynolds' simulation.\nIn nature there are many examples in which large numbers of animals exhibit complex collective motion (schools of fish, flocks of birds, herds of horses, colonies of ants). In his seminal paper, Reynolds introduces a model of such collective behavior (henceforth referred to as \"flocking\") based on simple rules that can be computed locally for each entity (referred to as a \"boid\") in the flock based on its environment. This paper is written in the context of computer graphics and so Reynolds is going for biologically inspired simulations that look right rather than accuracy in any statistical sense. Ultimately, Reynolds measures success in terms of the \"delight\" people find in watching the simulations; we will use a similar metric here.\nNote: we recommend running this notebook in \"Dark\" mode.\nBoids\nReynolds is interested in simulating bird-like entities that are described by a position, $R$, and an orientation, $\\theta$. This state can optionally be augmented with extra information (for example, hunger or fear). We can define a Boids type that stores data for a collection of boids as two arrays. R is an ndarray of shape [boid_count, spatial_dimension] and theta is an ndarray of shape [boid_count]. An individual boid is an index into these arrays. It will often be useful to refer to the vector orientation of the boid $N = (\\cos\\theta, \\sin\\theta)$.",
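To make the data layout concrete, here is a minimal standalone sketch (plain NumPy, imported as `onp` to match the convention above; the four example angles are arbitrary) of the `Boids` container and the orientation vectors $N = (\cos\theta, \sin\theta)$:

```python
import numpy as onp
from collections import namedtuple

Boids = namedtuple('Boids', ['R', 'theta'])

# Four boids at the origin, pointing along +x, +y, -x, and -y.
boids = Boids(R=onp.zeros((4, 2)),
              theta=onp.array([0., onp.pi / 2, onp.pi, 3 * onp.pi / 2]))

# Orientation vectors N_i = (cos(theta_i), sin(theta_i)); shape [boid_count, 2].
N = onp.stack([onp.cos(boids.theta), onp.sin(boids.theta)], axis=-1)
```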
"Boids = namedtuple('Boids', ['R', 'theta'])",
"We can instantiate a collection of boids randomly in a box of side length $L$. We will use periodic boundary conditions for our simulation, which means that boids will be able to wrap around the sides of the box. To do this we will use the space.periodic function in JAX, MD.",
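To see what periodic boundary conditions mean concretely, here is a rough standalone sketch of minimum-image displacement and wrapping (a simplified stand-in for the pair of functions that `space.periodic` returns, not the actual JAX MD implementation):

```python
import numpy as onp

def periodic(box_size):
  # Minimum-image displacement: the shortest vector from Rb to Ra once
  # points are allowed to wrap around the box edges.
  def displacement(Ra, Rb):
    dR = Ra - Rb
    return onp.mod(dR + box_size / 2., box_size) - box_size / 2.
  # Shift a point and wrap it back into [0, box_size).
  def shift(R, dR):
    return onp.mod(R + dR, box_size)
  return displacement, shift

disp, shift = periodic(10.0)
# Points near opposite edges are close *through* the boundary.
near_edge = disp(onp.array([9.5, 0.]), onp.array([0.5, 0.]))   # -> [-1., 0.]
wrapped = shift(onp.array([9.5, 0.]), onp.array([1., 0.]))     # -> [0.5, 0.]
```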
"# Simulation Parameters:\nbox_size = 800.0 # A float specifying the side-length of the box.\nboid_count = 200 # An integer specifying the number of boids.\ndim = 2 # The spatial dimension in which we are simulating.\n\n# Create RNG state to draw random numbers (see LINK).\nrng = random.PRNGKey(0)\n\n# Define periodic boundary conditions.\ndisplacement, shift = space.periodic(box_size)\n\n# Initialize the boids.\nrng, R_rng, theta_rng = random.split(rng, 3)\n\nboids = Boids(\n R = box_size * random.uniform(R_rng, (boid_count, dim)),\n theta = random.uniform(theta_rng, (boid_count,), maxval=2. * np.pi)\n)\n\ndisplay(render(box_size, boids))",
"Dynamics\nNow that we have defined our boids, we have to imbue them with some rules governing their motion. Reynolds notes that in nature flocks do not seem to have a maximum size, but instead can keep acquiring new boids and grow without bound. He also comments that each boid cannot possibly be keeping track of the entire flock and must, instead, be focusing on its local neighborhood. Reynolds then proposes three simple, local rules that boids might try to follow:\n\nAlignment: Boids will try to align themselves in the direction of their neighbors.\nAvoidance: Boids will avoid colliding with their neighbors.\nCohesion: Boids will try to move towards the center of mass of their neighbors.\n\nIn his exposition, Reynolds is vague about the details for each of these rules and so we will take some creative liberties. We will try to phrase this problem as an energy model, so our goal will be to write down an \"energy\" function (similar to a \"loss\") $E(R, \\theta)$ such that low-energy configurations of boids satisfy each of the three rules above. \n\\\nWe will write the total energy as a sum of three terms, one for each of the rules above:\n$$E(R, \\theta) = E_{\\text{Align}}(R, \\theta) + E_{\\text{Avoid}}(R, \\theta) + E_{\\text{Cohesion}}(R,\\theta)$$\nWe will go through each of these rules separately below, starting with alignment. Of course, any of these terms could be replaced by a learned solution. \n\\\nOnce we have an energy defined in this way, configurations of boids that move along low-energy trajectories might display behavior that looks appealing. However, we still have a lot of freedom to decide how we want to define dynamics over the boids. Reynolds says he uses overdamped dynamics and so we will do something similar. In particular, we will update the positions of the boids so that they try to move to minimize their energy. Simultaneously, we assume that the boids are swimming (or flying / walking). We choose a particularly simple model of this to start with and assume that the boids move at a fixed speed, $v$, along whatever direction they are pointing. We will use simple forward-Euler integration. This gives an update step,\n$${R_i}' = R_i + \\delta t(v\\hat N_i - \\nabla_{R_i}E(R, \\theta))$$\nwhere $\\delta t$ is a timestep that we are allowed to choose. We will often refer to the force, $F^{R_i} = -\\nabla_{R_i} E(R, \\theta)$, as the negative gradient of the energy with respect to the position of the $i$'th boid.\n\\\nWe will update the orientations of the boids to turn them towards \"low energy\" directions. To do this we will once again use a simple forward-Euler scheme,\n$$\n\\theta_i' = \\theta_i - \\delta t\\nabla_{\\theta_i}E(R,\\theta)\n$$\nThis is just one choice of dynamics, but there are probably many that would work equally well! Feel free to play around with it. One easy improvement that one could imagine making would be to use a more sophisticated integrator. We include a Runge-Kutta 4 integrator at the top of the notebook for an adventurous reader.\n\\\nTo see what this looks like before we define any interactions, we can run a simulation with $E(R,\\theta) = 0$ by first defining an update function that takes a boids state to a new boids state.",
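As a sanity check of the position update, here is a tiny standalone sketch for a single boid (plain NumPy; a toy quadratic energy $E = \frac12\|R\|^2$ with analytic gradient $\nabla_R E = R$, and arbitrary values for $v$ and $\delta t$):

```python
import numpy as onp

dt, speed = 0.1, 1.0
R = onp.array([1., 0.])          # a single boid's position
theta = onp.pi / 2               # pointing along +y
N = onp.array([onp.cos(theta), onp.sin(theta)])

grad_E = R                       # gradient of the toy energy E = 0.5 * |R|^2

# R' = R + dt * (v * N - grad_R E): swim along N while descending the energy.
R_new = R + dt * (speed * N - grad_E)
```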
"@vmap\ndef normal(theta):\n return np.array([np.cos(theta), np.sin(theta)])\n\ndef dynamics(energy_fn, dt, speed):\n @jit\n def update(_, state):\n R, theta = state['boids']\n\n dstate = quantity.force(energy_fn)(state)\n dR, dtheta = dstate['boids']\n n = normal(state['boids'].theta)\n\n state['boids'] = Boids(shift(R, dt * (speed * n + dR)), \n theta + dt * dtheta)\n\n return state\n\n return update",
"Now we can run a simulation and save the boid positions to a boids_buffer which will just be a list.",
"update = dynamics(energy_fn=lambda state: 0., dt=1e-1, speed=1.)\n\nboids_buffer = []\n\nstate = {\n 'boids': boids\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n\ndisplay(render(box_size, boids_buffer))",
"Alignment\nWhile the above simulation works and our boids are moving happily along, it is not terribly interesting. The first thing that we can add to this simulation is the alignment rule. When writing down these rules, it is often easier to express them for a single pair of boids and then use JAX's automatic vectorization via vmap to extend them to our entire simulation.\nGiven a pair of boids $i$ and $j$ we would like to choose an energy function that is minimized when they are pointing in the same direction. As discussed above, one of Reynolds' requirements was locality: boids should only interact with nearby boids. To do this, we introduce a cutoff $D_{\\text{Align}}$ and ignore pairs of boids such that $\\|\\Delta R_{ij}\\| > D_{\\text{Align}}$ where $\\Delta R_{ij} = R_i - R_j$. To make it so boids react smoothly, we will have the energy start out at zero when $\\|R_i - R_j\\| = D_{\\text{Align}}$ and increase smoothly as they get closer. Together, these simple ideas lead us to the following proposal,\n$$\\epsilon_{\\text{Align}}(\\Delta R_{ij}, \\hat N_i, \\hat N_j) = \\begin{cases}\\frac{J_{\\text{Align}}}{\\alpha}\\left(1 - \\frac{\\|\\Delta R_{ij}\\|}{D_{\\text{Align}}}\\right)^\\alpha(1 - \\hat N_i \\cdot \\hat N_j)^2 & \\text{if $\\|\\Delta R_{ij}\\| < D_{\\text{Align}}$} \\\\ 0 & \\text{otherwise}\\end{cases}$$\nThis energy will be maximized when $\\hat N_i$ and $\\hat N_j$ are anti-aligned and minimized when $\\hat N_i = \\hat N_j$. In general, we would like our boids to turn to align themselves with their neighbors rather than shift their centers to move apart. Therefore, we'll insert a stop-gradient into the displacement.",
"def align_fn(dR, N_1, N_2, J_align, D_align, alpha):\n dR = lax.stop_gradient(dR)\n dr = space.distance(dR) / D_align\n energy = J_align / alpha * (1. - dr) ** alpha * (1 - np.dot(N_1, N_2)) ** 2\n return np.where(dr < 1.0, energy, 0.)",
"We can plot the energy for different alignments as well as different distances between boids. We see that the energy goes to zero for large distances and when the boids are aligned.",
"#@title Alignment Energy\nN_1 = np.array([1.0, 0.0])\nangles = np.linspace(0, np.pi, 60)\nN_2 = vmap(lambda theta: np.array([np.cos(theta), np.sin(theta)]))(angles)\ndistances = np.linspace(0, 1, 5)\ndRs = vmap(lambda r: np.array([r, 0.]))(distances)\n\nfn = partial(align_fn, J_align=1., D_align=1., alpha=2.)\nenergy = vmap(vmap(fn, (None, None, 0)), (0, None, None))(dRs, N_1, N_2)\n\nfor d, e in zip(distances, energy):\n plt.plot(angles, e, label='r = {}'.format(d), linewidth=3)\n\nplt.xlim([0, np.pi])\nformat_plot('$\\\\theta$', '$E(r, \\\\theta)$')\nplt.legend()\nfinalize_plot()",
"We can now run our simulation with the alignment energy alone.",
"def energy_fn(state):\n boids = state['boids']\n E_align = partial(align_fn, J_align=0.5, D_align=45., alpha=3.)\n # Map the align energy over all pairs of boids. While both applications\n # of vmap map over the displacement matrix, each acts on only one normal.\n E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))\n\n dR = space.map_product(displacement)(boids.R, boids.R)\n N = normal(boids.theta)\n\n return 0.5 * np.sum(E_align(dR, N, N))\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)\n\nboids_buffer = []\n\nstate = {\n 'boids': boids\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n\ndisplay(render(box_size, boids_buffer))",
"Now the boids align with one another and already the simulation is displaying interesting behavior! \nAvoidance\nWe can incorporate an avoidance rule that will keep the boids from bumping into one another. This will help them to form a flock with some volume rather than collapsing together. To this end, imagine a very simple model of boids that push away from one another if they get within a distance $D_{\\text{Avoid}}$ and otherwise don't repel. We can use a simple energy similar to alignment but without any angular dependence,\n$$\n\\epsilon_{\\text{Avoid}}(\\Delta R_{ij}) = \\begin{cases}\\frac{J_{\\text{Avoid}}}{\\alpha}\\left(1 - \\frac{\\|\\Delta R_{ij}\\|}{D_{\\text{Avoid}}}\\right)^\\alpha & \\|\\Delta R_{ij}\\| < D_{\\text{Avoid}} \\\\ 0 & \\text{otherwise}\\end{cases}\n$$\nThis is implemented in the following Python function. Unlike the case of alignment, here we want boids to move away from one another and so we don't need a stop gradient on $\\Delta R$.",
"def avoid_fn(dR, J_avoid, D_avoid, alpha):\n dr = space.distance(dR) / D_avoid\n return np.where(dr < 1., \n J_avoid / alpha * (1 - dr) ** alpha, \n 0.)",
"Plotting the energy we see that it is highest when boids are overlapping and then goes to zero smoothly until $\\|\\Delta R\\| = D_{\\text{Avoid}}$.",
"#@title Avoidance Energy\n\ndr = np.linspace(0, 2., 60)\ndR = vmap(lambda r: np.array([0., r]))(dr)\nEs = vmap(partial(avoid_fn, J_avoid=1., D_avoid=1., alpha=3.))(dR)\nplt.plot(dr, Es, 'r', linewidth=3)\n\nplt.xlim([0, 2])\n\nformat_plot('$r$', '$E$')\nfinalize_plot()",
"We can now run a version of our simulation with both alignment and avoidance.",
"def energy_fn(state):\n boids = state['boids']\n \n E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)\n E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))\n\n # New Avoidance Code\n E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)\n E_avoid = vmap(vmap(E_avoid))\n #\n\n dR = space.map_product(displacement)(boids.R, boids.R)\n N = normal(boids.theta)\n\n return 0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR))\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)\n\nboids_buffer = []\n\nstate = {\n 'boids': boids\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n\ndisplay(render(box_size, boids_buffer))",
"The avoidance term in the energy stops the boids from collapsing on top of one another.\nCohesion\nThe final piece of Reynolds' boid model is cohesion. Notice that in the above simulation, the boids tend to move in the same direction but they also often drift apart. To make the boids behave more like schools of fish or birds, which maintain a more compact arrangement, we add a cohesion term to the energy. \nThe goal of the cohesion term is to align boids towards the center of mass of their neighbors. Given a boid, $i$, we can compute the center of mass position of its neighbors as,\n$$\n\\Delta R_i = \\frac 1{|\\mathcal N|} \\sum_{j\\in\\mathcal N}\\Delta R_{ij}\n$$\nwhere we have let $\\mathcal N$ be the set of boids such that $\\|\\Delta R_{ij}\\| < D_{\\text{Cohesion}}$. \nGiven the center of mass displacements, we can define a reasonable cohesion energy as,\n$$\n\\epsilon_{\\text{Cohesion}}\\left(\\widehat{\\Delta R}_i, N_i\\right) = \\frac12 J_{\\text{Cohesion}}\\left(1 - \\widehat{\\Delta R}_i\\cdot N_i\\right)^2\n$$\nwhere $\\widehat{\\Delta R}_i = \\Delta R_i / \\|\\Delta R_i\\|$ is the normalized vector pointing in the direction of the center of mass. This function is minimized when the boid is pointing in the direction of the center of mass.\nWe can implement the cohesion energy in the following Python function. Note that as with alignment, we will have boids control their orientation and so we will insert a stop gradient on the displacement vector.",
"def cohesion_fn(dR, N, J_cohesion, D_cohesion, eps=1e-7):\n dR = lax.stop_gradient(dR)\n dr = np.linalg.norm(dR, axis=-1, keepdims=True)\n \n mask = dr < D_cohesion\n\n N_com = np.where(mask, 1.0, 0)\n dR_com = np.where(mask, dR, 0)\n dR_com = np.sum(dR_com, axis=1) / (np.sum(N_com, axis=1) + eps)\n dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)\n return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2\n\ndef energy_fn(state):\n boids = state['boids']\n \n E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)\n E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))\n\n E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)\n E_avoid = vmap(vmap(E_avoid))\n \n # New Cohesion Code\n E_cohesion = partial(cohesion_fn, J_cohesion=0.005, D_cohesion=40.)\n #\n\n dR = space.map_product(displacement)(boids.R, boids.R)\n N = normal(boids.theta)\n\n return (0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR)) + \n np.sum(E_cohesion(dR, N)))\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)\n\nboids_buffer = []\n\nstate = {\n 'boids': boids\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n\ndisplay(render(box_size, boids_buffer))",
"Now the boids travel in tighter, more cohesive, packs. By tuning the range of the cohesive interaction and its strength you can change how strongly the boids attempt to stick together. However, if we raise it too high it can have some undesirable consequences.",
"def energy_fn(state):\n boids = state['boids']\n \n E_align = partial(align_fn, J_align=1., D_align=45., alpha=3.)\n E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))\n\n E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)\n E_avoid = vmap(vmap(E_avoid))\n\n E_cohesion = partial(cohesion_fn, J_cohesion=0.1, D_cohesion=40.) # Raised from 0.005 to 0.1.\n\n dR = space.map_product(displacement)(boids.R, boids.R)\n N = normal(boids.theta)\n\n return (0.5 * np.sum(E_align(dR, N, N) + E_avoid(dR)) + \n np.sum(E_cohesion(dR, N)))\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)\n\nboids_buffer = []\n\nstate = {\n 'boids': boids\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n\ndisplay(render(box_size, boids_buffer))",
"Looking Ahead\nWhen the effect of cohesion is set to a large value, the boids cluster well. However, the motion of the individual flocks becomes less smooth and adopts an almost oscillatory behavior. This is caused by boids in the front of the pack getting pulled towards boids behind them.\nTo improve this situation, we follow Reynolds and note that animals don't really look in all directions. The behavior of our flocks might look more realistic if we incorporated a \"field of view\" for the boids. To this end, in both the alignment function and the cohesion function we will ignore boids that are outside of the line of sight for the boid. We will have a particularly simple definition for line of sight by first defining $\\widehat{\\Delta R_{ij}} \\cdot N_i = \\cos\\theta_{ij}$ where $\\theta_{ij}$ is the angle between the orientation of the boid and the vector from the boid to its neighbor. \nSince most animals that display flocking behavior have eyes on the sides of their heads, as opposed to the front, we will define $\\theta_{\\text{min}}$ and $\\theta_{\\text{max}}$ to bound the angular field of view of the boids. Then, we assume each boid can see neighbors if $\\cos\\theta_{\\text{max}} < \\cos\\theta_{ij} < \\cos\\theta_{\\text{min}}$ (note the order: cosine is monotonically decreasing on $[0, \\pi]$).",
"def field_of_view_mask(dR, N, theta_min, theta_max):\n dr = space.distance(dR)\n dR_hat = dR / dr\n ctheta = np.dot(dR_hat, N)\n # Cosine is monotonically decreasing on [0, pi].\n return np.logical_and(ctheta > np.cos(theta_max),\n ctheta < np.cos(theta_min))",
"We can then adapt the cohesion function to incorporate an arbitrary mask,",
"def cohesion_fn(dR, N, mask, # New mask parameter.\n J_cohesion, D_cohesion, eps=1e-7):\n dR = lax.stop_gradient(dR)\n dr = space.distance(dR)\n\n mask = np.reshape(mask, mask.shape + (1,))\n dr = np.reshape(dr, dr.shape + (1,))\n \n # Updated Masking Code\n mask = np.logical_and(dr < D_cohesion, mask)\n #\n \n N_com = np.where(mask, 1.0, 0)\n dR_com = np.where(mask, dR, 0)\n dR_com = np.sum(dR_com, axis=1) / (np.sum(N_com, axis=1) + eps)\n dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)\n return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2",
"And finally run a simulation incorporating the field of view.",
"def energy_fn(state):\n boids = state['boids']\n \n E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)\n E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))\n\n E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)\n E_avoid = vmap(vmap(E_avoid))\n\n E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)\n\n dR = space.map_product(displacement)(boids.R, boids.R)\n N = normal(boids.theta)\n\n # New FOV code.\n fov = partial(field_of_view_mask, \n theta_min=0.,\n theta_max=np.pi / 3.)\n # As before, we have to vmap twice over the displacement matrix, but only once\n # over the normal.\n fov = vmap(vmap(fov, (0, None)))\n mask = fov(dR, N)\n #\n\n return (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) + \n np.sum(E_cohesion(dR, N, mask)))\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)\n\nboids_buffer = []\n\nstate = {\n 'boids': boids\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n\ndisplay(render(box_size, boids_buffer))",
"Extras\nNow that the core elements of the simulation are working well enough, we can add some extras fairly easily. In particular, we'll try to add some obstacles and some predators.\nObstacles\nThe first thing we'll add are obstacles that the boids and (soon) the predators will try to avoid as they wander around the simulation. For the purposes of this notebook, we'll restrict ourselves to disk-like obstacles. Each disk will be described by a center position and a radius, $D_\\text{Obstacle}$.",
"Obstacle = namedtuple('Obstacle', ['R', 'D'])",
"Then we can instantiate some obstacles.",
"N_obstacle = 5\n\nR_rng, D_rng = random.split(random.PRNGKey(5))\nobstacles = Obstacle(\n box_size * random.uniform(R_rng, (N_obstacle, 2)),\n random.uniform(D_rng, (N_obstacle,), minval=30.0, maxval=100.0)\n)",
"In a similar spirit to the energy functions above, we would like an energy function that encourages the boids to avoid obstacles. For this purpose we will pick an energy function that is similar in form to the alignment function above,\n$$\n\\epsilon_\\text{Obstacle}(\\Delta R_{io}, N_i, D_o) = \\begin{cases}\\frac{J_\\text{Obstacle}}{\\alpha}\\left(1 - \\frac{\\|\\Delta R_{io}\\|}{D_o}\\right)^\\alpha\\left(1 + N_i\\cdot \\widehat{\\Delta R_{io}}\\right)^2 & \\|\\Delta R_{io}\\| < D_o \\\\ 0 & \\text{otherwise}\\end{cases}\n$$\nfor $\\Delta R_{io}$ the displacement vector between a boid $i$ and an obstacle $o$. This energy is zero when the boid and the obstacle are not overlapping. When they are overlapping, the energy is minimized when the boid is facing away from the obstacle.\n\\\nWe can write down the boid-obstacle energy function in Python.",
"def obstacle_fn(dR, N, D, J_obstacle):\n # Note: relative to the formula above, we fix alpha = 2 and absorb the\n # 1 / alpha prefactor into J_obstacle.\n dr = space.distance(dR)\n dR = dR / np.reshape(dr, dr.shape + (1,))\n return np.where(dr < D,\n J_obstacle * (1 - dr / D) ** 2 * (1 + np.dot(N, dR)) ** 2,\n 0.)\n",
"Now we can run a simulation that includes obstacles.",
"def energy_fn(state):\n boids = state['boids']\n d = space.map_product(displacement)\n \n E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)\n E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))\n\n E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)\n E_avoid = vmap(vmap(E_avoid))\n\n E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)\n\n dR = d(boids.R, boids.R)\n N = normal(boids.theta)\n\n fov = partial(field_of_view_mask, \n theta_min=0.,\n theta_max=np.pi / 3.)\n fov = vmap(vmap(fov, (0, None)))\n mask = fov(dR, N)\n\n # New obstacle code\n obstacles = state['obstacles']\n dR_o = -d(boids.R, obstacles.R)\n D = obstacles.D\n E_obstacle = partial(obstacle_fn, J_obstacle=1000.)\n E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))\n #\n\n return (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) + \n np.sum(E_cohesion(dR, N, mask)) + np.sum(E_obstacle(dR_o, N, D)))\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, speed=1.)\n\nboids_buffer = []\n\nstate = {\n 'boids': boids,\n 'obstacles': obstacles\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n\ndisplay(render(box_size, boids_buffer, obstacles))",
"The boids are now successfully navigating obstacles in their environment.\nPredators\nNext we are going to introduce some predators into the environment for the boids to run away from. Much like the boids, the predators will be described by a position and an angle.",
"Predator = namedtuple('Predator', ['R', 'theta'])\n\npredators = Predator(R=np.array([[box_size / 2., box_size /2.]]),\n theta=np.array([0.0]))",
"The predators will also follow similar dynamics to the boids, swimming in whatever direction they are pointing at some speed that we can choose. Unlike in the previous versions of the simulation, predators naturally introduce some asymmetry to the system. In particular, we would like the boids to flee from the predators, but we want the predators to chase the boids. To achieve this behavior, we will consider a system reminiscient of a two-player game in which the boids move to minize an energy,\n$$\nE_\\text{Boid} = E_\\text{Align} + E_\\text{Avoid} + E_\\text{Cohesion} + E_\\text{Obstacle} + E_\\text{Boid-Predator}. \n$$\nSimultaneously, the predators move in an attempt to minimize a simpler energy,\n$$\nE_\\text{Predator} = E_\\text{Predator-Boid} + E_\\text{Obstacle}.\n$$\nTo add predators to the environment we therefore need to add two rules, one that dictates the boids behavior near a predator and one for the behavior of predators near a group of boids. In both cases we will see that we can draw significant inspiration from behaviors that we've already developed.\n\\\nWe will start with the boid-predator function since it is a bit simpler. In fact, we can use an energy that is virtually identical to the obstacle avoidance energy since the desired behavior is the same.\n$$\n\\epsilon_\\text{Boid-Predator}(\\Delta R_{ip}, N_i) = \\frac{J_\\text{Boid-Predator}}\\alpha\\left(1 - \\frac{\\|\\Delta R_{ip}\\|}{D_\\text{Boid-Predator}}\\right)^\\alpha (1 + \\widehat{\\Delta R_{ip}}\\cdot N_i)^2\n$$\nAs before, this function is minimized when the boid is pointing away from the predators. Because we don't want the predators to experience this term we must include a stop-gradient on the predator positions.",
"def boid_predator_fn(R_boid, N_boid, R_predator, J, D, alpha):\n N = N_boid\n dR = displacement(lax.stop_gradient(R_predator), R_boid)\n dr = np.linalg.norm(dR, keepdims=True)\n dR_hat = dR / dr\n return np.where(dr < D,\n J / alpha * (1 - dr / D) ** alpha * (1 + np.dot(dR_hat, N)),\n 0.)",
"For the predator-boid function we can borrow the cohesion energy that we developed above to have predators that turn towards the center-of-mass of boids in their field of view.",
"def predator_boid_fn(R_predator, N_predator, R_boids, J, D, eps=1e-7):\n # It is most convenient to define the predator_boid energy function\n # for a single predator and a whole flock of boids. As such we expect shapes,\n # R_predator : (spatial_dim,)\n # N_predator : (spatial_dim,)\n # R_boids : (n, spatial_dim,)\n \n N = N_predator \n\n # As such, we need to vectorize over the boids.\n d = vmap(displacement, (0, None))\n dR = d(lax.stop_gradient(R_boids), R_predator)\n dr = space.distance(dR)\n\n fov = partial(field_of_view_mask, \n theta_min=0.,\n theta_max=np.pi / 3.)\n # Here as well.\n fov = vmap(fov, (0, None))\n\n mask = np.logical_and(dr < D, fov(dR, N))\n mask = mask[:, np.newaxis]\n\n boid_count = np.where(mask, 1.0, 0)\n dR_com = np.where(mask, dR, 0)\n dR_com = np.sum(dR_com, axis=0) / (np.sum(boid_count, axis=0) + eps)\n dR_com = dR_com / np.linalg.norm(dR_com + eps, keepdims=True)\n return f32(0.5) * J * (1 - np.dot(dR_com, N)) ** 2",
"Now we can modify our dynamics to also update predators.",
"def dynamics(energy_fn, dt, boid_speed, predator_speed):\n # We extract common movement functionality into a `move` function.\n def move(boids, dboids, speed):\n R, theta, *_ = boids\n dR, dtheta = dboids\n n = normal(theta)\n\n return (shift(R, dt * (speed * n + dR)), \n theta + dt * dtheta)\n \n @jit\n def update(_, state):\n dstate = quantity.force(energy_fn)(state)\n\n state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))\n state['predators'] = Predator(*move(state['predators'], \n dstate['predators'], \n predator_speed))\n\n return state\n\n return update",
"Finally, we can put everything together and run the simulation.",
"def energy_fn(state):\n boids = state['boids']\n d = space.map_product(displacement)\n \n E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)\n E_align = vmap(vmap(E_align, (0, None, 0)), (0, 0, None))\n\n E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)\n E_avoid = vmap(vmap(E_avoid))\n\n E_cohesion = partial(cohesion_fn, J_cohesion=0.05, D_cohesion=40.)\n\n dR = d(boids.R, boids.R)\n N = normal(boids.theta)\n\n fov = partial(field_of_view_mask, \n theta_min=0.,\n theta_max=np.pi / 3.)\n fov = vmap(vmap(fov, (0, None)))\n mask = fov(dR, N)\n\n obstacles = state['obstacles']\n dR_bo = -d(boids.R, obstacles.R)\n D = obstacles.D\n E_obstacle = partial(obstacle_fn, J_obstacle=1000.)\n E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))\n \n # New predator code.\n predators = state['predators']\n E_boid_predator = partial(boid_predator_fn, J=256.0, D=75.0, alpha=3.)\n E_boid_predator = vmap(vmap(E_boid_predator, (0, 0, None)), (None, None, 0))\n\n N_predator = normal(predators.theta)\n E_predator_boid = partial(predator_boid_fn, J=0.1, D=95.0)\n E_predator_boid = vmap(E_predator_boid, (0, 0, None))\n\n dR_po = -d(predators.R, obstacles.R)\n #\n\n E_boid = (0.5 * np.sum(E_align(dR, N, N) * mask + E_avoid(dR)) + \n np.sum(E_cohesion(dR, N, mask)) + np.sum(E_obstacle(dR_bo, N, D)) + \n np.sum(E_boid_predator(boids.R, N, predators.R)))\n \n E_predator = (np.sum(E_obstacle(dR_po, N_predator, D)) + \n np.sum(E_predator_boid(predators.R, N_predator, boids.R)))\n\n return E_boid + E_predator\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)\n \nboids_buffer = []\npredators_buffer = []\n\nstate = {\n 'boids': boids,\n 'obstacles': obstacles,\n 'predators': predators\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n predators_buffer += [state['predators']]\n\ndisplay(render(box_size, boids_buffer, obstacles, predators_buffer))",
"We see that our predator now moves around chasing the boids.\nInternal State\nUntil now, all of the data describing the boids, predators, and obstacles referred to their physical location and orientation. However, we can develop more interesting behavior if we allow the agents in our simulation to have extra data describing their internal state. As an example of this, we will allow predators to accelerate to chase boids if they get close.\n\\\nTo this end, we add at extra piece of data to our predator, $t_\\text{sprint}$ which is the last time the predator accelerated. If the predator gets within $D_\\text{sprint}$ of a boid and it has been at least $T_\\text{sprint}$ units of time since it last sprinted it will accelerate. In practice, to accelerate the predator we will adjust its speed so that,\n$$\ns(t) = s_0 + s_1 e^{-(t - t_\\text{sprint}) / C\\tau_\\text{sprint}}\n$$\nwhere $s_0$ is the normal speed of the predator, $s_0 + s_1$ is the peak speed, and $\\tau_\\text{sprint}$ determines how long the sprint lasts for. In practice, rather than storign $t_\\text{sprint}$ we will record $\\Delta t = t - t_\\text{sprint}$ which is the time since the last sprint.\n\\\nImplementing this first requires that we add the necessary data to the predators.",
"Predator = namedtuple('Predator', ['R', 'theta', 'dt']) \n\npredators = Predator(R=np.array([[box_size / 2., box_size /2.]]),\n theta=np.array([0.0]),\n dt=np.array([0.]))\n\ndef dynamics(energy_fn, dt, boid_speed, predator_speed):\n # We extract common movement functionality into a `move` function.\n def move(boids, dboids, speed):\n R, theta, *_ = boids\n dR, dtheta, *_ = dboids\n n = normal(theta)\n\n return (shift(R, dt * (speed * n + dR)), \n theta + dt * dtheta)\n \n @jit\n def update(_, state):\n dstate = quantity.force(energy_fn)(state)\n\n state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))\n\n # New code to accelerate the predators.\n D_sprint = 65.\n T_sprint = 300.\n tau_sprint = 50.\n sprint_speed = 2.0\n\n # First we find the distance from each predator to the nearest boid.\n d = space.map_product(space.metric(displacement))\n predator = state['predators']\n dr_min = np.min(d(state['boids'].R, predator.R), axis=1)\n\n # Check whether there is a near enough boid to bother sprinting and if\n # enough time has elapsed since the last sprint.\n mask = np.logical_and(dr_min < D_sprint, predator.dt > T_sprint)\n predator_dt = np.where(mask, 0., predator.dt + dt)\n\n # Adjust the speed according to whether or not we're sprinting.\n speed = predator_speed + sprint_speed * np.exp(-predator_dt / tau_sprint)\n\n predator_R, predator_theta = move(state['predators'],\n dstate['predators'], \n speed)\n state['predators'] = Predator(predator_R, predator_theta, predator_dt)\n #\n\n return state\n\n return update\n\nupdate = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)\n \nboids_buffer = []\npredators_buffer = []\n\nstate = {\n 'boids': boids,\n 'obstacles': obstacles,\n 'predators': predators\n}\n\nfor i in ProgressIter(range(400)):\n state = lax.fori_loop(0, 50, update, state)\n boids_buffer += [state['boids']]\n predators_buffer += [state['predators']]\n\ndisplay(render(box_size, boids_buffer, obstacles, 
predators_buffer))",
"Scaling Up\nUp to this point, we have simulated a relatively small flock of $n = 200$ boids. In part we have done this because we compute, at each step, an $n\\times n$ matrix of distances. Therefore the computational complexity of the flocking simulation scales as $\\mathcal O(n^2)$. However, as Reynolds' notes we have built in a locality assumption so that no boids interact provided they are further apart than $D =\\max{D_{\\text{Align}}, D_\\text{Avoid}, D_\\text{Cohesion}}$. JAX MD provides tools to construct a set of candidates for each boid in about $\\mathcal O(n\\log n)$ time by predcomputing a list of neighbors for each boid. Using neighbor lists we can scale to much larger simulations.\nWe create lists of all neighbors within a distance of $D + \\delta$ and pack them into an array of shape $n\\times n_\\text{max_neighbors}$. Using this technique we only need to rebuild the neighbor list if any particle has moved more than a distance of $\\delta$. We estimate max_neighbors from arrangements of particles and if any boid ever has more than this number of neighbors we must rebuild the neighbor list from scratch and recompile our simulation onto device. \nTo start with we setup a much larger system of boids.",
"# Simulation Parameters.\nbox_size = 2400.0 # A float specifying the side-length of the box.\nboid_count = 2000 # An integer specifying the number of boids.\nobstacle_count = 10 # An integer specifying the number of obstacles.\npredator_count = 10 # An integer specifying the number of predators.\ndim = 2 # The spatial dimension in which we are simulating.\n\n# Create RNG state to draw random numbers.\nrng = random.PRNGKey(0)\n\n# Define periodic boundary conditions.\ndisplacement, shift = space.periodic(box_size)\n\n# Initialize the boids.\n# To generate normal vectors that are uniformly distributed on S^N note that\n# one can generate a random normal vector in R^N and then normalize it.\nrng, R_rng, theta_rng = random.split(rng, 3)\n\nboids = Boids(\n R = box_size * random.uniform(R_rng, (boid_count, dim)),\n theta = random.uniform(theta_rng, (boid_count,), maxval=2 * np.pi)\n)\n\nrng, R_rng, D_rng = random.split(rng, 3)\nobstacles = Obstacle(\n R = box_size * random.uniform(R_rng, (obstacle_count, dim)),\n D = random.uniform(D_rng, (obstacle_count,), minval=100, maxval=300.) \n)\n\nrng, R_rng, theta_rng = random.split(rng, 3)\npredators = Predator(\n R = box_size * random.uniform(R_rng, (predator_count, dim)),\n theta = random.uniform(theta_rng, (predator_count,), maxval=2 * np.pi),\n dt = np.zeros((predator_count,))\n)\n\n\nneighbor_fn = partition.neighbor_list(displacement,\n box_size, \n r_cutoff=45., \n dr_threshold=10.,\n capacity_multiplier=1.5,\n format=partition.Sparse)\n\nneighbors = neighbor_fn.allocate(boids.R)\nprint(neighbors.idx.shape)",
"We see that dispite having 2000 boids, they each only have about 13 neighbors apiece at the start of the simulation. Of course this will grow over time and we will have to rebuild the neighbor list as it does. Next we make some minimal modifications to our energy function to rewrite the energy of our simulation to operate on neighbors. This mostly involves changing some of the vectorization patterns with vmap and creating a mask of which neighbors in the $n\\times n_\\text{max neighbors}$ arrays are filled. Finally, we have to make a slightly updated version of the cohesion function that takes into account the neighbors.",
"from jax import ops\n\ndef cohesion_fn(dR, N, mask, # New mask parameter.\n neighbor, J_cohesion, D_cohesion, eps=1e-7):\n dR = lax.stop_gradient(dR)\n dr = space.distance(dR)\n\n count = len(neighbor.reference_position)\n mask = np.reshape(mask, mask.shape + (1,))\n dr = np.reshape(dr, dr.shape + (1,))\n \n mask = (dr < D_cohesion) & mask\n \n N_com = np.where(mask, 1.0, 0)\n dR_com = np.where(mask, dR, 0)\n dR_com = ops.segment_sum(dR_com, neighbor.idx[0], count) \n dR_com /= (ops.segment_sum(N_com, neighbor.idx[0], count) + eps)\n dR_com = dR_com / np.linalg.norm(dR_com + eps, axis=1, keepdims=True)\n return f32(0.5) * J_cohesion * (1 - np.sum(dR_com * N, axis=1)) ** 2\n\ndef energy_fn(state, neighbors):\n boids = state['boids']\n d = space.map_product(displacement)\n \n fov = partial(field_of_view_mask, \n theta_min=0.,\n theta_max=np.pi / 3.)\n fov = vmap(fov)\n\n E_align = partial(align_fn, J_align=12., D_align=45., alpha=3.)\n E_align = vmap(E_align)\n\n E_avoid = partial(avoid_fn, J_avoid=25., D_avoid=30., alpha=3.)\n E_avoid = vmap(E_avoid)\n\n E_cohesion = partial(cohesion_fn, neighbor=neighbors, \n J_cohesion=0.05, D_cohesion=40.)\n\n # New code to extract displacement vector to neighbors and normals.\n senders, receivers = neighbors.idx\n Ra = boids.R[senders]\n Rb = boids.R[receivers]\n\n dR = -space.map_bond(displacement)(Ra, Rb)\n N = normal(boids.theta)\n Na, Nb = N[senders], N[receivers]\n #\n\n # New code to add a mask over neighbors as well as field-of-view.\n neighbor_mask = partition.neighbor_list_mask(neighbors)\n fov_mask = np.logical_and(neighbor_mask, fov(dR, Na))\n #\n\n obstacles = state['obstacles']\n dR_bo = -d(boids.R, obstacles.R)\n D = obstacles.D\n E_obstacle = partial(obstacle_fn, J_obstacle=1000.)\n E_obstacle = vmap(vmap(E_obstacle, (0, 0, None)), (0, None, 0))\n \n predators = state['predators']\n E_boid_predator = partial(boid_predator_fn, J=256.0, D=75.0, alpha=3.)\n E_boid_predator = vmap(vmap(E_boid_predator, (0, 0, None)), 
(None, None, 0))\n\n N_predator = normal(predators.theta)\n E_predator_boid = partial(predator_boid_fn, J=0.1, D=95.0)\n E_predator_boid = vmap(E_predator_boid, (0, 0, None))\n\n dR_po = -d(predators.R, obstacles.R)\n\n E_boid = (0.5 * np.sum(E_align(dR, Na, Nb) * fov_mask + \n E_avoid(dR) * neighbor_mask) + \n np.sum(E_cohesion(dR, N, fov_mask)) + \n np.sum(E_obstacle(dR_bo, N, D)) + \n np.sum(E_boid_predator(boids.R, N, predators.R)))\n \n E_predator = (np.sum(E_obstacle(dR_po, N_predator, D)) + \n np.sum(E_predator_boid(predators.R, N_predator, boids.R)))\n\n return E_boid + E_predator",
"Next we have to update our simulation to use and update the neighbor list.",
"def dynamics(energy_fn, dt, boid_speed, predator_speed):\n # We extract common movement functionality into a `move` function.\n def move(boids, dboids, speed):\n R, theta, *_ = boids\n dR, dtheta, *_ = dboids\n n = normal(theta)\n\n return (shift(R, dt * (speed * n + dR)), \n theta + dt * dtheta)\n \n @jit\n def update(_, state_and_neighbors):\n state, neighbors = state_and_neighbors\n\n # New code to update neighbor list.\n neighbors = neighbors.update(state['boids'].R) \n\n dstate = quantity.force(energy_fn)(state, neighbors)\n state['boids'] = Boids(*move(state['boids'], dstate['boids'], boid_speed))\n\n # Predator acceleration.\n D_sprint = 65.\n T_sprint = 300.\n tau_sprint = 50.\n sprint_speed = 2.0\n\n d = space.map_product(space.metric(displacement))\n predator = state['predators']\n dr_min = np.min(d(state['boids'].R, predator.R), axis=1)\n\n mask = np.logical_and(dr_min < D_sprint, predator.dt > T_sprint)\n predator_dt = np.where(mask, 0., predator.dt + dt)\n\n speed = predator_speed + sprint_speed * np.exp(-predator_dt / tau_sprint)\n speed = speed[:, np.newaxis]\n\n predator_R, predator_theta = move(state['predators'],\n dstate['predators'], \n speed)\n state['predators'] = Predator(predator_R, predator_theta, predator_dt)\n #\n\n return state, neighbors\n\n return update",
"And now we can conduct our larger simulation.",
"update = dynamics(energy_fn=energy_fn, dt=1e-1, boid_speed=1., predator_speed=.85)\n\nboids_buffer = []\npredators_buffer = []\n\nstate = {\n 'boids': boids,\n 'obstacles': obstacles,\n 'predators': predators\n}\n\nfor i in ProgressIter(range(800)):\n new_state, neighbors = lax.fori_loop(0, 50, update, (state, neighbors))\n\n # If the neighbor list can't fit in the allocation, rebuild it but bigger.\n if neighbors.did_buffer_overflow:\n print('REBUILDING')\n neighbors = neighbor_fn.allocate(state['boids'].R)\n state, neighbors = lax.fori_loop(0, 50, update, (state, neighbors))\n assert not neighbors.did_buffer_overflow\n else:\n state = new_state\n\n boids_buffer += [state['boids']]\n predators_buffer += [state['predators']]\n\ndisplay(render(box_size, boids_buffer, obstacles, predators_buffer))",
"At the end of the simulation we can see how large our neighbor list had to be to accomodate all of the boids.",
"print(neighbors.idx.shape)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maasencioh/NLP-TM
|
4-advancedML/Assigment1.ipynb
|
mit
|
[
"NLP and TM Módulo 4\nTaller 1: word2vec\nNombres: Miguel Angel Asencio Hurtado\nObtenga el archivo del modelo word2vec entrenado con WikiNews en Español: eswikinews.bin",
"# import word2vec model from gensim\nfrom gensim.models.word2vec import Word2Vec\n# load pre-trained model\nmodel = Word2Vec.load_word2vec_format('eswikinews.bin', binary=True)",
"1. Comparando composicionalidad y analogía.\nComposicionalidad y analogía son dos mecanismos diferentes que se pueden usar con representaciones distribuidas. La idea es usar independientemente composicionalidad y analogía para resolver el mismo problema. El problema a resolver es encontrar el presidente de un país dado.\nPrimero usaremos composicionalidad. La función siguiente debe recibir el nombre de un país y retornar una lista de palabras que posiblemente corresponden a presidentes.\nPor ejemplo, si la función se invoca con 'ecuador' como argumento:\n```python\n\n\n\npresidents_comp('ecuador')\n[u'jamil_mahuad',\n u'presidencia',\n u'jose_maria_velasco_ibarra',\n u'republica',\n u'rafael_correa',\n u'gustavo_noboa',\n u'lucio_gutierrez',\n u'abdala_bucaram',\n u'vicepresidente',\n u'gabriel_garcia_moreno']\n ```",
"def presidents_comp(country):\n return [elm[0] for elm in model.most_similar(positive=[country, 'presidente'])]\n\nfor country in ['colombia', 'venezuela', 'ecuador', 'brasil', 'argentina', 'chile']:\n print country\n for president in presidents_comp(country):\n print ' ', president",
"El siguiente paso es usar analogías para encontrar el presidente de un país dado.",
"def presidents_analogy(country):\n return [elm[0] for elm in model.most_similar(positive=[country, 'hugo_chavez'], negative=['venezuela'])]\n\nfor country in ['colombia', 'venezuela', 'ecuador', 'brasil', 'argentina', 'chile']:\n print country\n for president in presidents_analogy(country):\n print ' ', president",
"¿Cual versión funciona mejor? Explique claramente. ¿Por qué cree que este es el caso?\nR/ Funciona mejor la analogía, ya que al revisar la lista de resultados no existen resultados del tipo presidencia, republica, etc.\nEsto debe ser porque al tener más contexto es más facil entender de qué es que se está hablando, ya que en el caso del presidente la composición puede relacionar con noticias de política, más que de realizar una búsqueda de la relación.\n2. Escriba una función que calcule el antónimo de una palabra",
"def antonimo(palabra):\n if palabra is 'blanco':\n return 'negro'\n return [elm[0] for elm in model.most_similar(positive=[palabra, 'negro'], negative=['blanco'])][0]\n\nfor palabra in ['blanco', 'menor', 'rapido', 'arriba']:\n print palabra, antonimo(palabra)",
"Busque más ejemplos en los que funcione y otros en los que no funcione. Explique.",
"print ' FUNCIONA'\nfor palabra in 'salir verdad seco izquierda'.split():\n print palabra + ':', antonimo(palabra)\n \nprint '\\n NO FUNCIONA'\nfor palabra in 'rico paz joven comunismo'.split():\n print palabra + ':', antonimo(palabra)",
"R/ Funciona bastante bien en la mayoría de los casos, pero cuando no encuentra el antónimo, retorna una palabra en extremo relacionada, como un sinónimo o un derivado.\n3. Una de estas cosas no es como las otras...\nGensim provee la función doesnt_match, la cual permite encontrar, dentro de una lista de palabras, una palabra que está fuera de lugar. Por ejemplo:",
"model.doesnt_match(\"azul rojo abajo verde\".split())",
"La idea es implementar la misma funcionalidad por nuestra cuenta. La condición es que solo podemos usar la función similarity de Gensim la cual calcula la similitud de dos palabras:",
"print model.similarity('azul', 'rojo')\nprint model.similarity('azul', 'abajo')\n\nimport numpy as np\ndef no_es_como_las_otras(word_list):\n size = len(word_list)\n word_matrix = np.zeros(shape=(size, size))\n for row in xrange(size):\n for column in xrange(size):\n word_matrix[row, column] = model.similarity(word_list[row], word_list[column])\n sum_columns = word_matrix.sum(axis=0)\n return word_list[np.argmin(sum_columns)]\n\nprint no_es_como_las_otras(\"azul rojo abajo verde\".split())\nprint no_es_como_las_otras(\"azul izquierda abajo derecha\".split())\nprint no_es_como_las_otras(\"colombia suiza carro venezuela\".split())\nprint no_es_como_las_otras(\"colombia suiza argentina venezuela\".split())",
"Nota: no olvide incluir los nombres de los integrantes del grupo (máximo 2) en el encabezado del notebook. Remita el notebook al siguiente file request de Dropbox: https://www.dropbox.com/request/k4GFiKHjl8OuE9sCiq1N."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
karst87/ml
|
01_openlibs/tensorflow/03_segmentfault/00_introduction.ipynb
|
mit
|
[
"TensorFlow入门教程\nhttps://segmentfault.com/a/1190000007484465\n简介\nTensorFlow是目前最流行的深度学习框架。我们先引用一段官网对于TensorFlow的介绍,来看一下Google对于它这个产品的定位。\nTensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.\n上文并没有提到大红大紫的Deep Learning,而是聚焦在一个更广泛的科学计算应用领域。引文的关键词有:\n\nNumerical Computation:应用领域是数值计算,所以TensorFlow不仅能支持Deep Learning,还支持其他机器学习算法,甚至包括更一般的数值计算任务(如求导、积分、变换等)。\nData Flow Graph:用graph来描述一个计算任务。\nNode:代表一个数学运算(mathmatical operations,简称ops),这里面包括了深度学习模型经常需要使用的ops。\nEdge:指向node的edge代表这个node的输入,从node引出来的edge代表这个node的输出,输入和输出都是multidimensional data arrays,即多维数组,在数学上又称之为tensor。这也是TensorFlow名字的由来,表示多维数组在graph中流动。\nCPUs/GPUs:支持CPU和GPU两种设备,支持单机和分布式计算。\n\nTensorFlow提供多种语言的支持,其中支持最完善的是Python语言,因此本文将聚焦于Python API。\nHello World\n下面这段代码来自于TensorFlow官网的Get Started,展示了TensorFlow训练线性回归模型的能力。",
"import numpy as np\nimport tensorflow as tf\n\n# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3\nx_data = np.random.rand(100).astype(np.float32)\ny_data = x_data * 0.1 + 0.3\n\n# Try to find values for W and b that compute y_data = W * x_data + b\n# (We know that W should be 0.1 and b 0.3, but TensorFlow will\n# figure that out for us.)\nW = tf.Variable(tf.random_uniform([1], -1, 1))\nb = tf.Variable(tf.zeros([1]))\ny = W * x_data + b\n\n# Minimize the mean squared errors.\nloss = tf.reduce_mean(tf.square(y - y_data))\noptimizer = tf.train.GradientDescentOptimizer(0.5)\ntrain = optimizer.minimize(loss)\n\n# Before starting, initialize the variables. We will 'run' this first.\ninit = tf.global_variables_initializer()\n\n# Launch the graph.\nwith tf.Session() as sess:\n sess.run(init)\n # Fit the line.\n for step in range(201):\n sess.run(train)\n if step % 20 == 0:\n print(step, sess.run(W), sess.run(b))",
"下面我们来剖析一下关键代码。TensorFlow的代码往往由两个部分组成:\n\nA construction phase, that assembles a graph \nAn execution phase that uses a session to execute ops in the graph.\n\nSession是一个类,作用是把graph ops部署到Devices(CPUs/GPUs),并提供具体执行这些op的方法。\n为什么要这么设计呢?考虑到Python运行性能较低,我们在执行numerical computing的时候,都会尽量使用非python语言编写的代码,比如使用NumPy这种预编译好的C代码来做矩阵运算。\n在Python内部计算环境和外部计算环境(如NumPy)切换需要花费的时间称为overhead cost。对于一个简单运算,比如矩阵运算,从Python环境切换到Numpy,Numpy运算得到结果,再从Numpy切回Python,这个成本,比纯粹在Python内部做同类运算的成本要低很多。但是,一个复杂数值运算由多个基本运算组合而成,如果每个基本运算来一次这种环境切换,overhead cost就不可忽视了。为了减少来回的环境切换,TensorFlow的做法是,先在Python内定义好整个Graph,然后在Python外运行整个完整的Graph。因此TensorFlow的代码结构也就对应为两个阶段了。\nBuild Graph",
"W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))\nb = tf.Variable(tf.zeros([1]))",
"tf.Variable是TensorFlow的一个类,是取值可变的Tensor,构造函数的第一个参数是初始值initial_value。\ninitial_value: A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable.\ntf.zeros(shape, dtype=tf.float32, name=None)是一个op,用于生成取值全是0的Constant Value Tensor。\ntf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)是一个op,用于生成服从uniform distribution的Random Tensor。",
"y = W * x_data + b",
"y是线性回归运算产生的Tensor。运算符 * 和 + ,等价为tf.multiple()和tf.add()这两个TensorFlow提供的数学类ops。\ntf.multiple()的输入是W和x_data;\n- W是Variable,属于Tensor,可以直接作为op的输入;\n- x_data是numpy的多维数组ndarray,\nTensorFlow的ops接收到ndarray的输入时,会将其转化为tensor。tf.multiple()的输出是一个tensor,和b一起交给optf.add(),得到输出结果y。\n至此,线性回归的模型已经建立好,但这只是Graph的一部分,还需要定义损失。",
"loss = tf.reduce_mean(tf.square(y - y_data))",
"loss是最小二乘法需要的目标函数,是一个Tensor,具体的op不再赘述。",
"optimizer = tf.train.GradientDescentOptimizer(0.5)\ntrain = optimizer.minimize(loss)",
"这一步指定求解器,并设定求解器的最小化目标为损失。train代表了求解器执行一次的输出Tensor。这里我们使用了梯度下降求解器,每一步会对输入loss求一次梯度,然后将loss里Variable类型的Tensor按照梯度更新取值。",
"init = tf.global_variables_initializer()",
"Build Graph阶段的代码,只是在Python内定义了Graph的结构,并不会真正执行。在Launch Graph阶段,所有的变量要先进行初始化。每个变量可以单独初始化,但这样做有些繁琐,所以TensorFlow提供了一个方便的函数global_variables_initializer()可以在graph中添加一个初始化所有变量的op。\nWhen you launch the graph, variables have to be explicitly initialized before you can run Ops that use their value. All variables are automatically collected in the graph where they are created. By default, the constructor adds the new variable to the graph collection GraphKeys.GLOBAL_VARIABLES. The convenience function global_variables() returns the contents of that collection. The most common initialization pattern is to use the convenience function global_variables_initializer() to add an Op to the graph that initializes all the variables.\nLaunch Graph",
"# Launch the graph.\nsess = tf.Session()\nsess.run(init)\n\nfor step in range(201):\n sess.run(train)",
"train操作对应梯度下降法的一步迭代。当step为0时,train里的variable取值为初始值,根据初始值可以计算出梯度,然后将初始值根据梯度更新为更好的取值;当step为1时,train里的variable为上一步更新的值,根据这一步的值可以计算出一个新的梯度,然后将variable的取值更新为更好的取值;以此类推,直到达到最大迭代次数。",
"print(step, sess.run(W), sess.run(b))",
"如果我们将sess.run()赋值给Python环境的变量,或者传给Python环境的print,可以fetch执行op的输出Tensor取值,这些取值会转化为numpy的ndarray结构。因此,这就需要一次环境的切换,会增加overhead cost。所以我们一般会每隔一定步骤才fetch一下计算结果,以减少时间开销。\n基础练习:线性模型\nTensorFlow是一个面向数值计算的通用平台,可以方便地训练线性模型。下面这几篇文章采用TensorFlow完成Andrew Ng主讲的Deep Learning课程练习题,提供了整套源码。\n\n线性回归\n多元线性回归\n逻辑回归\n\n进阶练习1:深度学习\nTensorFlow虽然是面向通用的数值计算,但是对深度学习的支持是它最大的特色,也是它能够引爆业界获得目前这么大的流行度的主要原因。下面这几篇文章采用TensorFlow对MNIST进行建模,涵盖了Deep Learning中最重要的两类模型:卷积神经网络CNN和循环神经网络RNN。\n\nMNIST数据集\nSoftmax Regression\nCNN\nRNN\n\n进阶练习2:TensorBoard\nTensorFlow安装时自带了一个TensorBoard,可以对数据集进行可视化地探索分析,可以对学习过程进行可视化,可以对Graph进行可视化,对于我们分析问题和改进模型有极大的帮助。\n\nEmbeddings\nTensor与Graph可视化\n\n部署\n\n分布式TensorFlow\n读取文件"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
liganega/Gongsu-DataSci
|
notebooks/GongSu12_CSV_File_Data_Visualization.ipynb
|
gpl-3.0
|
[
"CSV 파일 다루기와 데이터 시각화\n주요 내용\n데이터 분석을 위해 가장 기본적으로 할 수 있고, 해야 하는 일이 데이터 시각화이다. \n데이터를 시각화하는 것은 어렵지 않지만, 적합한 시각화를 만드는 일은 매우 어려우며,\n많은 훈련과 직관이 요구된다.\n여기서는 데이터를 탐색하여 얻어진 데이터를 시각화하는 기본적인 방법 네 가지를 배운다.\n\n선그래프\n막대그래프\n히스토그램\n산점도\n\n오늘이 주요 예제\n서울과 수도권의 1949년부터 2010년까지 인구증가율 데이터가 아래와 같다.\n<p>\n<table cellspacing=\"20\">\n\n<tr>\n<td>\n<img src=\"images/Seoul_pop04.jpg\" style=\"width:360\">\n</td>\n</tr>\n\n</table>\n</p>\n\n이제 위 파일을 읽어서 서울과 수도권의 인구증가율 추이를 아래 그림에서처럼 선그래프로 나타내 보자.\n<p>\n<table cellspacing=\"20\">\n\n<tr>\n<td>\n<img src=\"images/Seoul_pop05.png\" style=\"width:360\">\n</td>\n</tr>\n\n</table>\n</p>\n\n데이터 시각화 도구 소개: matplotlib 라이브러리\n데이터 시각화를 위한 도구 중에서 간단한 막대 그래프, 히스토그램, 선 그래프, 산점도를 쉽게 그릴 수 있는\n많은 도구들을 포함한 라이브러리이다. \n이 라이브러리에 포함된 모듈 중에서 여기서는 pyplot 모듈에 포함된 가장 기본적인 몇 개의 도구들의 활용법을\n간단한 예제를 배우고자 한다.",
"import matplotlib.pyplot as plt\n\n%matplotlib inline",
"선그래프\ndata 디렉토리의 Seoul_pop1.csv 파일에는 1949년부터 5년 간격으로 측정된 서울시 인구수를 담은 데이터가 \n들어 있으며, 그 내용은 다음과 같다.\n1949 1,437,670\n1955 1,568,746\n1960 2,445,402\n1966 3,793,280\n1970 5,525,262\n1975 6,879,464\n1980 8,350,616\n1985 9,625,755\n1990 10,603,250\n1995 10,217,177\n2000 9,853,972\n2005 9,762,546\n2010 9,631,482\n출처: 국가통계포털(kosis.kr)\n파일에서 데이터 목록 추출하기\n연도별 서울시 인구수의 연도별 변화추이를 간단한 선그래프를 이용하여 확인하려면,\n먼저 x축에 사용될 년도 목록과 y축에 사용될 인구수 목록을 구해야 한다.\n먼저 이전에 배운 기술을 활용하고, 이후에 보다 쉽게 활용하는 고급기술을 활용한다.\n주의: 확장자가 csv인 파일은데이터가 쉼표(콤마)로 구분되어 정리되어 있는 파일을 의미한다.\ncsv는 Comma-Separated Values의 줄임말이다. \n따라서, csv 파일을 읽어들인 후, 각 줄을 쉼표 기준으로 분리(split)하면 이전에 공백 기분으로 데이터를 쪼개는 방식과\n동일한 결과를 얻을 수 있다. 즉, split 메소드의 인자로 여기서는 쉼표를 사용하면 된다.",
"data_f = open(\"data/Seoul_pop1.csv\")\n\n# 년도 리스트\nyears = []\n# 인구수 리스트\npopulations = []\n\nfor line in data_f: \n (year, population) = line.split(',') \n years.append(int(year))\n populations.append(int(population))\n\ndata_f.close() \n\nprint(years)\n\nprint(populations)\n\n# 그래프를 그릴 도화지 준비하기\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n\n# x축에 년도, y축에 인구수가 있는 선 그래프 만들기\nplt.plot(years, populations, color='green', marker='o', linestyle='solid')\n\n# 제목 더하기\nplt.title(\"Seoul Population Change\")\n\n# y축에 레이블 추가하기\nplt.ylabel(\"10Million\")\nplt.show()",
"막대그래프\n동일한 데이터를 막대그래프를 이용하여 보여줄 수 있다.\n그렇게 하면 년도별 미세한 차이를 보다 자세히 나타낼 수 있다.",
"# 그래프를 그릴 도화지 준비하기\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n\n# 막대그래프 그리기\nplt.bar(years, populations)\n\n# 제목 더하기\nplt.title(\"Seoul Population Change\")\n\n# y축에 레이블 추가하기\nplt.ylabel(\"10Million\")\nplt.show()",
"그런데 이렇게 하면 막대 그래프의 두께가 좀 좁아 보인다. 그리고\n년도가 정확히 5년 단위로 쪼개진 것이 아니기에 막대들 사이의 간격이 불규칙해 보인다.\n따라서 먼저 막대의 두께를 좀 조절해보자.\n힌트: plt.bar() 함수의 세 번째 인자는 막대들의 두께를 지정한다.",
"# 그래프를 그릴 도화지 준비하기\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n\n# 막대그래프 그리기, 막대 두께 조절\nplt.bar(years, populations, 2.5)\n\n# 제목 더하기\nplt.title(\"Seoul Population Change\")\n\n# y축에 레이블 추가하기\nplt.ylabel(\"10Million\")\nplt.show()",
"막대들의 간격이 완전히 규칙적으로 되지는 않았지만 이전 그래프와는 좀 다른 느낌을 준다. \n이와 같이 막대들의 두께 뿐만아니라, 간격, 색상 모두 조절할 수 있지만, \n여기서는 그럴 수 있다는 사실만 언급하고 넘어간다.\n예제\n대한민국이 하계 올림픽에서 가장 많은 메일을 획득한 상위 여섯 종목과 메달 숫자는 아래와 같다.\n<p>\n<table cellspacing=\"20\">\n\n<tr>\n <td>종목</td>\n <td>메달 수</td>\n</tr>\n<tr>\n <td>Archery(양궁)</td>\n <td>39</td>\n</tr>\n<tr>\n <td>Badminton(배드민턴)</td>\n <td>19</td>\n</tr>\n<tr>\n <td>Boxing(복싱)</td>\n <td>20</td>\n</tr>\n<tr>\n <td>Judo(유도)</td>\n <td>43</td>\n</tr>\n<tr>\n <td>Taekwondo(태권도)</td>\n <td>19</td>\n</tr>\n<tr>\n <td>Wrestling(레슬링)</td>\n <td>36</td>\n</tr>\n\n<caption align='bottom'>출처: 위키피디아</caption>\n</table>\n</p>\n\n이제 위 데이터를 막대 그래프로 시각화할 수 있다.",
"sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']\nmedals = [39, 19, 20, 43, 19, 36]\n\nplt.bar(sports, medals)\nplt.ylabel(\"Medals\")\nplt.title(\"Olympic Medals\")\nplt.show()",
"x축에 종목 이름 대신에 숫자를 넣을 수도 있지만 정확한 정보를 전달하지는 못한다.",
"sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']\nmedals = [39, 19, 20, 43, 19, 36]\n\nplt.bar(range(6), medals)\nplt.ylabel(\"Medals\")\nplt.title(\"Olympic Medals\")\nplt.show()",
"따라서 x축에 6개의 막대가 필요하고 각각의 막대에 레이블 형식으로 종목 이름을 지정해야 한다.",
"sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']\nmedals = [39, 19, 20, 43, 19, 36]\n\nxs = range(6)\nplt.bar(xs, medals)\n\nplt.xticks(xs, sports)\n\nplt.ylabel(\"Medals\")\nplt.title(\"Olympic Medals\")\nplt.show()",
"여전히 그래프가 좀 어색하다. 막대들이 좀 두껍다. 이럴 때는 x축에 사용되는 점들의 간격을 좀 벌리는 게 좋다.",
"sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']\nmedals = [39, 19, 20, 43, 19, 36]\n\nxs = range(0, 12, 2)\nplt.bar(xs, medals)\n\nplt.xticks(xs, sports)\n\nplt.ylabel(\"Medals\")\nplt.title(\"Olympic Medals\")\nplt.show()",
"이번에는 막대 두께가 좁아 보인다. 그래서 좀 넓히는 게 좋다.",
"sports = ['Archery', 'Badminton', 'Boxing', 'Jugdo', 'Taekwondo', 'Wrestling']\nmedals = [39, 19, 20, 43, 19, 36]\n\nxs = range(0, 12, 2)\nplt.bar(xs, medals, 1.2)\n\nplt.xticks(xs, sports)\n\nplt.ylabel(\"Medals\")\nplt.title(\"Olympic Medals\")\nplt.show()",
"지금까지 살펴보았듯이 적합한 시각화는 경우에 따라 상당히 많은 노력을 요구하기도 한다.\n여기서는 matplotlib.pyplot 라이브러리에 다양한 설정 옵션이 있다는 정도만 기억하면 좋겠다.\n히스토그램\n히스토 그램은 막대그래프와 비슷하다. 다만 막대 사이에 공간이 없다. \n따라서 연속적인 구간으로 구분된 범위에 포함된 숫자들의 도수를 나타내는 데에 효과적이다.\n아래 예제는 임의로 생성된 1000개의 실수들의 분포를 보여주는 히스토그램이다. \n주의: \n* numpy 모듈의 randn 함수는 표준정규분포를 따르도록 실수들을 임의로 생성한다. \n* 표준정규분포: 데이터들의 평균이 0이고 표준편차가 1인 정규분포\n* 여기서는 표준정규분포가 확률과 통계 분야에서 매우 중요한 역할을 수행한다는 정도만 알고 넘어간다.",
"import numpy as np\n\ngaussian_numbers = np.random.randn(1000)\nplt.hist(gaussian_numbers, bins=10)\n\nplt.title(\"Gaussian Histogram\")\nplt.xlabel(\"Value\")\nplt.ylabel(\"Frequency\")\nplt.show()",
"산점도\n두 변수 간의 연관관계를 보여 주는 데에 매우 적합한 그래프이다. \n예를 들어, 카카오톡에 등록된 친구 수와 하룻동안의 스마트폰 사용시간 사이의 연관성을 보여주는 데이터가 아래와 같이 주어졌다고 하자.\n주의: 아래 데이터는 강의를 위해 임의로 조작되었으며, 어떠한 근거도 갖지 않는다.",
"num_friends = [41, 26, 90, 50, 18, 124, 152, 88, 72, 51]\nphone_time = [4.1, 3.3, 5.7, 4.2, 3.2, 6.4, 6.0, 5.1, 6.2, 3.7]\n\nplt.scatter(num_friends, phone_time)\nplt.show()",
"위 산점도를 보면 카카오톡에 등록된 친구 수가 많을 수록 스마트폰 사용시간이 증가하는 경향을 한 눈에 확인할 수 있다.\n물론, 이는 주어진 (조작된) 데이터에 근거한 정보이다. \n오늘의 주요 예제 해결\n서울과 수도권의 1949년부터 2010년까지 인구증가율 데이터가 아래와 같다.\n<p>\n<table cellspacing=\"20\">\n\n<tr>\n<td>\n<img src=\"images/Seoul_pop04.jpg\">\n</td>\n</tr>\n\n</table>\n</p>\n\n위 도표의 데이터는 'Seoul_pop2.csv' 파일에 아래와 같이 저장되어 있다.\n\n```\n1949년부터 2010년 사이의 서울과 수도권 인구 증가율(%)\n구간,서울,수도권\n1949-1955,9.12,-5.83\n1955-1960,55.88,32.22\n1960-1966,55.12,32.76\n1966-1970,45.66,28.76\n1970-1975,24.51,22.93\n1975-1980,21.38,21.69\n1980-1985,15.27,18.99\n1985-1990,10.15,17.53\n1990-1995,-3.64,8.54\n1995-2000,-3.55,5.45\n2000-2005,-0.93,6.41\n2005-2010,-1.34,3.71\n```\n\n이제 위 파일을 읽어서 서울과 수도권의 인구증가율 추이를 선그래프로 나타내 보자.\n단계 1: csv 파일 읽어드리기\n확장자가 csv인 파일은 데이터를 저장하기 위해 주로 사용한다. \ncsv 파일을 읽어드리는 방법은 csv 모듈의 reader() 함수를 활용하면 매우 쉽다.",
"import csv\n\nwith open('data/Seoul_pop2.csv') as f:\n reader = csv.reader(f)\n for row in reader:\n if len(row) == 0 or row[0][0] == '#':\n continue\n else:\n print(row)",
"csv.reader 함수의 리턴값은 csv 파일의 내용을 줄 별로 리스트로 저장한 특별한 자료형이다.\n여기서는 위 예제처럼 사용하는 정도만 기억하면 된다.",
"type(reader)",
"주의: 위 코드의 5번 줄을 아래와 같이 하면 오류 발생\nif row[0][0] == '#' or len(row) == 0:\n이유: 'A or B'의 경우 먼저 A의 참, 거짓을 먼저 판단한 후에, A참이면 참으로 처리하고 끝낸다.\n그리고 A가 거짓이면 그제서야 B의 참, 거짓을 확인한다. \n그래서 A의 참, 거짓을 판단하면서 오류가 발생하면 바로 멈추게 된다.\n위 예제의 경우 row[0][0]이 셋째줄의 인덱스 오류가 발생하게 되서 코드 전체가 멈추게 된다.\n주의: \n다음 형식은\npython\nwith open('Seoul_pop2.csv') as f:\n 코드 \n아래 코드에 대응한다.\npython\nf = open('Seoul_pop2.csv')\n코드\nf.close()\n단계 2: 선그래프에 사용될 데이터 정리하기",
"year_intervals = []\nSeoul_pop = []\nCapital_region_pop = []\n\nwith open('data/Seoul_pop2.csv') as f:\n reader = csv.reader(f)\n for row in reader:\n if len(row) == 0 or row[0][0] == '#':\n continue\n else:\n year_intervals.append(row[0])\n Seoul_pop.append(float(row[1]))\n Capital_region_pop.append(float(row[2]))\n\nprint(year_intervals)\n\nprint(Seoul_pop)\n\nprint(Capital_region_pop)",
"단계 3: 그래프 그리기",
"# 그래프를 그릴 도화지 준비하기\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n\n# x축에 년도, y축에 인구수가 있는 선 그래프 만들기\nplt.plot(range(12), Seoul_pop, color='green', marker='o', linestyle='solid', \\\n label='Seoul')\nplt.plot(range(12), Capital_region_pop, color='red', marker='o', linestyle='solid', \\\n label='Capital Region')\n\nplt.xticks(range(12), year_intervals, rotation=45)\n\nplt.title(\"Population Change\")\nplt.ylabel(\"Percentage\")\n\nplt.legend()\nplt.show()",
"연습문제\n연습\n원그래프(파이 차트)는 각 구성 요소가 전체에서 차지하는 비중을 한 눈에 알아볼 수 있도록 도와주는 그래프이다. \nmatplotlib 라이브러리를 이용하여 원그래프를 그리기 위해서는 pie 함수를 활용하면 된다. \n아래 코드는 한 학년 A, B, C, D 네 반 학생들의 숫자를 해당 학년 전체 학생들에서 차지하는 비중을 원그래프로\n보여준다.",
"classes = ['A', 'B', 'C', 'D']\nslices = [30, 50, 20, 40] # 하나의 조각을 의미하는 slice 변수이름 활용\n\ncolors = ['Blue', 'Green', 'Red', 'Yellow']\nexplode = [0.0, 0.0, 0.1, 0.0] # 특정 조각을 돌출시키고자 할 때 사용\n\nplt.pie(slices, explode = explode, autopct = '%2.3f%%', shadow = True, \\\n colors = colors, labels = classes)\n\nplt.show()",
"코드 설명:\n\nclasses: 각 학급의 이름의 리스트\nslices: 각 학급의 학생 수의 리스트. \n\nlabels: 각 학급에 해당하는 조각을 지정하는 색의 리스트 \n\n\nexplode: 특정 조각을 돌출시켜 강조하기 위한 값 지정\n\nautopct: 각 조각의 전체 대비 백분율 표시방법 지정 (부동소수점 서식 활용)\nshadow: 원 그래프에 그림자 추가 여부 확인"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/inpe/cmip6/models/sandbox-3/seaice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: INPE\nSource ID: SANDBOX-3\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:07\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inpe', 'sandbox-3', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/introduction_to_tensorflow/solutions/tensors-variables.ipynb
|
apache-2.0
|
[
"Introduction to Tensors and Variables\nLearning Objectives\n\nUnderstand Basic and Advanced Tensor Concepts\nUnderstand Single-Axis and Multi-Axis Indexing\nCreate Tensors and Variables\n\nIntroduction\nIn this notebook, we look at tensors, which are multi-dimensional arrays with a uniform type (called a dtype). You can see all supported dtypes at tf.dtypes.DType. If you're familiar with NumPy, tensors are (kind of) like np.arrays. All tensors are immutable like python numbers and strings: you can never update the contents of a tensor, only create a new one.\nWe also look at variables, a tf.Variable represents a tensor whose value can be changed by running operations (ops) on it. Specific ops allow you to read and modify the values of this tensor. Higher level libraries like tf.keras use tf.Variable to store model parameters.\nLoad necessary libraries\nWe will start by importing the necessary libraries for this lab.",
"import numpy as np\nimport tensorflow as tf\n\nprint(\"TensorFlow version: \", tf.version.VERSION)",
"Lab Task 1: Understand Basic and Advanced Tensor Concepts\nBasics\nLet's create some basic tensors.\nHere is a \"scalar\" or \"rank-0\" tensor . A scalar contains a single value, and no \"axes\".",
"# This will be an int32 tensor by default; see \"dtypes\" below.\nrank_0_tensor = tf.constant(4)\nprint(rank_0_tensor)",
"A \"vector\" or \"rank-1\" tensor is like a list of values. A vector has 1-axis:",
"# Let's make this a float tensor.\nrank_1_tensor = tf.constant([2.0, 3.0, 4.0])\nprint(rank_1_tensor)",
"A \"matrix\" or \"rank-2\" tensor has 2-axes:",
"# If we want to be specific, we can set the dtype (see below) at creation time\n# TODO 1a\nrank_2_tensor = tf.constant([[1, 2], [3, 4], [5, 6]], dtype=tf.float16)\nprint(rank_2_tensor)",
"<table>\n<tr>\n <th>A scalar, shape: <code>[]</code></th>\n <th>A vector, shape: <code>[3]</code></th>\n <th>A matrix, shape: <code>[3, 2]</code></th>\n</tr>\n<tr>\n <td>\n <img src=\"../images/tensor/scalar.png\" alt=\"A scalar, the number 4\" />\n </td>\n\n <td>\n <img src=\"../images/tensor/vector.png\" alt=\"The line with 3 sections, each one containing a number.\"/>\n </td>\n <td>\n <img src=\"../images/tensor/matrix.png\" alt=\"A 3x2 grid, with each cell containing a number.\">\n </td>\n</tr>\n</table>\n\nTensors may have more axes, here is a tensor with 3-axes:",
"# There can be an arbitrary number of\n# axes (sometimes called \"dimensions\")\nrank_3_tensor = tf.constant(\n [\n [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]],\n [[10, 11, 12, 13, 14], [15, 16, 17, 18, 19]],\n [[20, 21, 22, 23, 24], [25, 26, 27, 28, 29]],\n ]\n)\n\nprint(rank_3_tensor)",
"There are many ways you might visualize a tensor with more than 2-axes.\n<table>\n<tr>\n <th colspan=3>A 3-axis tensor, shape: <code>[3, 2, 5]</code></th>\n<tr>\n<tr>\n <td>\n <img src=\"../images/tensor/3-axis_numpy.png\"/>\n </td>\n <td>\n <img src=\"../images/tensor/3-axis_front.png\"/>\n </td>\n\n <td>\n <img src=\"../images/tensor/3-axis_block.png\"/>\n </td>\n</tr>\n\n</table>\n\nYou can convert a tensor to a NumPy array either using np.array or the tensor.numpy method:",
"# TODO 1b\nnp.array(rank_2_tensor)\n\n# TODO 1c\nrank_2_tensor.numpy()",
"Tensors often contain floats and ints, but have many other types, including:\n\ncomplex numbers\nstrings\n\nThe base tf.Tensor class requires tensors to be \"rectangular\"---that is, along each axis, every element is the same size. However, there are specialized types of Tensors that can handle different shapes: \n\nragged (see RaggedTensor below)\nsparse (see SparseTensor below)\n\nWe can do basic math on tensors, including addition, element-wise multiplication, and matrix multiplication.",
"a = tf.constant([[1, 2], [3, 4]])\nb = tf.constant([[1, 1], [1, 1]]) # Could have also said `tf.ones([2,2])`\n\nprint(tf.add(a, b), \"\\n\")\nprint(tf.multiply(a, b), \"\\n\")\nprint(tf.matmul(a, b), \"\\n\")\n\nprint(a + b, \"\\n\") # element-wise addition\nprint(a * b, \"\\n\") # element-wise multiplication\nprint(a @ b, \"\\n\") # matrix multiplication",
"Tensors are used in all kinds of operations (ops).",
"c = tf.constant([[4.0, 5.0], [10.0, 1.0]])\n\n# Find the largest value\nprint(tf.reduce_max(c))\n# TODO 1d\n# Find the index of the largest value\nprint(tf.argmax(c))\n# Compute the softmax\nprint(tf.nn.softmax(c))",
"About shapes\nTensors have shapes. Some vocabulary:\n\nShape: The length (number of elements) of each of the dimensions of a tensor.\nRank: Number of tensor dimensions. A scalar has rank 0, a vector has rank 1, a matrix has rank 2.\nAxis or Dimension: A particular dimension of a tensor.\nSize: The total number of items in the tensor, i.e. the product of the elements of the shape vector\n\nNote: Although you may see reference to a \"tensor of two dimensions\", a rank-2 tensor does not usually describe a 2D space.\nTensors and tf.TensorShape objects have convenient properties for accessing these:",
"rank_4_tensor = tf.zeros([3, 2, 4, 5])",
"<table>\n<tr>\n <th colspan=2>A rank-4 tensor, shape: <code>[3, 2, 4, 5]</code></th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/shape.png\" alt=\"A tensor shape is like a vector.\">\n <td>\n<img src=\"../images/tensor/4-axis_block.png\" alt=\"A 4-axis tensor\">\n </td>\n </tr>\n</table>",
"print(\"Type of every element:\", rank_4_tensor.dtype)\nprint(\"Number of dimensions:\", rank_4_tensor.ndim)\nprint(\"Shape of tensor:\", rank_4_tensor.shape)\nprint(\"Elements along axis 0 of tensor:\", rank_4_tensor.shape[0])\nprint(\"Elements along the last axis of tensor:\", rank_4_tensor.shape[-1])\nprint(\"Total number of elements (3*2*4*5): \", tf.size(rank_4_tensor).numpy())",
"While axes are often referred to by their indices, you should always keep track of the meaning of each. Often axes are ordered from global to local: The batch axis first, followed by spatial dimensions, and features for each location last. This way feature vectors are contiguous regions of memory.\n<table>\n<tr>\n<th>Typical axis order</th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/shape2.png\" alt=\"Keep track of what each axis is. A 4-axis tensor might be: Batch, Width, Height, Features\">\n </td>\n</tr>\n</table>\n\nLab Task 2: Understand Single-Axis and Multi-Axis Indexing\nSingle-axis indexing\nTensorFlow follows standard python indexing rules, similar to indexing a list or a string in python, and the basic rules for numpy indexing.\n\nindexes start at 0\nnegative indices count backwards from the end\ncolons, :, are used for slices start:stop:step",
"rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34])\nprint(rank_1_tensor.numpy())",
"Indexing with a scalar removes the dimension:",
"print(\"First:\", rank_1_tensor[0].numpy())\nprint(\"Second:\", rank_1_tensor[1].numpy())\nprint(\"Last:\", rank_1_tensor[-1].numpy())",
"Indexing with a : slice keeps the dimension:",
"print(\"Everything:\", rank_1_tensor[:].numpy())\nprint(\"Before 4:\", rank_1_tensor[:4].numpy())\nprint(\"From 4 to the end:\", rank_1_tensor[4:].numpy())\nprint(\"From 2, before 7:\", rank_1_tensor[2:7].numpy())\nprint(\"Every other item:\", rank_1_tensor[::2].numpy())\nprint(\"Reversed:\", rank_1_tensor[::-1].numpy())",
"Multi-axis indexing\nHigher rank tensors are indexed by passing multiple indices. \nThe exact same rules as in the single-axis case apply to each axis independently.",
"print(rank_2_tensor.numpy())",
"Passing an integer for each index, the result is a scalar.",
"# Pull out a single value from a 2-rank tensor\nprint(rank_2_tensor[1, 1].numpy())",
"You can index using any combination of integers and slices:",
"# Get row and column tensors\nprint(\"Second row:\", rank_2_tensor[1, :].numpy())\nprint(\"Second column:\", rank_2_tensor[:, 1].numpy())\nprint(\"Last row:\", rank_2_tensor[-1, :].numpy())\nprint(\"First item in last column:\", rank_2_tensor[0, -1].numpy())\nprint(\"Skip the first row:\")\nprint(rank_2_tensor[1:, :].numpy(), \"\\n\")",
"Here is an example with a 3-axis tensor:",
"print(rank_3_tensor[:, :, 4])",
"<table>\n<tr>\n<th colspan=2>Selecting the last feature across all locations in each example in the batch </th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/index1.png\" alt=\"A 3x2x5 tensor with all the values at the index-4 of the last axis selected.\">\n </td>\n <td>\n<img src=\"../images/tensor/index2.png\" alt=\"The selected values packed into a 2-axis tensor.\">\n </td>\n</tr>\n</table>\n\nManipulating Shapes\nReshaping a tensor is of great utility. \nThe tf.reshape operation is fast and cheap as the underlying data does not need to be duplicated.",
"# Shape returns a `TensorShape` object that shows the size on each dimension\nvar_x = tf.Variable(tf.constant([[1], [2], [3]]))\nprint(var_x.shape)\n\n# You can convert this object into a Python list, too\nprint(var_x.shape.as_list())",
"You can reshape a tensor into a new shape.",
"# TODO 2a\n# We can reshape a tensor to a new shape.\n# Note that we're passing in a list\nreshaped = tf.reshape(var_x, [1, 3])\n\nprint(var_x.shape)\nprint(reshaped.shape)",
"The data maintains its layout in memory and a new tensor is created, with the requested shape, pointing to the same data. TensorFlow uses C-style \"row-major\" memory ordering, where incrementing the right-most index corresponds to a single step in memory.",
"print(rank_3_tensor)",
"If you flatten a tensor you can see what order it is laid out in memory.",
"# A `-1` passed in the `shape` argument says \"Whatever fits\".\nprint(tf.reshape(rank_3_tensor, [-1]))",
"Typically the only reasonable uses of tf.reshape are to combine or split adjacent axes (or add/remove 1s).\nFor this 3x2x5 tensor, reshaping to (3x2)x5 or 3x(2x5) are both reasonable things to do, as the slices do not mix:",
"print(tf.reshape(rank_3_tensor, [3 * 2, 5]), \"\\n\")\nprint(tf.reshape(rank_3_tensor, [3, -1]))",
"<table>\n<th colspan=3>\nSome good reshapes.\n</th>\n<tr>\n <td>\n<img src=\"../images/tensor/reshape-before.png\" alt=\"A 3x2x5 tensor\">\n </td>\n <td>\n <img src=\"../images/tensor/reshape-good1.png\" alt=\"The same data reshaped to (3x2)x5\">\n </td>\n <td>\n<img src=\"../images/tensor/reshape-good2.png\" alt=\"The same data reshaped to 3x(2x5)\">\n </td>\n</tr>\n</table>\n\nReshaping will \"work\" for any new shape with the same total number of elements, but it will not do anything useful if you do not respect the order of the axes.\nSwapping axes in tf.reshape does not work, you need tf.transpose for that.",
"# Bad examples: don't do this\n\n# You can't reorder axes with reshape.\nprint(tf.reshape(rank_3_tensor, [2, 3, 5]), \"\\n\")\n\n# This is a mess\nprint(tf.reshape(rank_3_tensor, [5, 6]), \"\\n\")\n\n# This doesn't work at all\ntry:\n tf.reshape(rank_3_tensor, [7, -1])\nexcept Exception as e:\n print(e)",
"<table>\n<th colspan=3>\nSome bad reshapes.\n</th>\n<tr>\n <td>\n<img src=\"../images/tensor/reshape-bad.png\" alt=\"You can't reorder axes, use tf.transpose for that\">\n </td>\n <td>\n<img src=\"../images/tensor/reshape-bad4.png\" alt=\"Anything that mixes the slices of data together is probably wrong.\">\n </td>\n <td>\n<img src=\"../images/tensor/reshape-bad2.png\" alt=\"The new shape must fit exactly.\">\n </td>\n</tr>\n</table>\n\nYou may run across not-fully-specified shapes. Either the shape contains a None (a dimension's length is unknown) or the shape is None (the rank of the tensor is unknown).\nExcept for tf.RaggedTensor, this will only occur in the context of TensorFlow's symbolic, graph-building APIs: \n\ntf.function \nThe keras functional API.\n\nMore on DTypes\nTo inspect a tf.Tensor's data type use the Tensor.dtype property.\nWhen creating a tf.Tensor from a Python object you may optionally specify the datatype.\nIf you don't, TensorFlow chooses a datatype that can represent your data. TensorFlow converts Python integers to tf.int32 and python floating point numbers to tf.float32. Otherwise TensorFlow uses the same rules NumPy uses when converting to arrays.\nYou can cast from type to type.",
"# TODO 2b\nthe_f64_tensor = tf.constant([2.2, 3.3, 4.4], dtype=tf.float64)\nthe_f16_tensor = tf.cast(the_f64_tensor, dtype=tf.float16)\n# Now, let's cast to an uint8 and lose the decimal precision\nthe_u8_tensor = tf.cast(the_f16_tensor, dtype=tf.uint8)\nprint(the_u8_tensor)",
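As a quick check of the default dtype-conversion rules described above, here is a minimal sketch (assuming only that TensorFlow is importable as `tf`):

```python
import tensorflow as tf

# Python ints default to tf.int32, Python floats to tf.float32
print(tf.constant(7).dtype == tf.int32)      # True
print(tf.constant(7.0).dtype == tf.float32)  # True
```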
"Broadcasting\nBroadcasting is a concept borrowed from the equivalent feature in NumPy. In short, under certain conditions, smaller tensors are \"stretched\" automatically to fit larger tensors when running combined operations on them.\nThe simplest and most common case is when you attempt to multiply or add a tensor to a scalar. In that case, the scalar is broadcast to be the same shape as the other argument.",
"x = tf.constant([1, 2, 3])\n\ny = tf.constant(2)\nz = tf.constant([2, 2, 2])\n# All of these are the same computation\nprint(tf.multiply(x, 2))\nprint(x * y)\nprint(x * z)",
"Likewise, 1-sized dimensions can be stretched out to match the other arguments. Both arguments can be stretched in the same computation.\nIn this case a 3x1 matrix is element-wise multiplied by a 1x4 matrix to produce a 3x4 matrix. Note how the leading 1 is optional: The shape of y is [4].",
"# These are the same computations\nx = tf.reshape(x, [3, 1])\ny = tf.range(1, 5)\nprint(x, \"\\n\")\nprint(y, \"\\n\")\nprint(tf.multiply(x, y))",
"<table>\n<tr>\n <th>A broadcasted multiply: a <code>[3, 1]</code> times a <code>[1, 4]</code> gives a <code>[3,4]</code> </th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/broadcasting.png\" alt=\"Multiplying a 3x1 matrix by a 1x4 matrix results in a 3x4 matrix\">\n </td>\n</tr>\n</table>\n\nHere is the same operation without broadcasting:",
"x_stretch = tf.constant([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]])\n\ny_stretch = tf.constant([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])\n\nprint(x_stretch * y_stretch) # Again, operator overloading",
"Most of the time, broadcasting is both time and space efficient, as the broadcast operation never materializes the expanded tensors in memory. \nYou can see what broadcasting looks like using tf.broadcast_to.",
"print(tf.broadcast_to(tf.constant([1, 2, 3]), [3, 3]))",
"Unlike a mathematical op, for example, broadcast_to does nothing special to save memory. Here, you are materializing the tensor.\nIt can get even more complicated. This section of Jake VanderPlas's book Python Data Science Handbook shows more broadcasting tricks (again in NumPy).\ntf.convert_to_tensor\nMost ops, like tf.matmul and tf.reshape take arguments of class tf.Tensor. However, you'll notice in the above case, we frequently pass Python objects shaped like tensors.\nMost, but not all, ops call convert_to_tensor on non-tensor arguments. There is a registry of conversions, and most object classes like NumPy's ndarray, TensorShape, Python lists, and tf.Variable will all convert automatically.\nSee tf.register_tensor_conversion_function for more details, and if you have your own type you'd like to automatically convert to a tensor.\nRagged Tensors\nA tensor with variable numbers of elements along some axis is called \"ragged\". Use tf.ragged.RaggedTensor for ragged data.\nFor example, this cannot be represented as a regular tensor:\n<table>\n<tr>\n <th>A `tf.RaggedTensor`, shape: <code>[4, None]</code></th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/ragged.png\" alt=\"A 2-axis ragged tensor, each row can have a different length.\">\n </td>\n</tr>\n</table>",
"ragged_list = [[0, 1, 2, 3], [4, 5], [6, 7, 8], [9]]\n\ntry:\n tensor = tf.constant(ragged_list)\nexcept Exception as e:\n print(e)",
"Instead create a tf.RaggedTensor using tf.ragged.constant:",
"# TODO 2c\nragged_tensor = tf.ragged.constant(ragged_list)\nprint(ragged_tensor)",
"The shape of a tf.RaggedTensor contains unknown dimensions:",
"print(ragged_tensor.shape)",
"String tensors\ntf.string is a dtype, which is to say we can represent data as strings (variable-length byte arrays) in tensors. \nThe strings are atomic and cannot be indexed the way Python strings are. The length of the string is not one of the dimensions of the tensor. See tf.strings for functions to manipulate them.\nHere is a scalar string tensor:",
"# Tensors can be strings, too here is a scalar string.\nscalar_string_tensor = tf.constant(\"Gray wolf\")\nprint(scalar_string_tensor)",
"And a vector of strings:\n<table>\n<tr>\n <th>A vector of strings, shape: <code>[3,]</code></th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/strings.png\" alt=\"The string length is not one of the tensor's axes.\">\n </td>\n</tr>\n</table>",
"# If we have three string tensors of different lengths, this is OK.\ntensor_of_strings = tf.constant([\"Gray wolf\", \"Quick brown fox\", \"Lazy dog\"])\n# Note that the shape is (3,); the string length is not part of the shape.\nprint(tensor_of_strings)",
"In the above printout the b prefix indicates that tf.string dtype is not a unicode string, but a byte-string. See the Unicode Tutorial for more about working with unicode text in TensorFlow.\nIf you pass unicode characters they are utf-8 encoded.",
"tf.constant(\"🥳👍\")",
"Some basic functions with strings can be found in tf.strings, including tf.strings.split.",
"# We can use split to split a string into a set of tensors\nprint(tf.strings.split(scalar_string_tensor, sep=\" \"))\n\n# ...but it turns into a `RaggedTensor` if we split up a tensor of strings,\n# as each string might be split into a different number of parts.\nprint(tf.strings.split(tensor_of_strings))",
"<table>\n<tr>\n <th>Three strings split, shape: <code>[3, None]</code></th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/string-split.png\" alt=\"Splitting multiple strings returns a tf.RaggedTensor\">\n </td>\n</tr>\n</table>\n\nAnd tf.strings.to_number:",
"text = tf.constant(\"1 10 100\")\nprint(tf.strings.to_number(tf.strings.split(text, \" \")))",
"Although you can't use tf.cast to turn a string tensor into numbers, you can convert it into bytes, and then into numbers.",
"byte_strings = tf.strings.bytes_split(tf.constant(\"Duck\"))\nbyte_ints = tf.io.decode_raw(tf.constant(\"Duck\"), tf.uint8)\nprint(\"Byte strings:\", byte_strings)\nprint(\"Bytes:\", byte_ints)\n\n# Or split it up as unicode and then decode it\nunicode_bytes = tf.constant(\"アヒル 🦆\")\nunicode_char_bytes = tf.strings.unicode_split(unicode_bytes, \"UTF-8\")\nunicode_values = tf.strings.unicode_decode(unicode_bytes, \"UTF-8\")\n\nprint(\"\\nUnicode bytes:\", unicode_bytes)\nprint(\"\\nUnicode chars:\", unicode_char_bytes)\nprint(\"\\nUnicode values:\", unicode_values)",
"The tf.string dtype is used for all raw bytes data in TensorFlow. The tf.io module contains functions for converting data to and from bytes, including decoding images and parsing csv.\nSparse tensors\nSometimes, your data is sparse, like a very wide embedding space. TensorFlow supports tf.sparse.SparseTensor and related operations to store sparse data efficiently.\n<table>\n<tr>\n <th>A `tf.SparseTensor`, shape: <code>[3, 4]</code></th>\n</tr>\n<tr>\n <td>\n<img src=\"../images/tensor/sparse.png\" alt=\"A 3x4 grid, with values in only two of the cells.\">\n </td>\n</tr>\n</table>",
"# Sparse tensors store values by index in a memory-efficient manner\n# TODO 2d\nsparse_tensor = tf.sparse.SparseTensor(\n indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4]\n)\nprint(sparse_tensor, \"\\n\")\n\n# We can convert sparse tensors to dense\nprint(tf.sparse.to_dense(sparse_tensor))",
"Lab Task 3: Introduction to Variables\nA TensorFlow variable is the recommended way to represent shared, persistent state your program manipulates. This guide covers how to create, update, and manage instances of tf.Variable in TensorFlow.\nVariables are created and tracked via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it. Specific ops allow you to read and modify the values of this tensor. Higher level libraries like tf.keras use tf.Variable to store model parameters. \nSetup\nThis notebook discusses variable placement. If you want to see on what device your variables are placed, uncomment this line.",
"import tensorflow as tf\n\n# Uncomment to see where your variables get placed (see below)\n# tf.debugging.set_log_device_placement(True)",
"Create a variable\nTo create a variable, provide an initial value. The tf.Variable will have the same dtype as the initialization value.",
"# TODO 3a\nmy_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])\nmy_variable = tf.Variable(my_tensor)\n\n# Variables can be all kinds of types, just like tensors\nbool_variable = tf.Variable([False, False, False, True])\ncomplex_variable = tf.Variable([5 + 4j, 6 + 1j])",
"A variable looks and acts like a tensor, and, in fact, is a data structure backed by a tf.Tensor. Like tensors, they have a dtype and a shape, and can be exported to NumPy.",
"print(\"Shape: \", my_variable.shape)\nprint(\"DType: \", my_variable.dtype)\nprint(\"As NumPy: \", my_variable.numpy())",
"Most tensor operations work on variables as expected, although variables cannot be reshaped.",
"print(\"A variable:\", my_variable)\nprint(\"\\nViewed as a tensor:\", tf.convert_to_tensor(my_variable))\nprint(\"\\nIndex of highest value:\", tf.argmax(my_variable))\n\n# This creates a new tensor; it does not reshape the variable.\nprint(\"\\nCopying and reshaping: \", tf.reshape(my_variable, ([1, 4])))",
"As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.",
"a = tf.Variable([2.0, 3.0])\n# This will keep the same dtype, float32\na.assign([1, 2])\n# Not allowed as it resizes the variable:\ntry:\n a.assign([1.0, 2.0, 3.0])\nexcept Exception as e:\n print(e)",
"If you use a variable like a tensor in operations, you will usually operate on the backing tensor. \nCreating new variables from existing variables duplicates the backing tensors. Two variables will not share the same memory.",
"a = tf.Variable([2.0, 3.0])\n# Create b based on the value of a\nb = tf.Variable(a)\na.assign([5, 6])\n\n# a and b are different\nprint(a.numpy())\nprint(b.numpy())\n\n# There are other versions of assign\nprint(a.assign_add([2, 3]).numpy()) # [7. 9.]\nprint(a.assign_sub([7, 9]).numpy()) # [0. 0.]",
"Lifecycles, naming, and watching\nIn Python-based TensorFlow, tf.Variable instances have the same lifecycle as other Python objects. When there are no references to a variable, it is automatically deallocated.\nVariables can also be named, which can help you track and debug them. You can give two variables the same name.",
"# Create a and b; they have the same value but are backed by different tensors.\na = tf.Variable(my_tensor, name=\"Mark\")\n# A new variable with the same name, but different value\n# Note that the scalar add is broadcast\nb = tf.Variable(my_tensor + 1, name=\"Mark\")\n\n# These are elementwise-unequal, despite having the same name\nprint(a == b)",
"Variable names are preserved when saving and loading models. By default, variables in models will acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.\nAlthough variables are important for differentiation, some variables will not need to be differentiated. You can turn off gradients for a variable by setting trainable to false at creation. An example of a variable that would not need gradients is a training step counter.",
"step_counter = tf.Variable(1, trainable=False)",
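As a minimal sketch (assuming TensorFlow is imported as `tf`), a `tf.GradientTape` watches only trainable variables by default, so a non-trainable step counter like the one above contributes no gradient:

```python
import tensorflow as tf

# A trainable weight and a non-trainable step counter.
w = tf.Variable(3.0)
step_counter = tf.Variable(1, trainable=False)

with tf.GradientTape() as tape:
    loss = w * w  # only `w` participates in differentiation

# The tape tracked just the trainable variable.
print([v.name for v in tape.watched_variables()])
print(tape.gradient(loss, w))  # d(loss)/dw = 2w
```

This is why setting `trainable=False` keeps bookkeeping variables out of optimizer updates.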
"Placing variables and tensors\nFor better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with its dtype. This means most variables are placed on a GPU if one is available.\nHowever, we can override this. In this snippet, we can place a float tensor and a variable on the CPU, even if a GPU is available. By turning on device placement logging (see Setup), we can see where the variable is placed. \nNote: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.\nIf you run this notebook on different backends with and without a GPU you will see different logging. Note that logging device placement must be turned on at the start of the session.",
"with tf.device(\"CPU:0\"):\n\n # Create some tensors\n a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n c = tf.matmul(a, b)\n\nprint(c)",
"It's possible to set the location of a variable or tensor on one device and do the computation on another device. This will introduce delay, as data needs to be copied between the devices.\nYou might do this, however, if you had multiple GPU workers but only want one copy of the variables.",
"with tf.device(\"CPU:0\"):\n a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])\n b = tf.Variable([[1.0, 2.0, 3.0]])\n\nwith tf.device(\"GPU:0\"):\n # Element-wise multiply\n k = a * b\n\nprint(k)",
"Note: Because tf.config.set_soft_device_placement is turned on by default, even if you run this code on a device without a GPU, it will still run, with the multiplication step happening on the CPU.\nFor more on distributed training, see our guide.\nNext steps\nTo understand how variables are typically used, see our guide on automatic differentiation.\nCopyright 2020 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joeddav/devol
|
example/demo.ipynb
|
mit
|
[
"from keras.datasets import mnist\nfrom keras.utils.np_utils import to_categorical\nimport numpy as np\nfrom keras import backend as K\nfrom devol import DEvol, GenomeHandler",
"Prepare dataset\nThis problem uses mnist, a handwritten digit classification problem used for many introductory deep learning examples. Here, we load the data and prepare it for use by the GPU. We also do a one-hot encoding of the labels.",
"(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\nK.set_image_data_format(\"channels_last\")\n\nx_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype('float32') / 255\nx_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype('float32') / 255\ny_train = to_categorical(y_train)\ny_test = to_categorical(y_test)\ndataset = ((x_train, y_train), (x_test, y_test))",
"Prepare the genome configuration\nThe GenomeHandler class handles the constraints that are imposed upon models in a particular genetic program. In this example, a genome is allowed up to 6 convolutional layers, 2 dense layers (including the final layer), 256 feature maps in each convolution, and 1024 nodes in each dense layer. It also specifies three possible activation functions. See genome-handler.py for more information.",
"genome_handler = GenomeHandler(max_conv_layers=6, \n max_dense_layers=2, # includes final dense layer\n max_filters=256,\n max_dense_nodes=1024,\n input_shape=x_train.shape[1:],\n n_classes=10)",
"Create and run the genetic program\nThe next, and final, step is to create a DEvol and run it. Here we specify a few settings pertaining to the genetic program. In this example, we have 20 generations of evolution, 20 members in each population, and 5 epochs of training used to evaluate each model's fitness. The program will save each genome's encoding, as well as the model's loss and accuracy, in a .csv file whose path is printed at the beginning of the program.",
"devol = DEvol(genome_handler)\nmodel = devol.run(dataset=dataset,\n num_generations=20,\n pop_size=20,\n epochs=5)\nmodel.summary()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
barjacks/foundations-homework
|
05/Spotify_Homework_5_Skinner.ipynb
|
mit
|
[
"import requests",
"These are the search queries for the Spotify Web API",
"response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&market=US')\nLil_data = response.json()\n\nLil_data.keys()\n\nLil_data['artists'].keys()",
"1) With \"Lil Wayne\" and \"Lil Kim\" there are a lot of \"Lil\" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.",
"Lil_artists = Lil_data['artists']['items']\nfor artist in Lil_artists:\n print(artist['name'], artist['popularity'])",
"2 a) What genres are most represented in the search results?\nFinding all the genres and combining into one list.",
"Lil_artists = Lil_data['artists']['items']\n\nfor artist in Lil_artists:\n print(artist['name'], artist['popularity'])\n #joining\n if len(artist['genres']) == 0:\n print(\"No genres listed\")\n else:\n genres = \", \".join(artist['genres'])\n print(\"Genres: \", genres)\n \n\n \n\nLil_artists = Lil_data['artists']['items']\n\nLil_genres_list = []\nfor genres in Lil_artists:\n Lil_genres_list = genres[\"genres\"] + Lil_genres_list\nprint(Lil_genres_list)\n ",
"Counting the genres.",
"Genre_list = [[x,Lil_genres_list.count(x)] for x in set(Lil_genres_list)]\nprint(Genre_list)",
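A shorter, equivalent way to count the genres is the standard library's `collections.Counter` (a sketch, not part of the original homework; the toy list stands in for the real `Lil_genres_list` built above):

```python
from collections import Counter

# Toy stand-in for the genre list collected from the API response.
Lil_genres_list = ["dirty south rap", "trap", "dirty south rap"]

genre_counts = Counter(Lil_genres_list)
# most_common() returns (genre, count) pairs sorted by count, descending.
print(genre_counts.most_common())
```

`Counter.most_common()` also makes the sorting step below unnecessary when all you need is the top genre.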
"Sorting the genres by occurrences.",
"sorted(Genre_list, key = lambda x: int(x[1]), reverse=True)\n\nSorted_by_occurences_Genre_list = sorted(Genre_list, key = lambda x: int(x[1]), reverse=True)\nprint(\"The most frequent genre of the musicians called Lil is\", Sorted_by_occurences_Genre_list[0])\n",
"2 b) Edit your previous printout to also display a list of their genres in the format \"GENRE_1, GENRE_2, GENRE_3\". If there are no genres, print \"No genres listed\".",
"Lil_artists = Lil_data['artists']['items']\nfor artist in Lil_artists:\n if artist['genres'] == []:\n print(artist['name'], artist['popularity'], \"No genres listed.\")\n else:\n print(artist['name'], artist['popularity'], artist['genres'])",
"3 a) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.",
"for artist in Lil_artists: \n if artist['popularity'] >= 72 and artist['name'] != 'Lil Wayne':\n print(artist['name'])\n\n#Better solution:\nmost_popular_name = \"\"\nmost_popular_score = 0\nfor artist in Lil_artists:\n #print(\"Comparing\", artist['popularity'], 'to', most_popular_score)\n if artist['popularity'] > most_popular_score:\n print(\"checking for Lil Wayne\")\n if artist['name'] == 'Lil Wayne':\n print('go away')\n else:\n #The change you are keeping track of\n #a.k.a. what you are keeping track of\n print('not Lil Wayne, updating our notebook')\n most_popular_name = artist['name']\n most_popular_score = artist['popularity']\n \nprint(most_popular_name, most_popular_score)\n\n\n####### This doesn't work\n#name = 'Lil Soma'\n#target_score = 72\n#1 INITIAL CONDITION\n#second_best_artists = []\n\n#Aggregation Problem\n#When you're looping through a series of objects\n#and sometimes you want to add one of those objects\n#to a different list\n\n#for artist in artists:\n# print('Looking at', artist['name'])\n #2 Condition\n #when we want someone on the list\n# if artist['popularity'] == 72:\n# print('!!! The artist popularity is 72.')\n# second_best_artists.append(artist)\n \n \n\nLil_data['artists'].keys()\n\nfor artist in Lil_artists:\n if artist['name'] == \"Lil Wayne\":\n print(\"Lil Wayne's popularity is\", artist['popularity'])",
"3 b) Is it the same artist who has the largest number of followers?",
"type(artist['followers'])\n\nartist['followers']",
"Creating a list of the follower counts, so we can sort them and say which one is the highest.",
"Lil_artists = Lil_data['artists']['items']\nList_of_Followers = []\nfor artist in Lil_artists:\n List_of_Followers.append(artist['followers']['total'])\nprint(List_of_Followers)",
"Deciding which one is highest:",
"List_of_Followers.sort(reverse=True)\nprint(List_of_Followers)\n\nHighest_Number_of_Followers = List_of_Followers[0]\n\nprint(Highest_Number_of_Followers)\n\nfor artist in Lil_artists:\n if artist['followers']['total'] == Highest_Number_of_Followers:\n if artist['name'] == 'Lil Wayne':\n print(\"Lil Wayne also has the most followers.\")\n else:\n print(artist['name'], \"has the most followers, not Lil Wayne.\")",
"4) Print a list of Lil's that are more popular than Lil' Kim.\nEstablishing how high Lil' Kim's popularity is. Would this be possible in one go?",
"for artist in Lil_artists: \n if artist['name'] == \"Lil' Kim\":\n print(artist['popularity'])\n\nfor artist in Lil_artists: \n if artist['popularity'] > 62:\n print(artist['name'], artist['popularity'])\n\n",
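It is indeed possible in one go: look Lil' Kim's popularity up once, then filter against it in the same list (a sketch; the toy list below stands in for the real `Lil_artists` parsed from the API response):

```python
# Toy stand-in for Lil_data['artists']['items'].
Lil_artists = [
    {"name": "Lil Wayne", "popularity": 86},
    {"name": "Lil' Kim", "popularity": 62},
    {"name": "Lil Yachty", "popularity": 72},
]

# Grab Lil' Kim's popularity once instead of hard-coding 62.
kim_popularity = next(a["popularity"] for a in Lil_artists if a["name"] == "Lil' Kim")

# Everyone strictly more popular than Lil' Kim.
more_popular = [a["name"] for a in Lil_artists if a["popularity"] > kim_popularity]
print(more_popular)
```

This avoids the hard-coded popularity score, so the answer stays correct when the API data changes.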
"5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.\nTip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.",
"for artist in Lil_artists:\n print(artist['name'], artist['id'])\n\nresponse = requests.get('https://api.spotify.com/v1/artists/5einkgXXrjhfYCyac1FANB/top-tracks?country=US')\nLil_Scrappy_data = response.json()\ntype(Lil_Scrappy_data)\n\nresponse = requests.get('https://api.spotify.com/v1/artists/5qK5bOC6wLtuLhG5KvU17c/top-tracks?country=US')\nLil_Mama_data = response.json()\ntype(Lil_Mama_data)\n\nLil_Scrappy_data.keys()\nLil_Mama_data.keys()\n\ntype(Lil_Scrappy_data.keys())\ntype(Lil_Mama_data.keys())\n\nScrappy_tracks = Lil_Scrappy_data['tracks']\n\nfor tracks in Scrappy_tracks:\n print(tracks['name'])\n\n\nMama_tracks = Lil_Mama_data['tracks']\n\nfor tracks in Mama_tracks:\n print(tracks['name'])",
"6 Will the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?\nNumber of Explicit Tracks for Lil Scrappy.",
"explicit_track_scrappy = 0\nnon_explicit_track_scrappy = 0\nunknown_scrappy = 0\nfor tracks in Scrappy_tracks:\n if tracks['explicit'] == True:\n explicit_track_scrappy = explicit_track_scrappy + 1\n elif tracks['explicit'] == False:\n non_explicit_track_scrappy = non_explicit_track_scrappy + 1\n else:\n unknown_scrappy = unknown_scrappy + 1\n\nexplicit_track_pop_total = 0\nnon_explicit_track_pop_total = 0\nfor tracks in Scrappy_tracks:\n if tracks['explicit'] == True:\n explicit_track_pop_total = explicit_track_pop_total + tracks['popularity']\n elif tracks['explicit'] == False:\n non_explicit_track_pop_total = non_explicit_track_pop_total + tracks['popularity']\n\nexplicit_track_duration_total = 0\nnon_explicit_track_duration_total = 0\nfor tracks in Scrappy_tracks:\n if tracks['explicit'] == True:\n explicit_track_duration_total = explicit_track_duration_total + tracks['duration_ms']\n elif tracks['explicit'] == False:\n non_explicit_track_duration_total = non_explicit_track_duration_total + tracks['duration_ms']\n \nprint(\"The average rating of explicit songs by Lil Scrappy is\", round(explicit_track_pop_total / explicit_track_scrappy), \".\")\nprint(\"The average rating of non-explicit songs by Lil Scrappy is\", round(non_explicit_track_pop_total / non_explicit_track_scrappy), \".\")\n# duration_ms is in milliseconds, so divide by 60000 to get minutes\nprint(\"The duration of explicit song material of Lil Scrappy is\", round(explicit_track_duration_total / 60000), \"minutes, and of non explicit material is\", round(non_explicit_track_duration_total / 60000), \"minutes.\")\n\n",
"And this is the same for Lil Mama:",
"explicit_track_Mama = 0\nnon_explicit_track_Mama = 0\nunknown = 0\nfor tracks in Mama_tracks:\n if tracks['explicit'] == True:\n explicit_track_Mama = explicit_track_Mama + 1\n elif tracks['explicit'] == False:\n non_explicit_track_Mama = non_explicit_track_Mama + 1\n else:\n unknown = unknown + 1\n\nexplicit_track_pop_total_Mama = 0\nnon_explicit_track_pop_total_Mama = 0\nfor tracks in Mama_tracks:\n if tracks['explicit'] == True:\n explicit_track_pop_total_Mama = explicit_track_pop_total_Mama + tracks['popularity']\n elif tracks['explicit'] == False:\n non_explicit_track_pop_total_Mama = non_explicit_track_pop_total_Mama + tracks['popularity']\n \nexplicit_track_duration_total_Mama = 0\nnon_explicit_track_duration_total_Mama = 0\nfor tracks in Mama_tracks:\n if tracks['explicit'] == True:\n explicit_track_duration_total_Mama = explicit_track_duration_total_Mama + tracks['duration_ms']\n elif tracks['explicit'] == False:\n non_explicit_track_duration_total_Mama = non_explicit_track_duration_total_Mama + tracks['duration_ms'] \n \nprint(\"The average rating of explicit songs by Lil Mama is\", round(explicit_track_pop_total_Mama / explicit_track_Mama), \".\")\nprint(\"The average rating of non-explicit songs by Lil Mama is\", round(non_explicit_track_pop_total_Mama / non_explicit_track_Mama), \".\")\n# duration_ms is in milliseconds, so divide by 60000 to get minutes\nprint(\"The duration of explicit song material of Lil Mama is\", round(explicit_track_duration_total_Mama / 60000), \"minutes, and of non explicit material is\", round(non_explicit_track_duration_total_Mama / 60000), \"minutes.\")\n\n",
"7 a) Since we're talking about Lils, what about Biggies? How many total \"Biggie\" artists are there? How many total \"Lil\"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?",
"response = requests.get('https://api.spotify.com/v1/search?query=Biggie&type=artist&limit=50&market=US')\nBiggie_data = response.json()\n\nresponse = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&market=US')\nLil_data = response.json()\n\nBiggie_artists = Biggie_data['artists']['total']\nLil_artists = Lil_data['artists']['total']\nprint(\"There are\", Biggie_artists, \"artists named Biggie on Spotify and\", Lil_artists, \"named Lil\",)\n\nTotal_Download_Time_Biggie = Biggie_artists / 50 * 5\nTotal_Download_Time_Lil = Lil_artists / 50 * 5\nprint(\"It would take\", round(Total_Download_Time_Biggie), \"seconds to download all the Biggie artists and\", round(Total_Download_Time_Lil), \"seconds to download the Lil artists.\" )",
"8) Out of the top 50 \"Lil\"s and the top 50 \"Biggie\"s, who is more popular on average?",
"Lil_artists_popularity = Lil_data['artists']['items']\npopularity_total = 0\nfor popularity in Lil_artists_popularity:\n popularity_total = popularity_total + popularity['popularity']\nprint(\"The average rating for the top 50 artists called Lil is:\", round(popularity_total / len(Lil_artists_popularity)))\n\nBiggie_artists_popularity = Biggie_data['artists']['items']\nBiggie_popularity_total = 0\nfor popularity2 in Biggie_artists_popularity:\n Biggie_popularity_total = Biggie_popularity_total + popularity2['popularity']\nprint(\"The average rating for the top 50 artists called Biggie is:\", round(Biggie_popularity_total / len(Biggie_artists_popularity)))\n\nLil_artists_popularity = Lil_data['artists']['items']\nfor popularity in Lil_artists_popularity:\n print(popularity['name'], popularity['popularity'])\n\nBiggie_popularity = Biggie_data['artists']['items']\nfor artist in Biggie_popularity:\n print(artist['name'], artist['popularity'])\n\nimport csv\n\nwith open('Biggie.csv', 'w') as mycsvfile:\n thedatawriter = csv.writer(mycsvfile)\n for artist in Biggie_popularity:\n # writerow expects a sequence; passing a bare string would write one character per column\n thedatawriter.writerow([artist['name'], artist['popularity']])\n\nwith open('test.csv', 'w') as b:\n a = csv.writer(b)\n a.writerows([artist['name'], artist['popularity']] for artist in Biggie_popularity)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs
|
site/en/tutorials/load_data/pandas_dataframe.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Load a pandas DataFrame\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/load_data/pandas_dataframe\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/pandas_dataframe.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/load_data/pandas_dataframe.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/load_data/pandas_dataframe.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThis tutorial provides examples of how to load <a href=\"https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html\" class=\"external\">pandas DataFrames</a> into TensorFlow.\nYou will use a small <a href=\"https://archive.ics.uci.edu/ml/datasets/heart+Disease\" class=\"external\">heart disease dataset</a> provided by the UCI Machine Learning Repository. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. You will use this information to predict whether a patient has heart disease, which is a binary classification task.\nRead data using pandas",
"import pandas as pd\nimport tensorflow as tf\n\nSHUFFLE_BUFFER = 500\nBATCH_SIZE = 2",
"Download the CSV file containing the heart disease dataset:",
"csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv')",
"Read the CSV file using pandas:",
"df = pd.read_csv(csv_file)",
"This is what the data looks like:",
"df.head()\n\ndf.dtypes",
"You will build models to predict the label contained in the target column.",
"target = df.pop('target')",
"A DataFrame as an array\nIf your data has a uniform datatype, or dtype, it's possible to use a pandas DataFrame anywhere you could use a NumPy array. This works because the pandas.DataFrame class supports the __array__ protocol, and TensorFlow's tf.convert_to_tensor function accepts objects that support the protocol.\nTake the numeric features from the dataset (skip the categorical features for now):",
"numeric_feature_names = ['age', 'thalach', 'trestbps', 'chol', 'oldpeak']\nnumeric_features = df[numeric_feature_names]\nnumeric_features.head()",
"The DataFrame can be converted to a NumPy array using the DataFrame.values property or numpy.array(df). To convert it to a tensor, use tf.convert_to_tensor:",
"tf.convert_to_tensor(numeric_features)",
"In general, if an object can be converted to a tensor with tf.convert_to_tensor it can be passed anywhere you can pass a tf.Tensor.\nWith Model.fit\nA DataFrame, interpreted as a single tensor, can be used directly as an argument to the Model.fit method.\nBelow is an example of training a model on the numeric features of the dataset.\nThe first step is to normalize the input ranges. Use a tf.keras.layers.Normalization layer for that.\nTo set the layer's mean and standard deviation before running it, be sure to call the Normalization.adapt method:",
"normalizer = tf.keras.layers.Normalization(axis=-1)\nnormalizer.adapt(numeric_features)",
"Call the layer on the first three rows of the DataFrame to visualize an example of the output from this layer:",
"normalizer(numeric_features.iloc[:3])",
"Use the normalization layer as the first layer of a simple model:",
"def get_basic_model():\n model = tf.keras.Sequential([\n normalizer,\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(1)\n ])\n\n model.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])\n return model",
"When you pass the DataFrame as the x argument to Model.fit, Keras treats the DataFrame as it would a NumPy array:",
"model = get_basic_model()\nmodel.fit(numeric_features, target, epochs=15, batch_size=BATCH_SIZE)",
"With tf.data\nIf you want to apply tf.data transformations to a DataFrame of a uniform dtype, the Dataset.from_tensor_slices method will create a dataset that iterates over the rows of the DataFrame. Each row is initially a vector of values. To train a model, you need (inputs, labels) pairs, so pass (features, labels) and Dataset.from_tensor_slices will return the needed pairs of slices:",
"numeric_dataset = tf.data.Dataset.from_tensor_slices((numeric_features, target))\n\nfor row in numeric_dataset.take(3):\n print(row)\n\nnumeric_batches = numeric_dataset.shuffle(1000).batch(BATCH_SIZE)\n\nmodel = get_basic_model()\nmodel.fit(numeric_batches, epochs=15)",
"A DataFrame as a dictionary\nWhen you start dealing with heterogeneous data, it is no longer possible to treat the DataFrame as if it were a single array. TensorFlow tensors require that all elements have the same dtype.\nSo, in this case, you need to start treating it as a dictionary of columns, where each column has a uniform dtype. A DataFrame is a lot like a dictionary of arrays, so typically all you need to do is cast the DataFrame to a Python dict. Many important TensorFlow APIs support (nested-)dictionaries of arrays as inputs.\ntf.data input pipelines handle this quite well. All tf.data operations handle dictionaries and tuples automatically. So, to make a dataset of dictionary-examples from a DataFrame, just cast it to a dict before slicing it with Dataset.from_tensor_slices:",
"numeric_dict_ds = tf.data.Dataset.from_tensor_slices((dict(numeric_features), target))",
"Here are the first three examples from that dataset:",
"for row in numeric_dict_ds.take(3):\n print(row)",
"Dictionaries with Keras\nTypically, Keras models and layers expect a single input tensor, but these classes can accept and return nested structures of dictionaries, tuples and tensors. These structures are known as \"nests\" (refer to the tf.nest module for details).\nThere are two equivalent ways you can write a Keras model that accepts a dictionary as input.\n1. The Model-subclass style\nYou write a subclass of tf.keras.Model (or tf.keras.Layer). You directly handle the inputs, and create the outputs:",
"def stack_dict(inputs, fun=tf.stack):\n values = []\n for key in sorted(inputs.keys()):\n values.append(tf.cast(inputs[key], tf.float32))\n\n return fun(values, axis=-1)\n\n#@title\nclass MyModel(tf.keras.Model):\n def __init__(self):\n # Create all the internal layers in init.\n super().__init__()\n\n self.normalizer = tf.keras.layers.Normalization(axis=-1)\n\n self.seq = tf.keras.Sequential([\n self.normalizer,\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(1)\n ])\n\n def adapt(self, inputs):\n # Stack the inputs and `adapt` the normalization layer.\n inputs = stack_dict(inputs)\n self.normalizer.adapt(inputs)\n\n def call(self, inputs):\n # Stack the inputs\n inputs = stack_dict(inputs)\n # Run them through all the layers.\n result = self.seq(inputs)\n\n return result\n\nmodel = MyModel()\n\nmodel.adapt(dict(numeric_features))\n\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'],\n run_eagerly=True)",
"This model can accept either a dictionary of columns or a dataset of dictionary-elements for training:",
"model.fit(dict(numeric_features), target, epochs=5, batch_size=BATCH_SIZE)\n\nnumeric_dict_batches = numeric_dict_ds.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE)\nmodel.fit(numeric_dict_batches, epochs=5)",
"Here are the predictions for the first three examples:",
"model.predict(dict(numeric_features.iloc[:3]))",
"2. The Keras functional style",
"inputs = {}\nfor name, column in numeric_features.items():\n inputs[name] = tf.keras.Input(\n shape=(1,), name=name, dtype=tf.float32)\n\ninputs\n\nx = stack_dict(inputs, fun=tf.concat)\n\nnormalizer = tf.keras.layers.Normalization(axis=-1)\nnormalizer.adapt(stack_dict(dict(numeric_features)))\n\nx = normalizer(x)\nx = tf.keras.layers.Dense(10, activation='relu')(x)\nx = tf.keras.layers.Dense(10, activation='relu')(x)\nx = tf.keras.layers.Dense(1)(x)\n\nmodel = tf.keras.Model(inputs, x)\n\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'],\n run_eagerly=True)\n\ntf.keras.utils.plot_model(model, rankdir=\"LR\", show_shapes=True)",
"You can train the functional model the same way as the model subclass:",
"model.fit(dict(numeric_features), target, epochs=5, batch_size=BATCH_SIZE)\n\nnumeric_dict_batches = numeric_dict_ds.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE)\nmodel.fit(numeric_dict_batches, epochs=5)",
"Full example\nIf you're passing a heterogeneous DataFrame to Keras, each column may need unique preprocessing. You could do this preprocessing directly in the DataFrame, but for a model to work correctly, inputs always need to be preprocessed the same way. So, the best approach is to build the preprocessing into the model. Keras preprocessing layers cover many common tasks.\nBuild the preprocessing head\nIn this dataset some of the \"integer\" features in the raw data are actually categorical indices. These indices are not really ordered numeric values (refer to the <a href=\"https://archive.ics.uci.edu/ml/datasets/heart+Disease\" class=\"external\">dataset description</a> for details). Because these are unordered they are inappropriate to feed directly to the model; the model would interpret them as being ordered. To use these inputs you'll need to encode them, either as one-hot vectors or embedding vectors. The same applies to string-categorical features.\nNote: If you have many features that need identical preprocessing it's more efficient to concatenate them together before applying the preprocessing.\nBinary features, on the other hand, do not generally need to be encoded or normalized.\nStart by creating a list of the features that fall into each group:",
"binary_feature_names = ['sex', 'fbs', 'exang']\n\ncategorical_feature_names = ['cp', 'restecg', 'slope', 'thal', 'ca']",
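"As a minimal illustration of why unordered category codes need encoding — this is a pure-NumPy sketch, separate from the Keras StringLookup/IntegerLookup layers used later; the one_hot helper and the depth of 4 are assumptions for demonstration only:\n\n```python\nimport numpy as np\n\ndef one_hot(index, depth):\n    # Encode an unordered category code as a one-hot vector.\n    # Unlike the raw integer, the vector implies no ordering:\n    # category 3 is not 'greater than' category 1.\n    vec = np.zeros(depth, dtype=np.float32)\n    vec[index] = 1.0\n    return vec\n\n# e.g. a 4-category feature: codes 0..3 become orthogonal vectors\nprint(one_hot(2, 4))\n```\n\nFeeding the raw code 2 would let a linear layer treat it as \"twice\" code 1; the one-hot vector removes that implied magnitude.",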
"The next step is to build a preprocessing model that will apply appropriate preprocessing to each input and concatenate the results.\nThis section uses the Keras Functional API to implement the preprocessing. You start by creating one tf.keras.Input for each column of the dataframe:",
"inputs = {}\nfor name, column in df.items():\n if type(column[0]) == str:\n dtype = tf.string\n elif (name in categorical_feature_names or\n name in binary_feature_names):\n dtype = tf.int64\n else:\n dtype = tf.float32\n\n inputs[name] = tf.keras.Input(shape=(), name=name, dtype=dtype)\n\ninputs",
"For each input you'll apply some transformations using Keras layers and TensorFlow ops. Each feature starts as a batch of scalars (shape=(batch,)). The output for each should be a batch of tf.float32 vectors (shape=(batch, n)). The last step will concatenate all those vectors together.\nBinary inputs\nSince the binary inputs don't need any preprocessing, just add the vector axis, cast them to float32 and add them to the list of preprocessed inputs:",
"preprocessed = []\n\nfor name in binary_feature_names:\n inp = inputs[name]\n inp = inp[:, tf.newaxis]\n float_value = tf.cast(inp, tf.float32)\n preprocessed.append(float_value)\n\npreprocessed",
"Numeric inputs\nLike in the earlier section you'll want to run these numeric inputs through a tf.keras.layers.Normalization layer before using them. The difference is that this time they're input as a dict. The code below collects the numeric features from the DataFrame, stacks them together and passes those to the Normalization.adapt method.",
"normalizer = tf.keras.layers.Normalization(axis=-1)\nnormalizer.adapt(stack_dict(dict(numeric_features)))",
"The code below stacks the numeric features and runs them through the normalization layer.",
"numeric_inputs = {}\nfor name in numeric_feature_names:\n numeric_inputs[name]=inputs[name]\n\nnumeric_inputs = stack_dict(numeric_inputs)\nnumeric_normalized = normalizer(numeric_inputs)\n\npreprocessed.append(numeric_normalized)\n\npreprocessed",
"Categorical features\nTo use categorical features you'll first need to encode them into either binary vectors or embeddings. Since these features only contain a small number of categories, convert the inputs directly to one-hot vectors using the output_mode='one_hot' option, supported by both the tf.keras.layers.StringLookup and tf.keras.layers.IntegerLookup layers.\nHere is an example of how these layers work:",
"vocab = ['a','b','c']\nlookup = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode='one_hot')\nlookup(['c','a','a','b','zzz'])\n\nvocab = [1,4,7,99]\nlookup = tf.keras.layers.IntegerLookup(vocabulary=vocab, output_mode='one_hot')\n\nlookup([-1,4,1])",
"To determine the vocabulary for each input, create a layer to convert that vocabulary to a one-hot vector:",
"for name in categorical_feature_names:\n vocab = sorted(set(df[name]))\n print(f'name: {name}')\n print(f'vocab: {vocab}\\n')\n\n if type(vocab[0]) is str:\n lookup = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode='one_hot')\n else:\n lookup = tf.keras.layers.IntegerLookup(vocabulary=vocab, output_mode='one_hot')\n\n x = inputs[name][:, tf.newaxis]\n x = lookup(x)\n preprocessed.append(x)",
"Assemble the preprocessing head\nAt this point preprocessed is just a Python list of all the preprocessing results; each result has a shape of (batch_size, depth):",
"preprocessed",
"Concatenate all the preprocessed features along the depth axis, so each dictionary-example is converted into a single vector. The vector contains binary features, numeric features, and categorical one-hot features:",
"preprocessed_result = tf.concat(preprocessed, axis=-1)\npreprocessed_result",
"Now create a model out of that calculation so it can be reused:",
"preprocessor = tf.keras.Model(inputs, preprocessed_result)\n\ntf.keras.utils.plot_model(preprocessor, rankdir=\"LR\", show_shapes=True)",
"To test the preprocessor, use the <a href=\"https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html\" class=\"external\">DataFrame.iloc</a> accessor to slice the first example from the DataFrame. Then convert it to a dictionary and pass the dictionary to the preprocessor. The result is a single vector containing the binary features, normalized numeric features and the one-hot categorical features, in that order:",
"preprocessor(dict(df.iloc[:1]))",
"Create and train a model\nNow build the main body of the model. Use the same configuration as in the previous example: A couple of Dense rectified-linear layers and a Dense(1) output layer for the classification.",
"body = tf.keras.Sequential([\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(10, activation='relu'),\n tf.keras.layers.Dense(1)\n])",
"Now put the two pieces together using the Keras functional API.",
"inputs\n\nx = preprocessor(inputs)\nx\n\nresult = body(x)\nresult\n\nmodel = tf.keras.Model(inputs, result)\n\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])",
"This model expects a dictionary of inputs. The simplest way to pass it the data is to convert the DataFrame to a dict and pass that dict as the x argument to Model.fit:",
"history = model.fit(dict(df), target, epochs=5, batch_size=BATCH_SIZE)",
"Using tf.data works as well:",
"ds = tf.data.Dataset.from_tensor_slices((\n dict(df),\n target\n))\n\nds = ds.batch(BATCH_SIZE)\n\nimport pprint\n\nfor x, y in ds.take(1):\n pprint.pprint(x)\n print()\n print(y)\n\nhistory = model.fit(ds, epochs=5)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kuo77122/deep-learning-nd
|
Lesson14-Sentiment Analysis/sentiment-network/Sentiment_Classification_Projects.ipynb
|
mit
|
[
"Sentiment Classification & How To \"Frame Problems\" for a Neural Network\nby Andrew Trask\n\nTwitter: @iamtrask\nBlog: http://iamtrask.github.io\n\nWhat You Should Already Know\n\nneural networks, forward and back-propagation\nstochastic gradient descent\nmean squared error\nand train/test splits\n\nWhere to Get Help if You Need it\n\nRe-watch previous Udacity Lectures\nLeverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)\nShoot me a tweet @iamtrask\n\nTutorial Outline:\n\n\nIntro: The Importance of \"Framing a Problem\" (this lesson)\n\n\nCurate a Dataset\n\nDeveloping a \"Predictive Theory\"\n\nPROJECT 1: Quick Theory Validation\n\n\nTransforming Text to Numbers\n\n\nPROJECT 2: Creating the Input/Output Data\n\n\nPutting it all together in a Neural Network (video only - nothing in notebook)\n\n\nPROJECT 3: Building our Neural Network\n\n\nUnderstanding Neural Noise\n\n\nPROJECT 4: Making Learning Faster by Reducing Noise\n\n\nAnalyzing Inefficiencies in our Network\n\n\nPROJECT 5: Making our Network Train and Run Faster\n\n\nFurther Noise Reduction\n\n\nPROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary\n\n\nAnalysis: What's going on in the weights?\n\n\nLesson: Curate a Dataset<a id='lesson_1'></a>\nThe cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.",
"def pretty_print_review_and_label(i):\n print(labels[i] + \"\\t:\\t\" + reviews[i][:80] + \"...\")\n\ng = open('reviews.txt','r') # What we know!\nreviews = list(map(lambda x:x[:-1],g.readlines()))\ng.close()\n\ng = open('labels.txt','r') # What we WANT to know!\nlabels = list(map(lambda x:x[:-1].upper(),g.readlines()))\ng.close()",
"Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.",
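"The case-normalization step described above can be sketched as follows (a hypothetical normalize helper, not part of the preprocessed dataset, shown only to illustrate the idea):\n\n```python\ndef normalize(text):\n    # Lower-case text so 'The', 'the', and 'THE' all map to the same token.\n    return text.lower()\n\n# All three variants collapse to one vocabulary entry\nprint(normalize(\"The\"), normalize(\"the\"), normalize(\"THE\"))\n```",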
"len(reviews)\n\nreviews[0]\n\nlabels[0]",
"Lesson: Develop a Predictive Theory<a id='lesson_2'></a>",
"print(\"labels.txt \\t : \\t reviews.txt\\n\")\npretty_print_review_and_label(2137)\npretty_print_review_and_label(12816)\npretty_print_review_and_label(6267)\npretty_print_review_and_label(21934)\npretty_print_review_and_label(5297)\npretty_print_review_and_label(4998)",
"Project 1: Quick Theory Validation<a id='project_1'></a>\nThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.\nYou'll find the Counter class to be useful in this exercise, as well as the numpy library.",
"from collections import Counter\nimport numpy as np",
"We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.",
"# Create three Counter objects to store positive, negative and total counts\npositive_counts = Counter()\nnegative_counts = Counter()\ntotal_counts = Counter()",
"TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.\nNote: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.",
"# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects\nfor label, review in zip(labels, reviews):\n words = review.lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \")\n total_counts.update(words)\n if label == \"POSITIVE\" :\n positive_counts.update(words)\n else:\n negative_counts.update(words)\n ",
"Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.",
"# Examine the counts of the most common words in positive reviews\npositive_counts.most_common()\n\n# Examine the counts of the most common words in negative reviews\nnegative_counts.most_common()",
"As you can see, common words like \"the\" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.\nTODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios. \n\nHint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.",
"# Create Counter object to store positive/negative ratios\npos_neg_ratios = Counter()\n\n# TODO: Calculate the ratios of positive and negative uses of the most common words\n# Consider words to be \"common\" if they've been used at least 100 times\nfor word, count in total_counts.most_common():\n ratio = positive_counts[word]/(float(negative_counts[word])+1.0)\n pos_neg_ratios.update({word:ratio})\n if count <100:\n break",
"Examine the ratios you've calculated for a few words:",
"print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))",
"Looking closely at the values you just calculated, we see the following:\n\nWords that you would expect to see more often in positive reviews – like \"amazing\" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.\nWords that you would expect to see more often in negative reviews – like \"terrible\" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.\nNeutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like \"the\" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.\n\nOk, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like \"amazing\" has a value above 4, whereas a very negative word like \"terrible\" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:\n\nRight now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.\nWhen comparing absolute values it's easier to do that around zero than one. \n\nTo fix these issues, we'll convert all of our ratios to new values using logarithms.\nTODO: Go through all the ratios you calculated and convert their values using the following formulas:\n\n\nFor any positive words, convert the ratio using np.log(ratio)\nFor any negative words, convert the ratio using -np.log(1/(ratio + 0.01))\n\n\nThat second equation may look strange, but what it's doing is dividing one by a very small number, which will produce a larger positive number. Then, it takes the log of that, which produces numbers similar to the ones for the positive words. Finally, we negate the values by adding that minus sign up front. In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.",
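"As a quick standalone sanity check of those two formulas (a sketch using NumPy, separate from the notebook's own solution cell), reciprocal ratios map to roughly opposite log scores centered on zero:\n\n```python\nimport numpy as np\n\ndef log_ratio(ratio):\n    # Center a raw positive-to-negative ratio around zero,\n    # using np.log(ratio) for positive words and\n    # -np.log(1/(ratio + 0.01)) for negative words.\n    if ratio >= 1:\n        return np.log(ratio)\n    return -np.log(1 / (ratio + 0.01))\n\nprint(log_ratio(4.0))   # strongly positive word -> about +1.39\nprint(log_ratio(0.25))  # strongly negative word -> about -1.35\nprint(log_ratio(1.0))   # perfectly neutral word -> 0.0\n```\n\nThe small +0.01 keeps the negative branch finite when a word never appears in positive reviews (ratio of 0).",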
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nx = np.arange(0,5,0.01)\ny = [np.log(x) if x >= 1 else -np.log(1/(x + 0.01)) for x in x]\nimport matplotlib.pyplot as plt\nplt.plot(x,y)\nplt.grid(True)\nplt.show()\n\n# TODO: Convert ratios to logs\nfor word, ratio in pos_neg_ratios.most_common():\n if ratio >=1:\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log(1/(ratio + 0.01))",
"Examine the new ratios you've calculated for the same words from before:",
"print(\"Pos-to-neg ratio for 'the' = {}\".format(pos_neg_ratios[\"the\"]))\nprint(\"Pos-to-neg ratio for 'amazing' = {}\".format(pos_neg_ratios[\"amazing\"]))\nprint(\"Pos-to-neg ratio for 'terrible' = {}\".format(pos_neg_ratios[\"terrible\"]))",
"If everything worked, now you should see neutral words with values close to zero. In this case, \"the\" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at \"amazing\"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And \"terrible\" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.\nNow run the following cells to see more ratios. \nThe first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)\nThe second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)\nYou should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.",
"# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]\n\n# Note: Above is the code Andrew uses in his solution video, \n# so we've included it here to avoid confusion.\n# If you explore the documentation for the Counter class, \n# you will see you could also find the 30 least common\n# words like this: pos_neg_ratios.most_common()[:-31:-1]",
"End of Project 1.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nTransforming Text into Numbers<a id='lesson_3'></a>\nThe cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.",
"from IPython.display import Image\n\nreview = \"This was a horrible, terrible movie.\"\n\nImage(filename='sentiment_network.png')\n\nreview = \"The movie was excellent\"\n\nImage(filename='sentiment_network_pos.png')",
"Project 2: Creating the Input/Output Data<a id='project_2'></a>\nTODO: Create a set named vocab that contains every word in the vocabulary.",
"# TODO: Create set named \"vocab\" containing all of the words from all of the reviews\nvocab = set(total_counts.keys())",
"Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074",
"vocab_size = len(vocab)\nprint(vocab_size)",
"Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.",
"from IPython.display import Image\nImage(filename='sentiment_network_2.png')",
"TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.",
"# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros\nlayer_0 = np.zeros((1,vocab_size))",
"Run the following cell. It should display (1, 74074)",
"layer_0.shape\n\nfrom IPython.display import Image\nImage(filename='sentiment_network.png')",
"layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.",
"# Create a dictionary of words in the vocabulary mapped to index positions\n# (to be used in layer_0)\nword2index = {}\nfor i,word in enumerate(vocab):\n word2index[word] = i\n \n# display the map of words to indices\nword2index",
"TODO: Complete the implementation of update_input_layer. It should count \n how many times each word is used in the given review, and then store\n those counts at the appropriate indices inside layer_0.",
"def update_input_layer(review):\n \"\"\" Modify the global layer_0 to represent the vector form of review.\n The element at a given index of layer_0 should represent\n how many times the given word occurs in the review.\n Args:\n review(string) - the string of the review\n Returns:\n None\n \"\"\"\n # use global to avoid creating a new variable that may fill your RAM!\n global layer_0\n # clear out previous state by resetting the layer to be all 0s\n layer_0 *= 0\n \n # TODO: count how many times each word is used in the given review and store the results in layer_0 \n words = review.lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \")\n for word in words:\n layer_0[0][word2index[word]] += 1\n ",
"Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.",
"update_input_layer(reviews[0])\nlayer_0",
"TODO: Complete the implementation of get_target_for_label. It should return 0 or 1, \n depending on whether the given label is NEGATIVE or POSITIVE, respectively.",
"def get_target_for_label(label):\n \"\"\"Convert a label to `0` or `1`.\n Args:\n label(string) - Either \"POSITIVE\" or \"NEGATIVE\".\n Returns:\n `0` or `1`.\n \"\"\"\n # TODO: Your code here\n return 1 if label==\"POSITIVE\" else 0",
"Run the following two cells. They should print out 'POSITIVE' and 1, respectively.",
"labels[0]\n\nget_target_for_label(labels[0])",
"Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.",
"labels[1]\n\nget_target_for_label(labels[1])",
"End of Project 2.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nProject 3: Building a Neural Network<a id='project_3'></a>\nTODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:\n- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. \n- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.\n- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)\n- Implement the pre_process_data function to create the vocabulary for our training data generating functions\n- Ensure train trains over the entire corpus\nWhere to Get Help if You Need it\n\nRe-watch earlier Udacity lectures\nChapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)",
"import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimentNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducible results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \"):\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n # The input layer, a two-dimensional matrix with shape 1 x input_nodes\n self.layer_0 = np.zeros((1,input_nodes))\n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n \n for word in review.lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \"):\n # NOTE: This if-check was not in the version of this method created in Project 2,\n # and it appears in Andrew's Project 3 solution without explanation. \n # It simply ensures the word is actually a key in word2index before\n # accessing it, which is important because accessing an invalid key\n # will raise an exception in Python. This allows us to ignore unknown\n # words encountered in new reviews.\n if(word in self.word2index.keys()):\n self.layer_0[0][self.word2index[word]] += 1\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def train(self, training_reviews, training_labels):\n \n # make sure we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n\n # Remember when we started for printing time statistics\n start = time.time()\n \n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # Get the next review and its correct label\n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n self.update_input_layer(review)\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n # Keep track of correct predictions.\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. \n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # Run a forward pass through the network, like in the \"train\" function.\n \n # Input Layer\n self.update_input_layer(review.lower())\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n # Return POSITIVE for values greater than or equal to 0.5 in the output layer;\n # return NEGATIVE for other values\n if(layer_2[0] >= 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n ",
"Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)",
"Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). \nWe have not trained the model yet, so the accuracy should be about 50%: the untrained network is just guessing, and there are only two possible labels to choose from.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.",
"mlp.train(reviews[:-1000],labels[:-1000])",
"That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)\nmlp.train(reviews[:-1000],labels[:-1000])",
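"Why a smaller learning rate can help is easiest to see on a toy problem. This is a minimal sketch, not the sentiment network: plain gradient descent on f(w) = w\u00b2, whose gradient is 2w. A step size that is too large overshoots the minimum on every update and diverges, while a smaller one steadily shrinks the error.",

```python
# Toy illustration (not the sentiment network): gradient descent on f(w) = w**2.
# The gradient of f is 2*w, so each update is w -= learning_rate * 2*w.
def descend(learning_rate, steps=20, w=1.0):
    for _ in range(steps):
        w -= learning_rate * 2 * w
    return abs(w)

too_big = descend(learning_rate=1.1)   # each step overshoots the minimum; |w| grows
smaller = descend(learning_rate=0.1)   # |w| shrinks toward the minimum at 0
```

"The same dynamic applies per-weight in the network above, which is why dropping the learning rate from 0.1 to 0.001 changes whether training makes progress at all.",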
"With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.\nEnd of Project 3.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nUnderstanding Neural Noise<a id='lesson_4'></a>\nThe following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.",
"from IPython.display import Image\nImage(filename='sentiment_network.png')\n\ndef update_input_layer(review):\n \n global layer_0\n \n # clear out previous state, reset the layer to be all 0s\n layer_0 *= 0\n for word in review.lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \"):\n layer_0[0][word2index[word]] += 1\n\nupdate_input_layer(reviews[0])\n\nlayer_0\n\nreview_counter = Counter()\n\nfor word in reviews[0].lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \"):\n review_counter[word] += 1\n\nreview_counter.most_common()",
"Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>\nTODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:\n* Copy the SentimentNetwork class you created earlier into the following cell.\n* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.",
"import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \"):\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to 
index positions\n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n \n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n \n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n # The input layer, a two-dimensional matrix with shape 1 x input_nodes\n self.layer_0 = np.zeros((1,input_nodes))\n \n def update_input_layer(self,review):\n\n # clear out previous state, reset the layer to be all 0s\n self.layer_0 *= 0\n \n for word in review.lower().replace(\",\", \" \").replace(\".\", \" \").split(\" \"):\n # NOTE: This if-check was not in the version of this method created in Project 2,\n # and it appears in Andrew's Project 3 solution without explanation. \n # It simply ensures the word is actually a key in word2index before\n # accessing it, which is important because accessing an invalid key\n # with raise an exception in Python. 
This allows us to ignore unknown\n # words encountered in new reviews.\n if(word in self.word2index.keys()):\n self.layer_0[0][self.word2index[word]] = 1\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n def train(self, training_reviews, training_labels):\n \n # make sure out we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n\n # Remember when we started for printing time statistics\n start = time.time()\n \n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # Get the next review and its correct label\n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n # Input Layer\n self.update_input_layer(review)\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n self.weights_0_1 -= 
self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n # Keep track of correct predictions.\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. 
\n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # Run a forward pass through the network, like in the \"train\" function.\n \n # Input Layer\n self.update_input_layer(review.lower())\n\n # Hidden layer\n layer_1 = self.layer_0.dot(self.weights_0_1)\n\n # Output layer\n layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))\n \n # Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;\n # return NEGATIVE for other values\n if(layer_2[0] >= 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n ",
"Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\nmlp.train(reviews[:-1000],labels[:-1000])",
"That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"End of Project 4.\nAndrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.\nAnalyzing Inefficiencies in our Network<a id='lesson_5'></a>\nThe following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.",
"Image(filename='sentiment_network_sparse.png')\n\nlayer_0 = np.zeros(10)\n\nlayer_0\n\nlayer_0[4] = 1\nlayer_0[9] = 1\n\nlayer_0\n\nweights_0_1 = np.random.randn(10,5)\n\nlayer_0.dot(weights_0_1)\n\nindices = [4,9]\n\nlayer_1 = np.zeros(5)\n\nfor index in indices:\n layer_1 += (1 * weights_0_1[index])\n\nlayer_1\n\nImage(filename='sentiment_network_sparse_2.png')\n\nlayer_1 = np.zeros(5)\n\nfor index in indices:\n layer_1 += (weights_0_1[index])\n\nlayer_1",
"Project 5: Making our Network More Efficient<a id='project_5'></a>\nTODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:\n* Copy the SentimentNetwork class from the previous project into the following cell.\n* Remove the update_input_layer function - you will not need it in this version.\n* Modify init_network:\n\n\nYou no longer need a separate input layer, so remove any mention of self.layer_0\nYou will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero\nModify train:\nChange the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.\nAt the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.\nRemove call to update_input_layer\nUse self's layer_1 instead of a local layer_1 object.\nIn the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.\nWhen updating weights_0_1, only update the individual weights that were used in the forward pass.\nModify run:\nRemove call to update_input_layer \nUse self's layer_1 instead of a local layer_1 object.\nMuch like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.",
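"The core trick behind the steps above can be sketched on its own: because layer_0 holds only ones and zeros, the matrix product layer_0.dot(weights_0_1) is exactly the sum of the weight rows for the words that are present, so the forward pass can skip every multiply-by-zero. The shapes below are hypothetical (a 10-word vocabulary and 5 hidden nodes), chosen just to keep the sketch small.",

```python
import numpy as np

np.random.seed(1)
weights_0_1 = np.random.randn(10, 5)  # hypothetical: 10-word vocab, 5 hidden nodes

# Dense version: multiply a mostly-zero input layer by the full weight matrix
indices = [4, 9]  # indices of the words present in a review
layer_0 = np.zeros((1, 10))
for index in indices:
    layer_0[0][index] = 1
dense_layer_1 = layer_0.dot(weights_0_1)

# Sparse version: just add up the weight rows for the present words
sparse_layer_1 = np.zeros((1, 5))
for index in indices:
    sparse_layer_1 += weights_0_1[index]

# Both produce the same hidden layer values, but the sparse version does
# len(indices) row additions instead of a full vocab-sized matrix multiply
assert np.allclose(dense_layer_1, sparse_layer_1)
```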
"import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n self.pre_process_data(reviews, labels)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n def pre_process_data(self, reviews, labels):\n \n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n for i, 
word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n\n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n\n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n ## New for Project 5: Removed self.layer_0; added self.layer_1\n # The input layer, a two-dimensional matrix with shape 1 x hidden_nodes\n self.layer_1 = np.zeros((1,hidden_nodes))\n \n ## New for Project 5: Removed update_input_layer function\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n ## New for Project 5: changed name of first parameter form 'training_reviews' \n # to 'training_reviews_raw'\n def train(self, training_reviews_raw, training_labels):\n\n ## New for Project 5: pre-process training reviews so we can deal \n # directly with the indices of non-zero inputs\n training_reviews = list()\n for review in training_reviews_raw:\n indices = set()\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n indices.add(self.word2index[word])\n training_reviews.append(list(indices))\n\n # make sure out we have a matching number of reviews and labels\n 
assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n\n # Remember when we started for printing time statistics\n start = time.time()\n \n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # Get the next review and its correct label\n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n ## New for Project 5: Removed call to 'update_input_layer' function\n # because 'layer_0' is no longer used\n\n # Hidden layer\n ## New for Project 5: Add in only the weights for non-zero items\n self.layer_1 *= 0\n for index in review:\n self.layer_1 += self.weights_0_1[index]\n\n # Output layer\n ## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) \n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error\n\n # Update the weights\n ## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'\n self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n \n ## New for Project 5: Only update the weights that were used in the forward pass\n for index in review:\n self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden 
weights with gradient descent step\n\n # Keep track of correct predictions.\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. 
\n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # Run a forward pass through the network, like in the \"train\" function.\n \n ## New for Project 5: Removed call to update_input_layer function\n # because layer_0 is no longer used\n\n # Hidden layer\n ## New for Project 5: Identify the indices used in the review and then add\n # just those weights to layer_1 \n self.layer_1 *= 0\n unique_indices = set()\n for word in review.lower().split(\" \"):\n if word in self.word2index.keys():\n unique_indices.add(self.word2index[word])\n for index in unique_indices:\n self.layer_1 += self.weights_0_1[index]\n \n # Output layer\n ## New for Project 5: changed to use self.layer_1 instead of local layer_1\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n \n # Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;\n # return NEGATIVE for other values\n if(layer_2[0] >= 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n\n",
"Run the following cell to recreate the network and train it once again.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)\nmlp.train(reviews[:-1000],labels[:-1000])",
"That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"End of Project 5.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nFurther Noise Reduction<a id='lesson_6'></a>",
"Image(filename='sentiment_network_sparse_2.png')\n\n# words most frequently seen in a review with a \"POSITIVE\" label\npos_neg_ratios.most_common()\n\n# words most frequently seen in a review with a \"NEGATIVE\" label\nlist(reversed(pos_neg_ratios.most_common()))[0:30]\n\nfrom bokeh.models import ColumnDataSource, LabelSet\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.io import output_notebook\noutput_notebook()\n\nhist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n           toolbar_location=\"above\",\n           title=\"Word Positive/Negative Affinity Distribution\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)\n\nfrequency_frequency = Counter()\n\nfor word, cnt in total_counts.most_common():\n    frequency_frequency[cnt] += 1\n\nhist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n           toolbar_location=\"above\",\n           title=\"The frequency distribution of the words in our corpus\")\np.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color=\"#555555\")\nshow(p)",
"Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>\nTODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:\n* Copy the SentimentNetwork class from the previous project into the following cell.\n* Modify pre_process_data:\n\n\nAdd two additional parameters: min_count and polarity_cutoff\nCalculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)\nAndrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. \nChange it so words are only added to the vocabulary if they occur more than min_count times.\nChange it so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff\nModify __init__:\nAdd the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data",
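"The filtering described above can be sketched outside the class. This is a minimal illustration on a tiny hypothetical corpus (the reviews, labels, and cutoff values below are made up for the example; the real corpus is 25,000 reviews): neutral words have a log positive-to-negative ratio near zero, so the polarity_cutoff filters them out of the vocabulary.",

```python
import numpy as np
from collections import Counter

# Tiny hypothetical corpus, for illustration only
reviews = ["great great movie", "terrible terrible movie", "great film", "terrible film"]
labels = ["POSITIVE", "NEGATIVE", "POSITIVE", "NEGATIVE"]

positive_counts, negative_counts, total_counts = Counter(), Counter(), Counter()
for review, label in zip(reviews, labels):
    for word in review.split(" "):
        total_counts[word] += 1
        if label == "POSITIVE":
            positive_counts[word] += 1
        else:
            negative_counts[word] += 1

# Log positive-to-negative ratio: near 0 for neutral words,
# large magnitude (positive or negative) for polarized words
pos_neg_ratios = {}
for word in total_counts:
    ratio = positive_counts[word] / float(negative_counts[word] + 1)
    pos_neg_ratios[word] = np.log(ratio) if ratio > 1 else -np.log(1 / (ratio + 0.01))

# Keep only words that are both frequent enough and polarized enough
min_count, polarity_cutoff = 1, 1.0
vocab = set(word for word in total_counts
            if total_counts[word] > min_count
            and abs(pos_neg_ratios[word]) >= polarity_cutoff)
# 'movie' and 'film' appear equally in both classes, so they are filtered out
```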
"import time\nimport sys\nimport numpy as np\n\n# Encapsulate our neural network in a class\nclass SentimentNetwork:\n ## New for Project 6: added min_count and polarity_cutoff parameters\n def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):\n \"\"\"Create a SentimenNetwork with the given settings\n Args:\n reviews(list) - List of reviews used for training\n labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews\n min_count(int) - Words should only be added to the vocabulary \n if they occur more than this many times\n polarity_cutoff(float) - The absolute value of a word's positive-to-negative\n ratio must be at least this big to be considered.\n hidden_nodes(int) - Number of nodes to create in the hidden layer\n learning_rate(float) - Learning rate to use while training\n \n \"\"\"\n # Assign a seed to our random number generator to ensure we get\n # reproducable results during development \n np.random.seed(1)\n\n # process the reviews and their associated labels so that everything\n # is ready for training\n ## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call\n self.pre_process_data(reviews, labels, polarity_cutoff, min_count)\n \n # Build the network to have the number of hidden nodes and the learning rate that\n # were passed into this initializer. 
Make the same number of input nodes as\n # there are vocabulary words and create a single output node.\n self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)\n\n ## New for Project 6: added min_count and polarity_cutoff parameters\n def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):\n \n ## ----------------------------------------\n ## New for Project 6: Calculate positive-to-negative ratios for words before\n # building vocabulary\n #\n positive_counts = Counter()\n negative_counts = Counter()\n total_counts = Counter()\n\n for i in range(len(reviews)):\n if(labels[i] == 'POSITIVE'):\n for word in reviews[i].split(\" \"):\n positive_counts[word] += 1\n total_counts[word] += 1\n else:\n for word in reviews[i].split(\" \"):\n negative_counts[word] += 1\n total_counts[word] += 1\n\n pos_neg_ratios = Counter()\n\n for term,cnt in list(total_counts.most_common()):\n if(cnt >= 50):\n pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)\n pos_neg_ratios[term] = pos_neg_ratio\n\n for word,ratio in pos_neg_ratios.most_common():\n if(ratio > 1):\n pos_neg_ratios[word] = np.log(ratio)\n else:\n pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))\n #\n ## end New for Project 6\n ## ----------------------------------------\n\n # populate review_vocab with all of the words in the given reviews\n review_vocab = set()\n for review in reviews:\n for word in review.split(\" \"):\n ## New for Project 6: only add words that occur at least min_count times\n # and for words with pos/neg ratios, only add words\n # that meet the polarity_cutoff\n if(total_counts[word] > min_count):\n if(word in pos_neg_ratios.keys()):\n if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):\n review_vocab.add(word)\n else:\n review_vocab.add(word)\n\n # Convert the vocabulary set to a list so we can access words via indices\n self.review_vocab = list(review_vocab)\n \n # populate label_vocab with all of the 
words in the given labels.\n label_vocab = set()\n for label in labels:\n label_vocab.add(label)\n \n # Convert the label vocabulary set to a list so we can access labels via indices\n self.label_vocab = list(label_vocab)\n \n # Store the sizes of the review and label vocabularies.\n self.review_vocab_size = len(self.review_vocab)\n self.label_vocab_size = len(self.label_vocab)\n \n # Create a dictionary of words in the vocabulary mapped to index positions\n self.word2index = {}\n for i, word in enumerate(self.review_vocab):\n self.word2index[word] = i\n \n # Create a dictionary of labels mapped to index positions\n self.label2index = {}\n for i, label in enumerate(self.label_vocab):\n self.label2index[label] = i\n\n def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Store the learning rate\n self.learning_rate = learning_rate\n\n # Initialize weights\n\n # These are the weights between the input layer and the hidden layer.\n self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))\n\n # These are the weights between the hidden layer and the output layer.\n self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n \n ## New for Project 5: Removed self.layer_0; added self.layer_1\n # The input layer, a two-dimensional matrix with shape 1 x hidden_nodes\n self.layer_1 = np.zeros((1,hidden_nodes))\n \n ## New for Project 5: Removed update_input_layer function\n \n def get_target_for_label(self,label):\n if(label == 'POSITIVE'):\n return 1\n else:\n return 0\n \n def sigmoid(self,x):\n return 1 / (1 + np.exp(-x))\n \n def sigmoid_output_2_derivative(self,output):\n return output * (1 - output)\n \n ## New for Project 5: changed name of first parameter form 'training_reviews' \n # to 'training_reviews_raw'\n def 
train(self, training_reviews_raw, training_labels):\n\n ## New for Project 5: pre-process training reviews so we can deal \n # directly with the indices of non-zero inputs\n training_reviews = list()\n for review in training_reviews_raw:\n indices = set()\n for word in review.split(\" \"):\n if(word in self.word2index.keys()):\n indices.add(self.word2index[word])\n training_reviews.append(list(indices))\n\n # make sure out we have a matching number of reviews and labels\n assert(len(training_reviews) == len(training_labels))\n \n # Keep track of correct predictions to display accuracy during training \n correct_so_far = 0\n\n # Remember when we started for printing time statistics\n start = time.time()\n \n # loop through all the given reviews and run a forward and backward pass,\n # updating weights for every item\n for i in range(len(training_reviews)):\n \n # Get the next review and its correct label\n review = training_reviews[i]\n label = training_labels[i]\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n\n ## New for Project 5: Removed call to 'update_input_layer' function\n # because 'layer_0' is no longer used\n\n # Hidden layer\n ## New for Project 5: Add in only the weights for non-zero items\n self.layer_1 *= 0\n for index in review:\n self.layer_1 += self.weights_0_1[index]\n\n # Output layer\n ## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2)) \n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.\n layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)\n\n # Backpropagated error\n layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer\n layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity 
so it's the same as the error\n\n # Update the weights\n ## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'\n self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step\n \n ## New for Project 5: Only update the weights that were used in the forward pass\n for index in review:\n self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step\n\n # Keep track of correct predictions.\n if(layer_2 >= 0.5 and label == 'POSITIVE'):\n correct_so_far += 1\n elif(layer_2 < 0.5 and label == 'NEGATIVE'):\n correct_so_far += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the training process. \n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(training_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct_so_far) + \" #Trained:\" + str(i+1) \\\n + \" Training Accuracy:\" + str(correct_so_far * 100 / float(i+1))[:4] + \"%\")\n if(i % 2500 == 0):\n print(\"\")\n \n def test(self, testing_reviews, testing_labels):\n \"\"\"\n Attempts to predict the labels for the given testing_reviews,\n and uses the test_labels to calculate the accuracy of those predictions.\n \"\"\"\n \n # keep track of how many correct predictions we make\n correct = 0\n\n # we'll time how many predictions per second we make\n start = time.time()\n\n # Loop through each of the given reviews and call run to predict\n # its label. \n for i in range(len(testing_reviews)):\n pred = self.run(testing_reviews[i])\n if(pred == testing_labels[i]):\n correct += 1\n \n # For debug purposes, print out our prediction accuracy and speed \n # throughout the prediction process. 
\n\n elapsed_time = float(time.time() - start)\n reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0\n \n sys.stdout.write(\"\\rProgress:\" + str(100 * i/float(len(testing_reviews)))[:4] \\\n + \"% Speed(reviews/sec):\" + str(reviews_per_second)[0:5] \\\n + \" #Correct:\" + str(correct) + \" #Tested:\" + str(i+1) \\\n + \" Testing Accuracy:\" + str(correct * 100 / float(i+1))[:4] + \"%\")\n \n def run(self, review):\n \"\"\"\n Returns a POSITIVE or NEGATIVE prediction for the given review.\n \"\"\"\n # Run a forward pass through the network, like in the \"train\" function.\n \n ## New for Project 5: Removed call to update_input_layer function\n # because layer_0 is no longer used\n\n # Hidden layer\n ## New for Project 5: Identify the indices used in the review and then add\n # just those weights to layer_1 \n self.layer_1 *= 0\n unique_indices = set()\n for word in review.lower().split(\" \"):\n if word in self.word2index.keys():\n unique_indices.add(self.word2index[word])\n for index in unique_indices:\n self.layer_1 += self.weights_0_1[index]\n \n # Output layer\n ## New for Project 5: changed to use self.layer_1 instead of local layer_1\n layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))\n \n # Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;\n # return NEGATIVE for other values\n if(layer_2[0] >= 0.5):\n return \"POSITIVE\"\n else:\n return \"NEGATIVE\"\n\n",
"Run the following cell to train your network with a small polarity cutoff.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"And run the following cell to test its performance. It should be",
"mlp.test(reviews[-1000:],labels[-1000:])",
"Run the following cell to train your network with a much larger polarity cutoff.",
"mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)\nmlp.train(reviews[:-1000],labels[:-1000])",
"And run the following cell to test its performance.",
"mlp.test(reviews[-1000:],labels[-1000:])",
"End of Project 6.\nWatch the next video to see Andrew's solution, then continue on to the next lesson.\nAnalysis: What's Going on in the Weights?<a id='lesson_7'></a>",
"mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)\n\nmlp_full.train(reviews[:-1000],labels[:-1000])\n\nImage(filename='sentiment_network_sparse.png')\n\ndef get_most_similar_words(focus = \"horrible\"):\n most_similar = Counter()\n\n for word in mlp_full.word2index.keys():\n most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])\n \n return most_similar.most_common()\n\nget_most_similar_words(\"excellent\")\n\nget_most_similar_words(\"terrible\")\n\nimport matplotlib.colors as colors\n\nwords_to_visualize = list()\nfor word, ratio in pos_neg_ratios.most_common(500):\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)\n \nfor word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:\n if(word in mlp_full.word2index.keys()):\n words_to_visualize.append(word)\n\npos = 0\nneg = 0\n\ncolors_list = list()\nvectors_list = list()\nfor word in words_to_visualize:\n if word in pos_neg_ratios.keys():\n vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])\n if(pos_neg_ratios[word] > 0):\n pos+=1\n colors_list.append(\"#00ff00\")\n else:\n neg+=1\n colors_list.append(\"#000000\")\n\nfrom sklearn.manifold import TSNE\ntsne = TSNE(n_components=2, random_state=0)\nwords_top_ted_tsne = tsne.fit_transform(vectors_list)\n\np = figure(tools=\"pan,wheel_zoom,reset,save\",\n toolbar_location=\"above\",\n title=\"vector T-SNE for most polarized words\")\n\nsource = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],\n x2=words_top_ted_tsne[:,1],\n names=words_to_visualize))\n\np.scatter(x=\"x1\", y=\"x2\", size=8, source=source,color=colors_list)\n\nword_labels = LabelSet(x=\"x1\", y=\"x2\", text=\"names\", y_offset=6,\n text_font_size=\"8pt\", text_color=\"#555555\",\n source=source, text_align='center')\np.add_layout(word_labels)\n\nshow(p)\n\n# green indicates positive words, black indicates negative 
words"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
d-k-b/udacity-deep-learning
|
weight-initialization/weight_initialization.ipynb
|
mit
|
[
"Weight Initialization\nIn this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to reach the best solution more quickly.\nTesting Weights\nDataset\nTo see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.\nWe'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.",
"%matplotlib inline\n\nimport tensorflow as tf\nimport helper\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nprint('Getting MNIST Dataset...')\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\nprint('Data Extracted.')",
"Neural Network\n<img style=\"float: left\" src=\"images/neural_network.png\"/>\nFor the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.",
"# Save the shapes of weights for each layer\nlayer_1_weight_shape = (mnist.train.images.shape[1], 256)\nlayer_2_weight_shape = (256, 128)\nlayer_3_weight_shape = (128, mnist.train.labels.shape[1])",
"Initialize Weights\nLet's start looking at some initial weights.\nAll Zeros or Ones\nIf you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.\nWith every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.\nLet's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.\nRun the cell below to see the difference between weights of all zeros against all ones.",
"all_zero_weights = [\n tf.Variable(tf.zeros(layer_1_weight_shape)),\n tf.Variable(tf.zeros(layer_2_weight_shape)),\n tf.Variable(tf.zeros(layer_3_weight_shape))\n]\n\nall_one_weights = [\n tf.Variable(tf.ones(layer_1_weight_shape)),\n tf.Variable(tf.ones(layer_2_weight_shape)),\n tf.Variable(tf.ones(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'All Zeros vs All Ones',\n [\n (all_zero_weights, 'All Zeros'),\n (all_one_weights, 'All Ones')])",
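"To see why identical weights fail, here is a toy NumPy sketch, separate from the notebook's helper code. The shapes mirror the layers above, but the input is a fake image:

```python
import numpy as np

# Toy illustration (not the notebook's network): with identical weights, every
# hidden unit computes the same weighted sum, so the layer can't differentiate.
rng = np.random.default_rng(0)
x = rng.random(784)                  # one fake flattened MNIST image

hidden_ones = np.maximum(0, x @ np.ones((784, 256)))   # ReLU, all-ones weights
# All 256 hidden activations are identical, so every weight column also
# receives an identical gradient -- the symmetry is never broken.
assert np.allclose(hidden_ones, hidden_ones[0])

W_rand = rng.uniform(-0.1, 0.1, (784, 256))            # unique random weights
hidden_rand = np.maximum(0, x @ W_rand)
assert not np.allclose(hidden_rand, hidden_rand[0])    # units now differ
```
",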
"As you can see the accuracy is close to guessing for both zeros and ones, around 10%.\nThe neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.\nA good solution for getting these random weights is to sample from a uniform distribution.\nUniform Distribution\nA [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.\n\ntf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a uniform distribution.\nThe generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nminval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.\nmaxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.\ndtype: The type of the output: float32, float64, int32, or int64.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).\n\n\nWe can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.",
"helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))",
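"For readers without the helper module, roughly the same check can be done with NumPy and np.histogram (seeded values, purely illustrative):

```python
import numpy as np

# NumPy analogue of tf.random_uniform([1000], -3, 3): values should spread
# roughly evenly across the half-open range [-3, 3).
rng = np.random.default_rng(0)
samples = rng.uniform(-3, 3, 1000)

counts, _ = np.histogram(samples, bins=6, range=(-3, 3))
assert samples.min() >= -3 and samples.max() < 3   # half-open range, like TF
assert counts.sum() == 1000
# Each one-unit-wide bin should hold about 1000 / 6 ~ 167 values.
assert counts.min() > 100 and counts.max() < 250
```
",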
"The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.\nNow that you understand the tf.random_uniform function, let's apply it to some initial weights.\nBaseline\nLet's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and maxval=1.0.",
"# Default for tf.random_uniform is minval=0 and maxval=1\nbasline_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Baseline',\n [(basline_weights, 'tf.random_uniform [0, 1)')])",
"The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.\nGeneral rule for setting weights\nThe general rule for setting the weights in a neural network is to be close to zero without being too small. A good practice is to start your weights in the range of $[-y, y]$ where\n$y=1/\\sqrt{n}$ ($n$ is the number of inputs to a given neuron).\nTo see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).",
"uniform_neg1to1_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[0, 1) vs [-1, 1)',\n [\n (basline_weights, 'tf.random_uniform [0, 1)'),\n (uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])",
"We're going in the right direction: the accuracy and loss are better with [-1, 1). We still want smaller weights. How far can we go before they're too small?\nToo small\nLet's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.",
"uniform_neg01to01_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))\n]\n\nuniform_neg001to001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))\n]\n\nuniform_neg0001to0001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',\n [\n (uniform_neg1to1_weights, '[-1, 1)'),\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (uniform_neg001to001_weights, '[-0.01, 0.01)'),\n (uniform_neg0001to0001_weights, '[-0.001, 0.001)')],\n plot_n_batches=None)",
"Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\\sqrt{n}$.",
"import numpy as np\n\ngeneral_rule_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-0.1, 0.1) vs General Rule',\n [\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (general_rule_weights, 'General Rule')],\n plot_n_batches=None)",
"The range we found and $y=1/\\sqrt{n}$ are really close.\nSince the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0? Let's look at the normal distribution.\nNormal Distribution\nUnlike the uniform distribution, the normal distribution has a higher likelihood of picking numbers close to its mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.\n\ntf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a normal distribution.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).",
"helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))",
"Let's compare the normal distribution against the previous uniform distribution.",
"normal_01_weights = [\n tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Uniform [-0.1, 0.1) vs Normal stddev 0.1',\n [\n (uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),\n (normal_01_weights, 'Normal stddev 0.1')])",
"The normal distribution gave a slight improvement in accuracy and loss. Let's move closer to 0 and drop picked numbers that are more than a certain number of standard deviations away from the mean. This distribution is called the Truncated Normal Distribution.\nTruncated Normal Distribution\n\ntf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a truncated normal distribution.\nThe generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).",
"helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))",
"Again, let's compare the previous results with the previous distribution.",
"trunc_normal_01_weights = [\n tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Normal vs Truncated Normal',\n [\n (normal_01_weights, 'Normal'),\n (trunc_normal_01_weights, 'Truncated Normal')])",
"There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points from the normal distribution, increasing the likelihood that its choices are more than 2 standard deviations from the mean.\nWe've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.",
"helper.compare_init_weights(\n mnist,\n 'Baseline vs Truncated Normal',\n [\n (basline_weights, 'Baseline'),\n (trunc_normal_01_weights, 'Truncated Normal')])",
"That's a huge difference. You can barely see the truncated normal line. However, this is not the end of your learning path. We've provided more resources for initializing weights in the classroom!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statsmodels/statsmodels.github.io
|
v0.12.2/examples/notebooks/generated/markov_regression.ipynb
|
bsd-3-clause
|
[
"Markov switching dynamic regression models\nThis notebook provides an example of the use of Markov switching models in statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\n# NBER recessions\nfrom pandas_datareader.data import DataReader\nfrom datetime import datetime\nusrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))",
"Federal funds rate with switching intercept\nThe first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply:\n$$r_t = \\mu_{S_t} + \\varepsilon_t \\qquad \\varepsilon_t \\sim N(0, \\sigma^2)$$\nwhere $S_t \\in {0, 1}$, and the regime transitions according to\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\\begin{bmatrix}\np_{00} & p_{10} \\\n1 - p_{00} & 1 - p_{10}\n\\end{bmatrix}\n$$\nWe will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \\mu_0, \\mu_1, \\sigma^2$.\nThe data used in this example can be found at https://www.stata-press.com/data/r14/usmacro.",
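"To build intuition before fitting, the data-generating process can be simulated directly. The regime means below mirror the estimates discussed later in this section, while the persistence probabilities and noise scale are hypothetical:

```python
import numpy as np

# Simulate the switching-intercept process r_t = mu_{S_t} + eps_t with a
# 2-state Markov chain for S_t (illustrative parameters only).
rng = np.random.default_rng(0)
mu = np.array([3.7, 9.6])          # regime intercepts (low, high)
p_stay = np.array([0.98, 0.93])    # P(stay in current regime), hypothetical
n = 5000

s = np.empty(n, dtype=int)
s[0] = 0
for t in range(1, n):
    s[t] = s[t - 1] if rng.random() < p_stay[s[t - 1]] else 1 - s[t - 1]

r = mu[s] + rng.normal(0, 1.0, n)  # sigma = 1, for illustration

# Within each simulated regime, the sample mean recovers the regime intercept.
assert abs(r[s == 0].mean() - 3.7) < 0.2
assert abs(r[s == 1].mean() - 9.6) < 0.2
```

Estimation runs this logic in reverse: given only r, maximum likelihood recovers the regime means, the transition probabilities, and the probability of being in each regime at each date.",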
"# Get the federal funds rate data\nfrom statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds\ndta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))\n\n# Plot the data\ndta_fedfunds.plot(title='Federal funds rate', figsize=(12,3))\n\n# Fit the model\n# (a switching mean is the default of the MarkovRegession model)\nmod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2)\nres_fedfunds = mod_fedfunds.fit()\n\nres_fedfunds.summary()",
"From the summary output, the mean federal funds rate in the first regime (the \"low regime\") is estimated to be $3.7$ whereas in the \"high regime\" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980s were a period in which the federal funds rate was high.",
"res_fedfunds.smoothed_marginal_probabilities[1].plot(\n title='Probability of being in the high regime', figsize=(12,3));",
"From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime.",
"print(res_fedfunds.expected_durations)",
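"The printed durations follow from the geometry of regime persistence: if $p_{ii}$ is the probability of remaining in regime $i$, the time spent in that regime is geometrically distributed with expected length $1/(1 - p_{ii})$. A minimal sketch with hypothetical persistence probabilities (not the fitted values):

```python
def expected_duration(p_stay):
    # Mean of a geometric distribution: expected number of consecutive
    # periods spent in a regime whose per-period \"stay\" probability is p_stay.
    return 1.0 / (1.0 - p_stay)

# Hypothetical quarterly persistence probabilities, for illustration only
assert abs(expected_duration(0.98) - 50.0) < 1e-9   # ~50 quarters
assert abs(expected_duration(0.95) - 20.0) < 1e-9   # ~20 quarters
```
",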
"A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years.\nFederal funds rate with switching intercept and lagged dependent variable\nThe second example augments the previous model to include the lagged value of the federal funds rate.\n$$r_t = \\mu_{S_t} + r_{t-1} \\beta_{S_t} + \\varepsilon_t \\qquad \\varepsilon_t \\sim N(0, \\sigma^2)$$\nwhere $S_t \\in {0, 1}$, and the regime transitions according to\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\\begin{bmatrix}\np_{00} & p_{10} \\\n1 - p_{00} & 1 - p_{10}\n\\end{bmatrix}\n$$\nWe will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \\mu_0, \\mu_1, \\beta_0, \\beta_1, \\sigma^2$.",
"# Fit the model\nmod_fedfunds2 = sm.tsa.MarkovRegression(\n dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1])\nres_fedfunds2 = mod_fedfunds2.fit()\n\nres_fedfunds2.summary()",
"There are several things to notice from the summary output:\n\nThe information criteria have decreased substantially, indicating that this model has a better fit than the previous model.\nThe interpretation of the regimes, in terms of the intercept, has switched. Now the first regime has the higher intercept and the second regime has the lower intercept.\n\nExamining the smoothed probabilities of the high regime state, we now see quite a bit more variability.",
"res_fedfunds2.smoothed_marginal_probabilities[0].plot(\n title='Probability of being in the high regime', figsize=(12,3));",
"Finally, the expected durations of each regime have decreased quite a bit.",
"print(res_fedfunds2.expected_durations)",
"Taylor rule with 2 or 3 regimes\nWe now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better.\nBecause the models can often be difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.",
"# Get the additional data\nfrom statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf\ndta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))\ndta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))\n\nexog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:]\n\n# Fit the 2-regime model\nmod_fedfunds3 = sm.tsa.MarkovRegression(\n dta_fedfunds.iloc[4:], k_regimes=2, exog=exog)\nres_fedfunds3 = mod_fedfunds3.fit()\n\n# Fit the 3-regime model\nnp.random.seed(12345)\nmod_fedfunds4 = sm.tsa.MarkovRegression(\n dta_fedfunds.iloc[4:], k_regimes=3, exog=exog)\nres_fedfunds4 = mod_fedfunds4.fit(search_reps=20)\n\nres_fedfunds3.summary()\n\nres_fedfunds4.summary()",
"Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below.",
"fig, axes = plt.subplots(3, figsize=(10,7))\n\nax = axes[0]\nax.plot(res_fedfunds4.smoothed_marginal_probabilities[0])\nax.set(title='Smoothed probability of a low-interest rate regime')\n\nax = axes[1]\nax.plot(res_fedfunds4.smoothed_marginal_probabilities[1])\nax.set(title='Smoothed probability of a medium-interest rate regime')\n\nax = axes[2]\nax.plot(res_fedfunds4.smoothed_marginal_probabilities[2])\nax.set(title='Smoothed probability of a high-interest rate regime')\n\nfig.tight_layout()",
"Switching variances\nWe can also accommodate switching variances. In particular, we consider the model\n$$\ny_t = \\mu_{S_t} + y_{t-1} \\beta_{S_t} + \\varepsilon_t \\quad \\varepsilon_t \\sim N(0, \\sigma_{S_t}^2)\n$$\nWe use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \\mu_0, \\mu_1, \\beta_0, \\beta_1, \\sigma_0^2, \\sigma_1^2$.\nThe application is to absolute returns on stocks, where the data can be found at https://www.stata-press.com/data/r14/snp500.",
"# Get the federal funds rate data\nfrom statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns\ndta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W'))\n\n# Plot the data\ndta_areturns.plot(title='Absolute returns, S&P500', figsize=(12,3))\n\n# Fit the model\nmod_areturns = sm.tsa.MarkovRegression(\n dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1], switching_variance=True)\nres_areturns = mod_areturns.fit()\n\nres_areturns.summary()",
"The first regime is a low-variance regime and the second regime is a high-variance regime. Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy.",
"res_areturns.smoothed_marginal_probabilities[0].plot(\n title='Probability of being in a low-variance regime', figsize=(12,3));"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ethen8181/machine-learning
|
ab_tests/causal_inference/matching.ipynb
|
mit
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Causal-Inference\" data-toc-modified-id=\"Causal-Inference-1\"><span class=\"toc-item-num\">1 </span>Causal Inference</a></span><ul class=\"toc-item\"><li><span><a href=\"#The-Definition-of-Causal-Effect\" data-toc-modified-id=\"The-Definition-of-Causal-Effect-1.1\"><span class=\"toc-item-num\">1.1 </span>The Definition of Causal Effect</a></span></li><li><span><a href=\"#Assumptions-of-Estimating-Causal-Effect\" data-toc-modified-id=\"Assumptions-of-Estimating-Causal-Effect-1.2\"><span class=\"toc-item-num\">1.2 </span>Assumptions of Estimating Causal Effect</a></span></li><li><span><a href=\"#Confounders\" data-toc-modified-id=\"Confounders-1.3\"><span class=\"toc-item-num\">1.3 </span>Confounders</a></span></li><li><span><a href=\"#Randomized-Trials-v.s.-Observational-Studies\" data-toc-modified-id=\"Randomized-Trials-v.s.-Observational-Studies-1.4\"><span class=\"toc-item-num\">1.4 </span>Randomized Trials v.s. Observational Studies</a></span></li><li><span><a href=\"#Matching\" data-toc-modified-id=\"Matching-1.5\"><span class=\"toc-item-num\">1.5 </span>Matching</a></span><ul class=\"toc-item\"><li><span><a href=\"#Propensity-Scores\" data-toc-modified-id=\"Propensity-Scores-1.5.1\"><span class=\"toc-item-num\">1.5.1 </span>Propensity Scores</a></span></li></ul></li><li><span><a href=\"#Implementation\" data-toc-modified-id=\"Implementation-1.6\"><span class=\"toc-item-num\">1.6 </span>Implementation</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', '..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(plot_style=False)\n\nos.chdir(path)\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\n\n%watermark -a 'Ethen' -d -t -v -p numpy,scipy,pandas,sklearn,matplotlib,seaborn",
"Causal Inference\nA typical statement that people make in the real world follows a pattern like this:\n\nI took ibuprofen and my headache is gone, therefore the medicine worked.\n\nUpon seeing this statement, we may be tempted to view the statement above as a causal effect, where ibuprofen does in fact help with headache. The statement, however, does not tell us what would have happened if the person didn't take the medicine. Maybe headache would have been cured without taking the medicine.\nWe'll take a moment and introduce some notation to formalize the discussion of causal inferencing. We denote $Y^a$ as the outcome that would have been observed if treatment was set to $A = a$. In the context of causal inferencing, there are two possible actions that can be applied to an individual. $1$, treatment; $0$, control. Hence $Y^1$ denotes the outcome if the treatment was applied, whereas $Y^0$ measures the outcome if individual was under the control group.\nComing back to the statement above. The reason why it isn't a proper causal effect is because it's only telling us $Y^1=1$. It doesn't tell us what would have happened had we not taken ibuprofen, $Y^0=?$. And we can only state that there is a causal effect if $Y^1 \\neq Y^0$.\nThe two main messages that we're getting at in the section above are:\nFirst, in the context of causal inferencing, a lot of times we're mainly interested in the relationship between means of different potential outcomes. \n\\begin{align}\nE(Y^1 - Y^0)\n\\end{align}\nWhere the term, potential outcome, refers to the outcome we would see under each possible treatment option. More on this formula in the next section.\nSecond, our little statement above shows what is known as the fundamental problem of causal inferencing. Meaning we can only observe one potential outcome for each person. However, with certain assumptions, we can estimate population level causal effects. 
In other words, it is possible for us to answer questions such as: what would the rate of headache remission be if everyone took ibuprofen when they had a headache versus if no one did.\nThus the next question is, how do we use observed data to link observed outcomes to potential outcomes.\nThe Definition of Causal Effect\nIn the previous section, we flashed the idea that with causal inference, we're interested in estimating $E(Y^1 - Y^0)$. This notation denotes the average causal effect. What this means is: \nImagine a hypothetical world where our entire population received treatment $A=0$, versus some other hypothetical world where everyone received the other treatment, $A=1$. The most important thing here is that the two hypothetical worlds have the exact same population; the only difference is the treatment. If we were able to observe both of these worlds simultaneously, we could collect the outcome data from everyone in the populations, and then compute the average difference.\nThis is what we mean by an average causal effect. It's computed over the whole population and we're asking what the average outcome would be if everybody received one treatment, versus if everybody received another treatment. Of course, in reality, we're not going to see both of these worlds, but this is what we're hoping to estimate.\nIn reality, what we get from an experiment is $E(Y|A=1)$. Here, we are saying what is the expected value of $Y$ given $A=1$. An equivalent way of reading this is the expected value of $Y$ restricted to the sub-population who actually had $A=1$. The main point that we're getting at is:\n\begin{align}\nE(Y^1 - Y^0) \neq E(Y|A=1) - E(Y|A=0)\n\end{align}\nThe reason being the sub-population might differ from the whole population in important ways, e.g. people at higher risk for the flu might be more likely to get the flu shot.
Then if we take the expected value of $Y$ among people who actually got the flu shot, we're taking the expected value of $Y$ among a higher risk population. And this will most likely be different from the expected value of the potential outcome $Y^1$, because $Y^1$ is the outcome if everyone in the whole population got treatment, i.e. it's not restricted to a sub-population.\nWe could of course still compute $E(Y|A=1) - E(Y|A=0)$, but we need to keep in mind that the people who received treatment $A=0$ might differ in fundamental ways from people who got treatment $A=1$. So we haven't isolated a treatment effect, because these are different people, and they might have different characteristics in general. That's why the distinction between the two is very important: the causal effect, where we're manipulating treatment on the same group of people, versus what we actually observe, which is the difference in means among populations that are defined by treatment.\nIn short, the take home message from this paragraph is that $E(Y|A=1) - E(Y|A=0)$ is generally not a causal effect, because it is comparing two different populations of people.\nAssumptions of Estimating Causal Effect\nHopefully, by now, we know the fundamental problem of causal inference is that we can only observe one treatment and one outcome for each person at a given point in time. The next question is how do we then use observed data to link observed outcomes to potential outcomes? To do so, we'll need to make some assumptions.\nOur observed data typically consist of an outcome, $Y$, a treatment variable $A$, and then some set of covariates $X$ (additional information that we may wish to collect for individuals in the studies). The assumptions that we'll make are listed below.\n\nStable Unit Treatment Value Assumption (SUTVA): No interference between units. Treatment assignment of one unit does not affect the outcome of another unit.
Same sentence but phrased a bit differently: when we assign somebody a treatment, how effective that is isn't dependent on what is happening with other people.\nPositivity Assumption: For every set of values for $X$, treatment assignment was not deterministic: $P(A=a|X=x) > 0$ for all $a$ and $x$. This assumption ensures we have some data at every level of $X$ for people who are treated and not treated. The reason we need this assumption is: if for a given value of $X$, everybody would/wouldn't be treated, then there's really no way for us to learn what would've happened if they were/weren't treated. In some cases where people with certain diseases might be ineligible for a particular treatment, we wouldn't want to make inference about that population, so we would probably exclude them from the study.\nIgnorability Assumption: Given pre-treatment covariates $X$, treatment assignment is independent of the potential outcomes. $Y^1, Y^0 \perp A | X$ (here, $\perp$ denotes independence). So among people with the same values of $X$, we could essentially think of treatment as being randomly assigned. This is also referred to as the \"no unmeasured confounders\" assumption.\nConsistency Assumption: The potential outcome under the treatment $A=a$, $Y^a$, is equivalent to the observed outcome $Y$ if the actual treatment received is $A=a$, i.e. $Y = Y^a \text{ if } A=a$.\n\nGiven the assumptions above, we can link our observed data to potential outcomes:\n\begin{align}\nE(Y | A=a, X=x)\n&= E(Y^a | A=a, X=x) \text{ due to consistency} \\\n&= E(Y^a | X=x) \text{ due to ignorability}\n\end{align}\nTo elaborate on the ignorability assumption, the assumption says that conditional on the set of covariates $X$, the treatment assignment mechanism doesn't matter; it's just random.
So, in other words, conditioning on $A$ isn't providing us any additional information about the mean of the potential outcome here.\nAfter laying down the assumptions behind causal inference, we'll also introduce two important concepts: confounders and observational studies.\nConfounders\nConfounders are defined as variables that affect both the treatment and the outcome. e.g. Imagine that older people are at higher risk of cardiovascular disease (the outcome), but are also more likely to receive statins (the treatment). In this case, age would be a confounder. Age is affecting both the treatment decision, which here is whether or not to receive statins, and is also directly affecting the outcome, which is cardiovascular disease. So, when it comes to confounder control, we are interested in first identifying a set of variables $X$ that will make the ignorability assumption hold. Only after finding a set of variables like this will we have hope of estimating causal effects.\nRandomized Trials v.s. Observational Studies\nIn a randomized trial, the treatment assignment, $A$, would be randomly decided. Thus if the randomized trial is actually randomized, the distribution of our covariates, $X$, will be the same in both groups, i.e. the covariates are said to be balanced. Thus if our outcomes between different treatment groups end up differing, it will not be because of differences in $X$.\nSo you might be wondering, why not just always perform a randomized trial? Well, there are a couple of reasons:\n\nIt's expensive. We have to enroll people into a trial, there might be lots of protocols that we need to follow, and it takes time and money to keep track of those people after they enroll.\nSometimes it's unethical to randomize treatment. A typical example would be smoking.\n\nFor observational studies, though similar to a randomized trial, we're not actually intervening with the treatment assignment. We're only observing what happens in reality.
Additionally, we can always leverage retrospective data, where the data are already being collected for various other purposes not specific to the research that we're interested in. The caveat is that these data might be messy and of lower quality.\nObservational studies are becoming more and more prevalent, but as they are not randomized trials, typically the distribution of the confounders that we are concerned about will differ between the treatment groups. The next section will describe matching, which is a technique that aims to address this issue.\nMatching\nMatching is a method that attempts to control for confounding and make an observational study more like a randomized trial. The main idea is to match individuals in the treated group $A=1$ to similar individuals in the control group $A=0$ on the covariates $X$. This is similar to the notion of estimating the causal effect of the treatment on the treated.\ne.g. Say there is only 1 covariate that we care about, age. Then, in a randomized trial, for any particular age, there should be about the same number of treated and untreated people. In the cases where older people are more likely to get $A=1$, if we were to match treated people to control people of the same age, there will be about the same number of treated and controls at any age.\nOnce the data are matched, we can treat them as if they came from a randomized trial. The advantage of this approach is that it can help reveal lack of overlap in the covariate distribution.\nThe caveat is that we can't exactly match on the full set of covariates, so what we'll do is try and make sure the distribution of covariates is balanced between the groups, also referred to as stochastic balance (the distribution of confounders being similar for treated and untreated subjects).\nPropensity Scores\nWe've stated that during the matching procedure, each test group member is paired with a similar member of the control group. Here we'll elaborate on what we mean by \"similar\".
Similarity is often computed using propensity scores, which are defined as the probability of receiving treatment, rather than control, given covariates $X$.\nWe'll define $A=1$ for treatment and $A=0$ for control. The propensity score for subject $i$ is denoted as $\\pi_i$.\n\\begin{align}\n\\pi_i = P(A=1 | X_i)\n\\end{align}\nAs an example, if a person had a propensity score of 0.3, that would mean that given their particular covariates, there was a 30% chance that they'll receive the treatment. We can calculate this score by fitting our favorite classification algorithm to our data (the input features are our covariates, and the labels are whether each person belongs to the treatment or control group).\nWith this knowledge in mind, let's get our hands dirty with some code; we'll discuss more as we go along.\nImplementation\nWe'll be using the Right Heart Catheterization dataset. The csv file can be downloaded from the following link.",
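Before loading the real data, the earlier point about confounding can be made concrete with a small simulation (all numbers here are made up for illustration): a confounder that drives both treatment assignment and outcome inflates the naive difference in observed means well beyond the true average causal effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# hypothetical confounder (think standardized "age")
x = rng.normal(size=n)

# subjects with larger x are more likely to be treated
a = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))

# true causal effect of treatment is exactly +1, but x also raises the outcome
y = 1.0 * a + 3.0 * x + rng.normal(size=n)

# naive contrast E(Y|A=1) - E(Y|A=0): badly inflated by confounding
naive = y[a == 1].mean() - y[a == 0].mean()

# crude adjustment: average the within-bin contrasts over narrow bins of x
bins = np.digitize(x, np.linspace(-2, 2, 21))
diffs, weights = [], []
for b in np.unique(bins):
    m = bins == b
    if 0 < a[m].sum() < m.sum():  # need both treated and controls in the bin
        diffs.append(y[m][a[m] == 1].mean() - y[m][a[m] == 0].mean())
        weights.append(m.sum())
adjusted = np.average(diffs, weights=weights)

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

The naive contrast lands far above the true effect of 1, while the stratified estimate comes close to it; propensity score matching is, loosely, a way of doing this kind of adjustment when $X$ has many dimensions.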
"# we'll only be working with a subset of the variables in the raw dataset,\n# feel free to experiment with more\nAGE = 'age'\nMEANBP1 = 'meanbp1'\nCAT1 = 'cat1'\nSEX = 'sex'\nDEATH = 'death' # outcome variable in the our raw data\nSWANG1 = 'swang1' # treatment variable in our raw data\nTREATMENT = 'treatment'\n\nnum_cols = [AGE, MEANBP1]\ncat_cols = [CAT1, SEX, DEATH, SWANG1]\n\ninput_path = 'data/rhc.csv'\ndtype = {col: 'category' for col in cat_cols}\ndf = pd.read_csv(input_path, usecols=num_cols + cat_cols, dtype=dtype)\nprint(df.shape)\ndf.head()",
"Usually, our treatment group will be smaller than the control group.",
"# replace this column with treatment yes or no\ndf[SWANG1].value_counts()\n\n# replace these values with shorter names\ndf[CAT1].value_counts()\n\ncat1_col_mapping = {\n 'ARF': 'arf',\n 'MOSF w/Sepsis': 'mosf_sepsis',\n 'COPD': 'copd',\n 'CHF': 'chf',\n 'Coma': 'coma',\n 'MOSF w/Malignancy': 'mosf',\n 'Cirrhosis': 'cirrhosis',\n 'Lung Cancer': 'lung_cancer',\n 'Colon Cancer': 'colon_cancer'\n}\ndf[CAT1] = df[CAT1].replace(cat1_col_mapping)\n\n# convert features' value to numerical value, and store the\n# numerical value to the original value mapping\ncol_mappings = {}\nfor col in (DEATH, SWANG1, SEX):\n col_mapping = dict(enumerate(df[col].cat.categories))\n col_mappings[col] = col_mapping\nprint(col_mappings)\n\nfor col in (DEATH, SWANG1, SEX):\n df[col] = df[col].cat.codes\n\ndf = df.rename({SWANG1: TREATMENT}, axis=1)\ndf.head()\n\ncat_cols = [CAT1]\ndf_one_hot = pd.get_dummies(df[cat_cols], drop_first=True)\ndf_cleaned = pd.concat([df[num_cols], df_one_hot, df[[SEX, TREATMENT, DEATH]]], axis=1)\ndf_cleaned.head()",
"Given all of these covariates and our treatment column that indicates whether the subject received the treatment or control, we wish to have a quantitative way of measuring whether our covariates are balanced between the two groups.\nTo assess whether balance has been achieved, we can look at the standardized mean difference (smd), which is calculated as the difference in means between the two groups divided by the pooled standard deviation.\n\\begin{align}\nsmd = \\frac{\\bar{X}_t - \\bar{X}_c}{\\sqrt{(s^2_t + s^2_c) / 2}}\n\\end{align}\nWhere:\n\n$\\bar{X}_t$, $\\bar{X}_c$ denote the mean of that feature for the treatment and control group respectively. Note that people often report the absolute value of this number.\n$s_t$, $s_c$ denote the standard deviation of that feature for the treatment and control group respectively, so the denominator is essentially the pooled standard deviation.\n\nWe can calculate the standardized mean difference for every feature. If our calculated smd is 1, that means there's a 1 standard deviation difference in means. The benefit of having the standard deviation in the denominator is that this number becomes insensitive to the scale of the feature. After computing this measurement for all of our features, there are rules of thumb commonly used to determine whether a feature is balanced or not (similar to the 0.05 for p-value idea):\n\nSmaller than $0.1$: For a randomized trial, the smd between all of the covariates should typically fall into this bucket.\n$0.1$ - $0.2$: Not necessarily balanced, but small enough that people are usually not too worried about them. Sometimes, even after performing matching, there might still be a few covariates whose smd fall under this range.\nGreater than $0.2$: Values above this threshold are considered seriously imbalanced.",
"features = df_cleaned.columns.tolist()\nfeatures.remove(TREATMENT)\nfeatures.remove(DEATH)\nagg_operations = {TREATMENT: 'count'}\nagg_operations.update({\n feature: ['mean', 'std'] for feature in features\n})\n\ntable_one = df_cleaned.groupby(TREATMENT).agg(agg_operations)\n# merge MultiIndex columns together into 1 level\n# table_one.columns = ['_'.join(col) for col in table_one.columns.values]\ntable_one.head()\n\ndef compute_table_one_smd(table_one: pd.DataFrame, round_digits: int=4) -> pd.DataFrame:\n feature_smds = []\n for feature in features:\n feature_table_one = table_one[feature].values\n neg_mean = feature_table_one[0, 0]\n neg_std = feature_table_one[0, 1]\n pos_mean = feature_table_one[1, 0]\n pos_std = feature_table_one[1, 1]\n\n smd = (pos_mean - neg_mean) / np.sqrt((pos_std ** 2 + neg_std ** 2) / 2)\n smd = round(abs(smd), round_digits)\n feature_smds.append(smd)\n\n return pd.DataFrame({'features': features, 'smd': feature_smds})\n\n\ntable_one_smd = compute_table_one_smd(table_one)\ntable_one_smd",
"The next few code chunks will fit the propensity score model.",
"# treatment will be our label for estimating the propensity score,\n# and death is the outcome that we care about, thus is also removed\n# from the step that is estimating the propensity score\ndeath = df_cleaned[DEATH]\ntreatment = df_cleaned[TREATMENT]\ndf_cleaned = df_cleaned.drop([DEATH, TREATMENT], axis=1)\n\ncolumn_transformer = ColumnTransformer(\n [('numerical', StandardScaler(), num_cols)],\n sparse_threshold=0,\n remainder='passthrough'\n)\ndata = column_transformer.fit_transform(df_cleaned)\ndata.shape\n\nlogistic = LogisticRegression(solver='liblinear')\nlogistic.fit(data, treatment)\n\npscore = logistic.predict_proba(data)[:, 1]\npscore\n\nroc_auc_score(treatment, pscore)",
"We won't be spending too much time tweaking the model here; checking some evaluation metric of the model serves as a quick sanity check.\nOnce the propensity score is estimated, it is useful to look for overlap before jumping straight into the matching process. By overlap, we are referring to comparing the distribution of the propensity score for the subjects in the control group against that of the treatment group.",
"mask = treatment == 1\npos_pscore = pscore[mask]\nneg_pscore = pscore[~mask]\nprint('treatment count:', pos_pscore.shape)\nprint('control count:', neg_pscore.shape)",
"Looking at the plot below, we can see that our features, $X$, do in fact contain information about the subject receiving treatment. The distributional difference between the propensity scores for the two groups justifies the need for matching, since they are not directly comparable otherwise.\nAlthough there's a distributional difference in the density plot, in this case what we see is that there's overlap everywhere, so this is actually the kind of plot we would like to see if we're going to do propensity score matching. What we mean by overlap is that no matter where we look on the plot, even though there might be more control than treatment or vice versa, there will still be some subjects from either group. The notion of overlap means that our positivity assumption is probably reasonable. Remember, positivity refers to the situation where all of the subjects in the study have at least some chance of receiving either treatment. And that appears to be the case here, hence this would be a situation where we would feel comfortable proceeding with our propensity score matching.",
"# change default style figure and font size\nplt.rcParams['figure.figsize'] = 8, 6\nplt.rcParams['font.size'] = 12\n\nsns.distplot(neg_pscore, label='control')\nsns.distplot(pos_pscore, label='treatment')\nplt.xlim(0, 1)\nplt.title('Propensity Score Distribution of Control vs Treatment')\nplt.ylabel('Density')\nplt.xlabel('Scores')\nplt.legend()\nplt.tight_layout()\nplt.show()",
"Keep in mind that not every plot will look like this. If there's a major lack of overlap in some part of the propensity score distribution plot, that means our positivity assumption would essentially be violated. In other words, we can't really estimate a causal effect in those areas of the distribution, since the subjects there have close to zero chance of being in the control/treatment group. When encountered with this scenario, we may wish to either look and see if we're missing some covariates, or get rid of individuals who have extreme propensity scores and focus on the areas where there is strong overlap.\nThe next step is to perform matching. In general, the procedure looks like this:\n\nWe compute the distance between the estimated propensity score for each treated subject and every control. For every treated subject we find the subject in the control group that has the closest distance to it. These pairs are \"matched\" together and will be included in the final dataset that will be used to estimate the causal effect.\n\nBut in practice there are actually many different variants of the step mentioned above, e.g.:\nFirst: We mentioned that when there's a lack of balance, we can get rid of individuals who have extreme propensity scores. Some examples of doing this include removing control subjects whose propensity score is less than the minimum in the treatment group and removing treated subjects whose propensity score is greater than the maximum in the control group.\nSecond: Some people would only consider a treatment and control subject to be a match if the difference between their propensity scores is less than a specified threshold, $\\delta$ (this threshold is also referred to as a caliper).
In other words, given a user in the treatment group, $u_t$, we find the set of candidate matches from the control group.\n\begin{align}\nC(u_t) = \{ u_c \in \text{control} : |\pi_{u_c} - \pi_{u_t}| \leq \delta \}\n\end{align}\nIf $|C(u_t)| = 0$, $u_t$ is not matched, and is excluded from further consideration. Otherwise, we select the control user $u_c$ satisfying:\n\begin{align}\n\text{argmin}_{u_c \in C(u_t)} \big|\pi_{u_c} - \pi_{u_t}\big|\n\end{align}\nand retain the pair of users. Reducing the $\delta$ parameter improves the balance of the final dataset at the cost of reducing its size; we can experiment with different values and see if we retain the majority of the test group.\nThird: A single user in the control group can potentially be matched to multiple users in the treatment group. To account for this, we can weight each matched record with the inverse of its frequency, i.e. if a control group user occurred 4 times in the matched dataset, we assign that record a weight of 1/4. We may wish to check whether duplicates occur a lot in the final matched dataset. Some implementations give a flag for whether this multiple matched control group scenario is allowed, i.e. whether matching with replacement is allowed. If replacement is not allowed, then matches generally will be found in the same order as the data are sorted. Thus, the match(es) for the first observation will be found first, the match(es) for the second observation will be found second, etc. Matching without replacement will generally increase bias.\nHere, what we'll do is: for every record in the treatment group we find its closest record in the control group, without controlling for distance threshold or duplicates.",
"def get_similar(pos_pscore: np.ndarray, neg_pscore: np.ndarray, topn: int=5, n_jobs: int=1):\n from sklearn.neighbors import NearestNeighbors\n\n # the treatment and control groups are disjoint sets, so a query point can\n # never match itself; keep all topn neighbors instead of skipping the first\n knn = NearestNeighbors(n_neighbors=topn, metric='euclidean', n_jobs=n_jobs)\n knn.fit(neg_pscore.reshape(-1, 1))\n\n sim_distances, sim_indices = knn.kneighbors(pos_pscore.reshape(-1, 1))\n return sim_distances, sim_indices\n\n\nsim_distances, sim_indices = get_similar(pos_pscore, neg_pscore, topn=1)\nsim_indices",
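If we wanted the caliper variant described above, it can be layered on top of the same nearest-neighbor machinery; the `delta` value and the toy score arrays below are made up for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def caliper_match(pos_pscore, neg_pscore, delta=0.05):
    """1-NN propensity score matching, keeping only pairs within delta."""
    knn = NearestNeighbors(n_neighbors=1, metric='euclidean')
    knn.fit(neg_pscore.reshape(-1, 1))
    distances, indices = knn.kneighbors(pos_pscore.reshape(-1, 1))
    keep = distances[:, 0] <= delta  # treated subjects with a close enough control
    return np.flatnonzero(keep), indices[keep, 0]

# toy scores: the last treated subject has no control within the caliper
pos = np.array([0.30, 0.52, 0.90])
neg = np.array([0.28, 0.50, 0.55])
treated_idx, control_idx = caliper_match(pos, neg, delta=0.05)
print(treated_idx, control_idx)  # [0 1] [0 1]
```

Shrinking `delta` trades matched-sample size for tighter balance, exactly the trade-off described in the text.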
"We can still check the number of occurrences for the matched control records. As mentioned in the previous section, we can add this information as weights to our dataset, but we won't be doing that here.",
"_, counts = np.unique(sim_indices[:, 0], return_counts=True)\nnp.bincount(counts)",
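The inverse-frequency weighting mentioned earlier (and deliberately skipped in this notebook) could be sketched as follows; the `sim_indices` array here is a made-up stand-in for the matched control indices.

```python
import numpy as np

# hypothetical matched control indices: control 7 was matched to two treated subjects
sim_indices = np.array([[3], [7], [7], [12]])

matched = sim_indices[:, 0]
idx, counts = np.unique(matched, return_counts=True)
weight_map = dict(zip(idx, 1.0 / counts))

# each matched control record is down-weighted by how often it was reused
weights = np.array([weight_map[i] for i in matched])
print(weights)  # [1.  0.5 0.5 1. ]
```

These weights could then be passed to a weighted outcome analysis so that a frequently reused control does not dominate the estimate.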
"After applying the matching procedure, it's important to check and validate that the matched groups are indeed indistinguishable in terms of the covariates that we were using to balance the control and treatment group.",
"df_cleaned[TREATMENT] = treatment\ndf_cleaned[DEATH] = death\ndf_pos = df_cleaned[mask]\ndf_neg = df_cleaned[~mask].iloc[sim_indices[:, 0]]\ndf_matched = pd.concat([df_pos, df_neg], axis=0)\ndf_matched.head()\n\ntable_one_matched = df_matched.groupby(TREATMENT).agg(agg_operations)\ntable_one_smd_matched = compute_table_one_smd(table_one_matched)\ntable_one_smd_matched",
"Upon completing propensity score matching and verifying that our covariates are now fairly balanced using the standardized mean difference (smd), we can carry out an outcome analysis using a paired t-test. For all the various knobs described when introducing the matching process, we can experiment with different options and see if our conclusions change.",
"num_matched_pairs = df_neg.shape[0]\nprint('number of matched pairs: ', num_matched_pairs)\n\n# pair t-test\nstats.ttest_rel(df_pos[DEATH].values, df_neg[DEATH].values)",
"This result tells us after using matching adjustment to ensure comparability between the treatment and control group, we find that receiving Right Heart Catheterization does have an effect on a patient's chance of dying.\nReference\n\nBlog: Comparative Statistics in Python using SciPy\nCousera: A Crash Course in Causality - Inferring Causal Effects from Observational Data\nGithub: pymatch - Matching techniques for observational studies\nPaper: B. Miroglio, D. Zeber, J. Kaye, R. Weiss - The Effect of Ad Blocking on User Engagement with the Web (2018)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/lucid
|
notebooks/differentiable-parameterizations/appendix/colab_gl.ipynb
|
apache-2.0
|
[
"Copyright 2018 Google LLC.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Using OpenGL with Colab Cloud GPUs\nThis notebook demonstrates obtaining an OpenGL context on GPU Colab kernels.",
"!pip install -q lucid>=0.2.3\n!pip install -q moviepy\n\nimport numpy as np\nimport json\nimport moviepy.editor as mvp\nfrom google.colab import files\n\nimport lucid.misc.io.showing as show\n\nfrom lucid.misc.gl.glcontext import create_opengl_context\n\n# Now it's safe to import OpenGL and EGL functions\nimport OpenGL.GL as gl\n\n# create_opengl_context() creates GL context that is attached to an\n# offscreen surface of specified size. Note that rendering to buffers\n# of different size and format is still possible with OpenGL Framebuffers.\n#\n# Users are expected to directly use EGL calls in case more advanced\n# context management is required.\nWIDTH, HEIGHT = 640, 480\ncreate_opengl_context((WIDTH, HEIGHT))\n \n# OpenGL context is available here.\n\nprint(gl.glGetString(gl.GL_VERSION))\nprint(gl.glGetString(gl.GL_VENDOR)) \n#print(gl.glGetString(gl.GL_EXTENSIONS))\n\n# Let's render something!\n\ngl.glClear(gl.GL_COLOR_BUFFER_BIT)\ngl.glBegin(gl.GL_TRIANGLES)\ngl.glColor3f(1.0, 0.0, 0.0)\ngl.glVertex2f(0, 1)\ngl.glColor3f(0.0, 1.0, 0.0)\ngl.glVertex2f(-1, -1)\ngl.glColor3f(0.0, 0.0, 1.0)\ngl.glVertex2f(1, -1)\ngl.glEnd()\n\n# Read the result\nimg_buf = gl.glReadPixelsub(0, 0, WIDTH, HEIGHT, gl.GL_RGB, gl.GL_UNSIGNED_BYTE)\nimg = np.frombuffer(img_buf, np.uint8).reshape(HEIGHT, WIDTH, 3)[::-1]\nshow.image(img/255.0)",
"Render ShaderToy videos on GPU\nWe now have the full power of modern OpenGL in our hands! Let's do something interesting with it!\nFetching the source and rendering the amazing shader by Kali from ShaderToy. You can also substitute a different shader_id, but note that only single-pass shaders that don't use textures are supported by the code below.",
"shader_id = 'Xtf3Rn' # https://www.shadertoy.com/view/Xtf3Rn\n\nshader_json = !curl -s 'https://www.shadertoy.com/shadertoy' \\\n -H 'Referer: https://www.shadertoy.com/view/$shader_id' \\\n --data 's=%7B%20%22shaders%22%20%3A%20%5B%22$shader_id%22%5D%20%7D'\nshader_data = json.loads(''.join(shader_json))[0]\n\nassert len(shader_data['renderpass']) == 1, \"Only single pass shareds are supported\"\nassert len(shader_data['renderpass'][0]['inputs']) == 0, \"Input channels are not supported\"\n\nshader_code = shader_data['renderpass'][0]['code']\n\nfrom OpenGL.GL import shaders\n\nvertexPositions = np.float32([[-1, -1], [1, -1], [-1, 1], [1, 1]])\nVERTEX_SHADER = shaders.compileShader(\"\"\"\n#version 330\nlayout(location = 0) in vec4 position;\nout vec2 UV;\nvoid main()\n{\n UV = position.xy*0.5+0.5;\n gl_Position = position;\n}\n\"\"\", gl.GL_VERTEX_SHADER)\n\nFRAGMENT_SHADER = shaders.compileShader(\"\"\"\n#version 330\nout vec4 outputColor;\nin vec2 UV;\n\nuniform sampler2D iChannel0;\nuniform vec3 iResolution;\nvec4 iMouse = vec4(0);\nuniform float iTime = 0.0;\n\"\"\" + shader_code + \"\"\"\nvoid main()\n{\n mainImage(outputColor, UV*iResolution.xy);\n}\n\n\"\"\", gl.GL_FRAGMENT_SHADER)\n\nshader = shaders.compileProgram(VERTEX_SHADER, FRAGMENT_SHADER)\n\n\ntime_loc = gl.glGetUniformLocation(shader, 'iTime')\nres_loc = gl.glGetUniformLocation(shader, 'iResolution')\n\ndef render_frame(time):\n gl.glClear(gl.GL_COLOR_BUFFER_BIT)\n with shader:\n gl.glUniform1f(time_loc, time)\n gl.glUniform3f(res_loc, WIDTH, HEIGHT, 1.0)\n \n gl.glEnableVertexAttribArray(0);\n gl.glVertexAttribPointer(0, 2, gl.GL_FLOAT, False, 0, vertexPositions)\n gl.glDrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4)\n img_buf = gl.glReadPixels(0, 0, WIDTH, HEIGHT, gl.GL_RGB, gl.GL_UNSIGNED_BYTE)\n img = np.frombuffer(img_buf, np.uint8).reshape(HEIGHT, WIDTH, 3)[::-1]\n return img\nshow.image(render_frame(10.0)/255.0, format='jpeg')",
"Use MoviePy to generate a video.",
"clip = mvp.VideoClip(render_frame, duration=10.0)\nclip.write_videofile('out.mp4', fps=60)\nfiles.download('out.mp4')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
StefanoAllesina/ISC
|
scientific/solutions/Axelrod1980_solution.ipynb
|
gpl-2.0
|
[
"Solution of Axelrod 1980",
"import numpy as np",
"Implement the five strategies",
"# We are going to implement five strategies. \n# Each strategy takes as input the history of the turns played so far\n# and returns 1 for cooperation and 0 for defection.\n\n# 1) Always defect\ndef always_defect(previous_steps):\n return 0\n\n# 2) Always cooperate\ndef always_cooperate(previous_steps):\n return 1\n\n# 3) Purely random, with probability of defecting 0.5\ndef random(previous_steps):\n if np.random.random(1) > 0.5:\n return 1\n return 0\n\n# 4) Tit for tat\ndef tit_for_tat(previous_steps):\n if len(previous_steps) == 0:\n return 1\n return previous_steps[-1]\n\n# 5) Tit for two tat\ndef tit_for_two_tat(previous_steps):\n if len(previous_steps) < 2:\n return 1\n # if the other player defected twice\n if sum(previous_steps[-2:]) == 0:\n # retaliate\n return 0\n return 1",
"Write a function that accepts the names of two strategies and plays them against each other in a game of iterated prisoner's dilemma for a given number of turns.",
"def play_strategies(strategy_1, strategy_2, nsteps = 200):\n # The following two lines are a bit complicated:\n # we want to match a string (strategy_1) with a name of the function\n # and the call globals()[strategy_1] does just that. Now\n # pl1 is an \"alias\" for the same function.\n pl1 = globals()[strategy_1]\n pl2 = globals()[strategy_2]\n # If you prefer, you can deal with this problem by using\n # a series of if elif.\n \n # Now two vectors to store the moves of the players\n steps_pl1 = []\n steps_pl2 = []\n # And two variables for keeping the scores. \n # (because we said these are numbers of years in prison, we \n # use negative payoffs, with less negative being better)\n points_pl1 = 0\n points_pl2 = 0\n # Iterate over the number of steps\n for i in range(nsteps):\n # decide strategy:\n # player 1 chooses using the history of the moves by player 2\n last_pl1 = pl1(steps_pl2) \n # and vice versa\n last_pl2 = pl2(steps_pl1)\n # calculate payoff\n if last_pl1 == 1 and last_pl2 == 1:\n # both cooperate -> -1 point each\n points_pl1 = points_pl1 - 1\n points_pl2 = points_pl2 - 1\n elif last_pl1 == 0 and last_pl2 == 1:\n # pl2 lose\n points_pl1 = points_pl1 - 0\n points_pl2 = points_pl2 - 3\n elif last_pl1 == 1 and last_pl2 == 0:\n # pl1 lose\n points_pl1 = points_pl1 - 3\n points_pl2 = points_pl2 - 0\n else:\n # both defect\n points_pl1 = points_pl1 - 2\n points_pl2 = points_pl2 - 2\n # add the moves to the history\n steps_pl1.append(last_pl1)\n steps_pl2.append(last_pl2)\n # return the final scores\n return((points_pl1, points_pl2))\n\nplay_strategies(\"random\", \"always_defect\")",
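As a design alternative, the if/elif payoff logic above can be replaced by a dictionary lookup keyed on the pair of moves; the payoffs below mirror the ones hard-coded in `play_strategies`.

```python
# payoff matrix: (move_pl1, move_pl2) -> (points_pl1, points_pl2)
# 1 = cooperate, 0 = defect; payoffs are negative (years in prison)
PAYOFFS = {
    (1, 1): (-1, -1),  # both cooperate
    (0, 1): (0, -3),   # pl1 defects, pl2 loses
    (1, 0): (-3, 0),   # pl2 defects, pl1 loses
    (0, 0): (-2, -2),  # both defect
}

delta_pl1, delta_pl2 = PAYOFFS[(1, 0)]
print(delta_pl1, delta_pl2)  # -3 0
```

Inside the game loop, `points_pl1 += delta_pl1; points_pl2 += delta_pl2` then replaces the four-branch conditional, making the payoff structure easy to swap out for other games.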
"Implement a round-robin tournament, in which each strategy is played against every other (including against itself) for 10 rounds of 1000 turns each.",
"def round_robin(strategies, nround, nstep):\n nstrategies = len(strategies)\n # initialize list for results\n strategies_points = [0] * nstrategies\n # for each pair\n for i in range(nstrategies):\n for j in range(i, nstrategies):\n print(\"Playing\", strategies[i], \"vs.\", strategies[j])\n for k in range(nround):\n res = play_strategies(strategies[i], \n strategies[j], \n nstep)\n #print(res)\n strategies_points[i] = strategies_points[i] + res[0]\n strategies_points[j] = strategies_points[j] + res[1]\n print(\"\\nThe final results are:\")\n for i in range(nstrategies):\n print(strategies[i] + \":\", strategies_points[i])\n print(\"\\nand the winner is....\")\n print(strategies[strategies_points.index(max(strategies_points))])\n \n\nmy_strategies = [\"always_defect\",\n \"always_cooperate\", \n \"random\", \n \"tit_for_tat\", \n \"tit_for_two_tat\"]\n\nround_robin(my_strategies, 10, 1000)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
deepchem/deepchem
|
examples/tutorials/About_nODE_Using_Torchdiffeq_in_Deepchem.ipynb
|
mit
|
[
"About Neural ODEs : Using Torchdiffeq with DeepChem\nAuthor : Anshuman Mishra : Linkedin\n\nBefore getting our hands dirty with code, let us first understand a little about what Neural ODEs are.\nNeural ODEs and torchdiffeq\nNeuralODE stands for \"Neural Ordinary Differential Equation\". You heard right: it does have something to do with the differential equations we studied in school. Let's see the formal definition as stated by the original paper: \n```\nNeural ODEs are a new family of deep neural network models. Instead of specifying a discrete sequence of \nhidden layers, we parameterize the derivative of the hidden state using a neural network.\nThe output of the network is computed using a blackbox differential equation solver. These are continuous-depth models that have constant memory \ncost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed.\n```\nIn simple words, think of a NeuralODE as yet another type of layer, like Linear, Conv2D, or MHA.\nIn this tutorial we will be using torchdiffeq. This library provides ordinary differential equation (ODE) solvers implemented in the PyTorch framework. The library provides a clean API of ODE solvers for use in deep learning applications. Because the solvers are implemented in PyTorch, the algorithms in this repository are fully supported to run on the GPU.\nWhat will you learn after completing this tutorial?\n\nHow to implement a Neural ODE in a neural network\nUsing torchdiffeq with DeepChem\n\nInstalling Libraries",
"!pip install torchdiffeq\n!pip install --pre deepchem",
"Import Libraries",
"import torch\nimport torch.nn as nn\n\nfrom torchdiffeq import odeint\nimport math\nimport numpy as np\n\nimport deepchem as dc\nimport matplotlib.pyplot as plt",
"Before diving into the core of this tutorial, let's first acquaint ourselves with the usage of torchdiffeq by solving the following differential equation.\n$ \\frac{dz(t)}{dt} = f(t) = t $\nwhen $z(0) = 0$\nSolving it by hand:\n$\\int dz = \\int tdt+C \\\\ z(t) = \\frac{t^2}{2} + C$\nNow let's solve it using the ODE solver odeint from torchdiffeq.",
"def f(t,z):\n    return t\n\nz0 = torch.Tensor([0])\nt = torch.linspace(0,2,100)\nout = odeint(f, z0, t)",
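This snippet is not part of the original notebook, but to build intuition for what odeint is doing, here is a minimal fixed-step forward-Euler integration of the same equation in plain Python (odeint itself uses adaptive, higher-order solvers):

```python
def euler_solve(f, z0, ts):
    # Fixed-step forward Euler: z_{n+1} = z_n + f(t_n, z_n) * dt
    zs = [z0]
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        zs.append(zs[-1] + f(ts[i], zs[-1]) * dt)
    return zs

ts = [i * 2 / 100 for i in range(101)]     # grid from 0 to 2
zs = euler_solve(lambda t, z: t, 0.0, ts)

# The analytic solution with z(0) = 0 is z(t) = t^2 / 2, so z(2) should be close to 2.0
print(abs(zs[-1] - 2.0) < 0.05)
```

The Euler endpoint lands near the analytic value 2.0, which is the same check you can run against the odeint output above.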
"Let's plot our result. It should be a parabola (recall the general equation of a parabola, $x^2 = 4ay$).",
"plt.plot(t, out, 'go--')\nplt.gca().set_aspect('equal', 'datalim')  # gca() reuses the current axes; plt.axes() would create a new one\nplt.grid()\nplt.show()",
"What is a Neural Differential Equation ?\nA neural differential equation is a differential equation that uses a neural network to parameterize the vector field. The canonical example is a neural ordinary differential equation:\n$y(0) = y_0$\n$\\frac{dy}{dt} (t) = f_\\theta(t,y(t)) $\nHere $\\theta$ represents some vector of learnt parameters, $ f_\\theta : \\mathbb{R} \\times \\mathbb{R}^{d_1 \\times ... \\times d_k} \\to \\mathbb{R}^{d_1 \\times ... \\times d_k}$ is any standard neural architecture and $ y:[0, T] \\to \\mathbb{R}^{d_1 \\times ... \\times d_k} $ is the solution. For many applications $f_\\theta$ will just be a simple feedforward network. Here $d_i$ is the dimension. \nReference\nThe central idea now is to use a differential equation solver as part of a learnt differentiable computation graph (the sort of computation graph ubiquitous in deep learning).\n\nAs a simple example, suppose we observe some picture $y_0 \\in \\mathbb{R}^{3 \\times 32 \\times 32}$ (RGB and 32x32 pixels), and wish to classify it as a picture of a cat or as a picture of a dog.\nWith torchdiffeq, we can solve even complex higher-order differential equations. The following is a real-world example, a set of differential equations that models a spring-mass damper system:\n$\\dot{x}= \\frac{dx}{dt} $\n$\\ddot{x} = -(k/m) x + p \\dot{x} $\n$\\dddot{x} = g \\dot{x} - r \\ddot{x}$\nwith initial state t=0 , x=1\n$$\n\\left[ \\begin{array}{c} \\dot{x} \\\\ \\ddot{x} \\\\ \\dddot{x} \\end{array} \\right] = \\left[\\begin{array}{ccc} 0 & 1 & 0\\\\ -\\frac{k}{m} & p & 0\\\\ 0 & g & -r \\end{array} \\right]\n\\left[ \\begin{array}{c} x \\\\ \\dot{x}\\\\ \\ddot{x} \\\\ \\end{array} \\right]\n$$\nThe right-hand side may be regarded as a particular differentiable computation graph. The parameters may be fitted by setting up a loss between the trajectories of the model and the observed trajectories in the data, backpropagating through the model, and applying stochastic gradient descent.",
"class SystemOfEquations:\n\n    def __init__(self, km, p, g, r):\n        self.mat = torch.Tensor([[0,1,0],[-km, p, 0],[0,g,-r]])\n\n    def solve(self, t, x0, dx0, ddx0):\n        y0 = torch.cat([x0, dx0, ddx0])\n        out = odeint(self.func, y0, t)\n        return out\n    \n    def func(self, t, y):\n        # dy/dt = M y, so the matrix multiplies the state vector on the left\n        out = self.mat @ y\n        return out\n\n\nx0 = torch.Tensor([1])\ndx0 = torch.Tensor([0])\nddx0 = torch.Tensor([1])\n\nt = torch.linspace(0, 4*np.pi, 1000)\nsolver = SystemOfEquations(1,6,3,2)\nout = solver.solve(t, x0, dx0, ddx0)\n\nplt.plot(t, out, 'r')\nplt.grid()\nplt.show()",
"This is precisely the same procedure as the more general neural ODEs we introduced\nearlier. At first glance, the NDE approach of ‘putting a neural network in a differential\nequation’ may seem unusual, but it is actually in line with standard practice. All that\nhas happened is to change the parameterisation of the vector field.\nModel\nLet us have a look at how to embed an ODEsolver in a neural network .",
"from torchdiffeq import odeint_adjoint as odeadj\n\nclass f(nn.Module):\n def __init__(self, dim):\n super(f, self).__init__()\n self.model = nn.Sequential(\n nn.Linear(dim,124),\n nn.ReLU(),\n nn.Linear(124,124),\n nn.ReLU(),\n nn.Linear(124,dim),\n nn.Tanh()\n )\n\n def forward(self, t, x):\n return self.model(x)",
"The function f in the code cell above is wrapped in an nn.Module (see the code cell below), thus forming the dynamics $\\frac{dy}{dt} (t) = f_\\theta(t,y(t)) $ embedded within a neural network.\nODEBlock treats the received input x as the initial value of the differential equation. The integration interval of ODEBlock is fixed at [0, 1], and it returns the output of the layer at $ t = 1 $.",
"class ODEBlock(nn.Module):\n \n # This is ODEBlock. Think of it as a wrapper over ODE Solver , so as to easily connect it with our neurons !\n\n def __init__(self, f):\n super(ODEBlock, self).__init__()\n self.f = f\n self.integration_time = torch.Tensor([0,1]).float()\n\n def forward(self, x):\n self.integration_time = self.integration_time.type_as(x)\n out = odeadj(\n self.f,\n x,\n self.integration_time\n )\n\n return out[1]\n\n\nclass ODENet(nn.Module):\n \n #This is our main neural network that uses ODEBlock within a sequential module\n\n def __init__(self, in_dim, mid_dim, out_dim):\n super(ODENet, self).__init__()\n fx = f(dim=mid_dim)\n self.fc1 = nn.Linear(in_dim, mid_dim)\n self.relu1 = nn.ReLU(inplace=True)\n self.norm1 = nn.BatchNorm1d(mid_dim)\n self.ode_block = ODEBlock(fx)\n self.dropout = nn.Dropout(0.4)\n self.norm2 = nn.BatchNorm1d(mid_dim)\n self.fc2 = nn.Linear(mid_dim, out_dim)\n\n def forward(self, x):\n batch_size = x.shape[0]\n x = x.view(batch_size, -1)\n\n out = self.fc1(x)\n out = self.relu1(out)\n out = self.norm1(out)\n out = self.ode_block(out)\n out = self.norm2(out)\n out = self.dropout(out)\n out = self.fc2(out)\n\n return out",
"As mentioned before, Neural ODE networks behave similarly to other neural networks (with some advantages), so we can solve with them any problem that existing models can. We are going to reuse the training process described in this deepchem tutorial.\nRather than demonstrating how to use a NeuralODE model with a generic dataset, we shall use the Delaney solubility dataset provided with deepchem. Our model will learn to predict the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs). For the performance metric we use pearson_r2_score. Here the loss is computed directly from the model's output.",
"tasks, dataset, transformers = dc.molnet.load_delaney(featurizer='ECFP', splitter='random')\ntrain_set, valid_set, test_set = dataset\nmetric = dc.metrics.Metric(dc.metrics.pearson_r2_score)",
"Time to Train\nWe train our model for 50 epochs, with L2 as the loss function.",
"# Like mentioned before one can use GPUs with PyTorch and torchdiffeq\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n\nmodel = ODENet(in_dim=1024, mid_dim=1000, out_dim=1).to(device)\nmodel = dc.models.TorchModel(model, dc.models.losses.L2Loss())\n\nmodel.fit(train_set, nb_epoch=50)\n\nprint('Training set score : ', model.evaluate(train_set,[metric]))\nprint('Test set score : ', model.evaluate(test_set,[metric]))",
"Neural ODEs are invertible neural nets Reference\nInvertible neural networks have been a significant thread of research in the ICML community for several years. Such transformations can offer a range of unique benefits: \n\nThey preserve information, allowing perfect reconstruction (up to numerical limits) and obviating the need to store hidden activations in memory for backpropagation. \nThey are often designed to track the changes in probability density that applying the transformation induces (as in normalizing flows). \nLike autoregressive models, normalizing flows can be powerful generative models which allow exact likelihood computations; with the right architecture, they can also allow for much cheaper sampling than autoregressive models. \n\nWhile many researchers are aware of these topics and intrigued by several high-profile papers, few are familiar enough with the technical details to easily follow new developments and contribute. Many may also be unaware of the wide range of applications of invertible neural networks, beyond generative modelling and variational inference.\nCongratulations! Time to join the Community!\nCongratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\nStar DeepChem on GitHub\nThis helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\nJoin the DeepChem Gitter\nThe DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NREL/bifacial_radiance
|
docs/tutorials/3 - Medium Level Example - Single Axis Tracking - hourly.ipynb
|
bsd-3-clause
|
[
"3 - Medium Level Example - 1-Axis tracker by hour (gendaylit)\nExample demonstrating how to do hourly simulations with Radiance gendaylit for 1-axis tracking. This is a medium level example because it also explores a couple of subtopics:\nSubtopics:\n<ul>\n    <li> The structure of the tracker dictionary \"trackerDict\". </li>\n    <li> How to calculate GCR </li>\n    <li> How to make a cell-level module </li>\n    <li> Various methods to use the tracker dictionary for analysis. </li>\n</ul>\n\nDoing full year simulations with gendaylit:\nPerforming the simulation hour by hour requires either a good computer or some patience, since there are ~4000 daylight hours in the year. With 32GB RAM and a Windows 10 i7-8700 CPU @ 3.2GHz with 6 cores, this takes 1 day. The code also allows for multiple cores or HPC use -- there is documentation/examples inside the software at the moment, but that is an advanced topic. The procedure can be broken into shorter steps for one day or a single timestamp simulation, which is exemplified below.\nSteps:\n<ol>\n    <li> <a href='#step1'> Load bifacial_radiance </a></li> \n    <li> <a href='#step2'> Define all your system variables </a></li> \n    <li> <a href='#step3'> Create Radiance Object, Set Albedo and Weather </a></li> \n    <li> <a href='#step4'> Make Module: Cell Level Module Example </a></li> \n    <li> <a href='#step5'> Calculate GCR</a></li> \n    <li> <a href='#step6'> Set Tracking Angles </a></li> \n    <li> <a href='#step7'> Generate the Sky </a></li> \n    <li> <a href='#step8'> Make Scene 1axis </a></li> \n    <li> <ol type=\"A\"><li><a href='#step9a'> Make Oct and Analyze 1 Hour </a></li> \n        <li> <a href='#step9b'> Make Oct and Analyze a Range of Hours </a></li> \n        <li> <a href='#step9c'> Make Oct and Analyze All Tracking Dictionary </a></li> </ol>\n</ol>\n\nAnd finally: <ul> <a href='#condensed'> Condensed Version: All Tracking Dictionary </a></ul> \n<a id='step1'></a>\n1. Load bifacial_radiance\nPay attention: different importing method:\nSo far we've used \"from bifacial_radiance import *\" to import all the bifacial_radiance files into our working space in jupyter. For this journal we will do \"import bifacial_radiance\". This method of importing requires a different call for some functions, as you'll see below. For example, instead of calling demo = RadianceObj(path = testfolder) as in Tutorial 2, in this case we will need to do demo = bifacial_radiance.RadianceObj(path = testfolder).",
"import bifacial_radiance\nimport numpy as np\nimport os # operating system utilities, used to build the relative-path testfolder for this example\nimport pprint # We will be pretty-printing the trackerdictionary throughout to show its structure.\nfrom pathlib import Path",
"<a id='step2'></a>\n2. Define all your system variables\nJust like in the condensed version shown at the end of Tutorial 2, for this tutorial we will be defining all of our system variables at the beginning of the jupyter journal, rather than throughout the different cells (for the most part).",
"testfolder = Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_03'\nif not os.path.exists(testfolder):\n    os.makedirs(testfolder)\n\n\nsimulationName = 'tutorial_03' # For adding a simulation name when defining RadianceObj. This is optional.\nmoduletype = 'test-module' # We will define the parameters for this below in Step 4.\nalbedo = \"litesoil\" # this is one of the options on ground.rad\nlat = 37.5 \nlon = -77.6\n\n# Scene variables\nnMods = 20\nnRows = 7\nhub_height = 2.3 # meters\npitch = 10 # meters # We will be using pitch instead of GCR for this example.\n\n# Tracking parameters\ncumulativesky = False\nlimit_angle = 45 # tracker rotation limit angle\nangledelta = 0.01 # we will be doing hourly simulation, so we want the angle to be as close to real tracking as possible.\nbacktrack = True \n\n# makeModule parameters\n# x and y will be defined later on Step 4 for this tutorial!!\nxgap = 0.01\nygap = 0.10\nzgap = 0.05\nnumpanels = 2\naxisofrotation = True # the scene will rotate around the torque tube, and not the middle of the bottom surface of the module\ndiameter = 0.1\ntubetype = 'Oct' # This will make an octagonal torque tube.\nmaterial = 'black' # Torque tube of this material (0% reflectivity)\n",
"<a id='step3'></a>\n3. Create Radiance Object, Set Albedo and Weather\nSame steps as previous two tutorials, so condensing it into one step. You hopefully have this down by now! :)\n<div class=\"alert alert-warning\">\nNotice that we are doing bifacial_radiance.RadianceObj because we change the import method for this example!\n</div>\n\nWe now constrain the days of our analysis in the readWeatherFile import step. For this example we are doing just two days in January. Format has to be a 'MM_DD' or 'YYYY-MM-DD_HHMM'",
"demo = bifacial_radiance.RadianceObj(simulationName, path = str(testfolder)) # Adding a simulation name. This is optional.\ndemo.setGround(albedo) \nepwfile = demo.getEPW(lat=lat, lon=lon) \n\nstarttime = '01_13'; endtime = '01_14'\nmetdata = demo.readWeatherFile(weatherFile=epwfile, starttime=starttime, endtime=endtime) \n",
"<a id='step4'></a>\n4. Make Module: Cell Level Module Example\nInstead of making an opaque, flat, single-surface module, in this tutorial we will create a module made up of cells. We can define various parameters for a cell-level module, such as cell size and spacing between cells. To do this, we will pass a dictionary with the needed parameters to makeModule, as shown below. \nNOTE: in v0.4.0 some keywords and methods for doing a CellModule and Torquetube simulation were changed.\n<div class=\"alert alert-warning\">\nSince we are making a cell-level module, the dimensions for x and y of the module will be calculated by the software -- dummy values can be initially passed just to get started, but these values are overwritten by addCellModule()\n </div>",
"numcellsx = 6\nnumcellsy = 12\nxcell = 0.156\nycell = 0.156\nxcellgap = 0.02\nycellgap = 0.02\n\n\nmymodule = demo.makeModule(name=moduletype, x=1, y=1, xgap=xgap, ygap=ygap, \n zgap=zgap, numpanels=numpanels) \nmymodule.addTorquetube(diameter=diameter, material=material,\n axisofrotation=axisofrotation, tubetype=tubetype)\nmymodule.addCellModule(numcellsx=numcellsx, numcellsy=numcellsy,\n xcell=xcell, ycell=ycell, xcellgap=xcellgap, ycellgap=ycellgap)\n\nprint(f'New module created. x={mymodule.x}m, y={mymodule.y}m')\nprint(f'Cell-module parameters: {mymodule.cellModule}')",
"<a id='step5'></a>\n5. Calculate GCR\nIn this example we passed the parameter \"pitch\". Pitch is the spacing between rows (for example, between hub-posts) in a field.\nTo calculate the Ground Coverage Ratio (GCR), we relate the pitch to the collector width: gcr = collector width / pitch.\nThe collector width for our system must consider the number of panels and the y-gap.\nThe collector width gets saved in your module parameters (and later in your scene and trackerdict) as \"sceney\". You can calculate your collector width with the equation, or you can use this method to know your GCR:",
"# For more options on makemodule, see the help description of the function. \n# Details about the module are stored in the new ModuleObj \nCW = mymodule.sceney\ngcr = CW / pitch\nprint (\"The GCR is :\", gcr)\nprint(f\"\\nModuleObj data keys: {mymodule.keys}\")",
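The collector width can also be checked by hand from the cell-level module parameters defined in Step 4. The sketch below assumes the single-panel length y is cells plus cell gaps, and the collector width is panels plus panel gaps, which is how the equations above describe it (the exact sceney value reported by the library may differ slightly by version):

```python
# Recompute the collector width by hand from the tutorial's cell-level module parameters
numcellsy, ycell, ycellgap = 12, 0.156, 0.02
numpanels, ygap, pitch = 2, 0.10, 10

y = numcellsy * ycell + (numcellsy - 1) * ycellgap   # single-panel length (m)
CW = numpanels * y + (numpanels - 1) * ygap          # collector width (m)
gcr = CW / pitch
print(round(y, 3), round(CW, 3), round(gcr, 4))
```

With these numbers the panel length comes out to about 2.092 m, the collector width to about 4.284 m, and the GCR to roughly 0.43.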
"<a id='step6'></a>\n6. Set Tracking Angles\nThis function will read the weather file, and based on the sun position it will calculate the angle the tracker should be at for each hour. It will create metdata files for each of the tracker angles considered.\nFor doing hourly simulations, remember to set cumulativesky = False here!",
"trackerdict = demo.set1axis(metdata=metdata, limit_angle=limit_angle, backtrack=backtrack, \n gcr=gcr, cumulativesky=False)\n\nprint (\"Trackerdict created by set1axis: %s \" % (len(demo.trackerdict))) ",
"set1axis initializes the tracker dictionary, trackerdict, which contains all hours selected from the weather file as keys, for example trackerdict['2021-01-13_1200']. It is a return variable of many of the 1axis functions, but it is also stored inside your RadianceObj (i.e. demo.trackerdict). In this journal we are storing it in a variable to mute the output (otherwise the returned trackerdict contents would print every time).",
"pprint.pprint(trackerdict['2021-01-13_1200'])\n",
"All of the following functions add elements to the tracker dictionary to keep track (ba-dum tsss) of the scene and simulation parameters. In advanced journals we will explore the inner structure of trackerdict. For now, just know it exists :)\n<a id='step7'></a>\n7. Generate the Sky\nWe will create skies for each hour we want to model with the function gendaylit1axis. \nFor this example we are doing just two days in January. The ability to limit the time using gendaylit1axis is deprecated; use readWeatherFile instead.",
"trackerdict = demo.gendaylit1axis() ",
"Since we limited the weather data to our start and end dates in readWeatherFile, the trackerdict contains only the desired days.\nLet's explore our trackerdict:",
"trackerkeys = sorted(trackerdict.keys())\nprint (\"Trackerdict option of hours are: \", trackerkeys)\nprint (\"\")\nprint (\"Contents of trackerdict for sample hour:\")\npprint.pprint(trackerdict[trackerkeys[0]])",
"<a id='step8'></a>\n8. Make Scene 1axis\nWe can use gcr or pitch for our scene dictionary.",
"sceneDict = {'pitch': pitch,'hub_height':hub_height, 'nMods':nMods, 'nRows': nRows} \n\n# making the different scenes for the 1-axis tracking for the dates in trackerdict2.\ntrackerdict = demo.makeScene1axis(trackerdict=trackerdict, module=mymodule, sceneDict=sceneDict) ",
"The scene parameteres are now stored in the trackerdict. To view them and to access them:",
"pprint.pprint(trackerdict[trackerkeys[0]])\n\npprint.pprint(demo.trackerdict[trackerkeys[5]]['scene'].__dict__)",
"<a id='step9a'></a>\n9. Make Oct and Analyze\nA. Make Oct and Analyze 1 Hour\nThere are various options now to analyze the trackerdict hours we have defined. We will start by doing just one hour, because it's the fastest. Make sure to select an hour that exists in your trackerdict!\nThe available hours are:",
"pprint.pprint(trackerkeys)\n\ndemo.makeOct1axis(singleindex='2021-01-13_0800')\nresults = demo.analysis1axis(singleindex='2021-01-13_0800')\nprint('\\n\\nHourly bifi gain: {:0.3}'.format(sum(demo.Wm2Back) / sum(demo.Wm2Front)))",
"The trackerdict now contains information about the octfile, as well as the Analysis Object results",
"print (\"\\n Contents of trackerdict for sample hour after analysis1axis: \")\npprint.pprint(trackerdict[trackerkeys[0]])\n\n\npprint.pprint(trackerdict[trackerkeys[0]]['AnalysisObj'].__dict__)",
"<a id='step9b'></a>\nB. Make Oct and Analyze a Range of Hours\nYou can do a list of indices following a similar procedure:",
"for time in ['2021-01-13_0900','2021-01-13_1300']: \n demo.makeOct1axis(singleindex=time)\n results=demo.analysis1axis(singleindex=time)\n\nprint('Accumulated hourly bifi gain: {:0.3}'.format(sum(demo.Wm2Back) / sum(demo.Wm2Front)))",
"Note that the bifacial gain printed above is for the accumulated irradiance over the hours modeled so far. \nThat is, demo.Wm2Back and demo.Wm2Front cover January 13 at 8AM, 9AM, and 1PM. Compare demo.Wm2Back below with what we had before:",
"demo.Wm2Back",
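The distinction between accumulated and per-hour gain is simple list arithmetic; a sketch with made-up irradiance numbers (purely illustrative, not values from the simulation above):

```python
# Hypothetical per-hour rear/front irradiance sums (W/m2), purely illustrative
front = [400.0, 500.0, 600.0]   # e.g. 8AM, 9AM, 1PM
back  = [40.0,  60.0,  90.0]

accumulated_gain = sum(back) / sum(front)             # analogous to sum(demo.Wm2Back)/sum(demo.Wm2Front)
per_hour_gain = [b / f for b, f in zip(back, front)]  # per-timestamp gains
print(round(accumulated_gain, 4), [round(g, 3) for g in per_hour_gain])
```

The accumulated ratio weights each hour by its irradiance, so it is not the average of the per-hour gains.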
"To print the specific bifacial gain for a specific hour, you can use the following:",
"sum(trackerdict['2021-01-13_1300']['AnalysisObj'].Wm2Back) / sum(trackerdict['2021-01-13_1300']['AnalysisObj'].Wm2Front)",
"<a id='step9c'></a>\nC. Make Oct and Analyze All Tracking Dictionary\nThis takes considerably more time, depending on the number of entries on the trackerdictionary. If no starttime and endtime were specified on STEP readWeatherFile, this will run ALL of the hours in the year (~4000 hours).",
"demo.makeOct1axis()\nresults = demo.analysis1axis()\nprint('Accumulated hourly bifi gain for all the trackerdict: {:0.3}'.format(sum(demo.Wm2Back) / sum(demo.Wm2Front)))\n",
"<div class=\"alert alert-warning\">\nRemember you should clean your results first! Performed this way, the results will include torquetube and sky scans, so don't trust these simplistic bifacial_gain examples.\n</div>\n\n<a id='condensed'></a>\nCondensed Version: All Tracking Dictionary\nThis is the summarized version that runs gendaylit for each entry in the tracking dictionary.",
"import bifacial_radiance\nimport os \n\nsimulationName = 'Tutorial 3'\nmoduletype = 'Custom Cell-Level Module' \ntestfolder = os.path.abspath(r'..\\..\\bifacial_radiance\\TEMP')\nalbedo = \"litesoil\" \nlat = 37.5 \nlon = -77.6\n\n# Scene variables\nnMods = 20\nnRows = 7\nhub_height = 2.3 # meters\npitch = 10 # meters \n\n# Tracking parameters\ncumulativesky = False\nlimit_angle = 45 # degrees \nangledelta = 0.01 # \nbacktrack = True \n\n# makeModule parameters\n# x and y do not need to be defined, as they are calculated internally for cell-level modules\nxgap = 0.01\nygap = 0.10\nzgap = 0.05\nnumpanels = 2\n\ncellModuleParams = {'numcellsx': 6, \n'numcellsy': 12,\n'xcell': 0.156,\n'ycell': 0.156,\n'xcellgap': 0.02,\n'ycellgap': 0.02}\n\n\n\ntorquetube = True\naxisofrotation = True # the scene will rotate around the torque tube, and not the middle of the bottom surface of the module\ndiameter = 0.1\ntubetype = 'Oct' # This will make an octagonal torque tube.\nmaterial = 'black' # Torque tube material (0% reflectivity)\ntubeParams = {'diameter':diameter,\n    'tubetype':tubetype,\n    'material':material,\n    'axisofrotation':axisofrotation}\n\nstartdate = '11_06' \nenddate = '11_06'\ndemo = bifacial_radiance.RadianceObj(simulationName, path=testfolder) \ndemo.setGround(albedo) \nepwfile = demo.getEPW(lat,lon) \nmetdata = demo.readWeatherFile(epwfile, starttime=startdate, endtime=enddate) \nmymodule = bifacial_radiance.ModuleObj(name=moduletype, xgap=xgap, ygap=ygap, \n    zgap=zgap, numpanels=numpanels, cellModule=cellModuleParams, tubeParams=tubeParams)\ngcr = mymodule.sceney / pitch # derive gcr from the collector width as in Step 5; it was previously undefined in this condensed cell\nsceneDict = {'pitch':pitch,'hub_height':hub_height, 'nMods': nMods, 'nRows': nRows} \ndemo.set1axis(limit_angle = limit_angle, backtrack = backtrack, gcr = gcr, cumulativesky = cumulativesky)\ndemo.gendaylit1axis()\ndemo.makeScene1axis(module=mymodule, sceneDict=sceneDict) # makeScene creates a .rad file with 20 modules per row, 7 rows.\ndemo.makeOct1axis()\ndemo.analysis1axis()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
root-mirror/training
|
SoftwareCarpentry/exercises/solution-rdataframe-dimuon.ipynb
|
gpl-2.0
|
[
"ROOT dataframe tutorial: Dimuon spectrum\nThis tutorial shows you how to analyze datasets using RDataFrame from a Python notebook. The example analysis performs the following steps:\n\nConnect a ROOT dataframe to a dataset containing 61 million events recorded by CMS in 2012\nFilter the events relevant for your analysis\nCompute the invariant mass of the selected dimuon candidates\nPlot the invariant mass spectrum, showing resonances up to the Z mass\n\nThis material is based on the analysis done by Stefan Wunsch, available here in CERN's Open Data portal.\n<center><img src=\"../images/dimuonSpectrum.png\"></center>",
"import ROOT ",
"Create a ROOT dataframe in Python\nFirst we will create a ROOT dataframe that is connected to a dataset named Events stored in a ROOT file. The file is pulled in via XRootD from EOS public, but note how it could also be stored in your CERNBox space or in any other EOS repository accessible from SWAN (e.g. the experiment ones).\nThe dataset Events is a TTree and has the following branches:\n| Branch name | Data type | Description |\n|-------------|-----------|-------------|\n| nMuon | unsigned int | Number of muons in this event |\n| Muon_pt | float[nMuon] | Transverse momentum of the muons stored as an array of size nMuon |\n| Muon_eta | float[nMuon] | Pseudo-rapidity of the muons stored as an array of size nMuon |\n| Muon_phi | float[nMuon] | Azimuth of the muons stored as an array of size nMuon |\n| Muon_charge | int[nMuon] | Charge of the muons stored as an array of size nMuon and either -1 or 1 |\n| Muon_mass | float[nMuon] | Mass of the muons stored as an array of size nMuon |",
"treename = \"Events\"\nfilename = \"root://eospublic.cern.ch//eos/opendata/cms/derived-data/AOD2NanoAODOutreachTool/Run2012BC_DoubleMuParked_Muons.root\"\ndf = ROOT.RDataFrame(treename, filename)",
"Run only on a part of the dataset\nThe full dataset contains half a year of CMS data taking in 2012, with 61 million events. For the purpose of this example, we use the Range node to run only on a small part of the dataset. This feature also comes in handy during the development phase of your analysis.\nFeel free to experiment with this parameter!",
"# Take only the first 1M events\ndf_range = df.Range(1000000)",
"Filter relevant events for this analysis\nPhysics datasets are often general purpose datasets and therefore need extensive filtering of the events for the actual analysis. Here, we implement only a simple selection based on the number of muons and the charge to cut down the dataset in events that are relevant for our study.\nIn particular, we are applying two filters to keep:\n1. Events with exactly two muons\n2. Events with muons of opposite charge",
"df_2mu = df_range.Filter(\"nMuon == 2\", \"Events with exactly two muons\")\ndf_oc = df_2mu.Filter(\"Muon_charge[0] != Muon_charge[1]\", \"Muons with opposite charge\")",
"Perform complex operations in Python, efficiently!\nSince we still want to perform complex operations in Python, but plain Python code is prone to be slow and is not thread-safe, you should use C++ functions as much as possible to do the work in your event loop at runtime. This mechanism uses the C++ interpreter cling shipped with ROOT, making this possible in a single line of code.\nNote that we are using here the Define node of the computation graph with a jitted function, calling into a function available in the ROOT library.",
"df_mass = df_oc.Define(\"Dimuon_mass\", \"ROOT::VecOps::InvariantMass(Muon_pt, Muon_eta, Muon_phi, Muon_mass)\")",
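The invariant mass computed here follows the standard two-body formula given each muon's (pt, eta, phi, m). The sketch below is a plain-Python illustration of that kinematics, not the ROOT implementation:

```python
import math

def invariant_mass(pt1, eta1, phi1, m1, pt2, eta2, phi2, m2):
    # Build (E, px, py, pz) four-vectors from collider coordinates
    def four_vector(pt, eta, phi, m):
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        e = math.sqrt(m**2 + px**2 + py**2 + pz**2)
        return e, px, py, pz
    e1, px1, py1, pz1 = four_vector(pt1, eta1, phi1, m1)
    e2, px2, py2, pz2 = four_vector(pt2, eta2, phi2, m2)
    # m^2 = (E1 + E2)^2 - |p1 + p2|^2
    m2_inv = (e1 + e2)**2 - (px1 + px2)**2 - (py1 + py2)**2 - (pz1 + pz2)**2
    return math.sqrt(max(m2_inv, 0.0))

# Two back-to-back muons with pt = 45 GeV give a mass close to 90 GeV (Z-like)
print(round(invariant_mass(45, 0, 0, 0.10566, 45, 0, math.pi, 0.10566), 1))
```

Back-to-back momenta cancel, so the mass is essentially the summed energies, which is why opposite-charge dimuon pairs near the Z resonance cluster around 91 GeV in the spectrum.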
"Make a histogram of the newly created column",
"nbins = 30000\nlow = 0.25\nup = 300\nhisto_name = \"Dimuon_mass\"\nhisto_title = histo_name\n\nh = df_mass.Histo1D((histo_name, histo_title, nbins, low, up), \"Dimuon_mass\")",
"Book a Report of the dataframe filters",
"report = df.Report()",
"Start data processing\nThis is the final step of the analysis: retrieving the result. We are expecting to see a plot of the mass of the dimuon spectrum similar to the one shown at the beginning of this exercise (remember we are running on fewer entries in this exercise). Finally in the last cell we should see a report of the filters applied on the dataset.",
"%%time\n\nROOT.gStyle.SetOptStat(0)\nROOT.gStyle.SetTextFont(42)\nc = ROOT.TCanvas(\"c\", \"\", 800, 700)\nc.SetLogx()\nc.SetLogy()\nh.SetTitle(\"\")\nh.GetXaxis().SetTitle(\"m_{#mu#mu} (GeV)\")\nh.GetXaxis().SetTitleSize(0.04)\nh.GetYaxis().SetTitle(\"N_{Events}\")\nh.GetYaxis().SetTitleSize(0.04)\nh.Draw()\n\nlabel = ROOT.TLatex()\nlabel.SetNDC(True)\nlabel.SetTextSize(0.040)\nlabel.DrawLatex(0.100, 0.920, \"#bf{CMS Open Data}\")\nlabel.SetTextSize(0.030)\nlabel.DrawLatex(0.500, 0.920, \"#sqrt{s} = 8 TeV, L_{int} = 11.6 fb^{-1}\")\n\n%jsroot on\nc.Draw()\n\nreport.Print()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Usherwood/usherwood_ds
|
tutoriais/Conceitos Básicos 2 - Loops, Funções e Classes .ipynb
|
bsd-2-clause
|
[
"author = \"Peter J Usherwood\"\nLoops, Logic Statements, Functions and Classes\nLoops, functions, and classes share a similar principle in programming: they help with repetition.\nLoops\nLoops are the method for iterating through lists; this lets us run the same code for each element of a list.",
"a = [0,5,10,3,2]\n\nfor elemento in a: # 'for' and 'in' are keywords; 'elemento' is just a dummy variable name.\n    print(elemento) # Indentation is very important: Python knows where a loop body is from the indentation of 2 or 4 spaces.\n    # note: in jupyter or pycharm (and most IDEs) you can press 'tab' and it will be translated into 4 spaces!\n\n# The loop is closed here because we are no longer indented\nprint('\\nclosed\\n')\n\n# We can change the elements of the list inside the loop\ni = 0 # This will represent the index\nfor elemento in a:\n    a[i] = elemento*4\n    i = i+1 # Increase the index to correspond with 'elemento'\n    \nprint(a)\n\n# We can write a loop that does the same as the loop above but using:\n\na = [0,5,10,3,2]\n\nfor i in [0,1,2,3,4]:\n    a[i] = a[i]*4\n\nprint(a)",
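A more idiomatic way to loop with an index in Python is enumerate, which yields (index, element) pairs and avoids maintaining the counter i by hand (an addition to the examples above):

```python
a = [0, 5, 10, 3, 2]

# enumerate yields (index, element) pairs, so no manual counter is needed
for i, elemento in enumerate(a):
    a[i] = elemento * 4

print(a)  # [0, 20, 40, 12, 8]
```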
"We also have 'while loops'; they execute until a condition is satisfied. Be careful not to get stuck in an infinite while loop!",
"x = 0\nwhile x < 5:\n print(x)\n x = x+1",
"As Declarações de Lógica\nEstas são expressões começando com 'if' e elas usam lógica para descidir onde o código vai fluxo. Isso significa peças do nosso código talvez não vai executar. \n- A declaração de lógica mais popular é uma 'decleração if'\n- Isso precisa em minimo dois peças, um 'if' e um 'else'\n- O 'if' tem uma decleração onde a resposta é verdade (True) ou falso (False).\n- Dento o 'if' há código que vai executar se a declaração do 'if' está verdade\n- O 'else' não tem uma decleração, ele é por todas as condições quando o 'if' está falso\n- Se a delcaração do 'if' é verdade o código não vai executar o 'else', vai mudar para o fim da decelração de lógical\n- Nós podemos usar terceiro componente, um 'elif (else if)', isso ato como um outro 'if' depois o primeiro 'if' e antés o 'else'\n- O mesmo do 'elif', se um 'if' (ou 'elif') acima excecuta, o 'decleração if' vai terminar e qualquer outros componentes não vai executar (o mesmo se eles estaria verdade)",
"a = [0,7,10,3,2]\n\nprint(a)\n\ni = 0\nfor e in a:\n if e > 5: # Lê isso como 'Se \"e\" está maior do que 5, entra'\n a[i] = 1\n elif e < 5: # Lê isso como 'Se a expressão em cima está falso, e \"e\" está menor que 5, entra'\n a[i] = -1\n elif e == 5: # Aqui '==' significa igual, '=' é usado para atribuir\n a[i] = 0\n else: # Nós precisamos terminar com um 'else' por todos os outros casos\n print('Isso não é possível')\n i = i + 1\n \nprint(a)\nprint('\\n')\n\n# Aqui o 'elif' não vai executar apensar de ele está verdade porque o primeiro 'if' está verdade assim o código exicuta isso \n# então muda para o fim\n\nb = 14\n\nif (b > 10) and (b < 15):\n print('Verdade')\nelif b > 5:\n print('Também está verdade mas este não vai executar pelos números maiores que 10')\nelse:\n print('Outro')\n\n\na = 1\n\nif a > 5:\n print('sim')\nelif a > 3:\n print('yes')\nelse:\n print('nao')\n",
"As Funções\n\nAs funçoes são blueprints por código você quer reusar\nElas tomam 'entradas', faz o código, então retornam as 'saídas'\nElas se comportam como as funções do Excel\nNós vimos exemploes das funções antés: print(var). print(var) aceita 'var' de uma entrada, e imprime o valor do 'var' para a tela como uma saída\nUma fanção tem dois componentes:\nA declaração, isso é onde nós escrevemos o blueprint\nA função chamada, onde nós usamos a função no nosso código. 'print(var)' é um exemplo de uma função chama",
"# A declaração\ndef contar_números(lista, número=5):\n \"\"\"\n Este é o modo padrão para comentar as funções, isso função conta o número de vezes o 'número' aparece na 'lista'\n \n :param litsa: Uma 'list' onde cada elemento é um inteiro, isso entrada é obligatório\n :param número: Um inteiro para estar contado, isso entrada não é obligatório se não 'número' for dado, o valor vai estar 5\n \n return contar_de_número: Um interio pela conta\n \"\"\"\n \n contar_de_número = 0\n for elemento in lista:\n if elemento == número:\n contar_de_número = contar_de_número + 1\n else:\n pass\n \n return contar_de_número\n\n# O código (com a chama)\nminha_lista = [1,4,7,7,3,5,5,7,6,7,7,7,7,7]\nmeu_número = 7\n\ncontar = contar_números(lista=minha_lista, número=meu_número) # Aqui nós damos um 'número' porque nós não queremos 5\nprint('Há', contar, 'do número', meu_número)\nprint('\\n')",
"As Classes\n\nAs classes são um outro tipo de blueprint. \nElas são uma coleção de funções e variáveis que tem um tema comum\nNós criamos 'instâncias' da classe, cada instância vai ter as mesmas variáveis e funções, mas valores diferentes pelas variávels",
"# A declaração\nclass Cachorro:\n \n def __init__(self, nome, idade):\n \"\"\"\n A função 'init' vai executar automaticamente quando nós instânciamos a classe, aqui nõs damos os valores por as variáveis\n \n :param nome: Um string pelo nome do cachorro\n :param idade: Um inteiro pela idade do cachorro\n \"\"\"\n \n self.nome = nome # Variáveis começando com 'self.' são as variávels da classe\n self.idade = idade\n \n def voltar_valores(self): # Funções com o 'self' entrada são as funçõoes da classe, elas podem usar todas as variáveis \n # da classe\n \n print('O nome do cachorro é:', self.nome)\n print('A idade do cachorro é:', self.idade)\n \n def late(self):\n \n print('woof')\n\n# O código (com as instáncias)\njasper = Cachorro(nome='Jasper', idade=10)\nluke = Cachorro(nome='Luke', idade=7)\n\n# Eles tem os mesmos funções\njasper.late()\nluke.late()\nprint('\\n')\n\n# Mas valores diferentes\njasper.voltar_valores()\nluke.voltar_valores()\nprint('\\n')\n\n# Acesso variávels\njasper.idade = 11 # Feliz aniversário\nprint(jasper.nome, 'tem', jasper.idade, 'anos')\nprint('\\n')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kwant-project/kwant-tutorial-2016
|
3.3.MagnetoResistance.ipynb
|
bsd-2-clause
|
[
"Giant Magneto Resistance\n<img src='images/GMR-cartoon.png'/>\nIn this example, we will learn how to play with the spin degree of freedom in a model. We will implement a crude\nmodel for a Ferromagnetic-Normal spacer-Ferromagnetic (FNF) spin valve and compute the conductance as a function of the angle $\\theta$ between the two magnetization. The ferromagnets are model by simply adding an sd exchange term of the form\n$$-J m.\\sigma$$\nin the Hamiltonian where $m$ is the direction of the magnetization, $\\sigma$ a vector of Pauli matrices and $J$ the exchange constant.",
"from types import SimpleNamespace\nfrom math import cos, sin, pi\n\n%run matplotlib_setup.ipy\nfrom matplotlib import pyplot\n\nimport numpy as np\nimport scipy.stats as reg\nimport kwant\n\nlat = kwant.lattice.square()\n\ns_0 = np.identity(2)\ns_z = np.array([[1, 0], [0, -1]])\ns_x = np.array([[0, 1], [1, 0]])\ns_y = np.array([[0, -1j], [1j, 0]])\n\ndef onsite(site, p):\n x = site.pos[0]\n if x > W and x < 2*W:\n return 4*s_0 + p.Exc*s_z\n if x > 3*W and x < 4*W:\n return 4*s_0 + p.Exc*cos(p.angle)*s_z + p.Exc*sin(p.angle)*s_x\n return 4*s_0\n\nW = 10\nH = kwant.Builder()\nH[(lat(x,y) for x in range(5*W) for y in range(W))] = onsite\nH[lat.neighbors()] = s_0\n\nsym = kwant.TranslationalSymmetry(lat.vec((1,0)))\nHlead =kwant.Builder(sym)\nHlead[(lat(0,y) for y in range(W))] = 4*s_0\nHlead[lat.neighbors()] = s_0\nH.attach_lead(Hlead)\nH.attach_lead(Hlead.reversed())\nkwant.plot(H);",
"In order to visualize the potential, it can be useful to have color maps of it.",
"ps = SimpleNamespace(Exc=2., E=1.2, angle=pi)\n\ndef V(site):\n Hd = onsite(site,ps)\n return (Hd[0,0] - Hd[1,1]).real\n\nkwant.plotter.map(H, V);",
"Now let us compute the angular magneto-resistance.\nTry playing with the parameters, what do you observe? Do you understand why?\nIs there anything wrong with our model?",
"Hf = H.finalized()\ndata = []\nangles = np.linspace(0,2*pi,100)\n\nparams = SimpleNamespace(Exc=0.2, E=2.3)\nfor params.angle in angles:\n smatrix = kwant.smatrix(Hf, params.E, args=[params])\n data.append(smatrix.transmission(1, 0))\n \npyplot.plot(angles, data);\npyplot.xlabel('angle')\npyplot.ylabel('Conductance in unit of $(e^2/h)$');",
"Magnetic texture : the example of a skyrmion\nLast, we can start playing with the magnetic texture, for instance a skyrmion as in the example below.\n$$H = - t \\sum_{<ij>} |i><j| + \\sum_i V_i \\ \\ |i><i|$$\n$$V_i = J \\ m ( r) \\cdot \\sigma $$\n$$ m ( r) = \\left(x/r \\sin\\theta \\ , \\ y/r \\sin\\theta \\ ,\\ \\cos\\theta \\right) $$\n$$\\theta (r) = \\tanh \\frac{r-r_0}{\\delta}$$\nAnother difference is that we will have 4 terminals and calculate the Hall resistance instead of the 2 terminal conductance.\nThis amounts to imposing the current and measuring the voltage, i.e. solving a small linear problem which is readily done with numpy.\nCan you calculate the longitudinal resistance?",
"def HedgeHog(site,ps):\n x,y = site.pos\n r = ( x**2 + y**2 )**0.5\n theta = (np.pi/2)*(np.tanh((ps.r0 - r)/ps.delta) + 1)\n if r != 0:\n Ex = (x/r)*np.sin(theta)*s_x + (y/r)*np.sin(theta)*s_y + np.cos(theta)*s_z\n else:\n Ex = s_z\n return 4*s_0 + ps.Ex * Ex\n \n\ndef Lead_Pot(site,ps):\n return 4*s_0 + ps.Ex * s_z\n\ndef MakeSystem(ps, show = False):\n H = kwant.Builder()\n\n def shape_2DEG(pos):\n x,y = pos\n return ( (abs(x) < ps.L) and (abs(y) < ps.W) ) or ( \n (abs(x) < ps.W) and (abs(y) < ps.L))\n \n H[lat.shape(shape_2DEG,(0,0))] = HedgeHog\n H[lat.neighbors()] = -s_0\n \n # ITS LEADS \n sym_x = kwant.TranslationalSymmetry((-1,0))\n H_lead_x = kwant.Builder(sym_x)\n shape_x = lambda pos: abs(pos[1])<ps.W and pos[0]==0 \n H_lead_x[lat.shape(shape_x,(0,0))] = Lead_Pot\n H_lead_x[lat.neighbors()] = -s_0\n \n sym_y = kwant.TranslationalSymmetry((0,-1))\n H_lead_y = kwant.Builder(sym_y)\n shape_y = lambda pos: abs(pos[0])<ps.W and pos[1]==0 \n H_lead_y[lat.shape(shape_y,(0,0))] = Lead_Pot \n H_lead_y[lat.neighbors()] = -s_0\n \n H.attach_lead(H_lead_x)\n H.attach_lead(H_lead_y)\n H.attach_lead(H_lead_y.reversed())\n H.attach_lead(H_lead_x.reversed())\n \n if show:\n kwant.plot(H)\n\n return H\n \n\ndef Transport(Hf,EE,ps):\n smatrix = kwant.smatrix(Hf, energy=EE, args=[ps])\n G=np.zeros((4,4))\n for i in range(4):\n a=0\n for j in range(4): \n G[i,j] = smatrix.transmission(i, j)\n if i != j:\n a += G[i,j]\n G[i,i] = -a \n \n V = np.linalg.solve(G[:3,:3], [1.,0,0])\n Hall = V[2] - V[1]\n \n return G, Hall\n\nps = SimpleNamespace(L=45, W=40, delta=10, r0=20, Ex=1.)\n\nH = MakeSystem(ps, show=True)\nHf = H.finalized()\n\ndef Vz(site):\n Hd = HedgeHog(site,ps)\n return (Hd[0,0] - Hd[1,1]).real \n\ndef Vy(site):\n Hd = HedgeHog(site, ps)\n return Hd[0,1].imag \n\nkwant.plotter.map(H, Vz);\nkwant.plotter.map(H, Vy);\n\n# HALL RESISTANCE\nps = SimpleNamespace(L=20, W=15, delta=3, r0=6, Ex=1.)\n\nH = MakeSystem(ps, show=False)\nEs = np.linspace(0.1,3.,50)\nHf = 
H.finalized()\ndataG , dataHall = [],[]\n\nfor EE in Es: \n ps.delta = EE\n energy = 2.\n G,Hall = Transport(Hf, energy, ps)\n dataHall.append(Hall)\n\npyplot.plot(Es, dataHall, 'o-', label=\"Skyrmion\")\npyplot.xlabel('Domain width $\\delta$')\npyplot.ylabel('Hall Resistance')\npyplot.title('Topologial Hall Resistance?')\npyplot.legend();"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/trax
|
trax/intro.ipynb
|
apache-2.0
|
[
"Trax Quick Intro\nTrax is an end-to-end library for deep learning that focuses on clear code and speed. It is actively used and maintained in the Google Brain team. This notebook (run it in colab) shows how to use Trax and where you can find more information.\n\nRun a pre-trained Transformer: create a translator in a few lines of code\nFeatures and resources: API docs, where to talk to us, how to open an issue and more\nWalkthrough: how Trax works, how to make new models and train on your own data\n\nWe welcome contributions to Trax! We welcome PRs with code for new models and layers as well as improvements to our code and documentation. We especially love notebooks that explain how models work and show how to use them to solve problems!\nGeneral Setup\nExecute the following few cells (once) before running any of the code samples.",
"#@title\n# Copyright 2020 Google LLC.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport numpy as np\n\n\n\n\n#@title\n# Import Trax\n\n!pip install -q -U trax\nimport trax",
"1. Run a pre-trained Transformer\nHere is how you create an Engligh-German translator in a few lines of code:\n\ncreate a Transformer model in Trax with trax.models.Transformer\ninitialize it from a file with pre-trained weights with model.init_from_file\ntokenize your input sentence to input into the model with trax.data.tokenize\ndecode from the Transformer with trax.supervised.decoding.autoregressive_sample\nde-tokenize the decoded result to get the translation with trax.data.detokenize",
"\n# Create a Transformer model.\n# Pre-trained model config in gs://trax-ml/models/translation/ende_wmt32k.gin\nmodel = trax.models.Transformer(\n input_vocab_size=33300,\n d_model=512, d_ff=2048,\n n_heads=8, n_encoder_layers=6, n_decoder_layers=6,\n max_len=2048, mode='predict')\n\n# Initialize using pre-trained weights.\nmodel.init_from_file('gs://trax-ml/models/translation/ende_wmt32k.pkl.gz',\n weights_only=True)\n\n# Tokenize a sentence.\nsentence = 'It is nice to learn new things today!'\ntokenized = list(trax.data.tokenize(iter([sentence]), # Operates on streams.\n vocab_dir='gs://trax-ml/vocabs/',\n vocab_file='ende_32k.subword'))[0]\n\n# Decode from the Transformer.\ntokenized = tokenized[None, :] # Add batch dimension.\ntokenized_translation = trax.supervised.decoding.autoregressive_sample(\n model, tokenized, temperature=0.0) # Higher temperature: more diverse results.\n\n# De-tokenize,\ntokenized_translation = tokenized_translation[0][:-1] # Remove batch and EOS.\ntranslation = trax.data.detokenize(tokenized_translation,\n vocab_dir='gs://trax-ml/vocabs/',\n vocab_file='ende_32k.subword')\nprint(translation)",
"2. Features and resources\nTrax includes basic models (like ResNet, LSTM, Transformer and RL algorithms\n(like REINFORCE, A2C, PPO). It is also actively used for research and includes\nnew models like the Reformer and new RL algorithms like AWR. Trax has bindings to a large number of deep learning datasets, including\nTensor2Tensor and TensorFlow datasets.\nYou can use Trax either as a library from your own python scripts and notebooks\nor as a binary from the shell, which can be more convenient for training large models.\nIt runs without any changes on CPUs, GPUs and TPUs.\n\nAPI docs\nchat with us\nopen an issue\nsubscribe to trax-discuss for news\n\n3. Walkthrough\nYou can learn here how Trax works, how to create new models and how to train them on your own data.\nTensors and Fast Math\nThe basic units flowing through Trax models are tensors - multi-dimensional arrays, sometimes also known as numpy arrays, due to the most widely used package for tensor operations -- numpy. You should take a look at the numpy guide if you don't know how to operate on tensors: Trax also uses the numpy API for that.\nIn Trax we want numpy operations to run very fast, making use of GPUs and TPUs to accelerate them. We also want to automatically compute gradients of functions on tensors. This is done in the trax.fastmath package thanks to its backends -- JAX and TensorFlow numpy.",
"from trax.fastmath import numpy as fastnp\ntrax.fastmath.use_backend('jax') # Can be 'jax' or 'tensorflow-numpy'.\n\nmatrix = fastnp.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\nprint(f'matrix =\\n{matrix}')\nvector = fastnp.ones(3)\nprint(f'vector = {vector}')\nproduct = fastnp.dot(vector, matrix)\nprint(f'product = {product}')\ntanh = fastnp.tanh(product)\nprint(f'tanh(product) = {tanh}')",
"Gradients can be calculated using trax.fastmath.grad.",
"def f(x):\n return 2.0 * x * x\n\ngrad_f = trax.fastmath.grad(f)\n\nprint(f'grad(2x^2) at 1 = {grad_f(1.0)}')\nprint(f'grad(2x^2) at -2 = {grad_f(-2.0)}')",
"Layers\nLayers are basic building blocks of Trax models. You will learn all about them in the layers intro but for now, just take a look at the implementation of one core Trax layer, Embedding:\n```\nclass Embedding(base.Layer):\n \"\"\"Trainable layer that maps discrete tokens/IDs to vectors.\"\"\"\ndef init(self,\n vocab_size,\n d_feature,\n kernel_initializer=init.RandomNormalInitializer(1.0)):\n \"\"\"Returns an embedding layer with given vocabulary size and vector size.\nArgs:\n vocab_size: Size of the input vocabulary. The layer will assign a unique\n vector to each id in `range(vocab_size)`.\n d_feature: Dimensionality/depth of the output vectors.\n kernel_initializer: Function that creates (random) initial vectors for\n the embedding.\n\"\"\"\nsuper().__init__(name=f'Embedding_{vocab_size}_{d_feature}')\nself._d_feature = d_feature # feature dimensionality\nself._vocab_size = vocab_size\nself._kernel_initializer = kernel_initializer\n\ndef forward(self, x):\n \"\"\"Returns embedding vectors corresponding to input token IDs.\nArgs:\n x: Tensor of token IDs.\n\nReturns:\n Tensor of embedding vectors.\n\"\"\"\nreturn jnp.take(self.weights, x, axis=0, mode='clip')\n\ndef init_weights_and_state(self, input_signature):\n \"\"\"Randomly initializes this layer's weights.\"\"\"\n del input_signature\n shape_w = (self._vocab_size, self._d_feature)\n w = self._kernel_initializer(shape_w, self.rng)\n self.weights = w\n```\nLayers with trainable weights like Embedding need to be initialized with the signature (shape and dtype) of the input, and then can be run by calling them.",
"from trax import layers as tl\n\n# Create an input tensor x.\nx = np.arange(15)\nprint(f'x = {x}')\n\n# Create the embedding layer.\nembedding = tl.Embedding(vocab_size=20, d_feature=32)\nembedding.init(trax.shapes.signature(x))\n\n# Run the layer -- y = embedding(x).\ny = embedding(x)\nprint(f'shape of y = {y.shape}')",
"Models\nModels in Trax are built from layers most often using the Serial and Branch combinators. You can read more about those combinators in the layers intro and\nsee the code for many models in trax/models/, e.g., this is how the Transformer Language Model is implemented. Below is an example of how to build a sentiment classification model.",
"model = tl.Serial(\n tl.Embedding(vocab_size=8192, d_feature=256),\n tl.Mean(axis=1), # Average on axis 1 (length of sentence).\n tl.Dense(2), # Classify 2 classes.\n)\n\n# You can print model structure.\nprint(model)",
"Data\nTo train your model, you need data. In Trax, data streams are represented as python iterators, so you can call next(data_stream) and get a tuple, e.g., (inputs, targets). Trax allows you to use TensorFlow Datasets easily and you can also get an iterator from your own text file using the standard open('my_file.txt').",
"train_stream = trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=True)()\neval_stream = trax.data.TFDS('imdb_reviews', keys=('text', 'label'), train=False)()\nprint(next(train_stream)) # See one example.",
"Using the trax.data module you can create input processing pipelines, e.g., to tokenize and shuffle your data. You create data pipelines using trax.data.Serial and they are functions that you apply to streams to create processed streams.",
"data_pipeline = trax.data.Serial(\n trax.data.Tokenize(vocab_file='en_8k.subword', keys=[0]),\n trax.data.Shuffle(),\n trax.data.FilterByLength(max_length=2048, length_keys=[0]),\n trax.data.BucketByLength(boundaries=[ 32, 128, 512, 2048],\n batch_sizes=[512, 128, 32, 8, 1],\n length_keys=[0]),\n trax.data.AddLossWeights()\n )\ntrain_batches_stream = data_pipeline(train_stream)\neval_batches_stream = data_pipeline(eval_stream)\nexample_batch = next(train_batches_stream)\nprint(f'shapes = {[x.shape for x in example_batch]}') # Check the shapes.",
"Supervised training\nWhen you have the model and the data, use trax.supervised.training to define training and eval tasks and create a training loop. The Trax training loop optimizes training and will create TensorBoard logs and model checkpoints for you.",
"from trax.supervised import training\n\n# Training task.\ntrain_task = training.TrainTask(\n labeled_data=train_batches_stream,\n loss_layer=tl.WeightedCategoryCrossEntropy(),\n optimizer=trax.optimizers.Adam(0.01),\n n_steps_per_checkpoint=500,\n)\n\n# Evaluaton task.\neval_task = training.EvalTask(\n labeled_data=eval_batches_stream,\n metrics=[tl.WeightedCategoryCrossEntropy(), tl.WeightedCategoryAccuracy()],\n n_eval_batches=20 # For less variance in eval numbers.\n)\n\n# Training loop saves checkpoints to output_dir.\noutput_dir = os.path.expanduser('~/output_dir/')\n!rm -rf {output_dir}\ntraining_loop = training.Loop(model,\n train_task,\n eval_tasks=[eval_task],\n output_dir=output_dir)\n\n# Run 2000 steps (batches).\ntraining_loop.run(2000)",
"After training the model, run it like any layer to get results.",
"example_input = next(eval_batches_stream)[0][0]\nexample_input_str = trax.data.detokenize(example_input, vocab_file='en_8k.subword')\nprint(f'example input_str: {example_input_str}')\nsentiment_log_probs = model(example_input[None, :]) # Add batch dimension.\nprint(f'Model returned sentiment probabilities: {np.exp(sentiment_log_probs)}')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ga7g08/ga7g08.github.io
|
_notebooks/2016-03-21-Using-Scipy-KS-test.ipynb
|
mit
|
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as ss\nimport seaborn",
"Using the scipy Kolmogorov–Smirnov test\nA short note on how to use the scipy Kolmogorov–Smirnov test because quite frankly the documentation was not compelling and I always forget how the scipy.stats module handles distributions arguments.\nSet up some test data\nUsing a beta distribution generate rvs from it and save them as d1, then make a second set of data containing the fist set and 50 drawd from a different distribution: this measn d2 not beta. We plot the Gaussian KDE to show how large the effect is graphically",
"xvals = np.linspace(0, 0.5, 500)\n\nargs = [0.5, 20]\nd1 = ss.beta.rvs(args[0], args[1], size=1000)\nnoise = ss.norm.rvs(0.4, 0.1, size=50)\nd2 = np.append(d1[:len(d1)-len(noise)], noise)\n\nfig, (ax1, ax2) = plt.subplots(nrows = 2)\n\nax1.plot(xvals, ss.gaussian_kde(d1).evaluate(xvals), \"-k\",\n label=\"d1\")\nax1.legend()\nax2.plot(xvals, ss.gaussian_kde(d2).evaluate(xvals), \"-k\",\n label=\"d2\")\nax2.legend()\n\nplt.show()",
"Now compute KS test\nThe trick here is to give the scipy.stats.kstest the data, then a callable function of the cumulative distribution function. There is some clever way to just pass in a string of the name, but I was unconvinved that the argumements were going to the proper place, so I prefer this method",
"beta_cdf = lambda x: ss.beta.cdf(x, args[0], args[1])\nss.kstest(d1, beta_cdf)\n\nss.kstest(d2, beta_cdf)",
"Conclusion\nClearly this works as expected: for d1 we get a fairly large $p$-value suggesting we should accept the null hypothesis (the data is beta-distributed with the given $a$ and $b$, while for $d2$ we get a low $p$-value suggesting we might discard the null hypothesis (which we know to be true since that was the way the data was built). \nIt remains unclear how to choose the $p$ threshold."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
TwistedHardware/mltutorial
|
notebooks/tf/3. Variables.ipynb
|
gpl-2.0
|
[
"<table>\n <tr>\n <td style=\"text-align:left;\"><div style=\"font-family: monospace; font-size: 2em; display: inline-block; width:60%\">3. Variables</div><img src=\"images/roshan.png\" style=\"width:30%; display: inline; text-align: left; float:right;\"></td>\n <td></td>\n </tr>\n</table>\n\nSo far all the tensors that we created were all constants where you can do operations that generate new tensors but you can never change the value of any tensor after creating it. To start doing \"stateful\" programming which keeps and updates values (or state) or tesnors you need to wrap your tensors in an instance of ft.Varbiable().\nAs usaul before we start, let's import TensorFlow and start an interactive session.",
"import tensorflow as tf\nimport sys\n\nprint(\"Python Version:\",sys.version.split(\" \")[0])\nprint(\"TensorFlow Version:\",tf.VERSION)",
"Creating Variables\nLet start a new interactive session and create some variables. To create a variables, your this function:\ntf.Variable(initial_value=None, trainable=True, collections=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None, expected_shape=None, import_scope=None, constraint=None)",
"sess = tf.InteractiveSession()\n\na = tf.Variable(tf.ones((2,2)), name=\"a\")\na",
"The first parameter that we passed to when creating an instanse of tf.Variable() is initial_value. This can accept a tensor that has values or a tensor initilizer method. We will discuss some of these later in this tutorial.\nYou can also use ft.get_variable() function to create a variable.\ntf.get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None, constraint=None)",
"b = tf.get_variable(\"b\", [2,2])\nb",
"This creates a variable named b with shape (2,2).\nTo initlize the value of your variable, you could use one of the many inilization method available in TensorFlow.\ntf.zeros_initializer",
"c = tf.get_variable(\"c\", [2,2], dtype=tf.float32, initializer=tf.zeros_initializer)\nc",
"Similar to tf.zeros_initializer there is tf.ones_initializer which initializes your tensor with ones.\nThere are also tf.random_normal_initializer and random_uniform_initializer that inialize your variables with a normal or uniform distribution. For truncated normal distribution, you can use tf.truncated_normal_initializer which will limit your normal distribution to 2 standard diviations from the mean.\nYou can also initialize your variables with a constant.",
"d = tf.get_variable(\"d\", initializer=tf.constant([1,2,3]))\nd",
"Initialize Variables\nBefore you can use any of your variables, you should first run an operation that initializes them. To initialize all the variabes that that created already, you can use tf.global_variables_initializer to create the operation this you have to run that operation using your session.",
"init_op = tf.global_variables_initializer()\nsess.run(init_op)",
"Now that you initlized your variables, you can start executing this and getting their values. To do that, you can just call eval() method of the variable.",
"d.eval()",
"Manually Initializing Variables\nSometimes specially in an interactive environment, you would want to initialize some extra variables after you initialized all your variables using tf.global_variables_initializer. To do that, you can run one variable initializer which is an operation that can be retrieved for a single variable from the initializer.",
"e = tf.get_variable(\"e\", initializer=tf.constant([2,2,2]))\nsess.run(e.initializer)",
"If you try to redecalre your variable with the same name, you will get an error message because TensorFlow does't know if you want to resue the same variable or you want a new one.\nTo avoid that clarify that you mean to reuse the same variable and you just to reinitize the variable using",
"with tf.variable_scope(reuse=tf.AUTO_REUSE, name_or_scope=\"e\"):\n e = tf.get_variable(\"e\", initializer=tf.constant([2,2,2]))\n\ne",
"Assigning Value to Variables\nSo far variables are not much different that any constant tensor. Variables get interesting once you can change their values. To do that you can use tf.assign() function or the assign() method of a variable. These are operation and should be run using a your session for them to perform their assignment.",
"sess.run(d.assign([2,3,4]))\nd.eval()\n\nsess.run(tf.assign_add(d, [2,2,2]))\nd.eval()\n\nsess.run(tf.assign_sub(d, [3,3,3]))\nd.eval()",
"Variable Scrope\nYou can group variables in a few way in TensorFlow and one these methods is Variable Scope",
"with tf.name_scope(\"dense1\"):\n a = tf.get_variable(\"a\", (3,))\n\n\nwith tf.name_scope(\"dense1\"):\n b = tf.Variable(tf.ones((2,2)), name=\"b\")\nb\n\nwith tf.name_scope(\"dense1\"):\n c = tf.Variable(tf.ones((2,2)), name=\"c\")\n w = c+1\nw\n\nwith tf.variable_scope(\"dense1\"):\n d = tf.get_variable(\"d\", (3,))\n e = d + c\n\nwith tf.variable_scope(\"dense1\", ):\n f = tf.get_variable(\"f\", (3,))\n g = tf.get_variable(\"g\", (3,))\n h = f + g\nh",
"The Match Behind It\nSince we are talking about variables we cannot escape the fact that we will use them in the next tutorial for creating a predective model. This means we will have to get a head start start with some basic concepts about calculus. Calculus is branch of math that studies change. It can study the change of a variable as it relates to another variable. So basically it studies the realtionship between two or more variables. There are two main studies in calculus:\n\nDiffrentiation\nIntegration\n\nDiffrentiation\nWe will focus for now on diffrentiation because the we will use that to train neural network with an algorithm called \"Back Probagation\". Diffrentiation is the study of the rate of change or the slope.\nWe will use numpy and matplotlib for illustration so let's import them",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Slope\nSlope is the mesure of change of a variable as another variables changes. In here we will see the change in $y$ as $x$ changes. The mathematical way of saying that is:\n$$\\frac{dy}{dx} = \\frac{\\Delta y}{\\Delta x}$$\n$\\Delta$ is the capital letter delta and it means the change of. So the change of $y$ as $x$ changes.\n\nSlope is positive if numbers are increasing.\nSlope is nagative if numbers are decreasing.\nSlope is zero if numbers are not changing.\n\nLet's look at three lines to show how slope works.",
"x = np.arange(5)\nfig = plt.figure(figsize=(12,3))\naxarr = fig.subplots(1,3)\n\naxarr[0].plot(x, x*2)\naxarr[0].set_title(\"Positive Slope\")\naxarr[1].plot(x, x*-2)\naxarr[1].set_title(\"Negative Slope\")\naxarr[2].plot(x, x*0 + 2)\naxarr[2].set_title(\"Zero Slope\")\nplt.show();",
"Linear Diffrentiation\nConsider this situation where you are driving at a constant speed of 10 $km/h$.\nWe can represent this in a mathematical way with a simple function that looks like this:\n$Distance = 10 \\times Time$\nOr to make it more abstract we can call $Distance$ $y$ because it will be represented on the Y axis and call $Time$ $x$ because it will be represented on the X axis. So our function for this relationship will look like this:\n$y = 10x$",
"speed = 10\ntime = np.arange(0,9)\ndistance = speed * time\n\nplt.xlabel(\"Time $Hours$\")\nplt.ylabel(\"Distance $km$\")\nplt.grid()\nplt.plot(time, distance);",
"After 1 hour you have travelled 10 $km$ and after 4 hours would have travelled 40 $km$. This realtionship is called linear because it can be represented by a straight line. With linear relationships we can measure the slope using any two points $(x_1,y_1)$ and $(x_2,y_2)$. If we get these two points are 1 hour and 4 hours we get these two points $(1,10)$ and $(4,40)$. To measure the slope (represented by the letter $m$) now we use this function:\n$$m = \\frac{y_2 - y_1}{x_2 - x_1}$$\nIf we substitute our points we can measure the slope.\n$$m = \\frac{40 - 10}{4 - 1} = \\frac{30}{3} = 10$$\nThere is nothing interesting about this finding! We already knew the speed was 10 $km/h$. This is a linear function and most of calculus is more interested in non-linear functions. So let's see how does that work.\nPolynomial Diffrentiation\nConsider a situation where you are not travelling at constant speed but perhapse your speed in increasing over time. So in the begenning you are starting at a low speed and over time your speed increases. We call this acceleration and it is measured in $m/s^2$ but for a unit that we can relate to I'll use $km/h^2$. Let's illustrate this and it would make more sense.\nWe can represent this in a mathematical way with a simple function that looks like this:\n$Distance = Acceleration * Time^2$\nAgain to abstract this function we can write it like this:\n$y = ax^2$",
"acceleration = 2\ntime = np.arange(0,9)\ndistance = acceleration * time**2\n\nplt.xlabel(\"Time $Hours$\")\nplt.ylabel(\"Distance $km$\")\nplt.grid()\nplt.plot(time, distance);",
"First, let's look at the distance over time.",
"distance",
"We can notice that we travelled 2 $km$ in the first hour and 8 $km$ after two hours, making the distance we travelled during the second hour 6 $km$. This means our speed is changing over time. To measure the slope here, we need to specify the time at which we want the speed, because it is changing over time.\nTo get the slope, we derive another function that measures the slope of the original function. This function is called the derivative and has the notation $f'$ or $\\frac{dy}{dx}$.\nThere is no single standard mathematical way to come up with the derivative function; it depends on the function type. Let's look at a polynomial function and see how to get its derivative:\n$f(x) = ax^n$\nTo differentiate a polynomial function we use this rule (the power rule):\n$f'(x) = n \\times ax^{n-1}$\nApplying this to our function from before:\n$y = 2x^2$\n$y' = 2 \\times 2x^{2-1} = 4x$\nSo to get the speed at any point in time, we can use the derivative function. The speed after 3 hours is:\n$y' = 4 \\times 3 = 12$\nNow let's visualize both of these functions to see how they relate to each other.",
"acceleration = 2\ntime = np.arange(0,9)\ndistance = acceleration * time**2\nspeed = 4 * time\n\nplt.xlabel(\"Time $Hours$\")\nplt.ylabel(\"Distance $km$\")\nplt.grid()\nl1 = plt.plot(time, distance, label=\"Distance\")\nplt.twinx()\nplt.ylabel(\"Speed $km/h$\")\nl2 = plt.plot(time, speed, \"r\", label=\"Speed\")\nplt.legend(l1+l2, (\"Distance\", \"Speed\"));",
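The red speed line in the plot above is the power-rule derivative $y' = 4x$. As a sanity check, a small central difference on the distance function gives the same value at, say, $x = 3$ (a minimal sketch; the step size `h` is an arbitrary small number):

```python
# Central-difference approximation of the derivative of y = 2x^2 at x = 3,
# compared against the power-rule result y' = 4x = 12.
def dist(t):
    return 2 * t**2

h = 1e-6
t = 3
numeric_speed = (dist(t + h) - dist(t - h)) / (2 * h)
print(numeric_speed)  # ~12.0
```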
"Since you know the rule now, let's try to apply it to our first example:\n$y = 10x = 10x^1$\n$y' = 1 \\times 10x^{1-1} = 10x^0 = 10 \\times 1 = 10$\nAnything to the power 0 is equal to 1. It might seem weird, but that's just math!\nComplex Polynomial Functions\nThe same rule applies to any polynomial function. Let's try an example:\n$f(x) = 2x^2 + 4x + 10$\n$f'(x) = 2 \\times 2x^{2-1} + 1 \\times 4x^{1-1} + 0 = 4x + 4$\nNotice that we can apply the rule to each part of the function separately. Any part of the function that is not multiplied by $x$ has a derivative of 0.\nTrigonometry\nDerivatives of trigonometric functions can be taken fairly directly out of a lookup table. This is an incomplete table of functions and their derivatives:\n| Function $f(x)$ | Derivative $f'(x)$ |\n| --------------- | ------------------ |\n| $sin(x)$ | $cos(x)$ |\n| $cos(x)$ | $-sin(x)$ |\n| $tan(x)$ | $sec^2(x)$ |\n| $cot(x)$ | $-csc^2(x)$ |\n| $sec(x)$ | $sec(x)tan(x)$ |\n| $csc(x)$ | $-csc(x)cot(x)$ |\nYou don't have to memorize this. It is available everywhere!\nJust for fun, let's visualize one of these pairs and see if it makes sense:\n$f(x) = sin(x)$\n$f'(x) = cos(x)$",
"x = np.arange(0,10, 0.1)\nsin_x = np.sin(x)\ncos_x = np.cos(x)\n\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.grid()\nplt.plot(x, sin_x, label=\"sin(x)\")\nplt.plot(x, cos_x, \"r\", label=\"cos(x)\")\nplt.legend();",
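The plot suggests that $cos(x)$ tracks the slope of $sin(x)$. Both the term-by-term power-rule result from the complex polynomial above and this sin/cos pair can be verified numerically with central differences (a sketch; the tolerance value is arbitrary):

```python
import numpy as np

# Verify f'(x) = 4x + 4 for f(x) = 2x^2 + 4x + 10, and sin'(x) = cos(x),
# by comparing central-difference estimates against the closed forms.
def f(x):
    return 2 * x**2 + 4 * x + 10

h = 1e-6
xs = np.arange(0, 10, 0.1)

poly_numeric = (f(xs + h) - f(xs - h)) / (2 * h)
sin_numeric = (np.sin(xs + h) - np.sin(xs - h)) / (2 * h)

print(np.allclose(poly_numeric, 4 * xs + 4, atol=1e-4))  # True
print(np.allclose(sin_numeric, np.cos(xs), atol=1e-4))   # True
```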
"Next\nDid I get you excited about calculus yet? Well, you will love the next part, where we get into some more interesting derivatives and a more intuitive understanding of why we need derivatives in the first place.\n<center>This work is <b>open sourced</b> and licensed under GNU General Public License v2.0<br />\nCopyright © 2018 Abdullah Alrasheed and other contributors<br /><br />Roshan Logo is not open sourced and is not covered by the GNU license</center>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |