| repo_name | path | license | cells | types |
|---|---|---|---|---|
| teuben/pitp2016 | yt-demo/example4.ipynb | gpl-3.0 |
[
"Even if your data is not strictly related to fields commonly used in\nastrophysical codes or your code is not supported yet, you can still feed it to\nyt to use its advanced visualization and analysis facilities. The only\nrequirement is that your data can be represented as three-dimensional NumPy arrays with a consistent grid structure. What follows are some common examples of loading in generic array data that you may find useful. \nGeneric Unigrid Data\nThe simplest case is that of a single grid of data spanning the domain, with one or more fields. The data could be generated from a variety of sources; we'll just give three common examples:\nData generated \"on-the-fly\"\nThe most common example is that of data that is generated in memory from the currently running script or notebook.",
"import yt\nimport numpy as np",
"In this example, we'll just create a 3-D array of random floating-point data using NumPy:",
"arr = np.random.random(size=(64,64,64))",
"To load this data into yt, we need to associate it with a field. The data dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. Then, we can call load_uniform_grid:",
"data = dict(density = (arr, \"g/cm**3\"))\nbbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\nds = yt.load_uniform_grid(data, arr.shape, length_unit=\"Mpc\", bbox=bbox, nprocs=64)",
"load_uniform_grid takes the following arguments and optional keywords:\n\ndata : This is a dict of numpy arrays, where the keys are the field names\ndomain_dimensions : The domain dimensions of the unigrid\nlength_unit : The unit that corresponds to code_length, can be a string, tuple, or floating-point number\nbbox : Size of computational domain in units of code_length\nnprocs : If greater than 1, will create this number of subarrays out of data\nsim_time : The simulation time in seconds\nmass_unit : The unit that corresponds to code_mass, can be a string, tuple, or floating-point number\ntime_unit : The unit that corresponds to code_time, can be a string, tuple, or floating-point number\nvelocity_unit : The unit that corresponds to code_velocity\nmagnetic_unit : The unit that corresponds to code_magnetic, i.e. the internal units used to represent magnetic field strengths.\nperiodicity : A tuple of booleans that determines whether the data will be treated as periodic along each axis\n\nThis example creates a yt-native dataset ds that will treat your array as a\ndensity field in a cubic domain with a 3 Mpc edge, and simultaneously divide the \ndomain into nprocs = 64 chunks, so that you can take advantage\nof the underlying parallelism. \nThe optional unit keyword arguments allow the default units of the dataset to be set. They can be:\n* A string, e.g. length_unit=\"Mpc\"\n* A tuple, e.g. mass_unit=(1.0e14, \"Msun\")\n* A floating-point value, e.g. time_unit=3.1557e13\nIn the latter case, the unit is assumed to be cgs. \nThe resulting ds functions exactly like any other dataset yt can handle--it can be sliced, and we can show the grid boundaries:",
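The nprocs decomposition can be pictured without yt at all. Below is a minimal NumPy sketch (not from the notebook); the 4-way split along each axis is an assumption, chosen so that 4 × 4 × 4 = 64 chunks mirror the nprocs=64 example above:

```python
import numpy as np

arr = np.random.random(size=(64, 64, 64))

# Split the array 4 ways along each axis, giving 4*4*4 = 64 subarrays,
# analogous to what nprocs=64 asks yt to do internally.
chunks = [c3
          for c1 in np.array_split(arr, 4, axis=0)
          for c2 in np.array_split(c1, 4, axis=1)
          for c3 in np.array_split(c2, 4, axis=2)]

len(chunks)       # 64
chunks[0].shape   # (16, 16, 16)
```

Each chunk can then be processed independently, which is where the parallelism comes from.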
"slc = yt.SlicePlot(ds, \"z\", [\"density\"])\nslc.set_cmap(\"density\", \"Blues\")\nslc.annotate_grids(cmap=None)\nslc.show()",
"Particle fields are detected as one-dimensional fields. The number of\nparticles is set by the number_of_particles key in\ndata. Particle fields are then added as one-dimensional arrays in\na similar manner as the three-dimensional grid fields:",
"posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\nposy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\nposz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)\ndata = dict(density = (np.random.random(size=(64,64,64)), \"Msun/kpc**3\"), \n number_of_particles = 10000,\n particle_position_x = (posx_arr, 'code_length'), \n particle_position_y = (posy_arr, 'code_length'),\n particle_position_z = (posz_arr, 'code_length'))\nbbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])\nds = yt.load_uniform_grid(data, data[\"density\"][0].shape, length_unit=(1.0, \"Mpc\"), mass_unit=(1.0,\"Msun\"), \n bbox=bbox, nprocs=4)",
"In this example only the particle position fields have been assigned. number_of_particles must match the size of the particle\narrays. If no particle arrays are supplied then number_of_particles is assumed to be zero. Take a slice, and overlay particle positions:",
"slc = yt.SlicePlot(ds, \"z\", [\"density\"])\nslc.set_cmap(\"density\", \"Blues\")\nslc.annotate_particles(0.25, p_size=12.0, col=\"Red\")\nslc.show()",
"Generic AMR Data\nIn a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids: a top-level grid (level == 0) covering the entire domain and a subgrid at level == 1.",
"grid_data = [\n dict(left_edge = [0.0, 0.0, 0.0],\n right_edge = [1.0, 1.0, 1.0],\n level = 0,\n dimensions = [32, 32, 32]), \n dict(left_edge = [0.25, 0.25, 0.25],\n right_edge = [0.75, 0.75, 0.75],\n level = 1,\n dimensions = [32, 32, 32])\n ]",
"We'll just fill each grid with random density data, scaled by the grid refinement level.",
"for g in grid_data: \n g[\"density\"] = (np.random.random(g[\"dimensions\"]) * 2**g[\"level\"], \"g/cm**3\")",
"Particle fields are supported by adding 1-dimensional arrays to each grid and\nsetting the number_of_particles key in each grid's dict. If a grid has no particles, set number_of_particles = 0, but the particle fields still have to be defined, since they are defined in the other grids; set them to empty NumPy arrays:",
"grid_data[0][\"number_of_particles\"] = 0 # Set no particles in the top-level grid\ngrid_data[0][\"particle_position_x\"] = (np.array([]), \"code_length\") # No particles, so set empty arrays\ngrid_data[0][\"particle_position_y\"] = (np.array([]), \"code_length\")\ngrid_data[0][\"particle_position_z\"] = (np.array([]), \"code_length\")\ngrid_data[1][\"number_of_particles\"] = 1000\ngrid_data[1][\"particle_position_x\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")\ngrid_data[1][\"particle_position_y\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")\ngrid_data[1][\"particle_position_z\"] = (np.random.uniform(low=0.25, high=0.75, size=1000), \"code_length\")",
"Then, call load_amr_grids:",
"ds = yt.load_amr_grids(grid_data, [32, 32, 32])",
"load_amr_grids also takes the same keywords bbox and sim_time as load_uniform_grid. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:",
"slc = yt.SlicePlot(ds, \"z\", [\"density\"])\nslc.annotate_particles(0.25, p_size=15.0, col=\"Pink\")\nslc.show()",
"Caveats for Loading Generic Array Data\n\nParticles may be difficult to integrate.\nData must already reside in memory before loading it into yt, whether it is generated at runtime or loaded from disk. \nSome functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.\nNo consistency checks are performed on the hierarchy.\nConsistency between particle positions and grids is not checked; load_amr_grids assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
| megatharun/basic-python-for-researcher | Tutorial 4 - The Sequence.ipynb | artistic-2.0 |
[
"<span style=\"color: #B40486\">BASIC PYTHON FOR RESEARCHERS</span>\nby Megat Harun Al Rashid bin Megat Ahmad\nlast updated: April 14, 2016\n\n<span style=\"color: #29088A\">4. The Sequence</span>\nA sequence is a type of data structure, similar to an array. Each element of a sequence can be accessed according to its index. There are several types of sequences:\n1. Strings\n2. Lists\n3. Tuples\n4. Dictionaries\n5. Sets\n6. Frozen sets\nThe most commonly used are lists, tuples and dictionaries, which we will explore here.\n\n4.1 The list sequence\nA $list$ can be constructed using the bracket [] with the elements/components of the $list$ separated by commas.",
"List_num = [1,2,3,4,5]\nprint List_num\nE = len(List_num)\nprint \"There's %d elements in the list %s\" % (E,List_num)",
"Elements in a $list$ can be made of numbers, strings, a mixture of both, or other types of sequences. An element can be accessed by specifying its position in the $list$ (similar to accessing strings, as discussed in Tutorial 2). The number of elements in a $list$ can be obtained using the <span style=\"color: #0000FF\">$len$()</span> function.",
"List_str = [\"Blythe\",\"Rafa\",\"Felicity\",\"Kiyoko\"]\n\nprint \"What is the word happy in Arabic?\"\nprint \"The answer is %s and it starts with the capital letter %s.\" % \\\n(List_str[1],List_str[1][0])",
"In the above example, the elements in the $List$_$str$ $list$ are accessed by specifying the positional index (in a square bracket after the variable name of the list) and, after that, an element of the string is accessed by specifying a second bracketed index, as a string is itself a sequence.\nAn accessed element of a $list$ can be operated on (just like a variable):",
"List_mix = [\"The Time Machine\", 1895, \"The Invisible Man\", 1897, \\\n \"The Shape of Things to Come\", 1933]\nprint '\"%s\" was first published in %d with \\n\"%s\" published %d years \\\nlater.' % (List_mix[0],List_mix[1],List_mix[4],List_mix[5]-List_mix[1])",
"<span style=\"color: #F5DA81; background-color: #610B4B\">Example 4.1</span>: The following are some well-known implementations of the Python programming language: CPython, Cython, PyPy, IronPython, Jython and Unladen Swallow. Put this sequence in a list and rearrange the sequence according to your preferred implementations in a list that contains only three implementations. Print the new list.",
"Python_Impl = ['CPython','Cython','PyPy','IronPython','Jython','Unladen Swallow']\nNew_Python_Impl = [Python_Impl[1],Python_Impl[0],Python_Impl[2]]\n\nprint New_Python_Impl\n",
"<span style=\"color: #F5DA81; background-color: #610B4B\">Example 4.2</span>: From this list: ['Python','Java','C','Perl','Sed','Awk','Lisp','Ruby'], create back the original list of Python implementations.",
"PN = ['Python','Java','C','Perl','Sed','Awk','Lisp','Ruby']\n\nPyth_ImplN = [PN[2]+PN[0],PN[2]+PN[0][1:],PN[0][:2]*2,\\\n PN[6][1].upper()+PN[3][2]+PN[0][4:]+PN[0],\\\n PN[1][0]+PN[0][1:],PN[7][1].upper()+PN[0][-1]+PN[3][-1]+\\\n PN[1][1]+PN[4][-1]+PN[4][-2]+PN[0][-1]+' '+\\\n PN[-2][-2].upper()+PN[-3][1]+PN[1][1]+PN[3][3]*2+\\\n PN[0][-2]+PN[-3][1]]\n\nprint Pyth_ImplN",
"What we have seen are one-dimensional homogeneous and non-homogeneous $lists$.",
"x = [12,45,78,14,23]\ny = [\"Dickens\",\"Hardy\",\"Austen\",\"Steinbeck\"]\nZ = [3E8,'light',\"metre\"]",
"$List$ can also be multi-dimensional.",
"# Homogeneous multi dimensional list (2D):\n\n# List_name[row][column]\n\nx2 = [[12,32],[43,9]]\nprint x2\nprint x2[1] # Second row\nprint x2[0][1] # First row, second column",
"In a matrix representation, this is:\n$$\\left( \\begin{array}{cc}\n12 & 32 \\\\\n43 & 9\\end{array} \\right)$$\nand to get the matrix determinant:",
"# Matrix determinant\n\ndet_x2 = x2[0][0]*x2[1][1]-x2[0][1]*x2[1][0]\nprint \"Determinant of x2 is %d\" % det_x2",
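As a sanity check (not part of the original tutorial), the hand-computed 2x2 determinant can be compared against NumPy's np.linalg.det:

```python
import numpy as np

x2 = [[12, 32], [43, 9]]

# Manual 2x2 determinant: ad - bc
det_manual = x2[0][0]*x2[1][1] - x2[0][1]*x2[1][0]   # 12*9 - 32*43 = -1268

# NumPy computes the same value (as a float)
det_numpy = np.linalg.det(np.array(x2))
```

Agreement between the two confirms the index arithmetic in the manual formula.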
"A multi-dimensional $list$ is actually $lists$ within $list$:",
"x1 = [0.1,0.2,0.3,0.4,0.5]\nx2 = [0,12,34,15,1]\nx = [x1,x2]\nprint x # A 2x5 Array",
"$List$ can also be non-homogeneous multi-dimensional:",
"Data_3D = [[[2,3,5],[1,7,0]],[5,\"ArXiv\"]]\n\n#print number 7\nprint Data_3D[0][1][1]\n\nprint 'Mr. Perelman published the solution \\\nto Poinc%sre conjecture in \"%s\".' % (u\"\\u00E1\", Data_3D[1][1])",
"Data_3D is actually $lists$ inside $lists$ inside a $list$, but non-homogeneously.\n<img src=\"Tutorial3/array.png\" width=\"500\" >\nThe elements in the $list$ can be substituted.",
"# Extracting and substitution\n\nL1 = Data_3D[0]; print L1\nL2 = [Data_3D[1]]+[Data_3D[0][0]]\nprint L2\nprint L2[0][1]\nData_3D[1][1] = \"PlosOne\"\nprint Data_3D",
"Iterating over the elements of a list requires accessing the list sequentially. This can be done using the <span style=\"color: #0000FF\">$for$</span> and <span style=\"color: #0000FF\">$while$</span> control structures as well as the <span style=\"color: #0000FF\">$enumerate$()</span> function.",
"# Looping: for\n\ndwarf = [\"Eris\",\"Pluto\",\"Makemake\",\"Haumea\",\"Sedna\"]\nprint dwarf\n\n\nfor name in dwarf:\n print name\n\nfor z in range(len(dwarf)):\n print \"%d\\t%s\" % (z,dwarf[z])\n\nfor x,z in enumerate(dwarf,1):\n print \"%d\\t%s\" % (x,z)\n\nz = 0\nwhile z < len(dwarf):\n print \"%d\\t%s\" % (z+1,dwarf[z])\n z = z + 1",
"<span style=\"color: #F5DA81; background-color: #610B4B\">Example 4.3</span>: Calculate and print each value of x*y with:\nx = [12.1,7.3,6.2,9.9,0.5]\ny = [4.5,6.1,3.9,1.7,8.0]",
"x = [12.1,7.3,6.2,9.9,0.5]\ny = [4.5,6.1,3.9,1.7,8.0]\n\ni = 0\nxy = [] # Creating empty list\nwhile i < (len(x)):\n xy = xy + [x[i]*y[i]] # Appending result into list\n print '%.1f x %.1f = %.2f' % (x[i],y[i],xy[i])\n i = i + 1\nprint '\\n' \nprint xy",
"<span style=\"color: #F5DA81; background-color: #610B4B\">Example 4.4</span>: Calculate and print each value of x2*y2 with:\nx2 = [[12.1,7.3],[6.2,9.9]]\ny2 = [[4.5,6.1],[3.9,1.7]]",
"x2 = [[12.1,7.3],[6.2,9.9]]\ny2 = [[4.5,6.1],[3.9,1.7]]\n\nj = 0\nxy2 = []\nxy3 = []\nwhile j < (len(x2)):\n k = 0\n for k in range(len(x2)):\n xy3 = xy3 + [x2[j][k]*y2[j][k]]\n print '%.1f x %.1f = %.2f' % (x2[j][k],y2[j][k],xy3[k])\n k = k + 1\n xy2 = xy2 + [xy3]\n xy3 = []\n j = j + 1\nprint '\\n' \nprint xy2",
"<span style=\"color: #F5DA81; background-color: #610B4B\">Example 4.5</span>: Just create a list that contains the $f(x)$ value of a Gaussian distribution with $\\sigma$ = 0.4 and $\\mu$ = 5.\nThe Gaussian function:$$f(x) = e^{\\frac{-(x-\\mu)^2}{2\\sigma^2}}$$",
"from math import *\n\nsigma = 0.4\nmu = 5.0\n\nx_val = []\nctr = 3\nwhile ctr < 7:\n x_val = x_val + [ctr]\n ctr = ctr + 0.1\n\nfx = []\nfor n in range(0,len(x_val),1):\n intensity = exp(-(x_val[n]-mu)**2/(2*sigma**2))\n fx = fx + [intensity]\n print '%f\\t%s' % (intensity,int(intensity*50)*'*')\n\nfx",
"4.1.1 Converting data from a file into a list\nEach line in a file can be directly converted to an element of a list using the <span style=\"color: #0000FF\">$readlines$()</span> function. For instance, in section 2.4 of Tutorial 2, instead of using the <span style=\"color: #0000FF\">$read$()</span> function, we can use the <span style=\"color: #0000FF\">$readlines$()</span> function to convert each line in the file $les miserables.txt$ into an element of the list $linecontent$:",
"# Opening a file \nfile_read = open(\"Tutorial2/les miserables.txt\")\nlinecontent = file_read.readlines()\nfile_read.close()",
"The elements of $linecontent$ are now the lines in $les miserables.txt$ (including the escape characters):",
"linecontent",
"4.2 The Tuples\nA $tuple$ can be declared using round brackets. A $tuple$ is essentially a $list$ whose elements cannot be modified or substituted. Apart from that, it has properties similar to a $list$.",
"t1 = (1,2,3,4)\nt1",
"Attempting to substitute a $tuple$ element will give an error.",
"t1[1] = 5",
"4.3 The Dictionaries\n$Dictionaries$ are similar to \"associative arrays\" in many other programming languages. $Dictionaries$ are indexed by keys that can be strings or numbers. Data in $dictionaries$ are accessed by specifying the keys instead of index numbers. Data can be anything, including other $dictionaries$. $Dictionaries$ can be declared using curly brackets, with each key and its data separated by '$:$' and each key-data pair separated by '$,$'.",
"# Nearby stars to the earth\n\nStars = {1:'Sun', 2:'Alpha Centauri', 3:\"Barnard's Star\",\\\n 4:'Luhman 16', 5:'WISE 0855-0714'}\n\nStars[3] # Specify the key instead of index number",
"In the above example the keys are made of integers whereas the data are all made of strings. It can also be the opposite:",
"# Distance of nearby stars to the earth\n\nStars_Dist = {'Sun':0, 'Alpha Centauri':4.24, \"Barnard's Star\":6.00,\\\n 'Luhman 16':6.60, 'WISE 0855-0714':7.0}\n\nprint 'Alpha Centauri is %.2f light years from earth.' % \\\n(Stars_Dist['Alpha Centauri'])",
"This information can be made more structured by using a $list$ as the data in a $dictionary$.",
"# A more structured dictionaries data\n\nStars_List = {1:['Sun',0], 2:['Alpha Centauri',4.24],\\\n 3:[\"Barnard's Star\",6.00], 4:['Luhman 16',6.60],\\\n 5:['WISE 0855-0714',7.0]}\n\nprint '%s is the fourth closest star at about %.2f light \\\n\\nyears from earth.' % (Stars_List[4][0],Stars_List[4][1])",
"Below is an example of a $dictionary$ that contains $dictionary$-type data and the ways to access them.",
"# Declaring dictionaries data for the dictionary 'Author'\n\nCoetzee = {1974:'Dusklands',\n 1977:'In The Heart Of The Country',\n 1980:'Waiting For The Barbarians',\n 1983:'Life & Times Of Michael K'}\n\nMcCarthy = {1992:'All the Pretty Horses',\n 1994:'The Crossing',\n 1998:'Cities of the Plain',\n 2005:'No Country for Old Men',\n 2006:'The Road'}\n\nSteinbeck = {1937:'Of Mice And Men',\n 1939:'The Grapes Of Wrath',\n 1945:'Cannery Row',\n 1952:'East Of Eden',\n 1961:'The Winter Of Our Discontent'}\n\nLewis = {'Narnia Series':{1950:'The Lion, the Witch and the Wardrobe',\n 1951:'Prince Caspian: The Return to Narnia',\n 1952:'The Voyage of the Dawn Treader',\n 1953:'The Silver Chair',\n 1954:'The Horse and His Boy',\n 1955:\"The Magician's Nephew\",\n 1956:'The Last Battle'\n }}\n\n# Assigning keys and data for the dictionary 'Author'\n# one of it is a dictionary list\n\nAuthor = {'South Africa':Coetzee,'USA':[McCarthy,Steinbeck],\n 'British':Lewis}\n\nAuthor['South Africa'][1983]\n\nAuthor['USA'][1][1939]\n\nAuthor['British']['Narnia Series'][1953]",
"More on lists and dictionaries can be found at https://docs.python.org/2/tutorial/datastructures.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
| PyDataMadrid2016/Conference-Info | workshops_materials/20160408_0900_Basic_Python_Packages_for_Science/Basic Python Packages for Science.ipynb | mit |
[
"Basic Python Packages for Science\nThe Aeropython’s guide to the Python Galaxy!\n\nSiro Moreno Martín\nAlejandro Sáez Mollejo\n0. Introduction\nPython in the Scientific environment\nPrincipal Python Packages for scientific purposes\nAnaconda & conda\n\nhttp://conda.pydata.org/docs/intro.html\nConda is a package manager application that quickly installs, runs, and updates packages and their dependencies. The conda command is the primary interface for managing installations of various packages. It can query and search the package index and current installation, create new environments, and install and update packages into existing conda environments.",
"from IPython.display import HTML\nHTML('<iframe src=\"http://conda.pydata.org/docs/_downloads/conda-cheatsheet.pdf\" width=\"700\" height=\"400\"></iframe>')",
"Main objectives of this workshop\n\nProvide you with a first insight into the principal Python tools & libraries used in Science:\nconda.\nJupyter Notebook.\nNumPy, matplotlib, SciPy\n\n\nProvide you with the basic skills to face basic tasks such as:\n\n\n\nShow other common libraries:\nPandas, scikit-learn (some talks & workshops will focus on these packages)\nSymPy\nNumba\n\n\n\n1. Jupyter Notebook\n\nThe Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.\nIt has been widely recognised as a great way to distribute scientific papers, because of the capability to have an integrated format with text and executable code, highly reproducible. Leading research teams around the world are already using it, like the team behind the Gravitational Waves discovery (LIGO), whose analysis was translated to an interactive downloadable Jupyter notebook. You can see it here: https://github.com/minrk/ligo-binder/blob/master/GW150914_tutorial.ipynb\n2. Using arrays: NumPy\n\nndarray object\n| index | 0 | 1 | 2 | 3 | ... | n-1 | n |\n| ---------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| value | 2.1 | 3.6 | 7.8 | 1.5 | ... | 5.4 | 6.3 |\n\nN-dimensional data structure.\nHomogeneously typed.\nEfficient!\n\nA universal function (or ufunc for short) is a function that operates on ndarrays. It is a \"vectorized function\".",
"import numpy as np\n\nmy_list = list(range(0,100000))\n%timeit sum(my_list)\n\narray = np.arange(0, 100000)\n%timeit np.sum(array)",
"Array creation",
"one_dim_array = np.array([1, 2, 3, 4])\none_dim_array\n\ntwo_dim_array = np.array([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\ntwo_dim_array\n\ntwo_dim_array.size\n\ntwo_dim_array.shape\n\ntwo_dim_array.dtype\n\nzeros_arr = np.zeros([3, 3])\nones_arr = np.ones([10])\neye_arr = np.eye(5)\n\nrange_arr = np.arange(15)\nrange_arr\n\nrange_arr.reshape([3, 5])\n\nnp.linspace(0, 10, 21)",
"Basic slicing",
"one_dim_array[0]\n\ntwo_dim_array[-1, -1]",
"[start:stop:step]",
"my_arr = np.arange(100)\nmy_arr[0::2]\n\nchess_board = np.zeros([8, 8], dtype=int)\n\nchess_board[0::2, 1::2] = 1\nchess_board[1::2, 0::2] = 1\n\nchess_board",
"2. Drawing: Matplotlib",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.matshow(chess_board, cmap=plt.cm.gray)",
"Operations & linalg",
"# numpy functions\nx = np.linspace(1, 10)\ny = np.sin(x)\n\nplt.plot(x, y)\n\ny_2 = (1 + np.log(x)) ** 2\n\n# Our first plot\nplt.plot(x, y_2, 'r-*')\n\n# Creating a 2d array\ntwo_dim_array = np.array([[10, 25, 33],\n [40, 25, 16],\n [77, 68, 91]])\n\ntwo_dim_array.T\n\n# matrix multiplication\ntwo_dim_array @ two_dim_array\n\n# matrix vector\none_dim_array = np.array([2.5, 3.6, 3.8])\n\ntwo_dim_array @ one_dim_array\n\n# inv\nnp.linalg.inv(two_dim_array)\n\n# eigenvectors & eigenvalues\nnp.linalg.eig(two_dim_array)",
"Air quality data",
"from IPython.display import HTML\nHTML('<iframe src=\"http://www.mambiente.munimadrid.es/sica/scripts/index.php\" \\\n width=\"700\" height=\"400\"></iframe>')",
"Loading the data",
"# Linux command \n!head ./data/barrio_del_pilar-20160322.csv\n\n# Windows\n# !gc log.txt | select -first 10 # head\n\n# loading the data\n# ./data/barrio_del_pilar-20160322.csv\ndata1 = np.genfromtxt('./data/barrio_del_pilar-20160322.csv', skip_header=3, delimiter=';', usecols=(2,3,4))\ndata1",
"Dealing with missing values",
"np.mean(data1, axis=0)\n\nnp.nanmean(data1, axis=0)\n\n# masking invalid data\ndata1 = np.ma.masked_invalid(data1)\nnp.mean(data1, axis=0)\n\ndata2 = np.genfromtxt('./data/barrio_del_pilar-20151222.csv', skip_header=3, delimiter=';', usecols=(2,3,4))\ndata2 = np.ma.masked_invalid(data2)",
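The CSV files above are not bundled here, but the nan-versus-masked behaviour can be reproduced on a tiny hand-made array (the values below are made up purely for illustration):

```python
import numpy as np

d = np.array([[1.0, np.nan],
              [3.0,  5.0]])

np.mean(d, axis=0)     # nan propagates: array([2., nan])
np.nanmean(d, axis=0)  # nan ignored:    array([2., 5.])

# Masking invalid entries lets plain np.mean skip them too
m = np.ma.masked_invalid(d)
np.mean(m, axis=0)     # masked array: [2.0, 5.0]
```

This is why masking the data once up front is convenient: every subsequent reduction ignores the invalid entries without needing the nan-aware variants.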
"Plotting the data\n Maximum values from: http://www.mambiente.munimadrid.es/opencms/export/sites/default/calaire/Anexos/valores_limite_1.pdf\n\nNO2\nMedia anual: 40 µg/m3\nMedia horaria: 200 µg/m3",
"plt.plot(data1[:, 1], label='2016')\nplt.plot(data2[:, 1], label='2015')\n\nplt.legend()\n\nplt.hlines(200, 0, 200, linestyles='--')\nplt.ylim(0, 220)\n\nfrom IPython.display import HTML\nHTML('<iframe src=\"http://ccaa.elpais.com/ccaa/2015/12/24/madrid/1450960217_181674.html\" width=\"700\" height=\"400\"></iframe>')",
"CO \nMáxima diaria de las medias móviles octohorarias: 10 mg/m³",
"# http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html\ndef moving_average(x, N=8):\n return np.convolve(x, np.ones(N)/N, mode='same')\n\nplt.plot(moving_average(data1[:, 0]), label='2016')\n\nplt.plot(moving_average(data2[:, 0]), label='2015')\n\nplt.hlines(10, 0, 250, linestyles='--')\nplt.ylim(0, 11)\n\nplt.legend()",
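One subtlety worth knowing (an observation about NumPy, not from the original notebook): np.convolve with mode='same' zero-pads the signal, so the first and last few values of the moving average are biased low:

```python
import numpy as np

def moving_average(x, N=8):
    return np.convolve(x, np.ones(N) / N, mode='same')

sig = np.ones(20)
ma = moving_average(sig)

# Interior values are exactly 1, but at the edges part of the
# window overlaps the implicit zero-padding.
ma[10]   # 1.0
ma[0]    # 0.5 (only 4 of the 8 window taps overlap real data)
```

Keep this in mind when reading values near the start and end of the moving-average curves below.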
"O3\nMáxima diaria de las medias móviles octohorarias: 120 µg/m3\nUmbral de información. 180 µg/m3\nMedia horaria. Umbral de alerta. 240 µg/m3",
"plt.plot(moving_average(data1[:, 2]), label='2016')\n#plt.plot(data1[:, 2])\n\nplt.plot(moving_average(data2[:, 2]), label='2015')\n#plt.plot(data2[:, 2])\n\nplt.hlines(180, 0, 250, linestyles='--')\nplt.ylim(0, 190)\n\nplt.legend()",
"4. Scientific functions: SciPy\n\n```\nscipy.linalg: ATLAS LAPACK and BLAS libraries\nscipy.stats: distributions, statistical functions...\nscipy.integrate: integration of functions and ODEs\nscipy.optimize: local and global optimization, fitting, root finding...\nscipy.interpolate: interpolation, splines...\nscipy.fftpack: Fourier transforms\nscipy.signal, scipy.special, scipy.io\n```\nTemperature data\nNow, we will use some temperature data from the Spanish Ministry of Agriculture.",
"HTML('<iframe src=\"http://eportal.magrama.gob.es/websiar/Ficha.aspx?IdProvincia=28&IdEstacion=1\" width=\"700\" height=\"400\"></iframe>')",
"The file contains data from 2004 to 2015 (included). Each row corresponds to a day of the year, so every 365 lines contain data from a whole year.\nNote1: 29th February has been removed for leap-years.\nNote2: Missing values have been replaced with the immediately prior valid data.\nThese kinds of events are better handled with Pandas!",
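Note 2's replacement strategy (carry the last valid reading forward) can be sketched in pure NumPy; in Pandas it would simply be a forward fill. The function name and the readings below are illustrative, not from the dataset:

```python
import numpy as np

def ffill(a):
    """Replace each NaN with the most recent preceding valid value.
    Assumes the first entry is valid."""
    a = np.asarray(a, dtype=float).copy()
    # Index of the last valid entry at or before each position
    idx = np.where(np.isnan(a), 0, np.arange(len(a)))
    np.maximum.accumulate(idx, out=idx)
    return a[idx]

readings = np.array([15.2, np.nan, np.nan, 14.8, np.nan])
ffill(readings)  # array([15.2, 15.2, 15.2, 14.8, 14.8])
```

The trick is that maximum.accumulate turns the index array into "position of the last valid sample so far", which then drives a single fancy-indexing pass.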
"!head data/M01_Center_Finca_temperature_data_2004_2015.csv\n\n# Loading the data\ntemp_data = np.genfromtxt('data/M01_Center_Finca_temperature_data_2004_2015.csv',\n skip_header=1,\n delimiter=';')\n\n# Importing SciPy stats\nimport scipy.stats as st\n\n# Applying some functions: describe, mode, mean...\nst.describe(temp_data)\n\nst.mode(temp_data)\n\nnp.mean(temp_data, axis=0)\n\nnp.median(temp_data, axis=0)",
"We can also get information about percentiles!",
"st.scoreatpercentile(temp_data, per=25, axis=0)\n\nst.percentileofscore(temp_data[:,0], score=0)\n\nst.percentileofscore(temp_data[:,1], score=0)\n\nst.percentileofscore(temp_data[:,2], score=0)",
"Rearranging the data",
"temp_data2 = np.zeros([365, 3, 12])\n\nfor year in range(12):\n temp_data2[:, :, year] = temp_data[year*365:(year+1)*365, :]\n\n# Calculating mean of mean temp\nmean_mean = np.mean(temp_data2[:, 0, :], axis=1)\n# max of max\nmax_max = np.max(temp_data2[:, 1, :], axis=1)\n# min of min\nmin_min = np.min(temp_data2[:, 2, :], axis=1)",
"Let's visualize data!\nUsing matplotlib styles http://matplotlib.org/users/whats_new.html#styles",
"%matplotlib inline\nplt.style.available\n\nplt.style.use('ggplot')\n\ndays = np.arange(1, 366)\n\nplt.fill_between(days, max_max, min_min, alpha=0.7)\nplt.plot(days, mean_mean)\nplt.xlim(1, 365)",
"Let's see if 2015 was a normal year...",
"plt.plot(days, temp_data2[:,0,-1], lw=2)\nplt.plot(days, mean_mean)\nplt.xlim(1, 365)\n\nplt.fill_between(days, max_max, min_min, alpha=0.7)\nplt.fill_between(days, temp_data2[:,1,-1], temp_data2[:,2,-1],\n color='purple', alpha=0.5)\nplt.plot(days, temp_data2[:,0,-1], lw=2)\nplt.xlim(1, 365)",
"For example, let's represent a function over a 2D domain!\nFor this we will use the contour function, which requires some special inputs...",
"#we will use numpy functions in order to work with numpy arrays\ndef funcion(x,y):\n return np.cos(x) + np.sin(y)\n\n# 0D: works!\nfuncion(3,5)\n\n# 1D: works!\nx = np.linspace(0,5, 100)\nplt.plot(x, funcion(x,1))",
"In order to plot the 2D function, we will need a grid.\nFor a 1D domain, we just needed one 1D array containing the X positions and another 1D array containing the values.\nNow, we will create a grid, a distribution of points covering a surface. For the 2D domain, we will need:\n- One 2D array containing the X coordinate of the points.\n- One 2D array containing the Y coordinate of the points.\n- One 2D array containing the function value at the points.\nThe three matrices must have exactly the same dimensions, because each cell of them represents a particular point.",
"#We can create the X and Y matrices by hand, or use a function designed to make it easy:\n\n#we create two 1D arrays of the desired lengths:\nx_1d = np.linspace(0, 5, 5)\ny_1d = np.linspace(-2, 4, 7)\n#And we use the meshgrid function to create the X and Y matrices!\nX, Y = np.meshgrid(x_1d, y_1d)\n\nX\n\nY",
"Note that with the meshgrid function we can only create rectangular grids",
"#Using Numpy arrays, calculating the function value at the points is easy!\nZ = funcion(X,Y)\n\n#Let's plot it!\nplt.contour(X, Y, Z)\nplt.colorbar()",
"We can try a little more resolution...",
"x_1d = np.linspace(0, 5, 100)\ny_1d = np.linspace(-2, 4, 100)\nX, Y = np.meshgrid(x_1d, y_1d)\nZ = funcion(X,Y)\nplt.contour(X, Y, Z)\nplt.colorbar()",
"The contourf function is similar, but it also fills the regions between the lines. In both functions, we can manually adjust the number of lines/zones we want to differentiate on the plot.",
"plt.contourf(X, Y, Z, np.linspace(-2, 2, 6),cmap=plt.cm.Spectral) #With cmap, a color map is specified\nplt.colorbar()\n\nplt.contourf(X, Y, Z, np.linspace(-2, 2, 100),cmap=plt.cm.Spectral)\nplt.colorbar()\n\n#We can even combine them!\nplt.contourf(X, Y, Z, np.linspace(-2, 2, 100),cmap=plt.cm.Spectral)\nplt.colorbar()\ncs = plt.contour(X, Y, Z, np.linspace(-2, 2, 9), colors='k')\nplt.clabel(cs)\n",
"These functions can be enormously useful when you want to visualize something.\nAnd remember!\nAlways visualize data!\nLet's try it with Real data!",
"time_vector = np.loadtxt('data/ligo_tiempos.txt')\nfrequency_vector = np.loadtxt('data/ligo_frecuencias.txt')\nintensity_matrix = np.loadtxt('data/ligo_datos.txt')",
"The time and frequency vectors contain the values at which the instrument was reading, and the intensity matrix, the postprocessed strength measured for each frequency at each time.\nWe need again to create the 2D arrays of coordinates.",
"time_2D, freq_2D = np.meshgrid(time_vector, frequency_vector)\n\nplt.figure(figsize=(10,6)) #We can manually adjust the size of the picture\nplt.contourf(time_2D, freq_2D,intensity_matrix,np.linspace(0, 0.02313, 200),cmap='bone')\nplt.xlabel('time (s)')\nplt.ylabel('Frequency (Hz)')\nplt.colorbar()",
"Wow! What is that? Let's zoom into it!",
"\nplt.figure(figsize=(10,6))\nplt.contourf(time_2D, freq_2D,intensity_matrix,np.linspace(0, 0.02313, 200),cmap = plt.cm.Spectral)\nplt.colorbar()\nplt.contour(time_2D, freq_2D,intensity_matrix,np.linspace(0, 0.02313, 9), colors='k')\nplt.xlabel('time (s)')\nplt.ylabel('Frequency (Hz)')\n\nplt.axis([9.9, 10.05, 0, 300])",
"IPython Widgets\nThe IPython Widgets are interactive tools to use in the notebook. They are fun and very useful to quickly understand how different parameters affect a certain function.\nThis is based on a section of the PyConEs 14 talk by Kiko Correoso \"Hacking the notebook\": http://nbviewer.jupyter.org/github/kikocorreoso/PyConES14_talk-Hacking_the_Notebook/blob/master/notebooks/Using%20Interact.ipynb",
"from ipywidgets import interact\n\n#Let's define an extremely simple function:\ndef ejemplo(x):\n print(x)\n\ninteract(ejemplo, x =10) #Try changing the value of x to True, 'Hello' or ['hello', 'world']\n\n#We can control the slider values with more precision:\ninteract(ejemplo, x = (9,10,0.1))",
"If you want a dropdown menu that passes non-string values to the Python function, you can pass a dictionary. The keys in the dictionary are used for the names in the dropdown menu UI and the values are the arguments that are passed to the underlying Python function.",
"interact(ejemplo, x={'one': 10, 'two': 20})",
"Let's have some fun! We talked before about frequencys and waves. Have you ever learn about AM and FM modulation? It's the process used to send radio communications!",
"x = np.linspace(-1, 7, 1000)\n\nfig = plt.figure()\n\nplt.subplot(211)#This allows us to display multiple sub-plots, and where to put them\nplt.plot(x, np.sin(x))\nplt.grid(False)\nplt.title(\"Audio signal: modulator\")\n\nplt.subplot(212)\nplt.plot(x, np.sin(50 * x))\nplt.grid(False)\nplt.title(\"Radio signal: carrier\")\n\n#Am modulation simply works like this:\nam_wave = np.sin(50 * x) * (0.5 + 0.5 * np.sin(x))\nplt.plot(x, am_wave)",
"In order to interact with it, we will need to transform it into a function",
"def am_mod (f_carr=50, f_mod=1, depth=0.5): #The default values will be the starting points of the sliders\n x = np.linspace(-1, 7, 1000)\n am_wave = np.sin(f_carr * x) * (1- depth/2 + depth/2 * np.sin(f_mod * x))\n \n plt.plot(x, am_wave)\n \n\ninteract(am_mod,\n f_carr = (1,100,2),\n f_mod = (0.2, 2, 0.1),\n depth = (0, 1, 0.1))",
"Other options...\n5. Other packages\nSymbolic calculations with SymPy\n\nSymPy is a Python package for symbolic math. We will not cover it in depth, but let's take a picure of the basics!",
"# Importación\nfrom sympy import init_session\n\ninit_session(use_latex='matplotlib') #We must start calling this function",
"The basic unit of this package is the symbol. A simbol object has name and graphic representation, which can be different:",
"coef_traccion = symbols('c_T')\ncoef_traccion\n\nw = symbols('omega')\nW = symbols('Omega')\nw, W",
"By default, SymPy takes symbols as complex numbers. That can lead to unexpected results in front of certain operations, like logarithms. We can explicitly signal that a symbol is real when we create it. We can also create several symbols at a time.",
"x, y, z, t = symbols('x y z t', real=True)\nx.assumptions0",
"Expressions can be created from symbols:",
"expr = cos(x)**2 + sin(x)**2\nexpr\n\nsimplify(expr)\n\n#We can substitute pieces of the expression:\nexpr.subs(x, y**2)\n\n#We can particularize on a certain value:\n(sin(x) + 3 * x).subs(x, pi)\n\n#We can evaluate the numerical value with a certain precission:\n(sin(x) + 3 * x).subs(x, pi).evalf(25)",
"We can manipulate the expression in several ways. For example:",
"expr1 = (x ** 3 + 3 * y + 2) ** 2\nexpr1\n\nexpr1.expand()",
"We can derivate and integrate:",
"expr = cos(2*x)\nexpr.diff(x, x, x)\n\nexpr_xy = y ** 3 * sin(x) ** 2 + x ** 2 * cos(y)\nexpr_xy\n\ndiff(expr_xy, x, 2, y, 2)\n\nint2 = 1 / sin(x)\nintegrate(int2)\n\nx, a = symbols('x a', real=True)\n\nint3 = 1 / (x**2 + a**2)**2\nintegrate(int3, x)",
"We also have ecuations and differential ecuations:",
"a, x, t, C = symbols('a, x, t, C', real=True)\necuacion = Eq(a * exp(x/t), C)\necuacion\n\nsolve(ecuacion ,x)\n\nx = symbols('x')\nf = Function('y')\necuacion_dif = Eq(f(x).diff(x,2) + f(x).diff(x) + f(x), cos(x))\necuacion_dif\n\ndsolve(ecuacion_dif, f(x))",
"Data Analysis with pandas\n\nPandas is a package that focus on data structures and data analysis tools. We will not cover it because the next workshop, by Kiko Correoso, will develop it in depth.\nMachine Learning with scikit-learn\n\nScikit-learn is a very complete Python package focusing on machin learning, and data mining and analysis. We will not cover it in depth because it will be the focus of many more talks at the PyData.\nA world of possibilities...\n\nThanks for yor attention!\n\nAny Questions?",
"# Notebook style\nfrom IPython.core.display import HTML\ncss_file = './static/style.css'\nHTML(open(css_file, \"r\").read())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/2677ee623a2aeff54fe63131444b1844/plot_channel_epochs_image.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Visualize channel over epochs as an image\nThis will produce what is sometimes called an event related\npotential / field (ERP/ERF) image.\nTwo images are produced, one with a good channel and one with a channel\nthat does not show any evoked field.\nIt is also demonstrated how to reorder the epochs using a 1D spectral\nembedding as described in [1]_.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.4\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\n\n# Create epochs, here for gradiometers + EOG only for simplicity\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=('grad', 'eog'), baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))",
"Show event-related fields images",
"# and order with spectral reordering\n# If you don't have scikit-learn installed set order_func to None\nfrom sklearn.cluster.spectral import spectral_embedding # noqa\nfrom sklearn.metrics.pairwise import rbf_kernel # noqa\n\n\ndef order_func(times, data):\n this_data = data[:, (times > 0.0) & (times < 0.350)]\n this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]\n return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),\n n_components=1, random_state=0).ravel())\n\n\ngood_pick = 97 # channel with a clear evoked response\nbad_pick = 98 # channel with no evoked response\n\n# We'll also plot a sample time onset for each trial\nplt_times = np.linspace(0, .2, len(epochs))\n\nplt.close('all')\nmne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,\n order=order_func, vmin=-250, vmax=250,\n overlay_times=plt_times, show=True)",
"References\n.. [1] Graph-based variability estimation in single-trial event-related\n neural responses. A. Gramfort, R. Keriven, M. Clerc, 2010,\n Biomedical Engineering, IEEE Trans. on, vol. 57 (5), 1051-1061\n https://ieeexplore.ieee.org/document/5406156"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yandex-load/volta
|
firmware/arduino_due_1MHz/analyze_current.ipynb
|
mpl-2.0
|
[
"print(\"Hello\")\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n%matplotlib inline",
"Читаем данные из порта USB в файл:\ncat /dev/cu.usbmodem1421 > output3.bin\nОни будут в бинарном формате, прочитаем их в DataFrame и сконвертируем в миллиамперы:",
"df = pd.DataFrame(np.fromfile(\"./output.bni\", dtype=np.uint16).astype(np.float32) * (3300 / 2**12))\n#df.describe()",
"Данных много, миллион сэмплов в секунду. Мы насобирали почти 70 миллионов сэмплов. Если строить их все сразу, питон ОЧЕНЬ задумается. Поэтому будем строить кусочки. 100 сэмплов, или 100 микросекунд:",
"fig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf[20000:20100].plot(ax=ax)",
"Возьмем более мелкий масштаб, для этого сгруппируем данные по 10 мкс и возьмем среднее:",
"df_r = df.groupby(df.index//10).mean()\n\nfig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf_r[:30000].plot(ax=ax)",
"Посмотрим на таймстемпы в logcat. У нас три события из Браузера, остальное -- включение/выключение фонарика.\n05:05:51.540\n05:05:52.010\n05:05:52.502\n05:05:52.857\n05:05:53.317\n05:05:53.660\n05:05:54.118\n05:05:54.504\n05:05:54.966\n05:05:55.270\n05:06:01.916 14509 14509 I cr_Ya:DownloadTracking: PageLoadStarted, ElapsedRealtimeMillis: 1241509\n05:06:03.453 14509 14509 I cr_Ya:DownloadTracking: DownloadStarted, ElapsedRealtimeMillis: 1243046\n05:06:09.147 14509 14509 I cr_Ya:DownloadTracking: DownloadFinished, ElapsedRealtimeMillis: 1248740\n05:06:13.336\n05:06:13.691\n05:06:14.051\n05:06:14.377\n05:06:14.783\n05:06:15.089\n05:10:32.190\n05:10:34.015\n05:10:37.349\n05:10:37.491\nЕще раз сделаем ресемплинг, чтобы в одной точке была одна миллисекунда и построим все данные:",
"df_r1000 = df.groupby(df.index//1000).mean()\nfig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf_r1000.plot(ax=ax)",
"Интересные всплески потребления начинаются где-то с 40000-ной миллисекунды (их пять подряд, мы моргали лампочкой пять раз).",
"fig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf_r1000[40000:41000].plot(ax=ax)",
"Предполагаем, что первый всплеск был в 40200-ю миллисекунду. Теперь посчитаем относительные времена:",
"times = [\n51540,\n52010,\n52502,\n52857,\n53317,\n53660,\n54118,\n54504,\n54966,\n55270,\n60000 + 1916, # PageLoadStarted\n60000 + 3453, # DownloadStarted\n60000 + 9147, # DownloadFinished\n60000 + 13336,\n60000 + 13691,\n60000 + 14051,\n60000 + 14377,\n60000 + 14783,\n60000 + 15089,\n60000 + 32190,\n60000 + 34015,\n60000 + 37349,\n60000 + 37491,\n]",
"И построим их на нашем графике:",
"sync = 40200\nfig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\nsync = 40205\ndf_r1000[40000:43000].plot(ax=ax)\n\nfor t in times:\n sns.plt.axvline(sync + t - times[0])\n",
"У второй вспышки более резкий фронт, поэтому попробуем синхронизироваться более точно по нему (и используем микросекундные данные):",
"fig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf[41100000:41250000].plot(ax=ax)\nsns.plt.axvline(40200000 + 470000 + 498000 + 5000)",
"То же для первой вспышки, видно, что фронт у нее размытый:",
"fig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf[40100000:40250000].plot(ax=ax)\nsns.plt.axvline(40200000 + 5000)",
"Теперь построим данные за весь тесткейс, учитывая изменение синхронизации:",
"fig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\nsync = 40205\ndf_r1000[40000:65000].plot(ax=ax)\n\nfor t in times:\n sns.plt.axvline(sync + t - times[0])",
"И увеличим до периода загрузки файла:",
"fig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\nsync = 40205\ndf_r1000[52000:58000].plot(ax=ax)\n\nfor t in times:\n sns.plt.axvline(sync + t - times[0])",
"Фиксим неприятный баг\nМожно заметить необычные пики на графике, которые, как будто, предсказывают значение основного тренда:",
"df[1010000:1025000].plot()",
"Расстояние между пиками -- 256 сэмплов:",
"for i in range(5):\n df[10230+256*i:10250+256*(i+1)].plot()",
"В самом начале -- пустые сэмплы с пиками, по числу буферов. На столько же \"предсказывается\" значение:",
"df[:2048].plot()",
"Оказалось, в исходнике баг, описание тут: https://forum.arduino.cc/index.php?topic=137635.msg2965504#msg2965504\nФиксим, пробуем -- все ок!",
"df4 = pd.DataFrame(np.fromfile(\"./output3.bin\", dtype=np.uint16).astype(np.float32) * (3300 / 2**12))\n\nfig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf4[:16384].plot(ax=ax)",
"Одна миллисекунда, как мы ее видим:",
"fig = sns.plt.figure(figsize=(16, 6))\nax = sns.plt.subplot()\ndf4[6000000:6001000].plot(ax=ax)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/tf-estimator-tutorials
|
05_Autoencoding/03.0 - Dimensionality Reduction - Autoencoding + Normalizer + XEntropy Loss.ipynb
|
apache-2.0
|
[
"import pandas as pd\nimport numpy as np\nimport shutil\nimport multiprocessing\nfrom datetime import datetime\n\nimport tensorflow as tf\nfrom tensorflow.python.feature_column import feature_column\nfrom tensorflow.contrib.learn import learn_runner\nfrom tensorflow.contrib.learn import make_export_strategy\nfrom tensorflow import data\n\nprint(tf.__version__)",
"TF Custom Estimator to Build a NN Autoencoder for Feature Extraction",
"MODEL_NAME = 'auto-encoder-01'\n\nTRAIN_DATA_FILES_PATTERN = 'data/data-*.csv'\n\nRESUME_TRAINING = False\n\nMULTI_THREADING = True",
"1. Define Dataset Metadata",
"FEATURE_COUNT = 64\n\nHEADER = ['key']\nHEADER_DEFAULTS = [[0]]\nUNUSED_FEATURE_NAMES = ['key']\nCLASS_FEATURE_NAME = 'CLASS'\nFEATURE_NAMES = [] \n\nfor i in range(FEATURE_COUNT):\n HEADER += ['x_{}'.format(str(i+1))]\n FEATURE_NAMES += ['x_{}'.format(str(i+1))]\n HEADER_DEFAULTS += [[0.0]]\n\nHEADER += [CLASS_FEATURE_NAME]\nHEADER_DEFAULTS += [['NA']]\n\nprint(\"Header: {}\".format(HEADER))\nprint(\"Features: {}\".format(FEATURE_NAMES))\nprint(\"Class Feature: {}\".format(CLASS_FEATURE_NAME))\nprint(\"Unused Features: {}\".format(UNUSED_FEATURE_NAMES))",
"2. Define CSV Data Input Function",
"def parse_csv_row(csv_row):\n \n columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)\n features = dict(zip(HEADER, columns))\n \n for column in UNUSED_FEATURE_NAMES:\n features.pop(column)\n\n target = features.pop(CLASS_FEATURE_NAME)\n\n return features, target\n\ndef csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL, \n skip_header_lines=0, \n num_epochs=None, \n batch_size=200):\n \n shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False\n \n print(\"\")\n print(\"* data input_fn:\")\n print(\"================\")\n print(\"Input file(s): {}\".format(files_name_pattern))\n print(\"Batch size: {}\".format(batch_size))\n print(\"Epoch Count: {}\".format(num_epochs))\n print(\"Mode: {}\".format(mode))\n print(\"Shuffle: {}\".format(shuffle))\n print(\"================\")\n print(\"\")\n \n file_names = tf.matching_files(files_name_pattern)\n\n dataset = data.TextLineDataset(filenames=file_names)\n dataset = dataset.skip(skip_header_lines)\n \n if shuffle:\n dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)\n \n num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1\n \n dataset = dataset.batch(batch_size)\n dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row), num_parallel_calls=num_threads)\n \n dataset = dataset.repeat(num_epochs)\n iterator = dataset.make_one_shot_iterator()\n \n features, target = iterator.get_next()\n\n return features, target\n\nfeatures, target = csv_input_fn(files_name_pattern=\"\")\nprint(\"Feature read from CSV: {}\".format(list(features.keys())))\nprint(\"Target read from CSV: {}\".format(target))",
"3. Define Feature Columns\na. Load normalizarion params",
"df_params = pd.read_csv(\"data/params.csv\", header=0, index_col=0)\nlen(df_params)\ndf_params['feature_name'] = FEATURE_NAMES\ndf_params.head()",
"b. Create normalized feature columns",
"def standard_scaler(x, mean, stdv):\n return (x-mean)/stdv\n\ndef maxmin_scaler(x, max_value, min_value):\n return (x-min_value)/(max_value-min_value)\n\ndef get_feature_columns():\n \n feature_columns = {}\n \n\n# feature_columns = {feature_name: tf.feature_column.numeric_column(feature_name)\n# for feature_name in FEATURE_NAMES}\n\n for feature_name in FEATURE_NAMES:\n\n feature_max = df_params[df_params.feature_name == feature_name]['max'].values[0]\n feature_min = df_params[df_params.feature_name == feature_name]['min'].values[0]\n normalizer_fn = lambda x: maxmin_scaler(x, feature_max, feature_min)\n \n feature_columns[feature_name] = tf.feature_column.numeric_column(feature_name, \n normalizer_fn=normalizer_fn\n )\n \n\n return feature_columns\n\nprint(get_feature_columns())",
"4. Define Autoencoder Model Function",
"def autoencoder_model_fn(features, labels, mode, params):\n \n feature_columns = list(get_feature_columns().values())\n \n input_layer_size = len(feature_columns)\n \n encoder_hidden_units = params.encoder_hidden_units\n \n # decoder units are the reverse of the encoder units, without the middle layer (redundant)\n decoder_hidden_units = encoder_hidden_units.copy() \n decoder_hidden_units.reverse()\n decoder_hidden_units.pop(0)\n \n output_layer_size = len(FEATURE_NAMES)\n \n he_initialiser = tf.contrib.layers.variance_scaling_initializer()\n l2_regulariser = tf.contrib.layers.l2_regularizer(scale=params.l2_reg)\n \n \n print(\"[{}]->{}-{}->[{}]\".format(len(feature_columns)\n ,encoder_hidden_units\n ,decoder_hidden_units,\n output_layer_size))\n\n is_training = (mode == tf.estimator.ModeKeys.TRAIN)\n \n # input layer\n input_layer = tf.feature_column.input_layer(features=features, \n feature_columns=feature_columns)\n \n # Adding Gaussian Noise to input layer\n noisy_input_layer = input_layer + (params.noise_level * tf.random_normal(tf.shape(input_layer)))\n \n # Dropout layer\n dropout_layer = tf.layers.dropout(inputs=noisy_input_layer, \n rate=params.dropout_rate, \n training=is_training)\n\n# # Dropout layer without Gaussian Nosing\n# dropout_layer = tf.layers.dropout(inputs=input_layer, \n# rate=params.dropout_rate, \n# training=is_training)\n\n # Encoder layers stack\n encoding_hidden_layers = tf.contrib.layers.stack(inputs= dropout_layer,\n layer= tf.contrib.layers.fully_connected,\n stack_args=encoder_hidden_units,\n #weights_initializer = he_init,\n weights_regularizer =l2_regulariser,\n activation_fn = tf.nn.relu\n )\n # Decoder layers stack\n decoding_hidden_layers = tf.contrib.layers.stack(inputs=encoding_hidden_layers,\n layer=tf.contrib.layers.fully_connected, \n stack_args=decoder_hidden_units,\n #weights_initializer = he_init,\n weights_regularizer =l2_regulariser,\n activation_fn = tf.nn.relu\n )\n # Output (reconstructed) layer\n output_layer = 
tf.layers.dense(inputs=decoding_hidden_layers, \n units=output_layer_size, activation=None)\n \n # Encoding output (i.e., extracted features) reshaped\n encoding_output = tf.squeeze(encoding_hidden_layers)\n \n # Reconstruction output reshaped (for serving function)\n reconstruction_output = tf.squeeze(tf.nn.sigmoid(output_layer))\n \n # Provide an estimator spec for `ModeKeys.PREDICT`.\n if mode == tf.estimator.ModeKeys.PREDICT:\n \n # Convert predicted_indices back into strings\n predictions = {\n 'encoding': encoding_output,\n 'reconstruction': reconstruction_output\n }\n export_outputs = {\n 'predict': tf.estimator.export.PredictOutput(predictions)\n }\n \n # Provide an estimator spec for `ModeKeys.PREDICT` modes.\n return tf.estimator.EstimatorSpec(mode,\n predictions=predictions,\n export_outputs=export_outputs)\n \n # Define loss based on reconstruction and regularization\n \n# reconstruction_loss = tf.losses.mean_squared_error(tf.squeeze(input_layer), reconstruction_output) \n# loss = reconstruction_loss + tf.losses.get_regularization_loss()\n \n reconstruction_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.squeeze(input_layer), logits=tf.squeeze(output_layer))\n loss = reconstruction_loss + tf.losses.get_regularization_loss()\n \n # Create Optimiser\n optimizer = tf.train.AdamOptimizer(params.learning_rate)\n\n # Create training operation\n train_op = optimizer.minimize(\n loss=loss, global_step=tf.train.get_global_step())\n\n # Calculate root mean squared error as additional eval metric\n eval_metric_ops = {\n \"rmse\": tf.metrics.root_mean_squared_error(\n tf.squeeze(input_layer), reconstruction_output)\n }\n \n # Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.\n estimator_spec = tf.estimator.EstimatorSpec(mode=mode,\n loss=loss,\n train_op=train_op,\n eval_metric_ops=eval_metric_ops)\n return estimator_spec\n\n\ndef create_estimator(run_config, hparams):\n estimator = 
tf.estimator.Estimator(model_fn=autoencoder_model_fn, \n params=hparams, \n config=run_config)\n \n print(\"\")\n print(\"Estimator Type: {}\".format(type(estimator)))\n print(\"\")\n\n return estimator",
"5. Run Experiment using Estimator Train_And_Evaluate\na. Set the parameters",
"TRAIN_SIZE = 2000\nNUM_EPOCHS = 1000\nBATCH_SIZE = 100\nNUM_EVAL = 10\n\nTOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS\nCHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))\n\nhparams = tf.contrib.training.HParams(\n num_epochs = NUM_EPOCHS,\n batch_size = BATCH_SIZE,\n encoder_hidden_units=[30,3],\n learning_rate = 0.01,\n l2_reg = 0.0001,\n noise_level = 0.0,\n max_steps = TOTAL_STEPS,\n dropout_rate = 0.05\n)\n\nmodel_dir = 'trained_models/{}'.format(MODEL_NAME)\n\nrun_config = tf.contrib.learn.RunConfig(\n save_checkpoints_steps=CHECKPOINT_STEPS,\n tf_random_seed=19830610,\n model_dir=model_dir\n)\n\nprint(hparams)\nprint(\"Model Directory:\", run_config.model_dir)\nprint(\"\")\nprint(\"Dataset Size:\", TRAIN_SIZE)\nprint(\"Batch Size:\", BATCH_SIZE)\nprint(\"Steps per Epoch:\",TRAIN_SIZE/BATCH_SIZE)\nprint(\"Total Steps:\", TOTAL_STEPS)\nprint(\"Required Evaluation Steps:\", NUM_EVAL) \nprint(\"That is 1 evaluation step after each\",NUM_EPOCHS/NUM_EVAL,\" epochs\")\nprint(\"Save Checkpoint After\",CHECKPOINT_STEPS,\"steps\")",
"b. Define TrainSpec and EvaluSpec",
"train_spec = tf.estimator.TrainSpec(\n input_fn = lambda: csv_input_fn(\n TRAIN_DATA_FILES_PATTERN,\n mode = tf.contrib.learn.ModeKeys.TRAIN,\n num_epochs=hparams.num_epochs,\n batch_size=hparams.batch_size\n ),\n max_steps=hparams.max_steps,\n hooks=None\n)\n\neval_spec = tf.estimator.EvalSpec(\n input_fn = lambda: csv_input_fn(\n TRAIN_DATA_FILES_PATTERN,\n mode=tf.contrib.learn.ModeKeys.EVAL,\n num_epochs=1,\n batch_size=hparams.batch_size\n ),\n# exporters=[tf.estimator.LatestExporter(\n# name=\"encode\", # the name of the folder in which the model will be exported to under export\n# serving_input_receiver_fn=csv_serving_input_fn,\n# exports_to_keep=1,\n# as_text=True)],\n steps=None,\n hooks=None\n)",
"d. Run Experiment via train_and_evaluate",
"if not RESUME_TRAINING:\n print(\"Removing previous artifacts...\")\n shutil.rmtree(model_dir, ignore_errors=True)\nelse:\n print(\"Resuming training...\") \n\n \ntf.logging.set_verbosity(tf.logging.INFO)\n\ntime_start = datetime.utcnow() \nprint(\"Experiment started at {}\".format(time_start.strftime(\"%H:%M:%S\")))\nprint(\".......................................\") \n\nestimator = create_estimator(run_config, hparams)\n\ntf.estimator.train_and_evaluate(\n estimator=estimator,\n train_spec=train_spec, \n eval_spec=eval_spec\n)\n\ntime_end = datetime.utcnow() \nprint(\".......................................\")\nprint(\"Experiment finished at {}\".format(time_end.strftime(\"%H:%M:%S\")))\nprint(\"\")\ntime_elapsed = time_end - time_start\nprint(\"Experiment elapsed time: {} seconds\".format(time_elapsed.total_seconds()))\n ",
"6. Use the trained model to encode data (prediction)",
"import itertools\n\nDATA_SIZE = 2000\n\ninput_fn = lambda: csv_input_fn(\n TRAIN_DATA_FILES_PATTERN,\n mode=tf.contrib.learn.ModeKeys.INFER,\n num_epochs=1,\n batch_size=500\n)\n\nestimator = create_estimator(run_config, hparams)\n\npredictions = estimator.predict(input_fn=input_fn)\npredictions = itertools.islice(predictions, DATA_SIZE)\npredictions = list(map(lambda item: list(item[\"encoding\"]), predictions))\n\nprint(predictions[:5])",
"Visualise Encoded Data",
"y = pd.read_csv(\"data/data-01.csv\", header=None, index_col=0)[65]\n\ndata_reduced = pd.DataFrame(predictions, columns=['c1','c2','c3'])\ndata_reduced['class'] = y\ndata_reduced.head()\n\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(15,10))\nax = fig.add_subplot(111, projection='3d')\nax.scatter(xs=data_reduced.c2/1000000, ys=data_reduced.c3/1000000, zs=data_reduced.c1/1000000, c=data_reduced['class'], marker='o')\nplt.show()",
"Notes:\n\n\nYou can effectively implement a (linear) PCA by having only one hidden layer with no activation function\n\n\nTo improve the efficiency of training the model, the weights of the encoder and decoder layers can be tied (i.e., have the same values)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
greenelab/GCB535
|
28_Prelab_ML-I/ML1-prelab.ipynb
|
bsd-3-clause
|
[
"Welcome to Machine Learning!\nThis is the section of the class where we learn how to make a computer look at our data and identify aspects of the data that we didn't know to look for. The first section of this module begins with videos that give a brief background and introduction. In the following units, we'll start putting this vocabulary to use!\nVideo 1: Introduction\nYou can find Casey's introduction to machine learning for GCB 535 here: https://youtu.be/Cj_giNsKZYc\nVideo 2: Types of Machine Learning Methods\nYou can find Casey's discussion of different classes of machine learning methods here: https://youtu.be/4n2m3bLY2ps\nPrelab Questions:\nQ1: What type of question would you address with an unsupervised algorithm?\nQ2: Would you use a supervised or unsupervised algorithm to find genes involved in mitochondrial biogenesis if you have already identified a few genes that play a role in the process?\nQ3: Why?\nQ4: For the situation described in Q2, what are the Features, Examples, Labels, and Predictions?\nVideo 3: Example of Supervised Machine Learning\nYou can find Casey's discussion how you might structure an analysis to use a supervised algorithm to predict the effective therapeutic dose of a drug here: https://youtu.be/9N19ogr9mZc\nQ5: Why are the samples in Video 2 features, while the samples here are examples?\nVideo 4: Example of Unupervised Machine Learning\nYou can find Casey's discussion how you might look for disease subtypes with unsupervised algorithms here: https://youtu.be/y400v_AAJSE\nQ6: What are the Features, Examples and Labels for the question discussed in Video 4?\nk-Means Clustering\nLet's meet our first machine learning algorithm: k-means clustering. K-means has been used to identify subtypes of disease. For example, we discuss this paper by Tothill et al. in our k-means introduction video. 
Before you dive into the nuts and bolts of an implementation of k-means clustering, let's try to get an intuitive understanding of how this method works: https://youtu.be/qL7TBaMtooM\nQ7: Is k-means clustering a supervised or unsupervised algorithm?\nk-Means Demo Code:\nNow we're actually going to use some code that will perform k-means clustering. First we need to get some python packages that we're going to use out of the way.",
"%matplotlib inline\n# this crazy line lets us make figures in an ipython notebook\n\nimport random\nimport sys\nfrom math import sqrt\n\nimport matplotlib.pyplot as plt\nimport numpy as np",
"The next function is used to assign an observation to the centroid that is nearest to it.",
"def assign_nearest(centroids, point):\n \"\"\"\n assigns the point to its nearest centroid\n \n params:\n centroids - a list of centroids, each of which has 2 dimensions\n point - a point, which has two dimensions\n \n returns:\n the index of the centroid the point is closest to.\n \"\"\"\n nearest_idx = 0\n nearest_dist = sys.float_info.max # largest float on your computer\n for i in range(len(centroids)):\n # sqrt((x1-x2)^2 + (y1-y2)^2)\n dist = sqrt((centroids[i][0]-point[0])**2 + (centroids[i][1]-point[1])**2)\n if dist < nearest_dist: # smallest distance thus far\n nearest_idx = i\n nearest_dist = dist\n \n return nearest_idx",
"The next function actually performs k-means clustering. You need to understand how the algorithm works at the level of the video lecture. You don't need to understand every line of this, but you should feel free to dive in if you're interested!",
"def kmeans(data, k):\n \"\"\"\n performs k-means clustering for two-dimensional data.\n \n params:\n data - A numpy array of shape N, 2\n k - The number of clusters.\n \n returns:\n a dictionary with three elements\n - ['centroids']: a list of the final centroid positions.\n - ['members']: a list [one per centroid] of the points assigned to\n that centroid at the conclusion of clustering.\n - ['paths']: a list [one per centroid] of lists [one per iteration]\n containing the points occupied by each centroid.\n \"\"\"\n \n # http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ndarray.shape.html#numpy.ndarray.shape\n # .shape returns the size of the input numpy array in each dimension\n # if there are not 2 dimensions, we can't handle it here.\n if data.shape[1] != 2:\n return 'This implementation only supports two dimensional data.'\n if data.shape[0] < k:\n return 'This implementation requires at least as many points as clusters.'\n \n # pick random points as initial centroids\n centroids = []\n for x in random.sample(data, k):\n # note the use of tuples here\n centroids.append(tuple(x.tolist()))\n \n paths = []\n for i in range(k):\n paths.append([centroids[i],])\n \n # we'll store all previous states\n # so if we ever hit the same point again we know to stop\n previous_states = set()\n \n # continue until we repeat the same centroid positions\n assignments = None\n while not tuple(centroids) in previous_states:\n previous_states.add(tuple(centroids))\n assignments = []\n for point in data:\n assignments.append(assign_nearest(centroids, point))\n \n centroids_sum = [] # Make a list for each centroid to store position sum\n centroids_n = [] # Make a list for each centroid to store counts\n for i in range(k):\n centroids_sum.append((0,0))\n centroids_n.append(0)\n \n for i in range(len(assignments)):\n centroid = assignments[i]\n centroids_n[centroid] += 1 # found a new member of this centroid\n # add the point\n centroids_sum[centroid] = 
(centroids_sum[centroid][0] + data[i][0],\n centroids_sum[centroid][1] + data[i][1])\n \n for i in range(k):\n new_centroid = (centroids_sum[i][0]/centroids_n[i], centroids_sum[i][1]/centroids_n[i])\n centroids[i] = new_centroid\n paths[i].append(new_centroid)\n \n r_dict = {}\n r_dict['centroids'] = centroids\n r_dict['paths'] = paths\n r_dict['members'] = assignments\n return r_dict\n ",
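The assign/update loop can also be checked deterministically on a toy dataset: starting the centroids inside two well-separated pairs of points, one mean update lands on the true cluster centres and further iterations leave them fixed. A self-contained sketch:

```python
points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centroids = [(0.0, 0.0), (10.0, 10.0)]   # deliberately well-placed starts

def nearest(cents, p):
    # index of the closest centroid (squared distance is enough for argmin)
    return min(range(len(cents)),
               key=lambda i: (cents[i][0] - p[0])**2 + (cents[i][1] - p[1])**2)

for _ in range(5):  # assign points, then move each centroid to its members' mean
    groups = [[] for _ in centroids]
    for p in points:
        groups[nearest(centroids, p)].append(p)
    centroids = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                 for g in groups]

print(centroids)  # [(0.0, 0.5), (10.0, 10.5)]
```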
"This next cell is full of plotting code. It uses something called matplotlib\nto show kmeans clustering. Specifically it shows the path centroids took,\nwhere they ended up, and which points were assigned to them. Feel free\nto take a look at this, but understanding it goes beyond the scope of the\nclass.",
"def plot_km(km, points):\n \"\"\"\n Plots the results of a kmeans run.\n \n params:\n km - a kmeans result object that contains centroids, paths, and members\n \n returns:\n a matplotlib figure object\n \"\"\"\n \n (xmin, ymin) = np.amin(points, axis=0)\n (xmax, ymax) = np.amax(points, axis=0)\n \n\n plt.figure(1)\n plt.clf()\n plt.plot(points[:, 0], points[:, 1], 'k.', markersize=2)\n \n for path in km['paths']:\n nppath = np.asarray(path)\n plt.plot(nppath[:, 0], nppath[:, 1])\n\n # Plot the calculated centroids as a red X\n centroids = np.asarray(km['centroids'])\n plt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='r', zorder=10)\n\n plt.title('K-means clustering of simulated data.\\n'\n 'estimated (red), path (lines)')\n plt.xlim(xmin, xmax)\n plt.ylim(ymin, ymax)\n plt.xticks(())\n plt.yticks(())\n plt.yticks(())\n plt.show()",
"The next line will load a file of data using the numpy function loadtxt. We've created a population of points.",
"pop = np.loadtxt('kmeans-population.csv', delimiter=',')",
"Now we can use the k-means function to cluster! In this case, we're saying we want to find three clusters.",
"km_result = kmeans(pop, 3)",
"Now we can plot the results!",
"plot_km(km_result, pop)",
"Woo! You're done with this prelab! Feel free to run the k-means clustering and plotting lines a few times to see how the algorithm works. For our in-class exercise, we're going to perform k-means clustering in an exercise we call The Duck Strikes Back.\nExtra Information\nThe k-means implementation above is functional and could be used in practice. However, a much more optimized implementation is available in the scikit-learn package that we're going to use for the supervised machine learning applications in this course. For more information on that implementation, check out the documentation: http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html"
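For reference, a rough sketch of the scikit-learn interface mentioned above might look like this (the class and attribute names follow the linked documentation; the synthetic two-blob data is made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs of 2-D points
rng = np.random.RandomState(0)
pop = np.vstack([rng.normal(0, 0.5, size=(50, 2)),
                 rng.normal(5, 0.5, size=(50, 2))])

# Fit k-means with k=2; cluster_centers_ and labels_ mirror the
# 'centroids' and 'members' of the hand-written implementation above
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pop)
print(km.cluster_centers_)  # final centroid positions
print(km.labels_[:5])       # cluster assignment for the first five points
```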
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DiXiT-eu/collatex-tutorial
|
unit5/3_collation-outputs.ipynb
|
gpl-3.0
|
[
"Collation outputs\n\nIntroduction\nIn practice\nTable: HTML\nTable: JSON\nTable: XML and XML/TEI\nGraph: SVG\n\n\nExercise\nWhat's next\n\n\nIntroduction\nIn this tutorial we will be trying different outputs for our collation, meaning different graphical representations, formats and visualizations of the result.\nThe visualization of the collation result is an open discussion: several possibilities have been suggested and used and new ones are always being proposed. When the output of the collation is a printed format, such as a book, it is rare to see anything different from the traditional critical apparatus. Now that output formats are more frequently digital (or at least have a digital component), collation tools tend to offer more than one visualization option. This is the case for both Juxta and CollateX. The different visualizations are not incompatible; on the contrary, they can be complementary, highlighting different aspects of the result and suitable for different users or different stages of the workflow.\nIn the previous tutorials we used the alignment table and the graph. The alignment table, in use since the 1960s, is the equivalent of the matrix used in bioinformatics for sequence alignment (for example, strings of DNA). In contrast, the graph is meant to represent the fluidity of the text and its variation. The idea of a graph-oriented model for expressing textual variance was originally developed by Desmond Schmidt (2008). You can refer to this video for a presentation on Apparatus vs. Graph – an Interface as Scholarly Argument by Tara Andrews and Joris van Zundert.\nOther outputs, such as the histogram and the side-by-side visualization offered by Juxta, allow users to visualize the result of the comparison between two witnesses only. This reflects the way the algorithm is built and shows that the graphical representation is connected with the approach to collation that informs the software.\nCollateX has two main ways to conceive of the collation result: as a table (with many different formatting options) and as a graph:\n- table formats\n - plain text table (no need to specify the output)\n - HTML table (output='html')\n - HTML vertical table with colors (output='html2')\n - JSON (output='json')\n - XML (output='xml')\n - XML/TEI (output='tei')\n- graph format\n - SVG (output='svg')\nIn practice\nEven though we have already encountered some of these outputs, it is worth going through them one more time, focusing on the part of the code that needs to change to produce the different formats. \nTable: plain text\nIn this tutorial we will use some simple texts already used in the previous tutorial: the fox and dog example.\nLet's start with the simplest output, for which we don't need to specify any output format (note that you can name the variable containing the output anything you like, but in this tutorial we call it alignment_table, table or graph).\nIn the code cell below, the lines starting with a hash (#) are comments and are not executed. They are there in this instance to help you remember what the different parts of the code do. You do not need to use them in your notebook (although sometimes it is helpful to add comments to your code so you remember what things do).",
"#import the collatex library\nfrom collatex import *\n#create an instance of the collateX engine\ncollation = Collation()\n#add witnesses to the collateX instance\ncollation.add_plain_witness( \"A\", \"The quick brown fox jumped over the lazy dog.\")\ncollation.add_plain_witness( \"B\", \"The brown fox jumped over the dog.\" )\ncollation.add_plain_witness( \"C\", \"The bad fox jumped over the lazy dog.\" )\n#collate the witnesses and store the result in a variable called 'table'\n#as we have not specified an output this will be stored in plain text\ntable = collate(collation)\n#print the collation result\nprint(table)",
"Table: HTML\nNow let's try a different output. This time we still want a table format but instead of it being in plain text we would like it exported in HTML (the markup language used for web pages), and we would like it to be displayed vertically with nice colors to highlight the comparison. To achieve this all you need to do is add the keyword output to the collate command and give it the value html2.",
"table = collate(collation, output='html2')",
"Before moving to the other outputs, try to produce the simple HTML output by changing the code above. The value required in the output keyword should be html.\nTable: JSON\nThe same alignment table can be exported in a variety of formats, as we have seen, including JSON (Javascript Object Notation), a format widely used for storing and interchanging data nowadays. In order to produce JSON as output, we need to specify json as the output format.",
"table = collate(collation, output='json')\nprint(table)",
"Table: XML and XML/TEI\nWe can use the same procedure in order to export the table in XML or XML/TEI (the latter produces a condensed version of the table only listing witnesses at points of divergence - also called a negative apparatus). To do this you just specify a different output format. Let's start with the XML output (that you can later post-process using XSLT or other tools).",
"table = collate(collation, output='xml')\nprint(table)",
"And, finally, you can test the XML/TEI output that produces XML following the TEI parallel segmentation encoding guidelines.",
"table = collate(collation, output='tei')\nprint(table)",
"Graph: SVG\nAnd now for something different: try with the graph, exported in the SVG format",
"graph = collate(collation, output='svg')",
"NOTE: If you are working in an IDE such as PyCharm, you may get an error message when generating the graph. First make sure that you have GraphViz and its bindings installed correctly. If the error message concerns the syntax, however, rest assured that you have done nothing wrong: the error is related to the generated SVG code. We are aware of the problem. If you wish to generate a variant graph of your collation, consider working in Jupyter notebooks for the moment, since they ignore the syntax error.\nExercise\nIn this tutorial we have used the fox and dog example. Now try to produce a JSON or TEI output of the first paragraph of Darwin's On the origin of species, that we have already used in the first tutorial. You can find the data in fixtures/Darwin/txt (only the first paragraph: xxxx_par1).\nAlternatively, or if you still have time, you can use the data in fixtures/Woolf/Lighthouse-1 and produce new outputs.\nWhat's next\nIn the next tutorial, Collate outside the notebook, we will leave the notebook and learn how to create and run Python scripts using PyCharm and the terminal, and how to save the collation results in a new file."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive/05_artandscience/a_handtuning.ipynb
|
apache-2.0
|
[
"Hand tuning hyperparameters\nLearning Objectives:\n * Use the LinearRegressor class in TensorFlow to predict median housing price, at the granularity of city blocks, based on one input feature\n * Evaluate the accuracy of a model's predictions using Root Mean Squared Error (RMSE)\n * Improve the accuracy of a model by hand-tuning its hyperparameters\nThe data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Using only one input feature -- the number of rooms -- predict house value.",
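As a refresher on the second objective, Root Mean Squared Error is simply the square root of the mean squared difference between predictions and targets. A generic NumPy sketch (not code from this lab):

```python
import numpy as np

def rmse(predictions, targets):
    # Root Mean Squared Error: sqrt(mean((pred - target)^2))
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return np.sqrt(np.mean((predictions - targets) ** 2))

print(rmse([2.0, 3.0, 4.0], [2.0, 3.0, 4.0]))  # → 0.0
print(rmse([0.0], [3.0]))                      # → 3.0
```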
"!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst",
"Set Up\nIn this first cell, we'll load the necessary libraries.",
"import math\nimport shutil\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\nprint(tf.__version__)\ntf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:.1f}'.format",
"Next, we'll load our data set.",
"df = pd.read_csv(\"https://storage.googleapis.com/ml_universities/california_housing_train.csv\", sep=\",\")",
"Examine the data\nIt's a good idea to get to know your data a little bit before you work with it.\nWe'll print out a quick summary of a few useful statistics on each column.\nThis will include things like mean, standard deviation, max, min, and various quantiles.",
"df.head()\n\ndf.describe()",
"In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). Can we use total_rooms as our input feature? What's going on with the values for that feature?\nThis data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicting the price of a single house, we should try to make all our features correspond to a single house as well.",
"df['num_rooms'] = df['total_rooms'] / df['households']\ndf.describe()\n\n# Split into train and eval\nnp.random.seed(seed=1) #makes split reproducible\nmsk = np.random.rand(len(df)) < 0.8\ntraindf = df[msk]\nevaldf = df[~msk]",
"Build the first model\nIn this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use num_rooms as our input feature.\nTo train our model, we'll use the LinearRegressor estimator. The Estimator takes care of a lot of the plumbing, and exposes a convenient way to interact with data, training, and evaluation.",
"OUTDIR = './housing_trained'\ndef train_and_evaluate(output_dir, num_train_steps):\n estimator = tf.compat.v1.estimator.LinearRegressor(\n model_dir = output_dir, \n feature_columns = [tf.feature_column.numeric_column('num_rooms')])\n \n #Add rmse evaluation metric\n def rmse(labels, predictions):\n pred_values = tf.cast(predictions['predictions'],tf.float64)\n return {'rmse': tf.compat.v1.metrics.root_mean_squared_error(labels, pred_values)}\n estimator = tf.compat.v1.estimator.add_metrics(estimator,rmse)\n \n train_spec=tf.estimator.TrainSpec(\n input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(x = traindf[[\"num_rooms\"]],\n y = traindf[\"median_house_value\"], # no scaling yet (see next section)\n num_epochs = None,\n shuffle = True),\n max_steps = num_train_steps)\n eval_spec=tf.estimator.EvalSpec(\n input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(x = evaldf[[\"num_rooms\"]],\n y = evaldf[\"median_house_value\"], # no scaling yet (see next section)\n num_epochs = 1,\n shuffle = False),\n steps = None,\n start_delay_secs = 1, # start evaluating after N seconds\n throttle_secs = 10, # evaluate every N seconds\n )\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n \n# Run training \nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntrain_and_evaluate(OUTDIR, num_train_steps = 100)",
"1. Scale the output\nLet's scale the target values so that the default parameters are more appropriate.",
"SCALE = 100000\nOUTDIR = './housing_trained'\ndef train_and_evaluate(output_dir, num_train_steps):\n estimator = tf.compat.v1.estimator.LinearRegressor(\n model_dir = output_dir, \n feature_columns = [tf.feature_column.numeric_column('num_rooms')])\n \n #Add rmse evaluation metric\n def rmse(labels, predictions):\n pred_values = tf.cast(predictions['predictions'],tf.float64)\n return {'rmse': tf.compat.v1.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}\n estimator = tf.compat.v1.estimator.add_metrics(estimator,rmse)\n \n train_spec=tf.estimator.TrainSpec(\n input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(x = traindf[[\"num_rooms\"]],\n y = traindf[\"median_house_value\"] / SCALE, # note the scaling\n num_epochs = None,\n shuffle = True),\n max_steps = num_train_steps)\n eval_spec=tf.estimator.EvalSpec(\n input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(x = evaldf[[\"num_rooms\"]],\n y = evaldf[\"median_house_value\"] / SCALE, # note the scaling\n num_epochs = 1,\n shuffle = False),\n steps = None,\n start_delay_secs = 1, # start evaluating after N seconds\n throttle_secs = 10, # evaluate every N seconds\n )\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n# Run training \nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntrain_and_evaluate(OUTDIR, num_train_steps = 100)",
"2. Change learning rate and batch size\nCan you come up with better parameters?",
"SCALE = 100000\nOUTDIR = './housing_trained'\ndef train_and_evaluate(output_dir, num_train_steps):\n myopt = tf.compat.v1.train.FtrlOptimizer(learning_rate = 0.2) # note the learning rate\n estimator = tf.compat.v1.estimator.LinearRegressor(\n model_dir = output_dir, \n feature_columns = [tf.feature_column.numeric_column('num_rooms')],\n optimizer = myopt)\n \n #Add rmse evaluation metric\n def rmse(labels, predictions):\n pred_values = tf.cast(predictions['predictions'],tf.float64)\n return {'rmse': tf.compat.v1.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}\n estimator = tf.compat.v1.estimator.add_metrics(estimator,rmse)\n \n train_spec=tf.estimator.TrainSpec(\n input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(x = traindf[[\"num_rooms\"]],\n y = traindf[\"median_house_value\"] / SCALE, # note the scaling\n num_epochs = None,\n batch_size = 512, # note the batch size\n shuffle = True),\n max_steps = num_train_steps)\n eval_spec=tf.estimator.EvalSpec(\n input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(x = evaldf[[\"num_rooms\"]],\n y = evaldf[\"median_house_value\"] / SCALE, # note the scaling\n num_epochs = 1,\n shuffle = False),\n steps = None,\n start_delay_secs = 1, # start evaluating after N seconds\n throttle_secs = 10, # evaluate every N seconds\n )\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n# Run training \nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntrain_and_evaluate(OUTDIR, num_train_steps = 100) ",
"Is there a standard method for tuning the model?\nThis is a commonly asked question. The short answer is that the effects of different hyperparameters are data dependent. So there are no hard and fast rules; you'll need to run tests on your data.\nHere are a few rules of thumb that may help guide you:\n\nTraining error should steadily decrease, steeply at first, and should eventually plateau as training converges.\nIf the training has not converged, try running it for longer.\nIf the training error decreases too slowly, increasing the learning rate may help it decrease faster.\nBut sometimes the exact opposite may happen if the learning rate is too high.\nIf the training error varies wildly, try decreasing the learning rate.\nLower learning rate plus larger number of steps or larger batch size is often a good combination.\nVery small batch sizes can also cause instability. First try larger values like 100 or 1000, and decrease until you see degradation.\n\nAgain, never go strictly by these rules of thumb, because the effects are data dependent. Always experiment and verify.\n3. Try adding more features\nSee if you can do any better by adding more features.\nDon't take more than 5 minutes on this portion."
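These rules of thumb are easy to see on a toy problem. The sketch below (plain NumPy, unrelated to the housing model) fits y = 2x with one-parameter gradient descent: with a small learning rate the loss steadily shrinks, while an overly large one makes it blow up:

```python
import numpy as np

def final_loss(lr, steps=50):
    # Minimize mean((w*x - y)^2) for y = 2x, starting from w = 0
    x = np.linspace(-1, 1, 20)
    y = 2.0 * x
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)  # d/dw of the mean squared error
        w -= lr * grad
    return np.mean((w * x - y) ** 2)

print(final_loss(0.5))  # small learning rate: loss near zero
print(final_loss(6.0))  # learning rate too high: loss diverges
```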
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dinrker/PredictiveModeling
|
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
|
mit
|
[
"Goals of this Lesson\n\nGradient Descent for PCA\nNonlinear Dimensionality Reduction\nAutoencoder: Model and Learning\nAutoencoding Images\nDenoising Autoencoder",
"from IPython.display import Image\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport time\n%matplotlib inline",
"Again we need functions for shuffling the data and calculating classification errors.",
"### function for shuffling the data and labels\ndef shuffle_in_unison(features, labels):\n rng_state = np.random.get_state()\n np.random.shuffle(features)\n np.random.set_state(rng_state)\n np.random.shuffle(labels)\n \n### calculate classification errors\n# return a percentage: (number misclassified)/(total number of datapoints)\ndef calc_classification_error(predictions, class_labels):\n n = predictions.size\n num_of_errors = 0.\n for idx in xrange(n):\n if (predictions[idx] >= 0.5 and class_labels[idx]==0) or (predictions[idx] < 0.5 and class_labels[idx]==1):\n num_of_errors += 1\n return num_of_errors/n",
"0.1 Load the dataset of handwritten digits\nWe are going to use the MNIST dataset throughout this session. Let's load the data...",
"# a small local sample is also available, but is not used below:\n# mnist = pd.read_csv('../data/mnist_train_100.csv', header=None)\n\n# load the 70,000 x 784 matrix\n# (note: fetch_mldata is deprecated in newer scikit-learn;\n# fetch_openml('mnist_784') is the modern equivalent)\nfrom sklearn.datasets import fetch_mldata\nmnist = fetch_mldata('MNIST original').data\n\n# optionally reduce to 5k instances\nnp.random.shuffle(mnist)\n#mnist = mnist[:5000,:]/255.\nprint \"Dataset size: %d x %d\"%(mnist.shape)\n\n# subplot containing first image\nax1 = plt.subplot(1,2,1)\ndigit = mnist[1,:]\nax1.imshow(np.reshape(digit, (28, 28)), cmap='Greys_r')\n\n# subplot containing second image\nax2 = plt.subplot(1,2,2)\ndigit = mnist[2,:]\nax2.imshow(np.reshape(digit, (28, 28)), cmap='Greys_r')\nplt.show()",
"1 Gradient Descent for PCA\nRecall the Principal Component Analysis model we covered in the last session. Again, the goal of PCA is: for a given datapoint $\\mathbf{x}_{i}$, find a lower-dimensional representation $\\mathbf{h}_{i}$ such that $\\mathbf{x}_{i}$ can be 'predicted' from $\\mathbf{h}_{i}$ using a linear transformation. Again, the loss function can be written as: $$ \\mathcal{L}_{\\text{PCA}} = \\sum_{i=1}^{N} (\\mathbf{x}_{i} - \\mathbf{x}_{i}\\mathbf{W}\\mathbf{W}^{T})^{2}.$$ \nInstead of using the closed-form solution we discussed in the previous session, here we'll use gradient descent. The reason for doing this will become clear later in the session, as we move on to cover a non-linear version of PCA. To run gradient descent, we of course need the derivative of the loss w.r.t. the parameters, which in this case are the entries of the transformation matrix $\\mathbf{W}$:\n$$ \\nabla_{\\mathbf{W}} \\mathcal{L}_{\\text{PCA}} = -4\\sum_{i=1}^{N} (\\mathbf{x}_{i} - \\mathbf{\\tilde x}_{i})^{T}\\mathbf{h}_{i} $$\nNow let's run our stochastic gradient PCA on the MNIST dataset...\n<span style=\"color:red\">Caution: Running the following PCA code could take several minutes or more, depending on your computer's processing power.</span>",
"# set the random number generator for reproducability\nnp.random.seed(49)\n\n# define the dimensionality of the hidden rep.\nn_components = 200\n\n# Randomly initialize the Weight matrix\nW = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),\\\n high=4 * np.sqrt(6. / (n_components + mnist.shape[1])), size=(mnist.shape[1], n_components))\n# Initialize the step-size\nalpha = 1e-3\n# Initialize the gradient\ngrad = np.infty\n# Set the tolerance \ntol = 1e-8\n# Initialize error\nold_error = 0\nerror = [np.infty]\nbatch_size = 250\n\n### train with stochastic gradients\nstart_time = time.time()\n\niter_idx = 1\n# loop until gradient updates become small\nwhile (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):\n for batch_idx in xrange(mnist.shape[0]/batch_size):\n x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]\n h = np.dot(x, W)\n x_recon = np.dot(h, W.T)\n \n # compute gradient\n diff = x - x_recon\n grad = (-4./batch_size)*np.dot(diff.T, h)\n \n # update parameters\n W = W - alpha*grad\n \n # track the error\n if iter_idx % 25 == 0:\n old_error = error[-1]\n diff = mnist - np.dot(np.dot(mnist, W), W.T)\n recon_error = np.mean( np.sum(diff**2, 1) )\n error.append(recon_error)\n print \"Epoch %d, Reconstruction Error: %.3f\" %(iter_idx, recon_error)\n \n iter_idx += 1\nend_time = time.time()\n\nprint\nprint \"Training ended after %i iterations, taking a total of %.2f seconds.\" %(iter_idx, end_time-start_time)\nprint \"Final Reconstruction Error: %.2f\" %(error[-1])\nreduced_mnist = np.dot(mnist, W)\nprint \"Dataset is now of size: %d x %d\"%(reduced_mnist.shape)",
"Let's visualize a reconstruction...",
"img_idx = 2\nreconstructed_img = np.dot(reduced_mnist[img_idx,:], W.T)\noriginal_img = mnist[img_idx,:]\n\n# subplot for original image\nax1 = plt.subplot(1,2,1)\nax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')\nax1.set_title(\"Original Digit\")\n\n# subplot for reconstruction\nax2 = plt.subplot(1,2,2)\nax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')\nax2.set_title(\"Reconstruction\")\nplt.show()",
"We can again visualize the transformation matrix $\\mathbf{W}^{T}$. Its rows act as 'filters' or 'feature detectors'. However, without the orthogonality constraint, we've lost the identifiability of the components...",
"# two components to show\ncomp1 = 0\ncomp2 = 150\n\n# subplot \nax1 = plt.subplot(1,2,1)\nfilter1 = W[:, comp1]\nax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')\n\n# subplot \nax2 = plt.subplot(1,2,2)\nfilter2 = W[:, comp2]\nax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')\n\nplt.show()",
"2. Nonlinear Dimensionality Reduction with Autoencoders\nIn the last session (and section) we learned about Principal Component Analysis, a technique that finds some linear projection that reduces the dimensionality of the data while preserving its variance. We looked at it as a form of unsupervised linear regression, where we predict the data itself instead of some associated value (i.e. a label). In this section, we will move on to a nonlinear dimensionality reduction technique called an Autoencoder and derive its optimization procedure. \n2.1 Defining the Autoencoder Model\nRecall that PCA is comprised of a linear projection step followed by application of the inverse projection. An Autoencoder is the same model but with a non-linear transformation placed on the hidden representation. To reiterate, our goal is: for a datapoint $\\mathbf{x}_{i}$, find a lower-dimensional representation $\\mathbf{h}_{i}$ such that $\\mathbf{x}_{i}$ can be 'predicted' from $\\mathbf{h}_{i}$---but this time, not necessarily with a linear transformation. In math, this statement can be written as $$\\mathbf{\\tilde x}_{i} = \\mathbf{h}_{i} \\mathbf{W}^{T} \\text{ where } \\mathbf{h}_{i} = f(\\mathbf{x}_{i} \\mathbf{W}). $$ $\\mathbf{W}$ is a $D \\times K$ matrix of parameters that need to be learned--much like the $\\beta$ vector in regression models. $D$ is the dimensionality of the original data, and $K$ is the dimensionality of the compressed representation $\\mathbf{h}_{i}$. Lastly, we have the new component, the transformation function $f$. There are many possible functions to choose for $f$; yet we'll use a familiar one, the logistic function $$f(z) = \\frac{1}{1+\\exp(-z)}.$$ The graphic below depicts the autoencoder's computation path: \n\nOptimization\nHaving defined the Autoencoder model, we look to write learning as an optimization process. Recall that we wish to make a reconstruction of the data, denoted $\\mathbf{\\tilde x}_{i}$, as close as possible to the original input: $$\\mathcal{L}_{\\text{AE}} = \\sum_{i=1}^{N} (\\mathbf{x}_{i} - \\mathbf{\\tilde x}_{i})^{2}.$$ We can make a substitution for $\\mathbf{\\tilde x}_{i}$ from the equation above: $$ = \\sum_{i=1}^{N} (\\mathbf{x}_{i} - \\mathbf{h}_{i}\\mathbf{W}^{T})^{2}.$$ And we can make another substitution for $\\mathbf{h}_{i}$, bringing us to the final form of the loss function: $$ = \\sum_{i=1}^{N} (\\mathbf{x}_{i} - f(\\mathbf{x}_{i}\\mathbf{W})\\mathbf{W}^{T})^{2}.$$ \n<span style=\"color:red\">STUDENT ACTIVITY (15 mins)</span>\nDerive an expression for the gradient: $$ \\nabla_{W}\\mathcal{L}_{\\text{AE}} = ? $$ \nTake $f$ to be the logistic function, which has a derivative of $f'(z) = f(z)(1-f(z))$. Those functions are provided for you below.",
"def logistic(x):\n return 1./(1+np.exp(-x))\n\ndef logistic_derivative(x):\n z = logistic(x)\n return np.multiply(z, 1-z)\n\ndef compute_gradient(x, x_recon, h, a):\n # parameters:\n # x: the original data\n # x_recon: the reconstruction of x\n # h: the hidden units (after application of f)\n # a: the pre-activations (before the application of f)\n\n return #TODO\n \nnp.random.seed(39)\n\n# dummy variables for testing\nx = np.random.normal(size=(5,3))\nx_recon = x + np.random.normal(size=x.shape)\nW = np.random.normal(size=(x.shape[1], 2))\na = np.dot(x, W)\nh = logistic(a)\ncompute_gradient(x, x_recon, h, a)",
"Should print \narray([[ 4.70101821, 2.26494039],\n [ 2.86585042, 0.0731302 ],\n [ 0.79869215, 0.15570277]])\nAutoencoder (AE) Overview\nData\nWe observe $\\mathbf{x}_{i}$ where\n\\begin{eqnarray}\n\\mathbf{x}_{i} = (x_{i,1}, \\dots, x_{i,D}) &:& \\mbox{set of $D$ explanatory variables (aka features). No labels.} \n\\end{eqnarray}\n Parameters\n$\\mathbf{W}$: Matrix with dimensionality $D \\times K$, where $D$ is the dimensionality of the original data and $K$ the dimensionality of the new features. The matrix encodes the transformation between the original and new feature spaces.\nError Function\n\\begin{eqnarray}\n\\mathcal{L} = \\sum_{i=1}^{N} ( \\mathbf{x}_{i} - f(\\mathbf{x}_{i} \\mathbf{W}) \\mathbf{W}^{T})^{2}\n\\end{eqnarray}\n2.2 Autoencoder Implementation\nNow let's train an Autoencoder...",
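If you attempted the student activity above and want to check your derivation, one possible solution is sketched below (written in Python 3 with NumPy; it mirrors the gradient used later in the denoising-autoencoder cell, and like the notebook's dummy test it reads `W` from the enclosing scope, since the `compute_gradient` signature does not pass it in):

```python
import numpy as np

def logistic(x):
    return 1. / (1 + np.exp(-x))

def logistic_derivative(x):
    z = logistic(x)
    return z * (1 - z)

np.random.seed(39)
x = np.random.normal(size=(5, 3))
W = np.random.normal(size=(3, 2))
a = np.dot(x, W)           # pre-activations
h = logistic(a)            # hidden units
x_recon = np.dot(h, W.T)   # reconstruction

def compute_gradient(x, x_recon, h, a):
    # dL/dW = -2 [ diff^T h + x^T ((diff W) * f'(a)) ],  diff = x - x_recon
    diff = x - x_recon
    return -2. * (np.dot(diff.T, h) +
                  np.dot(x.T, np.dot(diff, W) * logistic_derivative(a)))

grad = compute_gradient(x, x_recon, h, a)
print(grad.shape)  # → (3, 2), same shape as W
```

Here the reconstruction is computed consistently as f(xW)W^T, so the gradient can be checked against a finite-difference approximation of the loss; the notebook's dummy test instead feeds a noisy x_recon, so its printed array will differ from this sketch's values.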
"# set the random number generator for reproducability\nnp.random.seed(39)\n\n# define the dimensionality of the hidden rep.\nn_components = 200\n\n# Randomly initialize the transformation matrix\nW = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),\\\n high=4 * np.sqrt(6. / (n_components + mnist.shape[1])), size=(mnist.shape[1], n_components))\n\n# Initialize the step-size\nalpha = .01\n# Initialize the gradient\ngrad = np.infty\n# Initialize error\nold_error = 0\nerror = [np.infty]\nbatch_size = 250\n\n### train with stochastic gradients\nstart_time = time.time()\n\niter_idx = 1\n# loop until gradient updates become small\nwhile (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):\n for batch_idx in xrange(mnist.shape[0]/batch_size):\n x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]\n pre_act = np.dot(x, W) \n h = logistic(pre_act)\n x_recon = np.dot(h, W.T)\n \n # compute gradient\n grad = compute_gradient(x, x_recon, h, pre_act)\n \n # update parameters\n W = W - alpha/batch_size * grad\n \n # track the error\n if iter_idx % 25 == 0:\n old_error = error[-1]\n \n diff = mnist - np.dot(logistic(np.dot(mnist, W)), W.T)\n recon_error = np.mean( np.sum(diff**2, 1) )\n error.append(recon_error)\n print \"Epoch %d, Reconstruction Error: %.3f\" %(iter_idx, recon_error)\n \n iter_idx += 1\nend_time = time.time()\n\nprint\nprint \"Training ended after %i iterations, taking a total of %.2f seconds.\" %(iter_idx, end_time-start_time)\nprint \"Final Reconstruction Error: %.2f\" %(error[-1])\nreduced_mnist = np.dot(mnist, W)\nprint \"Dataset is now of size: %d x %d\"%(reduced_mnist.shape)\n\nimg_idx = 2\nreconstructed_img = np.dot(logistic(reduced_mnist[img_idx,:]), W.T)\noriginal_img = mnist[img_idx,:]\n\n# subplot for original image\nax1 = plt.subplot(1,2,1)\nax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')\nax1.set_title(\"Original Digit\")\n\n# subplot for reconstruction\nax2 = 
plt.subplot(1,2,2)\nax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')\nax2.set_title(\"Reconstruction\")\nplt.show()\n\n# two components to show\ncomp1 = 0\ncomp2 = 150\n\n# subplot \nax1 = plt.subplot(1,2,1)\nfilter1 = W[:, comp1]\nax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')\n\n# subplot \nax2 = plt.subplot(1,2,2)\nfilter2 = W[:, comp2]\nax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')\nplt.show()",
"2.3 SciKit Learn Version\nWe can hack the Scikit-Learn Regression neural network into an Autoencoder by feeding it the data back as the labels...",
"from sklearn.neural_network import MLPRegressor\n\n# set the random number generator for reproducability\nnp.random.seed(39)\n\n# define the dimensionality of the hidden rep.\nn_components = 200\n\n# define model\nae = MLPRegressor(hidden_layer_sizes=(n_components,), activation='logistic')\n\n### train Autoencoder\nstart_time = time.time()\nae.fit(mnist, mnist)\nend_time = time.time()\n\nrecon_error = np.mean(np.sum((mnist - ae.predict(mnist))**2, 1))\nW = ae.coefs_[0]\nb = ae.intercepts_[0]\nreduced_mnist = logistic(np.dot(mnist, W) + b)\n\nprint\nprint \"Training ended after a total of %.2f seconds.\" %(end_time-start_time)\nprint \"Final Reconstruction Error: %.2f\" %(recon_error)\nprint \"Dataset is now of size: %d x %d\"%(reduced_mnist.shape)\n\nimg_idx = 5\nreconstructed_img = np.dot(reduced_mnist[img_idx,:], ae.coefs_[1]) + ae.intercepts_[1]\noriginal_img = mnist[img_idx,:]\n\n# subplot for original image\nax1 = plt.subplot(1,2,1)\nax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')\nax1.set_title(\"Original Digit\")\n\n# subplot for reconstruction\nax2 = plt.subplot(1,2,2)\nax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')\nax2.set_title(\"Reconstruction\")\nplt.show()\n\n# two components to show\ncomp1 = 0\ncomp2 = 150\n\n# subplot \nax1 = plt.subplot(1,2,1)\nfilter1 = W[:, comp1]\nax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')\n\n# subplot \nax2 = plt.subplot(1,2,2)\nfilter2 = W[:, comp2]\nax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')\nplt.show()",
"2.4 Denoising Autoencoder (DAE)\nLastly, we are going to examine an extension to the Autoencoder called a Denoising Autoencoder (DAE). It has the following loss function: $$\\mathcal{L}_{\\text{DAE}} = \\sum_{i=1}^{N} (\\mathbf{x}_{i} - f((\\hat{\\boldsymbol{\\zeta}} \\odot \\mathbf{x}_{i})\\mathbf{W})\\mathbf{W}^{T})^{2} \\ \\text{ where } \\hat{\\boldsymbol{\\zeta}} \\sim \\text{Bernoulli}(p).$$ In words, what we're doing is drawing a Bernoulli (i.e. binary) matrix the same size as the input features, and feeding the model a corrupted version of $\\mathbf{x}_{i}$. The Autoencoder, then, must try to recreate the original data from a lossy representation. This has the effect of forcing the Autoencoder to learn features that generalize better. \nLet's make the simple change that implements a DAE below...",
"# set the random number generator for reproducability\nnp.random.seed(39)\n\n# define the dimensionality of the hidden rep.\nn_components = 200\n\n# Randomly initialize the Beta vector\nW = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),\\\n high=4 * np.sqrt(6. / (n_components + mnist.shape[1])), size=(mnist.shape[1], n_components))\n\n# Initialize the step-size\nalpha = .01\n# Initialize the gradient\ngrad = np.infty\n# Set the tolerance \ntol = 1e-8\n# Initialize error\nold_error = 0\nerror = [np.infty]\nbatch_size = 250\n\n### train with stochastic gradients\nstart_time = time.time()\n\niter_idx = 1\n# loop until gradient updates become small\nwhile (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):\n for batch_idx in xrange(mnist.shape[0]/batch_size):\n x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]\n \n # add noise to features\n x_corrupt = np.multiply(x, np.random.binomial(n=1, p=.8, size=x.shape))\n \n pre_act = np.dot(x_corrupt, W) \n h = logistic(pre_act)\n x_recon = np.dot(h, W.T)\n \n # compute gradient\n diff = x - x_recon\n grad = -2.*(np.dot(diff.T, h) + np.dot(np.multiply(np.dot(diff, W), logistic_derivative(pre_act)).T, x_corrupt).T)\n # NOTICE: during the 'backward pass', use the uncorrupted features\n \n # update parameters\n W = W - alpha/batch_size * grad\n \n # track the error\n if iter_idx % 25 == 0:\n old_error = error[-1]\n \n diff = mnist - np.dot(logistic(np.dot(mnist, W)), W.T)\n recon_error = np.mean( np.sum(diff**2, 1) )\n error.append(recon_error)\n print \"Epoch %d, Reconstruction Error: %.3f\" %(iter_idx, recon_error)\n \n iter_idx += 1\nend_time = time.time()\n\nprint\nprint \"Training ended after %i iterations, taking a total of %.2f seconds.\" %(iter_idx, end_time-start_time)\nprint \"Final Reconstruction Error: %.2f\" %(error[-1])\nreduced_mnist = np.dot(mnist, W)\nprint \"Dataset is now of size: %d x %d\"%(reduced_mnist.shape)\n\nimg_idx = 5\nreconstructed_img = 
np.dot(logistic(reduced_mnist[img_idx,:]), W.T)\noriginal_img = mnist[img_idx,:]\n\n# subplot for original image\nax1 = plt.subplot(1,2,1)\nax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')\nax1.set_title(\"Original Image\")\n\n# subplot for reconstruction\nax2 = plt.subplot(1,2,2)\nax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')\nax2.set_title(\"Reconstruction\")\nplt.show()\n\n# two components to show\ncomp1 = 0\ncomp2 = 150\n\n# subplot \nax1 = plt.subplot(1,2,1)\nfilter1 = W[:, comp1]\nax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')\n\n# subplot \nax2 = plt.subplot(1,2,2)\nfilter2 = W[:, comp2]\nax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')\nplt.show()",
"When training larger autoencoders, you'll see filters that look like these...\nRegular Autoencoder:\n\nDenoising Autoencoder:\n\n<span style=\"color:red\">STUDENT ACTIVITY (until end of session)</span>\nYour task is to reproduce the faces experiment from the previous session but using an Autoencoder instead of PCA",
"from sklearn.datasets import fetch_olivetti_faces\n\nfaces_dataset = fetch_olivetti_faces(shuffle=True)\nfaces = faces_dataset.data # 400 flattened 64x64 images\nperson_ids = faces_dataset.target # denotes the identity of person (40 total)\n\nprint \"Dataset size: %d x %d\" %(faces.shape)\nprint \"And the images look like this...\"\nplt.imshow(np.reshape(faces[200,:], (64, 64)), cmap='Greys_r')\nplt.show()",
"This dataset contains 400 64x64 pixel images of 40 people each exhibiting 10 facial expressions. The images are in gray-scale, not color, and therefore flattened vectors contain 4096 dimensions.\n<span style=\"color:red\">Subtask 1: Run (Regular) Autoencoder</span>",
"### Your code goes here ###\n\n# train Autoencoder model on 'faces'\n\n###########################\n\nprint \"Training took a total of %.2f seconds.\" %(end_time-start_time)\nprint \"Final reconstruction error: %.2f\" %(recon_error) \nprint \"Dataset is now of size: %d x %d\"%(faces_reduced.shape)",
"<span style=\"color:red\">Subtask 2: Reconstruct an image</span>",
"### Your code goes here ###\n\n# Use learned transformation matrix to project back to the original 4096-dimensional space\n# Remember you need to use np.reshape() \n\n###########################",
"<span style=\"color:red\">Subtask 3: Train a Denoising Autoencoder</span>",
"### Your code goes here ###\n\n\n###########################",
"<span style=\"color:red\">Subtask 4: Generate a 2D scatter plot from both models</span>",
"### Your code goes here ###\n\n# Run AE for 2 components\n\n# Generate plot\n\n# Bonus: color the scatter plot according to the person_ids to see if any structure can be seen\n\n###########################",
"<span style=\"color:red\">Subtask 5: Train a denoising version of PCA and test its performance</span>",
"### Your code goes here ###\n\n# Run PCA but add noise to the input first\n\n###########################"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
srnas/barnaba
|
examples/example_06_single_strand_motif.ipynb
|
gpl-3.0
|
[
"%autosave 0\nfrom __future__ import print_function",
"Search for single-stranded RNA motifs\nWe will now search for single-stranded motifs within a structure/trajectory.\nThis is performed by using the ss_motif function.\npython\nresults = bb.ss_motif(query,target,threshold=0.6,out=None,bulges=0)\n\nquery is a PDB file with the structure you want to search for within the file target. If the keyword topology is specified, the query structure is searched in the target trajectory file.\nthreshold is the eRMSD threshold for considering a substructure in target significantly similar to query. \n Typical relevant hits have eRMSD in the 0.6-0.9 range.\nIf you specify the optional string keyword out, PDB structures below the threshold are written with the specified prefix. \nIt is possible to specify the maximum number of allowed inserted or bulged bases with the option bulges.\nThe search is performed without considering the sequence. It is possible to specify a sequence with the sequence option. Abbreviations (i.e. N/R/Y) are accepted.\n\nThe function returns a list of hits. Each element in this list is in turn a list containing the following information:\n- element 0 is the frame index. This is relevant if the search is performed over a trajectory/multi-model PDB.\n- element 1 is the eRMSD distance from the query\n- element 2 is the list of residues.\nIn the following example we search for structures similar to GNRA.pdb in a crystal structure of the H. marismortui large ribosomal subunit (PDB 1S72).",
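A hit can be unpacked exactly as described above. The sketch below uses a hand-made hits list shaped like the ss_motif return value; the frame indices, eRMSD values, and residue names are invented for illustration, not real output:

```python
# a fake list of hits, each shaped like a bb.ss_motif result:
# [frame index, eRMSD distance, list of residues]
results = [
    [0, 0.32, ["G10_A", "A11_A", "G12_A", "A13_A"]],
    [0, 0.55, ["G57_B", "C58_B", "A59_B", "A60_B"]],
]

# element 0: frame, element 1: eRMSD, element 2: residues
for frame, ermsd, residues in results:
    print("frame %d  eRMSD %.2f  residues %s" % (frame, ermsd, ",".join(residues)))
```

For a single-PDB target, as in the example below, the frame index is always 0.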
"import barnaba as bb\n\n# find all GNRA tetraloops in H.Marismortui large ribosomal subunit (PDB 1S72)\nquery = \"../test/data/GNRA.pdb\" \ntarget = \"../test/data/1S72.pdb\" \n\n# call function. \nresults = bb.ss_motif(query,target,threshold=0.6,out='gnra_loops',bulges=1)\n",
"Now we print the fragment residues and their eRMSD \ndistance from the query structure.",
"for j in range(len(results)):\n #seq = \"\".join([r.split(\"_\")[0] for r in results[j][2]])\n print(\"%2d eRMSD:%5.3f \" % (j,results[j][1]))\n print(\" Sequence: %s\" % \",\".join(results[j][2]))\n print()",
"We can also calculate RMSD distances for the different hits",
"import glob\n\npdbs = glob.glob(\"gnra_loops*.pdb\")\ndists = [bb.rmsd(query,f)[0] for f in pdbs]\n\nfor j in range(len(results)):\n seq = \"\".join([r.split(\"_\")[0] for r in results[j][2]])\n print(\"%2d eRMSD:%5.3f RMSD: %6.4f\" % (j,results[j][1],10.*dists[j]), end=\"\")\n print(\" Sequence: %s\" % seq)\n\n #print \"%50s %6.4f AA\" % (f,10.*dist[0])",
"Note that the first hit has a low eRMSD, but no GNRA sequence. Let's have a look at this structure:",
"import py3Dmol\n\nquery_s = open(query,'r').read()\nhit_0 = open(pdbs[0],'r').read()\n\np = py3Dmol.view(width=900,height=600,viewergrid=(1,2))\np.addModel(query_s,'pdb',viewer=(0,0))\np.addModel(hit_0,'pdb',viewer=(0,1))\np.setStyle({'stick':{}})\np.setBackgroundColor('0xeeeeee')\np.zoomTo()\np.show()",
"We can also check hit 14, which has a low eRMSD but a (relatively) high RMSD",
"hit_14 = open(pdbs[14],'r').read()\n\np = py3Dmol.view(width=900,height=600,viewergrid=(1,2))\np.addModel(query_s,'pdb',viewer=(0,0))\np.addModel(hit_14,'pdb',viewer=(0,1))\np.setStyle({'stick':{}})\np.setBackgroundColor('0xeeeeee')\np.zoomTo()\np.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
enury/collation-viz
|
pycoviz.ipynb
|
mit
|
[
"Exploring the collation of Calpurnius Flaccus\nIn this notebook, I present an interactive method to manipulate the collation of the Declamations of Calpurnius Flaccus obtained with CollateX.\nUnderstanding the data\nFirst it is necessary to understand the structure of the collation data. Here you can find the generic description of json structures. An object (dictionary) is made of one or more pairs of name/value. The relation name/value is similar to the relation between a word and its definition in a dictionary. An array (list) is an ordered list of values. A json object corresponds to a python dictionary, and a json array to a python list. Lists can be combined: a table is a list of lists.\nThere are two objects in the json collation, which are converted into their equivalent in python: \n* a list of witnesses (\"witnesses\")\n* a table with the aligned text versions (\"table\")\nThe list of witnesses works as a header for the table, which can be itself described as:\n* a table is a list of rows\n* a row is a list of cells\n* a cell is a list of tokens\n* a token is a dictionary with the following entries:\n * t : exact word as it appears in the witness\n * n : optional normalized version of the word\n * locus : optional exact location of the word in the manuscript/edition. Folio/page number, followed by line number\n * note : optional comment \n * decl: optional declamation number\n * link : optional link to the page of a digital facsimile, where the token appears\nThe order of the rows is important, because it follows the order of the text. Row 0 has the first word(s) of the text, whereas the last row has also the last word(s) of the text. The row's number is called ID number, and will be used later for variant location identification.\nContents <a name=\"ToC\"></a>\n\n\nImport Data.\nThe first step is to import the collation results into python. \n\n\nFunctions.\nThen we will create functions to filter the data according to our need. 
For instance, we may want to find all the unique variants of one witness. Or we would like to see the places where one group of witnesses agrees with each other against another group of witnesses. Some functions will allow us to display only a section of the collation, or information specific to one variant location. Including:\n\nfind agreements\ntable to html\nmove tokens\nadd/delete note\nsearch\n\nsave\n\n\nExploration of collation.\nFinally, we will use our functions to explore the collation of Calpurnius Flaccus, with interactive widgets:\n\nModify collation (move tokens, add/del rows, add/del notes, save the new json)\nFind agreements between witnesses, and save the result into a more complex html file\nSearch the collation\n\nClarify a reading\n\n\nSummary.\nSummary of all interactions gathered in one widget for ease of use.\n\n\nUsers may go straight to the summary to start exploring the collation of Calpurnius Flaccus. Those who would prefer to see the code behind the widgets and the various interactions can read through the functions or the interactive widgets.\nNotebook re-use\nThe notebook works with any JSON output from CollateX that contains tokens \"t\"; the other token entries are optional.\nSome transformations may be needed. For instance:\n * In collation import data : change the base text. If you choose to have no base text, change also the description in the \"save_table\" function.\n * In the HTML template, title and credits should be updated. The transformation into complex HTML could also be adapted, according to the json model of the new user.\nPossible Improvements\n\nlarge amounts of modifications (moving many tokens at the same time)\nfind and delete empty rows automatically\nprevent deletion of non empty rows\navoid moving first/last tokens\nedit notes directly\nsave file with popup window\netc.\n\nIMPORT DATA <a name=\"part1\"></a>\nPython modules import",
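The cell/token structure described above is easy to experiment with directly. Below is a minimal hand-made example; the Latin words and the locus value are invented for illustration, not taken from the Calpurnius data:

```python
# one cell = a list of tokens; each token is a dict with at least a "t" entry,
# plus optional entries such as "n" (normalized form) and "locus"
cell = [
    {"t": "quod", "n": "quod", "locus": "12r, 3"},
    {"t": "sit",  "n": "sit"},
]

# joining the exact readings "t" reconstructs the text of the cell
text = " ".join(token["t"] for token in cell)
print(text)  # quod sit
```

A row is then a list of such cells (one per witness), and the table is a list of rows in text order.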
"#python modules\n#the __future__ import must come before any other statement in the cell\nfrom __future__ import print_function\n\nimport json\nimport sys\nimport re\nfrom IPython.display import display,HTML\nfrom datetime import datetime\n\n#ipywidgets modules\nfrom ipywidgets import *\nimport ipywidgets as widgets",
"Collation import <a name='import1-2'></a>",
"#path to the file with json results of the collation\npath = 'json-collations/calpurnius-collation-joint-BCMNPH.json'\n#path = 'json-collations/calpurnius-collation-joint-BCMNPH-corr.json'\n\n#open the file\nwith open (path, encoding='utf-8') as jsonfile:\n #transform the json structure (arrays, objects) into python structure (lists, dictionaries)\n data = json.load(jsonfile)\n\n#list of witnesses\nwitnesses = data[\"witnesses\"]\nprint(witnesses)\n\n#table of the aligned text versions\ncollation = data[\"table\"]\n\n#base text: choose a witness which variants are considered true readings (in green)\n#for Calpurnius, the most recent edition of Hakanson is used as the base text\n#if you do not want a base text, set it as an empty string ''\nbase_text = 'LH'\n\n#the index of a witness is its position in the witness list:\n#for instance B1 has position 0, and P1594 has position 9.",
"FUNCTIONS <a name=\"part2\"></a> ↑\nTransform a cell c (list of tokens) into a string of text",
"#original text\ndef cell_to_string(c): \n #tokens t are joined together, separated by a space\n string = ' '.join(token['t'] for token in c)\n return string\n\n#text with normalized tokens\ndef cell_to_string_norm(c): \n string = ''\n #word division is not taken into account when comparing the normalized text\n #for this reason we do not add a space in between tokens\n for token in c:\n if 'n' in token:\n string += token['n']\n elif 't' in token:\n string += token['t']\n return string\n",
"Compare cells",
"#compare two cells, original text\ndef compare_cell(c1,c2):\n return cell_to_string(c1) == cell_to_string(c2)\n\n#compare two cells, normalized text\ndef compare_cell_norm(c1,c2):\n return cell_to_string_norm(c1) == cell_to_string_norm(c2) \n\n#compare a list of cells, original text\n#return true if all the cells are equivalent (they contain the same string of tokens) \ndef compare_multiple_cell(cell_list):\n #compare each cell to the next\n for c1,c2 in zip(cell_list, cell_list[1:]):\n if compare_cell(c1,c2) is False:\n comparison = False\n break\n else:\n comparison = True\n return comparison\n\n#compare a list of cells, normalized text\n#return true if all the cells are equivalent (they contain the same string of tokens)\ndef compare_multiple_cell_norm(cell_list):\n #compare each cell to the next\n for c1,c2 in zip(cell_list, cell_list[1:]):\n if compare_cell_norm(c1,c2) is False:\n comparison = False\n break\n else:\n comparison = True\n return comparison",
"Find agreements 1",
"#this function returns rows of the collation table (table) where a list of x witnesses (witlist) agree together.\n#we display only variant locations, and not places where all witnesses agree.\n\ndef find_agreements(table, witlist):\n result_table = []\n \n #transform widget tuple into actual list\n witlist = list(witlist)\n \n #transform the witnesses names (sigla) into indexes\n witindex = [witnesses.index(wit) for wit in witlist]\n nonwitindex = [witnesses.index(wit) for wit in witnesses if wit not in witlist]\n\n for row in table:\n #get list of cell for the x witnesses\n cell_list = [row[i] for i in witindex]\n #there must be agreement of the x witnesses (normalized tokens)\n if compare_multiple_cell_norm(cell_list) is True:\n for i in nonwitindex:\n #if they disagree with at least one of the others\n if compare_cell_norm(row[witindex[0]],row[i]) is False:\n #add row to the result\n result_table.append(row)\n #and go to next row\n break\n\n return result_table",
"Find agreements 2 <a name='func2-4'></a> ↑",
"#This function is similar to the previous one:\n#it returns rows of the collation table (table) where a list of x witnesses (witlist) agree together, but\n#do not agree with the witnesses in a second list (nonwitlist).\n\n#By default, the function will return the agreement of the x witnesses, against all the other witnesses.\n\ndef compare_witnesses(table, witlist, nonwitlist=[]):\n result_table = []\n \n #first list of x witnesses which agree together\n witindex = [witnesses.index(wit) for wit in witlist]\n #against all the other witnesses\n if not nonwitlist:\n nonwitindex = [witnesses.index(wit) for wit in witnesses if wit not in witlist]\n #except if a second list of y witnesses is specified\n else:\n nonwitindex = [witnesses.index(wit) for wit in nonwitlist]\n \n #go through the collation table, row by row\n #to find places where the x witnesses agree together against others\n for row in table:\n #get list of cell for the x witnesses\n cell_list = [row[i] for i in witindex]\n #there must be agreement of the x witnesses (normalised tokens)\n if compare_multiple_cell_norm(cell_list) is True:\n for i in nonwitindex:\n #if they agree with one of the other y witnesses\n if compare_cell_norm(row[witindex[0]],row[i]) is True:\n #go to next row\n break\n #but if they do not agree with any of the y witnesses \n else:\n #add row to the result\n result_table.append(row)\n return result_table",
"Find all variants in the collation table",
"def view_variants(table):\n result_table = []\n #go through the collation table, row by row\n for row in table:\n #if there is a variant in the row (i.e. at least one cell is different from another cell, normalized form)\n if compare_multiple_cell_norm(row) is False:\n #add row to the result\n result_table.append(row) \n return result_table",
"Transform the result table into an html table <a name='func2-6'></a> ↑",
"#this function returns a minimal HTML table, to display in the notebook.\n\ndef table_to_html(collation,table):\n \n #table in an HTML format\n html_table = ''\n #div is for a better slides view. For notebook use, comment it out\n #html_table += '<div style=\"overflow: scroll; width:960; height:417px; word-break: break-all;\">'\n html_table += '<table border=\"1\" style=\"width: 100%; border: 1px solid #000000; border-collapse: collapse;\" cellpadding=\"4\">'\n \n #add a header to the table with columns, one for each witness and one for the row ID\n html_table += '<tr>'\n #a column for each witness\n for wit in witnesses:\n html_table += '<th>'+wit+'</th>'\n #optional: column for the declamation number\n #html_table += '<th>Decl</th>'\n #column for the row id\n html_table += '<th>ID</th>'\n html_table += '</tr>'\n \n for row in table:\n #add a row to the html table\n html_table += '<tr>'\n #optional : a variable to store the declamation number (will not be defined in empty rows)\n #declamation = 0\n #fill row with cell for each witness\n for cell in row:\n #transform the tokens t into a string.\n #we display the original tokens, not the normalized form\n token = cell_to_string(cell)\n \n #some cells are empty. 
Thus the declamation number is only available in cell with at least 1 token\n #if len(cell)>0:\n # declamation = str(cell[0]['decl'])\n \n #if no base text is selected, background colour will be white\n if not base_text:\n bg = \"white\"\n #if the tokens are the same as the base text tokens (normalized form)\n #it is displayed as a \"true reading\" in a green cell\n elif compare_cell_norm(cell,row[witnesses.index(base_text)]):\n bg = \"d9ead3\"\n #otherwise it is displayed as an \"error\" in a red cell\n else:\n bg = \"ffb1b1\"\n html_table += '<td bgcolor=\"'+bg+'\">'+token+'</td>'\n \n #optional: add declamation number \n #html_table += '<td>'+str(declamation)+'</td>'\n \n #add row ID\n html_table += '<td>'+str(collation.index(row))+'</td>' \n \n #close the row\n html_table += '</tr>'\n \n #close the table\n html_table += '</table>'\n #html_table += '</div>'\n\n return html_table \n \n\n#this function returns a fancier HTML, but can't be displayed in the notebook (yet)\n\ndef table_to_html_fancy(collation,table):\n #table in an HTML format\n html_table = '<table>'\n \n #add a header to the table with columns\n html_table += '<thead><tr>'\n #a column for each witness\n for wit in witnesses:\n html_table += '<th>'+'<p>'+wit+'</p>'+'</th>'\n #a column for the row id\n html_table += '<th><p>ID</p></th>'\n #close header\n html_table += '</tr></thead><tbody>'\n \n for row in table:\n #add a row to the html table\n html_table += '<tr>'\n for cell in row:\n #transform the tokens t into a string (original token) \n token = cell_to_string(cell)\n \n #if there is no base text\n if not base_text:\n #arbitrary class for the HTML cells. 
It will have no effect on the result.\n cl = \"foo\"\n #if the normalized token is the same as the base text\n #it is displayed as a \"true reading\" in a cell with green left border\n elif compare_cell_norm(cell,row[witnesses.index(base_text)]):\n cl = \"green\"\n #otherwise as an \"error\" in a cell with an orange left border\n else:\n cl = \"orange\"\n #add token to the table, in a text paragraph \n html_table += '<td class=\"'+cl+'\">'+'<p>'+token\n \n #if there is a note to display, add a little 'i' to indicate there is more hidden information\n for t in cell:\n #in the cell, if we find a token with a note\n if 'note' in t:\n #add info indicator\n html_table += ' <a href=\"#\" class=\"expander right\"><i class=\"fa fa-info-circle\"></i></a>'\n #then stop (even if there are several notes, we display only one indicator)\n break\n\n #close the text paragraph in the cell\n html_table += '</p>'\n \n #add paragraphs for hidden content (notes. Not limited to notes only: normalized form could be added, etc.)\n for t in cell:\n if 'note' in t:\n html_table += '<p class=\"expandable hidden more-info\">Note: '+t['note']+'</p>'\n \n #when the cell is not empty, add hidden info of page/line numbers. Adapted to make 'locus' optional\n if len(cell)>0 and 'locus' in cell[0]:\n #if len(cell)>0 :\n #add link to images when possible\n if 'link' in cell[0]:\n url = cell[0]['link']\n html_table += '<p class=\"expandable-row hidden more-info\"><a target=\"blank\" href='+url+'>'+cell[0]['locus']+'</a></p>'\n else:\n html_table += '<p class=\"expandable-row hidden more-info\">'+cell[0]['locus']+'</p>'\n \n #close cell\n html_table += '</td>'\n \n #add row ID with indicator of hidden content\n html_table += '<td>'+'<p>'+str(collation.index(row))+' <a href=\"#\" class=\"expander-row right\"><i class=\"fa fa-ellipsis-v\"></i></a></p>'+'</td>' \n #close the row\n html_table += '</tr>'\n #close the table\n html_table += '</tbody></table>'\n \n return html_table",
"Print collation in a text orientation\nTo read a short passage quickly. The collation table is reversed so that each witness is displayed on a row instead of a column.",
"def print_witnesses_text(table):\n reverse_table = [[row[i] for row in table] for i in range(len(witnesses))]\n for index,row in enumerate(reverse_table):\n text = ''\n for cell in row:\n #the row starts and ends with a token, not a space\n if row.index(cell) == 0 or text == '' or not cell:\n text += cell_to_string(cell)\n #if it is not the start of the string or an empty cell, add a space to separate tokens\n else:\n text += ' '+cell_to_string(cell)\n text += ', '+str(witnesses[index])\n print(text)\n #return reverse_table",
"Print information about a reading",
"def print_info(rowID, wit):\n #select cell\n cell = collation[rowID][witnesses.index(wit)]\n #if cell is empty, there is no token\n if len(cell) == 0:\n print('-')\n else:\n for token in cell:\n #position of token in cell + content\n print(cell.index(token), ':', ', '.join(token[feature] for feature in token))",
"Get the ID of a row",
"def get_pos(row):\n return collation.index(row)",
"Move a token to the previous (up) or next (down) row <a name='func2-10'></a> ↑",
"def move_token_up(rowID, wit):\n #the token cannot be in the first row\n if rowID <= 0:\n print(\"There is no token to move.\")\n return\n try:\n #select the first token\n token = collation[rowID][witnesses.index(wit)].pop(0)\n #append it at the end of the cell in the previous row\n collation[rowID-1][witnesses.index(wit)].append(token)\n print(\"Token '\"+token['t']+\"' moved up!\")\n except IndexError:\n print(\"There is no token to move.\")\n\ndef move_token_down(rowID, wit):\n #the token cannot be in the last row\n if rowID >= len(collation)-1:\n print(\"There is no token to move.\")\n return\n try:\n #select the last token\n token = collation[rowID][witnesses.index(wit)].pop()\n #add it at the beginning of the cell in the next row\n collation[rowID+1][witnesses.index(wit)].insert(0, token)\n print(\"Token '\"+token['t']+\"' moved down!\")\n except IndexError:\n print(\"There is no token to move.\")",
"Add / delete a row",
"def add_row_after(rowID):\n #rowID must be within collation table\n if rowID < 0 or rowID > len(collation)-1:\n print('Row '+str(rowID)+' does not exist.')\n else:\n #create an empty row\n new_row = []\n #for each witness in the collation\n for wit in witnesses:\n #add an empty list of tokens to the row\n new_row.append([])\n #insert new row in the collation, after the row passed in argument (+1)\n collation.insert(rowID+1, new_row)\n print('Row added!')\n \ndef delete_row(rowID):\n #rowID must be within collation table\n if rowID < 0 or rowID > len(collation)-1:\n print('Row '+str(rowID)+' does not exist.')\n else:\n collation.pop(rowID)\n print('Row deleted!')\n",
"Add / Delete a note <a name='func2-11'></a> ↑",
"#add or modify a note\ndef add_note(wit, rowID, token, note):\n try:\n #select token (the token position may arrive as a string from a widget)\n t = collation[rowID][witnesses.index(wit)][int(token)] \n if note == '':\n print('Your note is empty.')\n elif 'note' in t:\n #add comment to an already existing note\n t['note'] += ' '+note\n else:\n #or create a new note\n t['note'] = note\n except:\n print('This token is not valid.')\n\n#completely delete a token's note \ndef del_note(wit, rowID, token):\n try:\n #select token\n t = collation[rowID][witnesses.index(wit)][int(token)]\n if 'note' in t:\n #delete note \n t.pop('note')\n else:\n #or print error message\n print('There is no note to delete')\n except:\n print('This token is not valid.')",
"Search <a name='func2-12'></a> ↑",
"def search(table,text):\n #result table to build\n result_table = []\n \n #go through each row of the collation table\n for row in table:\n #go through each cell\n for cell in row:\n #if the search text matches the cell text (original or normalized form)\n if text in cell_to_string_norm(cell) or text in cell_to_string(cell):\n #add row to the result table\n result_table.append(row)\n #go to next row\n break\n \n #if the result table is empty, the text was not found in the collation\n if result_table == []:\n print(text+\" was not found!\")\n \n return result_table",
"Save results <a name='func2-13'></a> ↑",
"#save the json file with update in the collation table\ndef save_json(path, table):\n #combine new collation table with witnesses, so as to have one data variable\n data = {'witnesses':witnesses, 'table':table}\n #open a file according to path\n with open(path, 'w') as outfile:\n #write the data in json format\n json.dump(data, outfile)\n\n#save a subset of the collation table into fancy HTML version, with a small text description\ndef save_table(descr, table, path):\n #path to template\n template_path = 'alignment-tables/template.html'\n \n #load the text of the template into a variable html\n with open(template_path, 'r', encoding='utf-8') as infile:\n html = infile.read()\n \n #add base text to description\n if base_text:\n descr += '<br>Agreement with the base text '+base_text+' is marked in green.'\n descr += ' Variation from '+base_text+' is marked in red.' \n \n #modify template: replace the comment with descr paragraph\n html = re.sub(r'<!--descr-->',descr,html)\n \n #modify template: replace the comment with table\n html = re.sub(r'<!--table-->',table,html)\n \n #save\n with open(path, 'w', encoding='utf-8') as outfile:\n outfile.write(html)\n",
"INTERACTIVE EXPLORATION OF THE COLLATION TABLE <a name=\"part3\"></a> ↑\nUpdate the collation results <a name='part3-1'></a> ↑\nIn this section, we will see how to view a selection of the collation table and update it.\nPossible updates:\n 1. Add/delete a row\n 2. Move one token up/down\n 3. Add notes\n 4. Save\nNote: it is only possible to move the last token (or word) in a cell down to the first place in the next cell, or vice versa to move the first token of a cell up to the last place in the previous cell. This is to prevent any change in the word order of a witness, and to keep the correct text sequence.",
"#select an extract of the collation table with interactive widgets\n\n#widget for HTML display\nw1_html = widgets.HTML(value=\"\")\n\n#define the beginning of extract\nw_from = widgets.BoundedIntText(\n value=6,\n min=0,\n max=len(collation)-1,\n description='From:',\n continuous_update=True,\n)\n\n#define the end of extract\n#because of python list slicing, the last number is not included in the result.\n#to make it more intuitive, the \"to\" number is added +1 in collation_extract function\nw_to = widgets.BoundedIntText(\n value=11,\n min=0,\n max=len(collation)-1,\n description='To:',\n continuous_update=True,\n)\n\n#binding widgets with table_to_html function\ndef collation_extract(a, b):\n x = a\n y = b+1\n if y <= x:\n print(\"The table you have requested does not exist.\")\n return\n w1_html.value = table_to_html(collation,collation[x:y])\n\n#uncomment the next lines to see the widgets\n##interactive selection of a collation table extract\n#interact(collation_extract, a=w_from, b=w_to)\n##display HTML widget (rows 6-11)\n#display(w1_html)\n\n#Widgets for:\n#move tokens up/down\n#add/delete rows\n#add/delete notes to a specific token\n\n#widget to select a witness \nw_wit = widgets.Dropdown(\n options = witnesses,\n description = 'Witness:',\n)\n#widget to select a row\nw_rowID = widgets.BoundedIntText(\n min=0,\n max=len(collation)-1,\n description='ID:',\n)\n#widget to select a specific token (its position within the cell)\nw_token = widgets.BoundedIntText(\n min=0,\n description = 'Token position:',\n)\n#widget to enter text note\nw_note = widgets.Text(\n description = 'Note:',\n)\n\nout = widgets.Output()\n\n#link buttons and functions\n@out.capture(clear_output=True)\ndef modif_on_click(b):\n if b.description == 'add row after':\n #add row\n add_row_after(rowID=w_rowID.value)\n if b.description == 'delete row':\n #delete\n delete_row(rowID=w_rowID.value)\n if b.description == 'move token down':\n move_token_down(rowID=w_rowID.value, wit=w_wit.value)\n if b.description == 
'move token up':\n move_token_up(rowID=w_rowID.value, wit=w_wit.value)\n\n#add row after\nb1 = widgets.Button(description=\"add row after\", \n style=ButtonStyle(button_color='#fae58b'))\nb1.on_click(modif_on_click)\n\n#uncomment the next line to see the widget\n#interact_manual(add_row_after, rowID=w_rowID, {'manual': True, 'manual_name': 'add row after'})\n\n#delete row\nb2 = widgets.Button(description=\"delete row\", \n style=b1.style)\nb2.on_click(modif_on_click)\n\n#uncomment the next line to see the widget\n#interact_manual(delete_row, rowID=w_rowID, {'manual': True, 'manual_name': 'delete row'})\n\n#move token down\nb3 = widgets.Button(description=\"move token down\", \n style=b1.style)\nb3.on_click(modif_on_click)\n\n#uncomment the next line to see the widget\n#interact_manual(move_token_down, rowID=w_rowID, wit=w_wit, {'manual': True, 'manual_name': 'move token down'})\n\n#move token up\nb4 = widgets.Button(description=\"move token up\", \n style=b1.style)\nb4.on_click(modif_on_click)\n\n#uncomment the next line to see the widget\n#interact_manual(move_token_up, rowID=w_rowID, wit=w_wit, {'manual': True, 'manual_name': 'move token up'})\n\n#add/delete notes\n#link add button and function\n@out.capture(clear_output=True)\ndef add_on_click(b):\n add_note(wit=w_wit.value, rowID=w_rowID.value, token=w_token.value, note=w_note.value)\n #check result\n print('Result:')\n print_info(w_rowID.value, w_wit.value)\n print('\\n')\n\n#add a note button\nw_add_note = widgets.Button(description='Add note', button_style='success') \nw_add_note.on_click(add_on_click)\n\n#link del button and function\n@out.capture(clear_output=True)\ndef del_on_click(b):\n del_note(wit=w_wit.value, rowID=w_rowID.value, token=w_token.value)\n #check result\n print('Result:')\n print_info(w_rowID.value, w_wit.value)\n\n#delete a note button\nw_del_note = widgets.Button(description='Delete note', button_style='danger')\nw_del_note.on_click(del_on_click)\n\n#display widgets\n#uncomment the next 
line to see the widgets\n#display(w_wit, w_rowID, w_token, w_note)\n#display(w_add_note, w_del_note)\n\n#save new json\n\n#path to the new file\npath_new_json = 'json-collations/calpurnius-collation-joint-BCMNPH-corr.json'\n\n#alternative path: take the original collation file name, and add a date/time identifier\n#file_name = os.path.split(path)[1]\n#file_id = datetime.now().strftime('%Y-%m-%d-%H%M%S')\n#path_new_json = 'json-collations/'+file_id+'-'+file_name\n\n#save button to click\nw_button = widgets.Button(description=\"Save JSON\", button_style='info')\n\n#on click\ndef on_button_clicked(b):\n #save json of the whole collation\n save_json(path_new_json, collation)\n\n#link btw button and onclick function \nw_button.on_click(on_button_clicked)\n\n#save json\n#uncomment the next line to see the widget\n#display(w_button)",
"Find agreements of witnesses against others <a name='part3-2'></a> ↑\nIn the first drop-down menu, you will be able to choose one or more witness(es) which agree together in places where all the other witnesses have a different reading. This means that, if you pick only one witness, you will see its unique variants.\nOn the other hand, you may want to see the agreements of witnesses against another group of witnesses selected in the second drop-down menu. This allows you, for instance, to leave the modern editors out of the comparison. This option also makes it possible to compare groups of witnesses, to see whether they share erroneous readings (or innovations).\nFinally, if you choose only one witness in each drop-down menu, you will be able to see the differences between the two witnesses.",
"#widget for HTML display\nw2_html = widgets.HTML(value=\"\")\n\n#selection of a group of witnesses which share the same readings\nw1 = widgets.SelectMultiple(\n description=\"Agreements:\",\n options=witnesses\n)\n\n#selection of a secong group of witnesses\nw2 = widgets.SelectMultiple(\n description=\"Against:\",\n options=witnesses\n)\n\ndef collation_compare(table, a, b):\n #transform widget tuple into actual list\n if isinstance(a, (tuple)):\n witlist = list(a)\n nonwitlist = list(b)\n else:\n witlist = [a]\n nonwitlist = [b]\n if not a:\n print(\"No witness selected.\")\n else:\n #create the result table\n result = compare_witnesses(table, witlist, nonwitlist)\n #transform table to HTML\n html_table = table_to_html(table,result)\n #add an indication of the number of rows in the result table\n html_table += '<span>Total: '+str(len(result))+' rows in the table.</span>'\n #set HTML display value\n w2_html.value = html_table\n\n#-----------\n#save button\nw_save = widgets.Button(description=\"Save Table\", button_style='info')\n\n#description of the table\nw_descr = widgets.Text(value=\"Table description\")\n\ndef on_button_clicked(x):\n #transform widget tuple into actual list\n if isinstance(w1.value, (tuple)):\n witlist = list(w1.value)\n nonwitlist = list(w2.value)\n else:\n witlist = [w1.value]\n nonwitlist = [w2.value]\n if not w1.value:\n print(\"No table to save.\")\n else:\n #path for new result file\n file_id = datetime.now().strftime('%Y-%m-%d-%H%M%S')\n path_result = 'alignment-tables/collation-'+file_id+'.html'\n #description\n descr = str(w_descr.value)\n #html table\n table = table_to_html_fancy(collation,compare_witnesses(collation, witlist, nonwitlist))\n #save\n save_table(descr, table, path_result)\n\n#link button with saving action\nw_save.on_click(on_button_clicked)\n#---------------\n\n#find agreements between witnesses or unique readings\n#uncomment the next line to see the widgets\n#interact(collation_compare, table=fixed(collation), a=w1, 
b=w2)\n\n#display(w2_html)\n#display(w_descr)\n#display(w_save)",
"Search the collation <a name='part3-3'></a> ↑\nBasic search. It will check for token t or normalized form n, and return rows where there is at least one match.",
"#widget for HTML display\nw3_html = widgets.HTML(value=\"\")\n\n#do the search\ndef search_collation(table,text):\n w3_html.value = table_to_html(table,search(table,text))\n \n#search collation with interactive text input\n#uncomment the next line to see the widgets\n#interact(search_collation, table=fixed(collation),text=\"calpurnius\",__manual=True)\n#display(w3_html)",
"Clarify one reading <a name='part3-4'></a> ↑",
"#Examples: 459/C1, 932/M1, 9/LH\n#uncomment the next line to see the widget\n#interact(print_info, rowID=w_rowID, wit=w_wit)",
"Summary of Widgets <a name='summary'></a> ↑\nHere you will find all possible interactions gathered into one tab widget. It is a widget that displays several pages in tabs. each page is dedicated to a specific interaction: \n1. Extract: select a collation extract\n2. Modifications: update the collation by adding/deleting rows, moving tokens or adding/deleting notes\n3. Agreements: select groups of witnesses to see what readings they have in common, against other witnesses\n4. Search: search the collation for a single reading, for which you can see more information in the last page\n5. Clarify: see more information about one reading, i.e. all properties of the tokens in one witness, from a specific row",
"#Using the tab widget, gather all interactions in one place\ntab = widgets.Tab()\n\n#page 1 = view extract\nw_extract = interactive(collation_extract, a=w_from, b=w_to)\npage1 = widgets.VBox(children = [w_extract, w1_html])\n\n#page 2 = modify the collation\nw_modif1 = widgets.VBox(children=[w_rowID, b1, out])#add row\nw_modif2 = widgets.VBox(children=[w_rowID, b2, out])#delete row\nw_modif3 = widgets.VBox(children=[w_rowID, w_wit, b3, out])#move token down\nw_modif4 = widgets.VBox(children=[w_rowID, w_wit, b4, out])#move token up\nw_modif5 = widgets.VBox([w_wit, w_rowID, w_token, w_note, \n widgets.HBox(children=[w_add_note, w_del_note]), out])#add/del notes\n \naccordion = widgets.Accordion(children=[w_modif1, w_modif2, w_modif3, w_modif4, w_modif5])\naccordion.set_title(0, 'Add Row')\naccordion.set_title(1, 'Delete Row')\naccordion.set_title(2, 'Move Token Down')\naccordion.set_title(3, 'Move Token Up')\naccordion.set_title(4, 'Notes')\naccordion.selected_index = None\npage2 = widgets.VBox(children = [accordion, w_button])\n\n#page 3 = find agreements\nw_agr = interactive(collation_compare, table=fixed(collation), a=w1, b=w2)\npage3 = widgets.VBox(children = [w_agr, w2_html, w_descr, w_save])\n\n#page 4 = search\nw_search = interactive(search_collation, {'manual' : True, 'manual_name' : 'Search'}, table=fixed(collation),text=\"calpurnius\")\npage4 = widgets.VBox(children = [w_search, w3_html])\n\n#page 5 = clarify\nw_clar = interactive(print_info, rowID=w_rowID, wit=w_wit)\npage5 = widgets.VBox(children = [w_clar])\n\ntab.children = [page1, page2, page3, page4, page5]\ntab.set_title(0, 'Extract')\ntab.set_title(1, 'Modifications')\ntab.set_title(2, 'Find Agreements')\ntab.set_title(3, 'Search')\ntab.set_title(4, 'Clarify')\n\ndisplay(tab)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DominikDitoIvosevic/Uni
|
STRUCE/SU-2019-LAB01-0036477171.ipynb
|
mit
|
[
"Sveučilište u Zagrebu\nFakultet elektrotehnike i računarstva \nStrojno učenje 2019/2020\nhttp://www.fer.unizg.hr/predmet/su\n\nLaboratorijska vježba 1: Regresija\nVerzija: 1.2\nZadnji put ažurirano: 27. rujna 2019.\n(c) 2015-2019 Jan Šnajder, Domagoj Alagić \nObjavljeno: 30. rujna 2019.\nRok za predaju: 21. listopada 2019. u 07:00h\n\nUpute\nPrva laboratorijska vježba sastoji se od deset zadataka. U nastavku slijedite upute navedene u ćelijama s tekstom. Rješavanje vježbe svodi se na dopunjavanje ove bilježnice: umetanja ćelije ili više njih ispod teksta zadatka, pisanja odgovarajućeg kôda te evaluiranja ćelija. \nOsigurajte da u potpunosti razumijete kôd koji ste napisali. Kod predaje vježbe, morate biti u stanju na zahtjev asistenta (ili demonstratora) preinačiti i ponovno evaluirati Vaš kôd. Nadalje, morate razumjeti teorijske osnove onoga što radite, u okvirima onoga što smo obradili na predavanju. Ispod nekih zadataka možete naći i pitanja koja služe kao smjernice za bolje razumijevanje gradiva (nemojte pisati odgovore na pitanja u bilježnicu). Stoga se nemojte ograničiti samo na to da riješite zadatak, nego slobodno eksperimentirajte. To upravo i jest svrha ovih vježbi.\nVježbe trebate raditi samostalno. Možete se konzultirati s drugima o načelnom načinu rješavanja, ali u konačnici morate sami odraditi vježbu. U protivnome vježba nema smisla.",
"# Učitaj osnovne biblioteke...\nimport numpy as np\nimport sklearn\nimport matplotlib.pyplot as plt\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\n%pylab inline\n",
"Zadatci\n1. Jednostavna regresija\nZadan je skup primjera $\\mathcal{D}={(x^{(i)},y^{(i)})}_{i=1}^4 = {(0,4),(1,1),(2,2),(4,5)}$. Primjere predstavite matrixom $\\mathbf{X}$ dimenzija $N\\times n$ (u ovom slučaju $4\\times 1$) i vektorom oznaka $\\textbf{y}$, dimenzija $N\\times 1$ (u ovom slučaju $4\\times 1$), na sljedeći način:",
"X = np.array([[0],[1],[2],[4]])\ny = np.array([4,1,2,5])\n\nX1 = X\ny1 = y",
"(a)\nProučite funkciju PolynomialFeatures iz biblioteke sklearn i upotrijebite je za generiranje matrice dizajna $\\mathbf{\\Phi}$ koja ne koristi preslikavanje u prostor više dimenzije (samo će svakom primjeru biti dodane dummy jedinice; $m=n+1$).",
"from sklearn.preprocessing import PolynomialFeatures\n\npoly = PolynomialFeatures(1)\nphi = poly.fit_transform(X)\nprint(phi)\n\n# Vaš kôd ovdje",
"(b)\nUpoznajte se s modulom linalg. Izračunajte težine $\\mathbf{w}$ modela linearne regresije kao $\\mathbf{w}=(\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi})^{-1}\\mathbf{\\Phi}^\\intercal\\mathbf{y}$. Zatim se uvjerite da isti rezultat možete dobiti izračunom pseudoinverza $\\mathbf{\\Phi}^+$ matrice dizajna, tj. $\\mathbf{w}=\\mathbf{\\Phi}^+\\mathbf{y}$, korištenjem funkcije pinv.",
"from numpy import linalg\n\npinverse1 = pinv(phi)\npinverse2 = matmul(inv(matmul(transpose(phi), phi)), transpose(phi))\n\n#print(pinverse1)\n#print(pinverse2)\n\nw = matmul(pinverse2, y)\nprint(w)\n \n# Vaš kôd ovdje",
"Radi jasnoće, u nastavku je vektor $\\mathbf{x}$ s dodanom dummy jedinicom $x_0=1$ označen kao $\\tilde{\\mathbf{x}}$.\n(c)\nPrikažite primjere iz $\\mathcal{D}$ i funkciju $h(\\tilde{\\mathbf{x}})=\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}$. Izračunajte pogrešku učenja prema izrazu $E(h|\\mathcal{D})=\\frac{1}{2}\\sum_{i=1}^N(\\tilde{\\mathbf{y}}^{(i)} - h(\\tilde{\\mathbf{x}}))^2$. Možete koristiti funkciju srednje kvadratne pogreške mean_squared_error iz modula sklearn.metrics.\nQ: Gore definirana funkcija pogreške $E(h|\\mathcal{D})$ i funkcija srednje kvadratne pogreške nisu posve identične. U čemu je razlika? Koja je \"realnija\"?",
"import sklearn.metrics as mt\n\nwt = w #(np.array([w]))\n\nprint(wt)\nprint(phi)\n\nhx = np.dot(phi, w)\n\nE = mt.mean_squared_error(hx, y)\nprint(E)\n\n# Vaš kôd ovdje",
"(d)\nUvjerite se da za primjere iz $\\mathcal{D}$ težine $\\mathbf{w}$ ne možemo naći rješavanjem sustava $\\mathbf{w}=\\mathbf{\\Phi}^{-1}\\mathbf{y}$, već da nam doista treba pseudoinverz.\nQ: Zašto je to slučaj? Bi li se problem mogao riješiti preslikavanjem primjera u višu dimenziju? Ako da, bi li to uvijek funkcioniralo, neovisno o skupu primjera $\\mathcal{D}$? Pokažite na primjeru.",
"# Vaš kôd ovdje\n\ntry:\n w = matmul(inv(phi), y)\n print(w)\nexcept LinAlgError as err:\n print(\"Exception\")\n print(err)",
"(e)\nProučite klasu LinearRegression iz modula sklearn.linear_model. Uvjerite se da su težine koje izračunava ta funkcija (dostupne pomoću atributa coef_ i intercept_) jednake onima koje ste izračunali gore. Izračunajte predikcije modela (metoda predict) i uvjerite se da je pogreška učenja identična onoj koju ste ranije izračunali.",
"from sklearn.linear_model import LinearRegression\n\n# Vaš kôd ovdje\nlr = LinearRegression().fit(X, y)\n#print(lr.score(X, y))\n#print(lr.coef_)\n#print(lr.intercept_)\nprint([lr.intercept_, lr.coef_])\n\nprint(wt)\n\npr = lr.predict(X)\nE = mt.mean_squared_error(pr, y)\nprint(E)",
"2. Polinomijalna regresija i utjecaj šuma\n(a)\nRazmotrimo sada regresiju na većem broju primjera. Definirajte funkciju make_labels(X, f, noise=0) koja uzima matricu neoznačenih primjera $\\mathbf{X}{N\\times n}$ te generira vektor njihovih oznaka $\\mathbf{y}{N\\times 1}$. Oznake se generiraju kao $y^{(i)} = f(x^{(i)})+\\mathcal{N}(0,\\sigma^2)$, gdje je $f:\\mathbb{R}^n\\to\\mathbb{R}$ stvarna funkcija koja je generirala podatke (koja nam je u stvarnosti nepoznata), a $\\sigma$ je standardna devijacija Gaussovog šuma, definirana parametrom noise. Za generiranje šuma možete koristiti funkciju numpy.random.normal. \nGenerirajte skup za učenje od $N=50$ primjera uniformno distribuiranih u intervalu $[-5,5]$ pomoću funkcije $f(x) = 5 + x -2 x^2 -5 x^3$ uz šum $\\sigma=200$:",
"from numpy.random import normal\n\ndef make_labels(X, f, noise=0) :\n # Vaš kôd ovdje\n N = numpy.random.normal\n fx = f(X)\n #nois = [N(0, noise) for _ in range(X.shape[0])]\n #print(nois)\n #y = f(X) + nois\n y = [ f(x) + N(0, noise) for x in X ]\n \n return y\n\n\ndef make_instances(x1, x2, N) :\n return np.array([np.array([x]) for x in np.linspace(x1,x2,N)])",
"Prikažite taj skup funkcijom scatter.",
"# Vaš kôd ovdje\nN = 50\ndef f(x):\n return 5 + x - 2*x*x - 5*x*x*x\nnoise = 200\n\nX2 = make_instances(-5, 5, N)\ny2 = make_labels(X2, f, noise)\n\n#print(X)\n#print(y)\n\ns = scatter(X2, y2)",
"(b)\nTrenirajte model polinomijalne regresije stupnja $d=3$. Na istom grafikonu prikažite naučeni model $h(\\mathbf{x})=\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}$ i primjere za učenje. Izračunajte pogrešku učenja modela.",
"# Vaš kôd ovdje\nimport sklearn.linear_model as lm\n\ndef polyX(d):\n\n p3 = PolynomialFeatures(d).fit_transform(X2)\n l2 = LinearRegression().fit(p3, y2)\n h2 = l2.predict(p3)\n\n E = mt.mean_squared_error(h2, y2)\n print('d: ' + str(d) + ' E: ' + str(E))\n #print(p3)\n plot(X2, h2, label = str(d))\n\nscatter(X2, y2)\npolyX(3)\n",
"3. Odabir modela\n(a)\nNa skupu podataka iz zadatka 2 trenirajte pet modela linearne regresije $\\mathcal{H}_d$ različite složenosti, gdje je $d$ stupanj polinoma, $d\\in{1,3,5,10,20}$. Prikažite na istome grafikonu skup za učenje i funkcije $h_d(\\mathbf{x})$ za svih pet modela (preporučujemo koristiti plot unutar for petlje). Izračunajte pogrešku učenja svakog od modela.\nQ: Koji model ima najmanju pogrešku učenja i zašto?",
"# Vaš kôd ovdje\nfigure(figsize=(15,10))\nscatter(X2, y2)\npolyX(1)\npolyX(3)\npolyX(5)\npolyX(10)\npolyX(20)\n\ns = plt.legend(loc=\"center right\")\n\n",
"(b)\nRazdvojite skup primjera iz zadatka 2 pomoću funkcije model_selection.train_test_split na skup za učenja i skup za ispitivanje u omjeru 1:1. Prikažite na jednom grafikonu pogrešku učenja i ispitnu pogrešku za modele polinomijalne regresije $\\mathcal{H}_d$, sa stupnjem polinoma $d$ u rasponu $d\\in [1,2,\\ldots,20]$. Budući da kvadratna pogreška brzo raste za veće stupnjeve polinoma, umjesto da iscrtate izravno iznose pogrešaka, iscrtajte njihove logaritme.\nNB: Podjela na skupa za učenje i skup za ispitivanje mora za svih pet modela biti identična.\nQ: Je li rezultat u skladu s očekivanjima? Koji biste model odabrali i zašto?\nQ: Pokrenite iscrtavanje više puta. U čemu je problem? Bi li problem bio jednako izražen kad bismo imali više primjera? Zašto?",
"from sklearn.model_selection import train_test_split\n\n# Vaš kôd ovdje\n\nxTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)\n\ntestError = []\ntrainError = []\n\nfor d in range(1,33):\n\n polyXTrain = PolynomialFeatures(d).fit_transform(xTr) \n polyXTest = PolynomialFeatures(d).fit_transform(xTest)\n\n l2 = LinearRegression().fit(polyXTrain, yTr)\n h2 = l2.predict(polyXTest)\n\n E = mt.mean_squared_error(h2, yTest)\n #print('d: ' + str(d) + ' E: ' + str(E))\n testError.append(E)\n \n \n h2 = l2.predict(polyXTrain)\n\n E = mt.mean_squared_error(h2, yTr)\n #print('d: ' + str(d) + ' E: ' + str(E))\n trainError.append(E)\n #print(p3)\n #plot(polyXTest, h2, label = str(d))\n\nplot(numpy.log(numpy.array(testError)), label='test')\nplot(numpy.log(numpy.array(trainError)), label='train')\nlegend()",
"(c)\nTočnost modela ovisi o (1) njegovoj složenosti (stupanj $d$ polinoma), (2) broju primjera $N$, i (3) količini šuma. Kako biste to analizirali, nacrtajte grafikone pogrešaka kao u 3b, ali za sve kombinacija broja primjera $N\\in{100,200,1000}$ i količine šuma $\\sigma\\in{100,200,500}$ (ukupno 9 grafikona). Upotrijebite funkciju subplots kako biste pregledno posložili grafikone u tablicu $3\\times 3$. Podatci se generiraju na isti način kao u zadatku 2.\nNB: Pobrinite se da svi grafikoni budu generirani nad usporedivim skupovima podataka, na sljedeći način. Generirajte najprije svih 1000 primjera, podijelite ih na skupove za učenje i skupove za ispitivanje (dva skupa od po 500 primjera). Zatim i od skupa za učenje i od skupa za ispitivanje načinite tri različite verzije, svaka s drugačijom količinom šuma (ukupno 2x3=6 verzija podataka). Kako bi simulirali veličinu skupa podataka, od tih dobivenih 6 skupova podataka uzorkujte trećinu, dvije trećine i sve podatke. Time ste dobili 18 skupova podataka -- skup za učenje i za testiranje za svaki od devet grafova.",
"# Vaš kôd ovdje\n\n# Vaš kôd ovdje\nfigure(figsize=(15,15))\n\nN = 1000\ndef f(x):\n return 5 + x - 2*x*x - 5*x*x*x\n\nX3 = make_instances(-5, 5, N)\n\nxAllTrain, xAllTest = train_test_split(X3, test_size=0.5)\ni = 0\nj = 0\n\nfor N in [100, 200, 1000]:\n for noise in [100, 200, 500]:\n j += 1\n \n xTrain = xAllTrain[:N]\n xTest = xAllTest[:N]\n yTrain = make_labels(xTrain, f, noise)\n yTest = make_labels(xTest, f, noise)\n\n trainError = []\n testError = []\n\n for d in range(1,21):\n\n polyXTrain = PolynomialFeatures(d).fit_transform(xTrain) \n polyXTest = PolynomialFeatures(d).fit_transform(xTest)\n\n l2 = LinearRegression().fit(polyXTrain, yTrain)\n h2 = l2.predict(polyXTest)\n\n testE = mt.mean_squared_error(h2, yTest)\n testError.append(testE)\n \n h2 = l2.predict(polyXTrain)\n trainE = mt.mean_squared_error(h2, yTrain)\n trainError.append(trainE)\n #print('d: ' + str(d) + ' E: ' + str(E))\n #print(p3)\n #plot(polyXTest, h2, label = str(d))\n\n subplot(3,3,j, title = \"N: \" + str(N) + \" noise: \" + str(noise))\n plot(numpy.log(numpy.array(trainError)), label = 'train') \n plot(numpy.log(numpy.array(testError)), label = 'test')\n plt.legend(loc=\"center right\")\n \n\n\n\n\n#print(X)\n#print(y)\n\n#s = scatter(X2, y2)",
"Q: Jesu li rezultati očekivani? Obrazložite.\n4. Regularizirana regresija\n(a)\nU gornjim eksperimentima nismo koristili regularizaciju. Vratimo se najprije na primjer iz zadatka 1. Na primjerima iz tog zadatka izračunajte težine $\\mathbf{w}$ za polinomijalni regresijski model stupnja $d=3$ uz L2-regularizaciju (tzv. ridge regression), prema izrazu $\\mathbf{w}=(\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I})^{-1}\\mathbf{\\Phi}^\\intercal\\mathbf{y}$. Napravite izračun težina za regularizacijske faktore $\\lambda=0$, $\\lambda=1$ i $\\lambda=10$ te usporedite dobivene težine.\nQ: Kojih je dimenzija matrica koju treba invertirati?\nQ: Po čemu se razlikuju dobivene težine i je li ta razlika očekivana? Obrazložite.",
"# Vaš kôd ovdje\nphi4 = PolynomialFeatures(3).fit_transform(X1)\n\ndef reg2(lambd):\n w = matmul( matmul(inv( matmul(transpose(phi4), phi4) + lambd * identity(len(phi4))), transpose(phi4)), y1)\n print(w)\n \nreg2(0)\nreg2(1)\nreg2(10)",
"(b)\nProučite klasu Ridge iz modula sklearn.linear_model, koja implementira L2-regularizirani regresijski model. Parametar $\\alpha$ odgovara parametru $\\lambda$. Primijenite model na istim primjerima kao u prethodnom zadatku i ispišite težine $\\mathbf{w}$ (atributi coef_ i intercept_).\nQ: Jesu li težine identične onima iz zadatka 4a? Ako nisu, objasnite zašto je to tako i kako biste to popravili.",
"\nfrom sklearn.linear_model import Ridge\n\n#for s in ['auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']:\nfor l in [0, 1, 10]:\n\n r = Ridge(l, fit_intercept = False).fit(phi4, y1)\n print(r.coef_)\n print(r.intercept_)\n\n # Vaš kôd ovdje",
"5. Regularizirana polinomijalna regresija\n(a)\nVratimo se na slučaj $N=50$ slučajno generiranih primjera iz zadatka 2. Trenirajte modele polinomijalne regresije $\\mathcal{H}_{\\lambda,d}$ za $\\lambda\\in{0,100}$ i $d\\in{2,10}$ (ukupno četiri modela). Skicirajte pripadne funkcije $h(\\mathbf{x})$ i primjere (na jednom grafikonu; preporučujemo koristiti plot unutar for petlje).\nQ: Jesu li rezultati očekivani? Obrazložite.",
"# Vaš kôd ovdje\n\nN = 50\n\nfigure(figsize = (15, 15))\nx123 = scatter(X2, y2)\n\nfor lambd in [0, 100]:\n for d in [2, 10]:\n phi2 = PolynomialFeatures(d).fit_transform(X2)\n r = Ridge(lambd).fit(phi2, y2)\n h2 = r.predict(phi2)\n #print(d)\n plot(X2, h2, label=\"lambda \" + str(lambd) + \" d \" + str(d))\n \nx321 = plt.legend(loc=\"center right\")\n",
"(b)\nKao u zadataku 3b, razdvojite primjere na skup za učenje i skup za ispitivanje u omjeru 1:1. Prikažite krivulje logaritama pogreške učenja i ispitne pogreške u ovisnosti za model $\\mathcal{H}_{d=10,\\lambda}$, podešavajući faktor regularizacije $\\lambda$ u rasponu $\\lambda\\in{0,1,\\dots,50}$.\nQ: Kojoj strani na grafikonu odgovara područje prenaučenosti, a kojoj podnaučenosti? Zašto?\nQ: Koju biste vrijednosti za $\\lambda$ izabrali na temelju ovih grafikona i zašto?",
"# Vaš kôd ovdje\n\n\nxTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)\nfigure(figsize=(10,10))\ntrainError = []\ntestError = []\n\n#print(xTr)\n\nfor lambd in range(0,51):\n\n polyXTrain = PolynomialFeatures(10).fit_transform(xTr) \n polyXTest = PolynomialFeatures(10).fit_transform(xTest)\n\n l2 = Ridge(lambd).fit(polyXTrain, yTr)\n h2 = l2.predict(polyXTest)\n\n E = mt.mean_squared_error(h2, yTest)\n #print('d: ' + str(d) + ' E: ' + str(E))\n testError.append(log( E))\n \n h2 = l2.predict(polyXTrain)\n E = mt.mean_squared_error(h2, yTr)\n trainError.append(log(E))\n #print(p3)\n #plot(polyXTest, h2, label = str(d))\n#print(numpy.log(numpy.array(testError)))\nplot(numpy.log(numpy.array(testError)), label=\"test\")\nplot(numpy.log(numpy.array(trainError)), label=\"train\")\ngrid()\nlegend()",
"6. L1-regularizacija i L2-regularizacija\nSvrha regularizacije jest potiskivanje težina modela $\\mathbf{w}$ prema nuli, kako bi model bio što jednostavniji. Složenost modela može se okarakterizirati normom pripadnog vektora težina $\\mathbf{w}$, i to tipično L2-normom ili L1-normom. Za jednom trenirani model možemo izračunati i broj ne-nul značajki, ili L0-normu, pomoću sljedeće funkcije koja prima vektor težina $\\mathbf{w}$:",
"def nonzeroes(coef, tol=1e-6): \n return len(coef) - len(coef[np.isclose(0, coef, atol=tol)])",
"(a)\nZa ovaj zadatak upotrijebite skup za učenje i skup za testiranje iz zadatka 3b. Trenirajte modele L2-regularizirane polinomijalne regresije stupnja $d=10$, mijenjajući hiperparametar $\\lambda$ u rasponu ${1,2,\\dots,100}$. Za svaki od treniranih modela izračunajte L{0,1,2}-norme vektora težina $\\mathbf{w}$ te ih prikažite kao funkciju od $\\lambda$. Pripazite što točno šaljete u funkciju za izračun normi.\nQ: Objasnite oblik obiju krivulja. Hoće li krivulja za $\\|\\mathbf{w}\\|_2$ doseći nulu? Zašto? Je li to problem? Zašto?\nQ: Za $\\lambda=100$, koliki je postotak težina modela jednak nuli, odnosno koliko je model rijedak?",
"# Vaš kôd ovdje\nd = 10\n\nl0 = []\nl1 = []\nl2 = []\n\n\nxTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)\n\nfor lambd in range(0,101):\n\n polyXTrain = PolynomialFeatures(10).fit_transform(xTr) \n polyXTest = PolynomialFeatures(10).fit_transform(xTest)\n\n r = Ridge(lambd).fit(polyXTrain, yTr)\n \n r.coef_[0] = r.intercept_\n \n l0.append(nonzeroes(r.coef_))\n #print(r.coef_)\n l1.append(numpy.linalg.norm(r.coef_, ord=1))\n l2.append(numpy.linalg.norm(r.coef_, ord=2))\n \n\nfigure(figsize=(10,10))\nplot(l0, label=\"l0\")\nlegend()\ngrid()\n\nfigure(figsize=(10,10))\nplot(l1, label=\"l1\")\nlegend()\ngrid()\n\nfigure(figsize=(10,10))\nplot(l2, label=\"l2\")\nlegend()\ngrid()\n",
"(b)\nGlavna prednost L1-regularizirane regresije (ili LASSO regression) nad L2-regulariziranom regresijom jest u tome što L1-regularizirana regresija rezultira rijetkim modelima (engl. sparse models), odnosno modelima kod kojih su mnoge težine pritegnute na nulu. Pokažite da je to doista tako, ponovivši gornji eksperiment s L1-regulariziranom regresijom, implementiranom u klasi Lasso u modulu sklearn.linear_model. Zanemarite upozorenja.",
"# Vaš kôd ovdje\nd = 10\n\nl0 = []\nl1 = []\nl2 = []\n\n\nxTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)\n\nfor lambd in range(0,101):\n\n polyXTrain = PolynomialFeatures(10).fit_transform(xTr) \n polyXTest = PolynomialFeatures(10).fit_transform(xTest)\n\n r = sklearn.linear_model.Lasso(lambd).fit(polyXTrain, yTr)\n \n r.coef_[0] = r.intercept_\n \n l0.append(nonzeroes(r.coef_))\n #print(r.coef_)\n l1.append(numpy.linalg.norm(r.coef_, ord=1))\n l2.append(numpy.linalg.norm(r.coef_, ord=2))\n \n\nfigure(figsize=(10,10))\nplot(l0, label=\"l0\")\nlegend()\n\nfigure(figsize=(10,10))\nplot(l1, label=\"l1\")\nlegend()\n\nfigure(figsize=(10,10))\nplot(l2, label=\"l2\")\nlegend()",
"7. Značajke različitih skala\nČesto se u praksi možemo susreti sa podatcima u kojima sve značajke nisu jednakih magnituda. Primjer jednog takvog skupa je regresijski skup podataka grades u kojem se predviđa prosjek ocjena studenta na studiju (1--5) na temelju dvije značajke: bodova na prijamnom ispitu (1--3000) i prosjeka ocjena u srednjoj školi. Prosjek ocjena na studiju izračunat je kao težinska suma ove dvije značajke uz dodani šum.\nKoristite sljedeći kôd kako biste generirali ovaj skup podataka.",
"n_data_points = 500\nnp.random.seed(69)\n\n# Generiraj podatke o bodovima na prijamnom ispitu koristeći normalnu razdiobu i ograniči ih na interval [1, 3000].\nexam_score = np.random.normal(loc=1500.0, scale = 500.0, size = n_data_points) \nexam_score = np.round(exam_score)\nexam_score[exam_score > 3000] = 3000\nexam_score[exam_score < 0] = 0\n\n# Generiraj podatke o ocjenama iz srednje škole koristeći normalnu razdiobu i ograniči ih na interval [1, 5].\ngrade_in_highschool = np.random.normal(loc=3, scale = 2.0, size = n_data_points)\ngrade_in_highschool[grade_in_highschool > 5] = 5\ngrade_in_highschool[grade_in_highschool < 1] = 1\n\n# Matrica dizajna.\ngrades_X = np.array([exam_score,grade_in_highschool]).T\n\n# Završno, generiraj izlazne vrijednosti.\nrand_noise = np.random.normal(loc=0.0, scale = 0.5, size = n_data_points)\nexam_influence = 0.9\ngrades_y = ((exam_score / 3000.0) * (exam_influence) + (grade_in_highschool / 5.0) \\\n * (1.0 - exam_influence)) * 5.0 + rand_noise\ngrades_y[grades_y < 1] = 1\ngrades_y[grades_y > 5] = 5",
"a) Iscrtajte ovisnost ciljne vrijednosti (y-os) o prvoj i o drugoj značajki (x-os). Iscrtajte dva odvojena grafa.",
"# Vaš kôd ovdje\n\nfigure(figsize=(10,10))\nscatter(exam_score, grades_y, label=\"l2\")\nlegend()\n\nfigure(figsize=(10,10))\nscatter(grade_in_highschool, grades_y, label=\"l2\")\nlegend()",
"b) Naučite model L2-regularizirane regresije ($\\lambda = 0.01$), na podacima grades_X i grades_y:",
"# Vaš kôd ovdje\nr7b = Ridge(0.01).fit(grades_X, grades_y)\nh2 = r7b.predict(grades_X)\nE = mt.mean_squared_error(h2, grades_y)\nprint(E)",
"Sada ponovite gornji eksperiment, ali prvo skalirajte podatke grades_X i grades_y i spremite ih u varijable grades_X_fixed i grades_y_fixed. Za tu svrhu, koristite StandardScaler.",
"from sklearn.preprocessing import StandardScaler\n\n# Vaš kôd ovdje\nssX = StandardScaler().fit_transform(grades_X)\nssY = StandardScaler().fit_transform(grades_y.reshape(-1, 1))\nr = Ridge(0.01).fit(ssX, ssY)\nh2 = r.predict(ssX)\nE = mt.mean_squared_error(h2, ssY)\nprint(E)",
"Q: Gledajući grafikone iz podzadatka (a), koja značajka bi trebala imati veću magnitudu, odnosno važnost pri predikciji prosjeka na studiju? Odgovaraju li težine Vašoj intuiciji? Objasnite. \n8. Multikolinearnost i kondicija matrice\na) Izradite skup podataka grades_X_fixed_colinear tako što ćete u skupu grades_X_fixed iz\nzadatka 7b duplicirati zadnji stupac (ocjenu iz srednje škole). Time smo efektivno uveli savršenu multikolinearnost.",
"# Vaš kôd ovdje\ngrades_X_fixed_colinear = [ [x[0], x[1], x[1]] for x in ssX]\n#print(grades_X_fixed_colinear)",
"Ponovno, naučite na ovom skupu L2-regularizirani model regresije ($\\lambda = 0.01$).",
"# Vaš kôd ovdje\nr8a = Ridge(0.01).fit(grades_X_fixed_colinear, ssY)\nh2 = r8a.predict(grades_X_fixed_colinear)\nE = mt.mean_squared_error(h2, ssY)\nprint(E)\n\nprint(r7b.coef_)\nprint(r8a.coef_)",
"Q: Usporedite iznose težina s onima koje ste dobili u zadatku 7b. Što se dogodilo?\nb) Slučajno uzorkujte 50% elemenata iz skupa grades_X_fixed_colinear i naučite dva modela L2-regularizirane regresije, jedan s $\\lambda=0.01$ i jedan s $\\lambda=1000$). Ponovite ovaj pokus 10 puta (svaki put s drugim podskupom od 50% elemenata). Za svaki model, ispišite dobiveni vektor težina u svih 10 ponavljanja te ispišite standardnu devijaciju vrijednosti svake od težina (ukupno šest standardnih devijacija, svaka dobivena nad 10 vrijednosti).",
"# Vaš kôd ovdje\n\nfor lambd in [0.01, 1000]:\n print(lambd)\n ws1 = []\n ws2 = []\n ws3 = []\n for i in range(10):\n \n xTrain, xTest, yTrain, yTest = train_test_split(grades_X_fixed_colinear, ssY, test_size=0.5)\n\n\n print(l2.coef_)\n l2 = Ridge(lambd).fit(xTrain, yTrain)\n ws1.append(l2.coef_[0][0])\n ws2.append(l2.coef_[0][1])\n ws3.append(l2.coef_[0][2])\n \n print(\"std dev: \" + str(np.std(ws1)))\n print(\"std dev: \" + str(np.std(ws2)))\n print(\"std dev: \" + str(np.std(ws3)))\n\n",
"Q: Kako regularizacija utječe na stabilnost težina?\nQ: Jesu li koeficijenti jednakih magnituda kao u prethodnom pokusu? Objasnite zašto.\nc) Koristeći numpy.linalg.cond izračunajte kondicijski broj matrice $\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I}$, gdje je $\\mathbf{\\Phi}$ matrica dizajna (grades_X_fixed_colinear). Ponovite i za $\\lambda=0.01$ i za $\\lambda=10$.",
"# Vaš kôd ovdje\n#print(grades_X_fixed_colinear)\nfor l in [0.01, 10]:\n #print(l * identity(len(grades_X_fixed_colinear)))\n mm = matmul(transpose(grades_X_fixed_colinear), grades_X_fixed_colinear)\n matr = mm + l * identity(len(mm))\n print(matr)\n print(np.linalg.cond(matr))",
"Q: Kako regularizacija utječe na kondicijski broj matrice $\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I}$?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mikheyev/phage-lab
|
src/Raw data.ipynb
|
mit
|
[
"What do the data look like?\nJupyter IPython notebooks, such as this one, allow you to run both Python code and, using 'magics' also shell commands. In this tutorial we'll use both, since we will be interfacing with a variety of software, as well as processing data.\nFirst, let's look around in the directory using standard Linux commands. We can execute a shell command by preceding it with an exclamation mark.",
"!ls -lh ../data/reads",
"We see that there are five files four of these are mutants, and and one reference original sample.\nWe will take a look inside one of the files and look at the distribution of read statistics.\nThe reads are in text files, which have been compressed using gzip, a common practice for storing raw data. You can look inside by decompressing a file, piping the output to a program called head, which will stop after a few lines. You don't want to print the contents of the entire file to screen, since it will likely crash IPython.",
"!gunzip -c ../data/reads/mutant1_OIST-2015-03-28.fq.gz | head -8",
"Each read in the fastq file format has four lines, one is a unique read name, one containing the sequence of bases, one +, and one containing quality scores. The quality scores correspond to the sequencer's confidence in making the base call.\nIt is good practice to examine the quality of your data before you proceed with the analysis. We'll use a popular tools called FastQC to do some exploratory analysis.",
"!fastqc ../data/reads/mutant1_OIST-2015-03-28.fq.gz\n\nfrom IPython.display import IFrame\nIFrame('../data/reads/mutant1_OIST-2015-03-28_fastqc.html', width=1000, height=1000)",
"Key statistics\n\nBasic Statistics. Reports number of sequences, and basic details\nPer base sequence quality. The distribution of sequence quality scored over the length of the read.\nThe quality scale is logarithmic. Notice that the quality degrades rapidly over the length of the read. This is a key characteristic of Illumina data, and product of their sequencing chemistry, which limits the upper read length to about 300 bp.\n\nWe can explore the contents of read files programmatically using a library within Python called Biopython. This allows to automate many tedious tasks.",
"import gzip\nfrom Bio import SeqIO\nwith gzip.open(\"../data/reads/mutant1_OIST-2015-03-28.fq.gz\", 'rt') as infile: # open and decompress input file\n for rec in SeqIO.parse(infile, \"fastq\"): # start looping over all records\n print(rec) #print record contents\n break # stop looping, we only want to see one record",
"You can see the methods associated with an object, such as rec, using the dir command.",
"print(dir(rec)) # print the methods associated with rec",
"For example, we can reverse complement the sequence:",
"rec.reverse_complement()",
"There are lots of other interesting functions to explore!\nExercises and questions\nExercises should be done in Python or bash.\n1. Write a for loop to run FastQC on all the samples, and examine their output.\n- If you look at the \"Per base sequence quality\" in FastQC, you'll see that the quality decreases. Why does that happen?\n- What does a score of 20 correspond to? (Hint: these are called phred scores)\n- Look at the quality scores associated with the first read in reads/mutant1_OIST-2015-03-28.fq.gz named M00923:134:000000000-A5ELA:1:2109:24002:5853. What is the average error rate? How many errors can we expect per read?\n- Check out the \"Sequence Duplication Levels\" report. Why would there be duplicated sequences?"
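As a hint for the error-rate questions above: the expected number of errors in a read is simply the sum of the per-base error probabilities. A minimal sketch, again assuming phred+33 and using a made-up quality string rather than the real read:

```python
# Expected errors per read = sum of per-base error probabilities
# (assumes phred+33 encoding; the quality string is a made-up example).
quality_line = "IIIIIIIIII!!!!!"  # 10 bases at Q40, then 5 bases at Q0

expected_errors = sum(10 ** (-(ord(c) - 33) / 10) for c in quality_line)
print(f"expected errors in this read: {expected_errors:.3f}")
```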
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Preprocessing functional near-infrared spectroscopy (fNIRS) data\nThis tutorial covers how to convert functional near-infrared spectroscopy\n(fNIRS) data from raw measurements to relative oxyhaemoglobin (HbO) and\ndeoxyhaemoglobin (HbR) concentration, view the average waveform, and\ntopographic representation of the response.\nHere we will work with the fNIRS motor data <fnirs-motor-dataset>.",
"import os.path as op\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom itertools import compress\n\nimport mne\n\n\nfnirs_data_folder = mne.datasets.fnirs_motor.data_path()\nfnirs_cw_amplitude_dir = op.join(fnirs_data_folder, 'Participant-1')\nraw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True)\nraw_intensity.load_data()",
"Providing more meaningful annotation information\nFirst, we attribute more meaningful names to the trigger codes which are\nstored as annotations. Second, we include information about the duration of\neach stimulus, which was 5 seconds for all conditions in this experiment.\nThird, we remove the trigger code 15, which signaled the start and end\nof the experiment and is not relevant to our analysis.",
"raw_intensity.annotations.set_durations(5)\nraw_intensity.annotations.rename({'1.0': 'Control',\n '2.0': 'Tapping/Left',\n '3.0': 'Tapping/Right'})\nunwanted = np.nonzero(raw_intensity.annotations.description == '15.0')\nraw_intensity.annotations.delete(unwanted)",
"Viewing location of sensors over brain surface\nHere we validate that the locations of source-detector pairs and channels\nare as expected. Source-detector pairs are shown as lines\nbetween the optodes, and channels (the midpoints of source-detector pairs) are\noptionally shown as orange dots. Sources are optionally shown as red dots and\ndetectors as black dots.",
"subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')\n\nbrain = mne.viz.Brain(\n 'fsaverage', subjects_dir=subjects_dir, background='w', cortex='0.5')\nbrain.add_sensors(\n raw_intensity.info, trans='fsaverage',\n fnirs=['channels', 'pairs', 'sources', 'detectors'])\nbrain.show_view(azimuth=20, elevation=60, distance=400)",
"Selecting channels appropriate for detecting neural responses\nFirst we remove channels that are too close together (short channels) to\ndetect a neural response (less than 1 cm distance between optodes).\nThese short channels can be seen in the figure above.\nTo achieve this we pick all the channels that are not considered to be short.",
"picks = mne.pick_types(raw_intensity.info, meg=False, fnirs=True)\ndists = mne.preprocessing.nirs.source_detector_distances(\n raw_intensity.info, picks=picks)\nraw_intensity.pick(picks[dists > 0.01])\nraw_intensity.plot(n_channels=len(raw_intensity.ch_names),\n duration=500, show_scrollbars=False)",
"Converting from raw intensity to optical density\nThe raw intensity values are then converted to optical density.",
"raw_od = mne.preprocessing.nirs.optical_density(raw_intensity)\nraw_od.plot(n_channels=len(raw_od.ch_names),\n duration=500, show_scrollbars=False)",
"Evaluating the quality of the data\nAt this stage we can quantify the quality of the coupling\nbetween the scalp and the optodes using the scalp coupling index. This\nmethod looks for the presence of a prominent synchronous signal in the\nfrequency range of cardiac signals across both photodetected signals.\nIn this example the data is clean and the coupling is good for all\nchannels, so we will not mark any channels as bad based on the scalp\ncoupling index.",
"sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od)\nfig, ax = plt.subplots()\nax.hist(sci)\nax.set(xlabel='Scalp Coupling Index', ylabel='Count', xlim=[0, 1])",
"In this example we will mark all channels with a SCI less than 0.5 as bad\n(this dataset is quite clean, so no channels are marked as bad).",
"raw_od.info['bads'] = list(compress(raw_od.ch_names, sci < 0.5))",
"At this stage it is appropriate to inspect your data\n(for instructions on how to use the interactive data visualisation tool\nsee tut-visualize-raw)\nto ensure that channels with poor scalp coupling have been removed.\nIf your data contains lots of artifacts you may decide to apply\nartifact reduction techniques as described in ex-fnirs-artifacts.\nConverting from optical density to haemoglobin\nNext we convert the optical density data to haemoglobin concentration using\nthe modified Beer-Lambert law.",
"raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=0.1)\nraw_haemo.plot(n_channels=len(raw_haemo.ch_names),\n duration=500, show_scrollbars=False)",
"Removing heart rate from signal\nThe haemodynamic response has frequency content predominantly below 0.5 Hz.\nAn increase in activity around 1 Hz can be seen in the data that is due to\nthe person's heart beat and is unwanted. So we use a low pass filter to\nremove this. A high pass filter is also included to remove slow drifts\nin the data.",
"fig = raw_haemo.plot_psd(average=True)\nfig.suptitle('Before filtering', weight='bold', size='x-large')\nfig.subplots_adjust(top=0.88)\nraw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2,\n l_trans_bandwidth=0.02)\nfig = raw_haemo.plot_psd(average=True)\nfig.suptitle('After filtering', weight='bold', size='x-large')\nfig.subplots_adjust(top=0.88)",
"Extract epochs\nNow that the signal has been converted to relative haemoglobin concentration,\nand the unwanted heart rate component has been removed, we can extract epochs\nrelated to each of the experimental conditions.\nFirst we extract the events of interest and visualise them to ensure they are\ncorrect.",
"events, event_dict = mne.events_from_annotations(raw_haemo)\nfig = mne.viz.plot_events(events, event_id=event_dict,\n sfreq=raw_haemo.info['sfreq'])\nfig.subplots_adjust(right=0.7) # make room for the legend",
"Next we define the range of our epochs, the rejection criteria,\nbaseline correction, and extract the epochs. We visualise the log of which\nepochs were dropped.",
"reject_criteria = dict(hbo=80e-6)\ntmin, tmax = -5, 15\n\nepochs = mne.Epochs(raw_haemo, events, event_id=event_dict,\n tmin=tmin, tmax=tmax,\n reject=reject_criteria, reject_by_annotation=True,\n proj=True, baseline=(None, 0), preload=True,\n detrend=None, verbose=True)\nepochs.plot_drop_log()",
"View consistency of responses across trials\nNow we can view the haemodynamic response for our tapping condition.\nWe visualise the response for both the oxy- and deoxyhaemoglobin, and\nobserve the expected peak in HbO at around 6 seconds consistently across\ntrials, and the consistent dip in HbR that is slightly delayed relative to\nthe HbO peak.",
"epochs['Tapping'].plot_image(combine='mean', vmin=-30, vmax=30,\n ts_args=dict(ylim=dict(hbo=[-15, 15],\n hbr=[-15, 15])))",
"We can also view the epoched data for the control condition and observe\nthat it does not show the expected morphology.",
"epochs['Control'].plot_image(combine='mean', vmin=-30, vmax=30,\n ts_args=dict(ylim=dict(hbo=[-15, 15],\n hbr=[-15, 15])))",
"View consistency of responses across channels\nSimilarly we can view how consistent the response is across the optode\npairs that we selected. All the channels in this data are located over the\nmotor cortex, and all channels show a similar pattern in the data.",
"fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 6))\nclims = dict(hbo=[-20, 20], hbr=[-20, 20])\nepochs['Control'].average().plot_image(axes=axes[:, 0], clim=clims)\nepochs['Tapping'].average().plot_image(axes=axes[:, 1], clim=clims)\nfor column, condition in enumerate(['Control', 'Tapping']):\n for ax in axes[:, column]:\n ax.set_title('{}: {}'.format(condition, ax.get_title()))",
"Plot standard fNIRS response image\nNext we generate the most common visualisation of fNIRS data: plotting\nboth the HbO and HbR on the same figure to illustrate the relation between\nthe two signals.",
"evoked_dict = {'Tapping/HbO': epochs['Tapping'].average(picks='hbo'),\n 'Tapping/HbR': epochs['Tapping'].average(picks='hbr'),\n 'Control/HbO': epochs['Control'].average(picks='hbo'),\n 'Control/HbR': epochs['Control'].average(picks='hbr')}\n\n# Rename channels until the encoding of frequency in ch_name is fixed\nfor condition in evoked_dict:\n evoked_dict[condition].rename_channels(lambda x: x[:-4])\n\ncolor_dict = dict(HbO='#AA3377', HbR='b')\nstyles_dict = dict(Control=dict(linestyle='dashed'))\n\nmne.viz.plot_compare_evokeds(evoked_dict, combine=\"mean\", ci=0.95,\n colors=color_dict, styles=styles_dict)",
"View topographic representation of activity\nNext we view how the topographic activity changes throughout the response.",
"times = np.arange(-3.5, 13.2, 3.0)\ntopomap_args = dict(extrapolate='local')\nepochs['Tapping'].average(picks='hbo').plot_joint(\n times=times, topomap_args=topomap_args)",
"Compare tapping of left and right hands\nFinally we generate topo maps for the left and right conditions to view\nthe location of activity. First we visualise the HbO activity.",
"times = np.arange(4.0, 11.0, 1.0)\nepochs['Tapping/Left'].average(picks='hbo').plot_topomap(\n times=times, **topomap_args)\nepochs['Tapping/Right'].average(picks='hbo').plot_topomap(\n times=times, **topomap_args)",
"And we also view the HbR activity for the two conditions.",
"epochs['Tapping/Left'].average(picks='hbr').plot_topomap(\n times=times, **topomap_args)\nepochs['Tapping/Right'].average(picks='hbr').plot_topomap(\n times=times, **topomap_args)",
"And we can plot the comparison at a single time point for two conditions.",
"fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(9, 5),\n gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1]))\nvmin, vmax, ts = -8, 8, 9.0\n\nevoked_left = epochs['Tapping/Left'].average()\nevoked_right = epochs['Tapping/Right'].average()\n\nevoked_left.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 0],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\nevoked_left.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 0],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\nevoked_right.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 1],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\nevoked_right.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 1],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\n\nevoked_diff = mne.combine_evoked([evoked_left, evoked_right], weights=[1, -1])\n\nevoked_diff.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 2:],\n vmin=vmin, vmax=vmax, colorbar=True,\n **topomap_args)\nevoked_diff.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 2:],\n vmin=vmin, vmax=vmax, colorbar=True,\n **topomap_args)\n\nfor column, condition in enumerate(\n ['Tapping Left', 'Tapping Right', 'Left-Right']):\n for row, chroma in enumerate(['HbO', 'HbR']):\n axes[row, column].set_title('{}: {}'.format(chroma, condition))\nfig.tight_layout()",
"Lastly, we can also look at the individual waveforms to see what is\ndriving the topographic plot above.",
"fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 4))\nmne.viz.plot_evoked_topo(epochs['Left'].average(picks='hbo'), color='b',\n axes=axes, legend=False)\nmne.viz.plot_evoked_topo(epochs['Right'].average(picks='hbo'), color='r',\n axes=axes, legend=False)\n\n# Tidy the legend:\nleg_lines = [line for line in axes.lines if line.get_c() == 'b'][:1]\nleg_lines.append([line for line in axes.lines if line.get_c() == 'r'][0])\nfig.legend(leg_lines, ['Left', 'Right'], loc='lower right')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/zh-cn/io/tutorials/dicom.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow IO Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Decoding DICOM files for medical imaging\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/io/tutorials/dicom\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">View on TensorFlow.org</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/dicom.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/dicom.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/dicom.ipynb\">Download notebook</a></td>\n</table>\n\nOverview\nThis tutorial shows how to use tfio.image.decode_dicom_image in TensorFlow IO to decode DICOM files with TensorFlow.\nSetup and usage\nDownload the DICOM image\nThe DICOM image used in this tutorial is from the NIH Chest X-ray dataset.\nThe NIH Chest X-ray dataset consists of 100,000 de-identified PNG images of chest x-rays provided by the NIH Clinical Center and can be downloaded via this link.\nGoogle Cloud also provides a DICOM version of the images, available in Cloud Storage.\nIn this tutorial, you will download a sample file of the dataset from the GitHub repo.\nNote: For more information about the dataset, see the following reference:\n\nXiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, Ronald Summers, ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases, IEEE CVPR, pp. 3462-3471, 2017",
"!curl -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/dicom/dicom_00000001_000.dcm\n!ls -l dicom_00000001_000.dcm",
"Install the required packages, then restart the runtime",
"try:\n # Use the Colab's preinstalled TensorFlow 2.x\n %tensorflow_version 2.x \nexcept:\n pass\n\n!pip install tensorflow-io",
"Decode a DICOM image",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nimport tensorflow as tf\n\nimport tensorflow_io as tfio\n\nimage_bytes = tf.io.read_file('dicom_00000001_000.dcm')\n\nimage = tfio.image.decode_dicom_image(image_bytes, dtype=tf.uint16)\n\nskipped = tfio.image.decode_dicom_image(image_bytes, on_error='skip', dtype=tf.uint8)\n\nlossy_image = tfio.image.decode_dicom_image(image_bytes, scale='auto', on_error='lossy', dtype=tf.uint8)\n\n\nfig, axes = plt.subplots(1,2, figsize=(10,10))\naxes[0].imshow(np.squeeze(image.numpy()), cmap='gray')\naxes[0].set_title('image')\naxes[1].imshow(np.squeeze(lossy_image.numpy()), cmap='gray')\naxes[1].set_title('lossy image');",
"Decode DICOM metadata and working with tags\ndecode_dicom_data decodes tag information. dicom_tags contains useful information such as the patient's age and sex, so DICOM tags such as dicom_tags.PatientsAge and dicom_tags.PatientsSex can be used. tensorflow_io borrows the tag notation from the pydicom dicom package.",
"tag_id = tfio.image.dicom_tags.PatientsAge\ntag_value = tfio.image.decode_dicom_data(image_bytes,tag_id)\nprint(tag_value)\n\nprint(f\"PatientsAge : {tag_value.numpy().decode('UTF-8')}\")\n\ntag_id = tfio.image.dicom_tags.PatientsSex\ntag_value = tfio.image.decode_dicom_data(image_bytes,tag_id)\nprint(f\"PatientsSex : {tag_value.numpy().decode('UTF-8')}\")",
"Documentation\nThis package has two operations which wrap DCMTK functions. decode_dicom_image decodes the pixel data from DICOM files, and decode_dicom_data decodes tag information. tags contains useful DICOM tags such as tags.PatientsName. The tag notation is borrowed from the pydicom dicom package.\nGetting DICOM image data\npython\nio.dicom.decode_dicom_image(\n contents,\n color_dim=False,\n on_error='skip',\n scale='preserve',\n dtype=tf.uint16,\n name=None\n)\n\ncontents: A Tensor of type string. 0-D. The byte string-encoded DICOM file\ncolor_dim: An optional bool. Defaults to False. If True, a third channel will be appended to all images, forming a 3-D tensor. A 1024 x 1024 grayscale image becomes 1024 x 1024 x 1\non_error: Defaults to skip. This attribute establishes the behavior in case an error occurs on opening the image, or if the output type cannot accommodate all the possible input values. For example, when the user sets the output dtype to tf.uint8, but a dicom image stores the tf.uint16 type: strict throws an error; skip returns a 1-D empty tensor; lossy continues with the operation, scaling the values via the scale attribute.\nscale: Defaults to preserve. This attribute establishes what to do with the scale of the input values. auto will autoscale the input values; if the output type is integer, auto will use the maximum output scale, for example, a uint8 storing values in the range [0, 255] can be linearly stretched to fill a uint16, that is [0, 65535]. If the output is float, auto will scale to [0, 1]. preserve keeps the values as they are; an input value greater than the maximum possible output will be clipped.\ndtype: An optional tf.DType from: tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.float16, tf.float32, tf.float64. Defaults to tf.uint16.\nname: A name for the operation (optional).\n\nReturns A Tensor of type dtype with a shape determined by the DICOM file.\nGetting DICOM tag data\npython\nio.dicom.decode_dicom_data(\n contents,\n tags=None,\n name=None\n)\n\ncontents: A Tensor of type string. 0-D. The byte string-encoded DICOM file\ntags: A Tensor of type tf.uint32 of any dimension. These uint32 numbers map directly to DICOM tags\nname: A name for the operation (optional).\n\nReturns A Tensor of type tf.string with the same shape as tags. If a dicom tag is a list of strings, they are combined into one string and separated by a double backslash. There is a bug in DCMTK if the tag is a list of numbers: only the 0th element is returned as a string.\nBibtex\nIf this package helped, please kindly cite the below:\n@misc{marcelo_lerendegui_2019_3337331,\n author = {Marcelo Lerendegui and\n Ouwen Huang},\n title = {Tensorflow Dicom Decoder},\n month = jul,\n year = 2019,\n doi = {10.5281/zenodo.3337331},\n url = {https://doi.org/10.5281/zenodo.3337331}\n}\nLicense\nCopyright 2019 Marcelo Lerendegui, Ouwen Huang, Gradient Health Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at:\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sr320/sr320.github.io
|
jupyter/Olurida/Fidalgo-SIbs-postbsmap.ipynb
|
mit
|
[
"BSMAP was run on 8 samples (on different machines)",
"ls analyses/2016-10-11\n\nls -lh /Volumes/caviar/wd/2016-10-11/bsmap*sam\n\nbsmaploc=\"/Applications/bioinfo/BSMAP/bsmap-2.74/\"\n\n\ncd /Volumes/caviar/wd/2016-10-11/\n\nfor i in (\"1_ATCACG\",\"2_CGATGT\",\"3_TTAGGC\",\"4_TGACCA\",\"5_ACAGTG\",\"6_GCCAAT\",\"7_CAGATC\",\"8_ACTTGA\"):\n !python {bsmaploc}methratio.py \\\n-d ../data/Ostrea_lurida.scafSeq \\\n-u -z -g \\\n-o methratio_out_{i}.txt \\\n-s {bsmaploc}samtools \\\nbsmap_out_{i}.sam \\\n\n#first methratio files are converted to filter for CG context, 3x coverage (mr3x.awk), and reformatting (mr_gg.awk.sh).\n#due to issue passing variable to awk, simple scripts were used (included in repository)\nfor i in (\"1_ATCACG\",\"2_CGATGT\",\"3_TTAGGC\",\"4_TGACCA\",\"5_ACAGTG\",\"6_GCCAAT\",\"7_CAGATC\",\"8_ACTTGA\"):\n !echo {i}\n !grep \"[A-Z][A-Z]CG[A-Z]\" <methratio_out_{i}.txt> methratio_out_{i}CG.txt\n !awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr3x.awk methratio_out_{i}CG.txt \\\n > mr3x.{i}.txt\n !awk -f /Users/sr320/git-repos/sr320.github.io/jupyter/scripts/mr_gg.awk.sh \\\n mr3x.{i}.txt > mkfmt_{i}.txt\n\n#maybe we need to ignore case\n\n!md5 mkfmt_M2.txt mkfmti_M2.txt | head\n\n#nope\n\n!head -5 mkfmt*",
"Products",
"cd git-repos/sr320.github.io/jupyter/ \n\nls\n\nmkdir analyses/$(date +%F)\n\nfor i in (\"1_ATCACG\",\"2_CGATGT\",\"3_TTAGGC\",\"4_TGACCA\",\"5_ACAGTG\",\"6_GCCAAT\",\"7_CAGATC\",\"8_ACTTGA\"):\n !cp /Volumes/caviar/wd/2016-10-11/mkfmt_{i}.txt analyses/$(date +%F)/mkfmt_{i}.txt\n\n!head analyses/$(date +%F)/*",
"URL for the 8 tables:\nhttps://github.com/sr320/sr320.github.io/tree/master/jupyter/analyses/2016-10-22"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
juditacs/morph-segmentation-experiments
|
notebooks/sandbox/seq2seq_attention.ipynb
|
mit
|
[
"import os\nimport yaml\nimport tensorflow as tf\nfrom tensorflow.python.ops import lookup_ops\nfrom tensorflow.python.layers import core as layers_core\n\ntf.reset_default_graph()",
"Model parameters\nSet use_toy_data to True for toy experiments. This will train the network on two unique examples.\nThe real dataset is a morphological reinflection task: Hungarian nouns in the instrumental case.\nHungarian features both vowel harmony and assimilation.\nA few examples are listed here (capitalization is added for emphasis):\n| input | output | meaning | what happens |\n| :-----: | :-----: | :-----: | :-----: |\n| autó | autóval | with car | |\n| Peti | PetivEl | with Pete | vowel harmony |\n| fej | fejJel | with head | assimilation |\n| pálca | pálcÁval | with stick | low vowel lengthening |\n| kulcs | kulCCSal | with key | digraph + assimilation |\nThis turns out to be a very easy task for a fairly small seq2seq model.",
"PROJECT_DIR = \"../../\"\nuse_toy_data = False\nLOG_DIR = 'logs' # Tensorboard log directory\n\nif use_toy_data:\n batch_size = 8\n embedding_dim = 5\n cell_size = 32\n max_len = 6\nelse:\n batch_size = 64\n embedding_dim = 20\n cell_size = 128\n max_len = 33\n \nuse_attention = True\nuse_bidirectional_encoder = True\nis_time_major = True",
"Download data if necessary\nThe input data is expected in the following format:\n~~~\ni n p u t 1 TAB o u t p u t 1\ni n p u t 2 TAB o u t p u t 2\n~~~\nEach line contains a single input-output pair separated by a TAB.\nTokens are space-separated.",
"if use_toy_data:\n input_fn = 'toy_input.txt'\n with open(input_fn, 'w') as f:\n f.write('a b c\\td e f d e f\\n')\n f.write('d e f\\ta b c a b c\\n')\nelse:\n DATA_DIR = '../../data/'\n input_fn = 'instrumental.full.train'\n input_fn = os.path.join(DATA_DIR, input_fn)\n if not os.path.exists(input_fn):\n import urllib\n u = urllib.request.URLopener()\n u.retrieve(\n \"http://sandbox.mokk.bme.hu/~judit/resources/instrumental.full.train\", input_fn)",
"Load and preprocess data",
"class Dataset(object):\n PAD = 0\n SOS = 1\n EOS = 2\n UNK = 3\n #src_vocab = ['PAD', 'UNK']\n constants = ['PAD', 'SOS', 'EOS', 'UNK']\n hu_alphabet = list(\"aábcdeéfghiíjklmnoóöőpqrstuúüűvwxyz-+._\")\n \n def __init__(self, fn, config, src_alphabet=None, tgt_alphabet=None):\n self.config = config\n self.create_tables(src_alphabet, tgt_alphabet)\n self.load_and_preproc_dataset(fn)\n \n def create_tables(self, src_alphabet, tgt_alphabet):\n if src_alphabet is None:\n self.src_vocab = Dataset.constants + Dataset.hu_alphabet\n else:\n self.src_vocab = Dataset.constants + src_alphabet\n self.src_table = lookup_ops.index_table_from_tensor(\n tf.constant(self.src_vocab), default_value=Dataset.UNK\n )\n if self.config.share_vocab:\n self.tgt_vocab = self.src_vocab\n self.tgt_table = self.src_table\n else:\n if tgt_alphabet is None:\n self.tgt_vocab = Dataset.constants + Dataset.hu_alphabet\n else:\n self.tgt_vocab = Dataset.constants + tgt_alphabet\n self.tgt_table = lookup_ops.index_table_from_tensor(\n tf.constant(self.tgt_vocab), default_value=Dataset.UNK\n )\n self.src_vocab_size = len(self.src_vocab)\n self.tgt_vocab_size = len(self.tgt_vocab)\n \n def load_and_preproc_dataset(self, fn):\n dataset = tf.contrib.data.TextLineDataset(fn)\n dataset = dataset.repeat()\n dataset = dataset.map(lambda s: tf.string_split([s], delimiter='\\t').values)\n \n src = dataset.map(lambda s: s[0])\n tgt = dataset.map(lambda s: s[1])\n \n src = src.map(lambda s: tf.string_split([s], delimiter=' ').values)\n src = src.map(lambda s: s[:self.config.src_maxlen])\n tgt = tgt.map(lambda s: tf.string_split([s], delimiter=' ').values)\n tgt = tgt.map(lambda s: s[:self.config.tgt_maxlen])\n \n src = src.map(lambda words: self.src_table.lookup(words))\n tgt = tgt.map(lambda words: self.tgt_table.lookup(words))\n \n dataset = tf.contrib.data.Dataset.zip((src, tgt))\n dataset = dataset.map(\n lambda src, tgt: (\n src,\n tf.concat(([Dataset.SOS], tgt), 0),\n tf.concat((tgt, [Dataset.EOS]), 0),\n )\n )\n dataset = dataset.map(\n lambda src, tgt_in, tgt_out: (src, tgt_in, tgt_out, tf.size(src), tf.size(tgt_in))\n )\n batched = dataset.padded_batch(\n self.config.batch_size,\n padded_shapes=(\n tf.TensorShape([self.config.src_maxlen]),\n tf.TensorShape([self.config.tgt_maxlen+2]),\n tf.TensorShape([None]),\n tf.TensorShape([]),\n tf.TensorShape([]),\n )\n )\n self.batched_iter = batched.make_initializable_iterator()\n s = self.batched_iter.get_next()\n self.src_ids = s[0]\n self.tgt_in_ids = s[1]\n self.tgt_out_ids = s[2]\n self.src_size = s[3]\n self.tgt_size = s[4]\n \n def run_initializers(self, session):\n session.run(tf.tables_initializer())\n session.run(self.batched_iter.initializer)",
"Create model\nEmbedding\nThe input and output embeddings are the same.",
"class Config(object):\n default_fn = os.path.join(\n PROJECT_DIR, \"config\", \"seq2seq\", \"default.yaml\"\n )\n \n @staticmethod\n def load_defaults(fn=default_fn):\n with open(fn) as f:\n return yaml.load(f)\n \n @classmethod\n def from_yaml(cls, fn):\n params = yaml.load(fn)\n return cls(**params)\n \n def __init__(self, **kwargs):\n defaults = Config.load_defaults()\n for param, val in defaults.items():\n setattr(self, param, val)\n for param, val in kwargs.items():\n setattr(self, param, val)\n \nconfig = Config(src_maxlen=30, tgt_maxlen=33)\ndataset = Dataset(input_fn, config)\n\nwith tf.variable_scope(\"embedding\"):\n embedding = tf.get_variable(\"embedding\", [dataset.src_vocab_size, embedding_dim], dtype=tf.float32)\n embedding_input = tf.nn.embedding_lookup(embedding, dataset.src_ids)\n decoder_emb_inp = tf.nn.embedding_lookup(embedding, dataset.tgt_in_ids)\n if is_time_major:\n embedding_input = tf.transpose(embedding_input, [1, 0, 2])\n decoder_emb_inp = tf.transpose(decoder_emb_inp, [1, 0, 2])",
"Encoder",
"with tf.variable_scope(\"encoder\"):\n \n if use_bidirectional_encoder:\n fw_cell = tf.nn.rnn_cell.BasicLSTMCell(cell_size)\n fw_cell = tf.contrib.rnn.DropoutWrapper(fw_cell, input_keep_prob=0.8)\n bw_cell = tf.nn.rnn_cell.BasicLSTMCell(cell_size)\n bw_cell = tf.contrib.rnn.DropoutWrapper(bw_cell, input_keep_prob=0.8)\n\n o, e = tf.nn.bidirectional_dynamic_rnn(\n fw_cell, bw_cell, embedding_input, dtype='float32', sequence_length=dataset.src_size,\n time_major=is_time_major)\n encoder_outputs = tf.concat(o, -1)\n encoder_state = e\n \n else:\n fw_cell = tf.nn.rnn_cell.BasicLSTMCell(cell_size)\n fw_cell = tf.contrib.rnn.DropoutWrapper(fw_cell, input_keep_prob=0.8)\n o, e = tf.nn.dynamic_rnn(fw_cell, embedding_input, dtype='float32',\n sequence_length=dataset.src_size, time_major=is_time_major)\n encoder_outputs = o\n encoder_state = e\n ",
"Decoder",
"with tf.variable_scope(\"decoder\", dtype=\"float32\") as scope:\n if use_bidirectional_encoder:\n decoder_cells = []\n for i in range(2):\n decoder_cell = tf.contrib.rnn.BasicLSTMCell(cell_size)\n decoder_cell = tf.contrib.rnn.DropoutWrapper(decoder_cell, input_keep_prob=0.8)\n decoder_cells.append(decoder_cell)\n decoder_cell = tf.contrib.rnn.MultiRNNCell(decoder_cells)\n\n if use_attention:\n if is_time_major:\n attention_states = tf.transpose(encoder_outputs, [1, 0, 2])\n else:\n attention_states = encoder_outputs\n attention_mechanism = tf.contrib.seq2seq.LuongAttention(\n cell_size, attention_states, memory_sequence_length=dataset.src_size,\n scale=True\n )\n decoder_cell = tf.contrib.seq2seq.AttentionWrapper(\n decoder_cell, attention_mechanism, attention_layer_size=cell_size,\n name=\"attention\"\n )\n if is_time_major:\n decoder_initial_state = decoder_cell.zero_state(\n tf.shape(decoder_emb_inp)[1], tf.float32).clone(cell_state=encoder_state)\n else:\n decoder_initial_state = decoder_cell.zero_state(\n tf.shape(decoder_emb_inp)[0], tf.float32).clone(cell_state=encoder_state)\n else:\n decoder_initial_state = encoder_state\n \n else:\n decoder_cell = tf.contrib.rnn.BasicLSTMCell(cell_size)\n decoder_initial_state = encoder_state\n \n helper = tf.contrib.seq2seq.TrainingHelper(\n decoder_emb_inp, dataset.tgt_size, time_major=is_time_major)\n decoder = tf.contrib.seq2seq.BasicDecoder(\n decoder_cell, helper, decoder_initial_state)\n \n outputs, final, _ = tf.contrib.seq2seq.dynamic_decode(\n decoder, output_time_major=is_time_major, swap_memory=True, scope=scope)\n \n output_proj = layers_core.Dense(dataset.tgt_vocab_size, name=\"output_proj\")\n logits = output_proj(outputs.rnn_output)\n \n ",
"Loss and training operations",
"with tf.variable_scope(\"train\"):\n if is_time_major:\n logits = tf.transpose(logits, [1, 0, 2])\n crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(\n labels=dataset.tgt_out_ids, logits=logits)\n target_weights = tf.sequence_mask(dataset.tgt_size, tf.shape(logits)[1], tf.float32)\n else:\n crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(\n labels=dataset.tgt_out_ids, logits=logits)\n target_weights = tf.sequence_mask(dataset.tgt_size, tf.shape(logits)[1], tf.float32)\n loss = tf.reduce_sum(crossent * target_weights) / tf.to_float(batch_size)\n tf.summary.scalar(\"loss\", loss)\n\n learning_rate = tf.placeholder(dtype=tf.float32, name=\"learning_rate\")\n max_global_norm = tf.placeholder(dtype=tf.float32, name=\"max_global_norm\")\n optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.5)\n params = tf.trainable_variables()\n gradients = tf.gradients(loss, params)\n for grad, var in zip(gradients, params):\n tf.summary.histogram(var.op.name+'/gradient', grad)\n gradients, _ = tf.clip_by_global_norm(gradients, max_global_norm)\n for grad, var in zip(gradients, params):\n tf.summary.histogram(var.op.name+'/clipped_gradient', grad)\n update = optimizer.apply_gradients(zip(gradients, params))",
"Greedy decoder for inference",
"with tf.variable_scope(\"greedy_decoder\"):\n g_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(\n embedding, tf.fill([dataset.config.batch_size], dataset.SOS), dataset.EOS)\n g_decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, g_helper, decoder_initial_state,\n output_layer=output_proj)\n\n g_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(g_decoder, maximum_iterations=30)",
"Beam search decoder",
"if use_attention is False:\n with tf.variable_scope(\"beam_search\"):\n beam_width = 4\n start_tokens = tf.fill([config.batch_size], dataset.SOS)\n bm_dec_initial_state = tf.contrib.seq2seq.tile_batch(\n encoder_state, multiplier=beam_width)\n bm_decoder = tf.contrib.seq2seq.BeamSearchDecoder(\n cell=decoder_cell,\n embedding=embedding,\n start_tokens=start_tokens,\n initial_state=bm_dec_initial_state,\n beam_width=beam_width,\n output_layer=output_proj,\n end_token=dataset.EOS\n )\n bm_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(\n bm_decoder, maximum_iterations=config.tgt_maxlen)",
"Starting session",
"#sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))\nsess = tf.Session()\ndataset.run_initializers(sess)\nsess.run(tf.global_variables_initializer())\n\nmerged_summary = tf.summary.merge_all()\nwriter = tf.summary.FileWriter(os.path.join(LOG_DIR, 's2s_sandbox', 'tmp'))\nwriter.add_graph(sess.graph)",
"Training",
"%%time\n\ndef train(epochs, logstep, lr):\n    print(\"Running {} epochs with learning rate {}\".format(epochs, lr))\n    for i in range(epochs):\n        # Fetch the loss in the same run as the update so it describes the same\n        # batch; a separate sess.run(loss) would pull a fresh batch from the\n        # dataset iterator\n        _, l, s = sess.run([update, loss, merged_summary],\n                           feed_dict={learning_rate: lr, max_global_norm: 5.0})\n        writer.add_summary(s, i)\n        if i % logstep == logstep - 1:\n            print(\"Iter {}, learning rate {}, loss {}\".format(i+1, lr, l))\n    \nprint(\"Start training...\")\nif use_toy_data:\n    train(100, 10, .5)\nelse:\n    train(350, 50, 1)\n    train(1000, 100, 0.1)\n    train(1000, 100, 0.01)",
"Inference",
"inv_vocab = {i: v for i, v in enumerate(dataset.tgt_vocab)}\ninv_vocab[-1] = 'UNK'\nskip_symbols = ('PAD',)\n\ndef decode_ids(input_ids, output_ids):\n decoded = []\n for sample_i in range(output_ids.shape[0]):\n input_sample = input_ids[sample_i]\n output_sample = output_ids[sample_i]\n input_decoded = [inv_vocab[s] for s in input_sample]\n input_decoded = ''.join(c for c in input_decoded if c not in skip_symbols)\n output_decoded = [inv_vocab[s] for s in output_sample]\n try:\n eos_idx = output_decoded.index('EOS')\n except ValueError: # EOS not in list\n eos_idx = len(output_decoded)\n output_decoded = output_decoded[:eos_idx]\n output_decoded = ''.join(c for c in output_decoded if c not in skip_symbols)\n decoded.append((input_decoded, output_decoded))\n return decoded\n\nif use_attention is True:\n input_ids, output_ids = sess.run([dataset.src_ids, g_outputs.sample_id])\nelse:\n input_ids, output_ids, bm_output_ids = sess.run([dataset.src_ids, g_outputs.sample_id,\n bm_outputs.predicted_ids])\ndecoded = decode_ids(input_ids, output_ids)\nprint('\\n'.join(\n '{} ---> {}'.format(dec[0], dec[1]) for dec in decoded\n))",
"Beam search decoding",
"if use_attention is False:\n all_decoded = []\n for beam_i in range(beam_width):\n inputs = []\n all_decoded.append([])\n decoded = decode_ids(input_ids, bm_output_ids[:,:,beam_i])\n for dec in decoded:\n all_decoded[-1].append(dec[1])\n inputs.append(dec[0])\n\n print('\\n'.join(\n '{} ---> {}'.format(inputs[i], ' / '.join(d[i] for d in all_decoded))\n for i in range(len(inputs))\n ))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jrbourbeau/cr-composition
|
notebooks/legacy/lightheavy/data-MC-comparisons.ipynb
|
mit
|
[
"<a id='top'> </a>\nAuthor: James Bourbeau",
"%load_ext watermark\n%watermark -u -d -v -p numpy,scipy,pandas,sklearn,mlxtend",
"Data-MC comparisons\nTable of contents\n\nData preprocessing\nComparison of different sequential feature selections\nSerialize feature selection algorithm",
"import sys\nsys.path.append('/home/jbourbeau/cr-composition')\nprint('Added to PYTHONPATH')\n\nfrom __future__ import division, print_function\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\n\nfrom icecube.weighting.weighting import from_simprod\n\nimport composition as comp\nimport composition.analysis.plotting as plotting\n\n# # Plotting-related\n# sns.set_palette('muted')\n# sns.set_color_codes()\n# color_dict = {}\n# for i, composition in enumerate(['light', 'heavy', 'total']):\n# color_dict[composition] = sns.color_palette('muted').as_hex()[i]\n%matplotlib inline",
"Data preprocessing\n[ back to top ]\n1. Load simulation/data dataframe and apply specified quality cuts\n2. Extract desired features from dataframe\n3. Get separate testing and training datasets\n4. Feature selection\nLoad simulation, format feature and target matrices",
"df_sim = comp.load_dataframe(datatype='sim', config='IC79')\ndf_data = comp.load_dataframe(datatype='data', config='IC79')\n\nn_sim = len(df_sim)\nn_data = len(df_data)\nprint('{} simulation events'.format(n_sim))\nprint('{} data events'.format(n_data))\n\nbeta_bins=np.linspace(1.4, 9.5, 75)\nplotting.make_verification_plot(df_data, df_sim, 'lap_beta', beta_bins, 'Laputop \\\\beta')\n\ncharge_hits_bins=np.linspace(0, 30, 75)\nplotting.make_verification_plot(df_data, df_sim, 'charge_nhits_ratio', charge_hits_bins, 'Charge/Hits ratio')\n\nrlogl_bins=np.linspace(-50, 0, 75)\nplotting.make_verification_plot(df_data, df_sim, 'lap_rlogl', rlogl_bins, 'Laputop rlogl')\n\ns125_bins=np.linspace(0, 2.5, 75)\nplotting.make_verification_plot(df_data, df_sim, 'log_s125', s125_bins, 'S125')\n\ndf_sim['log_s125'].min(), df_sim['log_s125'].max()\n\nfig, ax = plt.subplots()\nax.errorbar(beta_midpoints, rate_sim, yerr=rate_sim_err, label='MC', marker='.', ms=8)\nax.errorbar(beta_midpoints, rate_data, yerr=rate_data_err, label='Data', marker='.', ms=8)\nax.set_yscale(\"log\", nonposy='clip')\nax.set_xlabel('Laputop $\\\\beta$')\nax.set_ylabel('Frequency')\nplt.grid()\nplt.legend()\nplt.show()\n\nfig, ax = plt.subplots()\nratio, ratio_err = comp.ratio_error(rate_data, rate_data_err,\n rate_sim, rate_sim_err)\nax.errorbar(beta_midpoints, ratio, yerr=ratio_err, marker='.', ms=8)\nax.axhline(1.0, marker='None', ls=':')\nax.set_xlabel('Laputop $\\\\beta$')\nax.set_ylabel('Data/MC')\nplt.grid()\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ulitosCoder/DataAnalysis
|
lesson01/L1_Starter_Code.ipynb
|
gpl-2.0
|
[
"Before we get started, a couple of reminders to keep in mind when using IPython notebooks:\n\nRemember that you can see from the left side of a code cell when it was last run if there is a number within the brackets.\nWhen you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.\nThe previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and running all of the processing steps one by one to make sure that you are working with the same variables and data at each quiz stage.\n\nLoad Data from CSVs",
"import unicodecsv\n\n## Longer version of code (replaced with shorter, equivalent version below)\n\n# enrollments = []\n# f = open('enrollments.csv', 'rb')\n# reader = unicodecsv.DictReader(f)\n# for row in reader:\n# enrollments.append(row)\n# f.close()\ndef read_csv(filename):\n with open(filename, 'rb') as f:\n reader = unicodecsv.DictReader(f)\n aList = list(reader)\n \n return aList\n\n#with open('enrollments.csv', 'rb') as f:\n# reader = unicodecsv.DictReader(f)\n# enrollments = list(reader)\nenrollments = read_csv('enrollments.csv')\n \nenrollments[15]\n\n#####################################\n# 1 #\n#####################################\n\n## Read in the data from daily_engagement.csv and project_submissions.csv \n## and store the results in the below variables.\n## Then look at the first row of each table.\n\n\ndaily_engagement = read_csv('daily_engagement.csv')\nproject_submissions = read_csv('project_submissions.csv')\n \nprint (daily_engagement[0])\nprint ('\\n')\nprint (project_submissions[0])",
"Fixing Data Types",
"from datetime import datetime as dt\n\n# Takes a date as a string, and returns a Python datetime object. \n# If there is no date given, returns None\ndef parse_date(date):\n if date == '':\n return None\n else:\n return dt.strptime(date, '%Y-%m-%d')\n \n# Takes a string which is either an empty string or represents an integer,\n# and returns an int or None.\ndef parse_maybe_int(i):\n if i == '':\n return None\n else:\n return int(i)\n\n# Clean up the data types in the enrollments table\nfor enrollment in enrollments:\n enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])\n enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])\n enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'\n enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'\n enrollment['join_date'] = parse_date(enrollment['join_date'])\n \nenrollments[0]\n\n# Clean up the data types in the engagement table\nfor engagement_record in daily_engagement:\n engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))\n engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))\n engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))\n engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])\n engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])\n \ndaily_engagement[0]\n\n# Clean up the data types in the submissions table\nfor submission in project_submissions:\n submission['completion_date'] = parse_date(submission['completion_date'])\n submission['creation_date'] = parse_date(submission['creation_date'])\n\nproject_submissions[0]",
"Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.\nInvestigating the Data",
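The error mentioned above comes from re-applying the converters to values that have already been converted. A minimal sketch (using a hypothetical one-field record, not the course data) of why a second pass fails:

```python
from datetime import datetime as dt

def parse_date(date):
    # Same converter as in the notebook: '' -> None, otherwise a datetime
    if date == '':
        return None
    return dt.strptime(date, '%Y-%m-%d')

row = {'join_date': '2014-11-05'}                # hypothetical record
row['join_date'] = parse_date(row['join_date'])  # first run: str -> datetime

try:
    parse_date(row['join_date'])                 # second run: input is now a datetime
except TypeError:
    print('re-running the conversion raises TypeError')
```

Reloading the raw CSVs before re-running the conversion cells avoids this.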
"#####################################\n# 2 #\n#####################################\n\n## Find the total number of rows and the number of unique students (account keys)\n## in each table.\ndef get_unique_keys(a_list, the_key):\n    a_set = set()\n    for item in a_list:\n        a_set.add(item[the_key])\n    return a_set\n\nenrollment_num_rows = len(enrollments)\nunique_enrollment_students = get_unique_keys(enrollments, 'account_key')\nenrollment_num_unique_students = len(unique_enrollment_students)\nprint('enrollments: %d' % enrollment_num_rows)\nprint('unique enrollments: %d' % enrollment_num_unique_students)\n\n\nengagement_num_rows = len(daily_engagement)\nunique_engagement_students = get_unique_keys(daily_engagement, 'account_key')\nengagement_num_unique_students = len(unique_engagement_students)\nprint('engagement: %d' % engagement_num_rows)\nprint('unique engagement: %d' % engagement_num_unique_students)\n\nsubmission_num_rows = len(project_submissions)\nsubmission_unique_students = get_unique_keys(project_submissions, 'account_key')\nsubmission_num_unique_students = len(submission_unique_students)\nprint('submissions: %d' % submission_num_rows)\nprint('unique submissions: %d' % submission_num_unique_students)\n\nprint(daily_engagement[0]['account_key'])\n",
"Problems in the Data",
"#####################################\n# 3 #\n#####################################\n\n## Rename the \"acct\" column in the daily_engagement table to \"account_key\".\n#actually I modified the file",
"Missing Engagement Records",
"#####################################\n# 4 #\n#####################################\n\n## Find any one student enrollments where the student is missing from the daily engagement table.\n## Output that enrollment.\nnotEngCount = 0\nfor enrollment in enrollments:\n student = enrollment['account_key']\n \n if student not in unique_engagement_students:\n #print (enrollment)\n #break\n notEngCount = notEngCount + 1\n\nprint ('Not engagement count %d' % notEngCount)\n ",
"Checking for More Problem Records",
"#####################################\n# 5 #\n#####################################\n\n## Find the number of surprising data points (enrollments missing from\n## the engagement table) that remain, if any.\nnum_problem_students = 0\nfor enrollment in enrollments:\n student = enrollment['account_key']\n if (student not in unique_engagement_students and \n enrollment['join_date'] != enrollment['cancel_date']):\n print (enrollment)\n num_problem_students += 1\n\nnum_problem_students",
"Tracking Down the Remaining Problems",
"# Create a set of the account keys for all Udacity test accounts\nudacity_test_accounts = set()\nfor enrollment in enrollments:\n if enrollment['is_udacity']:\n udacity_test_accounts.add(enrollment['account_key'])\nlen(udacity_test_accounts)\n\n# Given some data with an account_key field, removes any records corresponding to Udacity test accounts\ndef remove_udacity_accounts(data):\n non_udacity_data = []\n for data_point in data:\n if data_point['account_key'] not in udacity_test_accounts:\n non_udacity_data.append(data_point)\n return non_udacity_data\n\n# Remove Udacity test accounts from all three tables\nnon_udacity_enrollments = remove_udacity_accounts(enrollments)\nnon_udacity_engagement = remove_udacity_accounts(daily_engagement)\nnon_udacity_submissions = remove_udacity_accounts(project_submissions)\n\nprint (len(non_udacity_enrollments))\nprint (len(non_udacity_engagement))\nprint (len(non_udacity_submissions))",
"Refining the Question",
"#####################################\n# 6 #\n#####################################\n\n## Create a dictionary named paid_students containing all students who either\n## haven't canceled yet or who remained enrolled for more than 7 days. The keys\n## should be account keys, and the values should be the date the student enrolled.\n\npaid_students = {}\nfor enrollment in non_udacity_enrollments:\n if (not enrollment['is_canceled'] or\n enrollment['days_to_cancel'] > 7):\n account_key = enrollment['account_key']\n enrollment_date = enrollment['join_date']\n if (account_key not in paid_students or\n enrollment_date > paid_students[account_key]):\n paid_students[account_key] = enrollment_date\nlen(paid_students)",
"Getting Data from First Week",
"# Takes a student's join date and the date of a specific engagement record,\n# and returns True if that engagement record happened within one week\n# of the student joining.\ndef within_one_week(join_date, engagement_date):\n time_delta = engagement_date - join_date\n return time_delta.days < 7 and time_delta.days >= 0\n\ndef remove_free_trial_cancels(data):\n new_data = []\n for data_point in data:\n if data_point['account_key'] in paid_students:\n new_data.append(data_point)\n return new_data\n\npaid_enrollments = remove_free_trial_cancels(non_udacity_enrollments)\npaid_engagement = remove_free_trial_cancels(non_udacity_engagement)\npaid_submissions = remove_free_trial_cancels(non_udacity_submissions)\n\nprint (len(paid_enrollments))\nprint (len(paid_engagement))\nprint (len(paid_submissions))\n\nfor engagement_record in paid_engagement:\n if engagement_record['num_courses_visited'] > 0:\n engagement_record['has_visited'] = 1\n else:\n engagement_record['has_visited'] = 0\n\n#####################################\n# 7 #\n#####################################\n\n## Create a list of rows from the engagement table including only rows where\n## the student is one of the paid students you just found, and the date is within\n## one week of the student's join date.\npaid_engagement_in_first_week = []\n\nfor eng_entry in paid_engagement:\n a_key = eng_entry['account_key']\n eng_date = eng_entry['utc_date']\n join_date = paid_students[a_key]\n \n if within_one_week(join_date,eng_date):\n paid_engagement_in_first_week.append(eng_entry)\n \n \nlen(paid_engagement_in_first_week)",
"Exploring Student Engagement",
"from collections import defaultdict\n\n# Create a dictionary of engagement grouped by student.\n# The keys are account keys, and the values are lists of engagement records.\n\ndef group_data(data, key_name):\n    grouped_data = defaultdict(list)\n    \n    for record in data:\n        key_value = record[key_name]\n        grouped_data[key_value].append(record)\n    \n    return grouped_data\n\ndef sum_grouped_data(grouped_data, field_name):\n    summed_data = {}\n    for account_key, grouped_values in grouped_data.items():\n        \n        total_value = 0\n        for a_record in grouped_values:\n            total_value += a_record[field_name]\n        \n        summed_data[account_key] = total_value\n    \n    return summed_data\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef describe_results(total_grouped_values):\n    # list() so the NumPy reductions also work on Python 3 dict views\n    total_values = list(total_grouped_values.values())\n    print('Mean: %f' % np.mean(total_values))\n    print('Standard deviation: %f' % np.std(total_values))\n    print('Minimum: %f' % np.min(total_values))\n    print('Maximum: %f' % np.max(total_values))\n    plt.hist(total_values)\n\nengagement_by_account = group_data(paid_engagement_in_first_week, 'account_key')\ntotal_minutes_by_account = sum_grouped_data(engagement_by_account, 'total_minutes_visited')\n\n\ndescribe_results(total_minutes_by_account)",
"Debugging Data Analysis Code",
"#####################################\n# 8 #\n#####################################\n\n## Go through a similar process as before to see if there is a problem.\n## Locate at least one surprising piece of data, output it, and take a look at it.\nstudent_with_max_minutes = None\nmax_minutes = 0\nfor student, total_minutes in total_minutes_by_account.items():\n    \n    if total_minutes > max_minutes:\n        max_minutes = total_minutes\n        student_with_max_minutes = student\n    \nmax_minutes\n \n\nfor engagement_record in paid_engagement_in_first_week:\n    if engagement_record['account_key'] == student_with_max_minutes:\n        print(engagement_record)",
"Lessons Completed in First Week",
"#####################################\n# 9 #\n#####################################\n\n## Adapt the code above to find the mean, standard deviation, minimum, and maximum for\n## the number of lessons completed by each student during the first week. Try creating\n## one or more functions to re-use the code above.\n\n \ntotal_lessons_completed_by_account = sum_grouped_data(engagement_by_account,'lessons_completed')\n\ndescribe_results(total_lessons_completed_by_account)\n",
"Number of Visits in First Week",
"######################################\n# 10 #\n######################################\n\n## Find the mean, standard deviation, minimum, and maximum for the number of\n## days each student visits the classroom during the first week.\ntotal_first_week = sum_grouped_data(engagement_by_account,'has_visited')\n\ndescribe_results(total_first_week)",
"Splitting out Passing Students",
"######################################\n# 11 #\n######################################\n\n## Create two lists of engagement data for paid students in the first week.\n## The first list should contain data for students who eventually pass the\n## subway project, and the second list should contain data for students\n## who do not.\n\nsubway_project_lesson_keys = ['746169184', '3176718735']\n\npassing_engagement = []\nnon_passing_engagement = []\n\npass_subway_project = set()\n \nfor submission in paid_submissions:\n    project = submission['lesson_key']\n    rating = submission['assigned_rating']\n    \n    if project in subway_project_lesson_keys and \\\n            (rating == 'PASSED' or rating == 'DISTINCTION'):\n        pass_subway_project.add(submission['account_key'])\n\nprint(len(pass_subway_project))\n\nfor engagement_record in paid_engagement_in_first_week:\n    if engagement_record['account_key'] in pass_subway_project:\n        passing_engagement.append(engagement_record)\n    else:\n        non_passing_engagement.append(engagement_record)\n    \nprint(len(passing_engagement))\nprint(len(non_passing_engagement))",
"Comparing the Two Student Groups",
"######################################\n# 12 #\n######################################\n\n## Compute some metrics you're interested in and see how they differ for\n## students who pass the subway project vs. students who don't. A good\n## starting point would be the metrics we looked at earlier (minutes spent\n## in the classroom, lessons completed, and days visited).\n\n\n\npassing_engagement_by_account = group_data(passing_engagement, 'account_key')\nnon_passing_engagement_by_account = group_data(non_passing_engagement, 'account_key')\n\ntotal_minutes_by_pass_account = sum_grouped_data(passing_engagement_by_account, 'total_minutes_visited')\nprint('minutes for Passing students')\ndescribe_results(total_minutes_by_pass_account)\n\nprint('\\n')\n\ntotal_minutes_by_non_pass_account = sum_grouped_data(non_passing_engagement_by_account, 'total_minutes_visited')\nprint('minutes for NON Passing students')\ndescribe_results(total_minutes_by_non_pass_account)\n\nprint('\\n')\nprint('\\n')\n\nlessons_completed_by_pass_account = sum_grouped_data(passing_engagement_by_account, 'lessons_completed')\nprint('lessons_completed for Passing students')\ndescribe_results(lessons_completed_by_pass_account)\n\nprint('\\n')\n\nlessons_completed_by_non_pass_account = sum_grouped_data(non_passing_engagement_by_account, 'lessons_completed')\nprint('lessons_completed for NON Passing students')\ndescribe_results(lessons_completed_by_non_pass_account)\n\n\nprint('\\n')\nprint('\\n')\n\ndays_visited_by_pass_account = sum_grouped_data(passing_engagement_by_account, 'has_visited')\nprint('days_visited for Passing students')\ndescribe_results(days_visited_by_pass_account)\n\nprint('\\n')\n\ndays_visited_by_non_pass_account = sum_grouped_data(non_passing_engagement_by_account, 'has_visited')\nprint('days_visited for NON Passing students')\ndescribe_results(days_visited_by_non_pass_account)",
"Making Histograms",
"######################################\n# 13 #\n######################################\n\n## Make histograms of the three metrics we looked at earlier for both\n## students who passed the subway project and students who didn't. You\n## might also want to make histograms of any other metrics you examined.\n\n\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\n\nfig, axes = plt.subplots(nrows=2, ncols=2, figsize=(7, 3), sharey=False, sharex=False)\n# list() so the values also work with hist() on Python 3 dict views\naxes[0, 0].hist(list(total_minutes_by_non_pass_account.values()), bins=20)\naxes[1, 0].hist(list(days_visited_by_non_pass_account.values()))\n\naxes[0, 1].hist(list(total_minutes_by_pass_account.values()))\naxes[1, 1].hist(list(days_visited_by_pass_account.values()))\n",
"Improving Plots and Sharing Findings",
"######################################\n# 14 #\n######################################\n\n## Make a more polished version of at least one of your visualizations\n## from earlier. Try importing the seaborn library to make the visualization\n## look better, adding axis labels and a title, and changing one or more\n## arguments to the hist() function."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
probml/pyprobml
|
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
|
mit
|
[
"<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/bayes_stats/linreg_hierarchical_non_centered_pymc3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nHierarchical non-centered Bayesian Linear Regression in PyMC3\nThe text and code for this notebook are taken directly from this blog post\n by Thomas Wiecki. Original notebook",
"!pip install arviz\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pymc3 as pm\nimport pandas as pd\nimport theano\nimport seaborn as sns\n\nsns.set_style(\"whitegrid\")\nnp.random.seed(123)\n\nurl = \"https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/radon.csv?raw=true\"\ndata = pd.read_csv(url)\n# data = pd.read_csv('../data/radon.csv')\ndata[\"log_radon\"] = data[\"log_radon\"].astype(theano.config.floatX)\ncounty_names = data.county.unique()\ncounty_idx = data.county_code.values\n\nn_counties = len(data.county.unique())",
"The intuitive specification\nUsually, hierarchical models are specified in a centered way. In a regression model, individual slopes would be centered around a group mean with a certain group variance, which controls the shrinkage:",
"with pm.Model() as hierarchical_model_centered:\n # Hyperpriors for group nodes\n mu_a = pm.Normal(\"mu_a\", mu=0.0, sd=100**2)\n sigma_a = pm.HalfCauchy(\"sigma_a\", 5)\n mu_b = pm.Normal(\"mu_b\", mu=0.0, sd=100**2)\n sigma_b = pm.HalfCauchy(\"sigma_b\", 5)\n\n # Intercept for each county, distributed around group mean mu_a\n # Above we just set mu and sd to a fixed value while here we\n # plug in a common group distribution for all a and b (which are\n # vectors of length n_counties).\n a = pm.Normal(\"a\", mu=mu_a, sd=sigma_a, shape=n_counties)\n\n # Intercept for each county, distributed around group mean mu_a\n b = pm.Normal(\"b\", mu=mu_b, sd=sigma_b, shape=n_counties)\n\n # Model error\n eps = pm.HalfCauchy(\"eps\", 5)\n\n # Linear regression\n radon_est = a[county_idx] + b[county_idx] * data.floor.values\n\n # Data likelihood\n radon_like = pm.Normal(\"radon_like\", mu=radon_est, sd=eps, observed=data.log_radon)\n\n# Inference button (TM)!\nwith hierarchical_model_centered:\n hierarchical_centered_trace = pm.sample(draws=5000, tune=1000)[1000:]\n\npm.traceplot(hierarchical_centered_trace);",
"I have seen plenty of traces with terrible convergences but this one might look fine to the unassuming eye. Perhaps sigma_b has some problems, so let's look at the Rhat:",
"print(\"Rhat(sigma_b) = {}\".format(pm.diagnostics.gelman_rubin(hierarchical_centered_trace)[\"sigma_b\"]))",
"Not too bad -- well below 1.01. I used to think this wasn't a big deal but Michael Betancourt in his StanCon 2017 talk makes a strong point that it is actually very problematic. To understand what's going on, let's take a closer look at the slopes b and their group variance (i.e. how far they are allowed to move from the mean) sigma_b. I'm just plotting a single chain now.",
"fig, axs = plt.subplots(nrows=2)\naxs[0].plot(hierarchical_centered_trace.get_values(\"sigma_b\", chains=1), alpha=0.5)\naxs[0].set(ylabel=\"sigma_b\")\naxs[1].plot(hierarchical_centered_trace.get_values(\"b\", chains=1), alpha=0.5)\naxs[1].set(ylabel=\"b\");",
"sigma_b seems to drift into this area of very small values and get stuck there for a while. This is a common pattern and the sampler is trying to tell you that there is a region in space that it can't quite explore efficiently. While stuck down there, the slopes b_i become all squished together. We've entered The Funnel of Hell (it's just called the funnel, I added the last part for dramatic effect).\nThe Funnel of Hell (and how to escape it)\nLet's look at the joint posterior of a single slope b (I randomly chose the 75th one) and the slope group variance sigma_b.",
"x = pd.Series(hierarchical_centered_trace[\"b\"][:, 75], name=\"slope b_75\")\ny = pd.Series(hierarchical_centered_trace[\"sigma_b\"], name=\"slope group variance sigma_b\")\n\nsns.jointplot(x, y, ylim=(0, 0.7));",
"This makes sense: as the slope group variance goes to zero (or, said differently, as we apply maximum shrinkage), individual slopes are not allowed to deviate from the slope group mean, so they all collapse to the group mean.\nWhile this property of the posterior in itself is not problematic, it makes the job extremely difficult for our sampler. Imagine a Metropolis-Hastings exploring this space with a medium step-size (we're using NUTS here but the intuition works the same): in the wider top region we can comfortably make larger jumps to explore the space efficiently. However, once we move to the narrow bottom region we can change b_75 and sigma_b only by tiny amounts. This causes the sampler to become trapped in that region of space. Most of the proposals will be rejected because our step-size is too large for this narrow part of the space and exploration will be very inefficient.\nYou might wonder if we could somehow choose the step-size based on the denseness (or curvature) of the space. Indeed that's possible and it's called Riemannian HMC. It works very well but is quite costly to run. Here, we will explore a different, simpler method.\nFinally, note that this problem does not exist for the intercept parameters a. Because we can determine individual intercepts a_i with enough confidence, sigma_a is not small enough to be problematic. Thus, the funnel of hell can be a problem in hierarchical models, but it does not have to be. (Thanks to John Hall for pointing this out).\nReparameterization\nIf we can't easily make the sampler step-size adjust to the region of space, maybe we can adjust the region of space to make it simpler for the sampler? This is indeed possible and quite simple with a small reparameterization trick; we will call this the non-centered version.",
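The step-size intuition can be made concrete with a toy random-walk Metropolis sampler (a hypothetical stand-in for NUTS, for intuition only): with a fixed proposal width, sampling b ~ N(0, sigma_b) is efficient when sigma_b is large, while nearly every proposal is rejected when sigma_b is tiny, which is exactly what happens at the bottom of the funnel.

```python
import numpy as np

rng = np.random.default_rng(1)

def rw_accept_rate(sigma_b, step=0.5, n=20000):
    """Acceptance rate of a fixed-step random-walk Metropolis chain targeting b ~ N(0, sigma_b)."""
    def logp(b):
        return -0.5 * (b / sigma_b) ** 2
    b, accepted = 0.0, 0
    for _ in range(n):
        proposal = b + rng.normal(0.0, step)
        # Standard Metropolis accept/reject on the log density
        if np.log(rng.uniform()) < logp(proposal) - logp(b):
            b, accepted = proposal, accepted + 1
    return accepted / n

# Wide part of the funnel: most proposals are accepted.
# Narrow part: the same step size overshoots almost every time.
print(rw_accept_rate(1.0))
print(rw_accept_rate(0.01))
```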
"with pm.Model() as hierarchical_model_non_centered:\n # Hyperpriors for group nodes\n mu_a = pm.Normal(\"mu_a\", mu=0.0, sd=100**2)\n sigma_a = pm.HalfCauchy(\"sigma_a\", 5)\n mu_b = pm.Normal(\"mu_b\", mu=0.0, sd=100**2)\n sigma_b = pm.HalfCauchy(\"sigma_b\", 5)\n\n # Before:\n # a = pm.Normal('a', mu=mu_a, sd=sigma_a, shape=n_counties)\n # Transformed:\n a_offset = pm.Normal(\"a_offset\", mu=0, sd=1, shape=n_counties)\n a = pm.Deterministic(\"a\", mu_a + a_offset * sigma_a)\n\n # Before:\n # b = pm.Normal('b', mu=mu_b, sd=sigma_b, shape=n_counties)\n # Now:\n b_offset = pm.Normal(\"b_offset\", mu=0, sd=1, shape=n_counties)\n b = pm.Deterministic(\"b\", mu_b + b_offset * sigma_b)\n\n # Model error\n eps = pm.HalfCauchy(\"eps\", 5)\n\n radon_est = a[county_idx] + b[county_idx] * data.floor.values\n\n # Data likelihood\n radon_like = pm.Normal(\"radon_like\", mu=radon_est, sd=eps, observed=data.log_radon)",
"Pay attention to the definitions of a_offset, a, b_offset, and b and compare them to before (commented out). What's going on here? It's pretty neat actually. Instead of saying that our individual slopes b are normally distributed around a group mean (i.e. modeling their absolute values directly), we can say that they are offset from a group mean by a certain value (b_offset; i.e. modeling their values relative to that mean). Now we still have to consider how far from that mean we actually allow things to deviate (i.e. how much shrinkage we apply). This is where sigma_b makes a comeback. We can simply multiply the offset by this scaling factor to get the same effect as before, just under a different parameterization. For a more formal introduction, see e.g. Betancourt & Girolami (2013).\nCritically, b_offset and sigma_b are now mostly independent. This will become more clear soon. Let's first look at if this transform helped our sampling:",
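The algebra behind the trick is the location-scale property of the normal distribution: if b_offset ~ N(0, 1), then mu_b + b_offset * sigma_b is distributed exactly as N(mu_b, sigma_b). A quick standalone NumPy check (illustrative values, not the model's posteriors):

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu_b, sigma_b = 200000, 1.5, 0.3

# Centered: draw b directly from N(mu_b, sigma_b)
b_centered = rng.normal(mu_b, sigma_b, size=n)

# Non-centered: draw a standard-normal offset, then shift and scale
b_offset = rng.normal(0.0, 1.0, size=n)
b_non_centered = mu_b + b_offset * sigma_b

# Both parameterizations describe the same distribution;
# the sampler just sees friendlier geometry in the second one.
print(b_centered.mean(), b_non_centered.mean())  # both close to 1.5
print(b_centered.std(), b_non_centered.std())    # both close to 0.3
```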
"# Inference button (TM)!\nwith hierarchical_model_non_centered:\n hierarchical_non_centered_trace = pm.sample(draws=5000, tune=1000)[1000:]\n\npm.traceplot(hierarchical_non_centered_trace, varnames=[\"sigma_b\"]);",
"That looks much better as also confirmed by the joint plot:",
"fig, axs = plt.subplots(ncols=2, sharex=True, sharey=True)\n\nx = pd.Series(hierarchical_centered_trace[\"b\"][:, 75], name=\"slope b_75\")\ny = pd.Series(hierarchical_centered_trace[\"sigma_b\"], name=\"slope group variance sigma_b\")\n\naxs[0].plot(x, y, \".\")\naxs[0].set(title=\"Centered\", ylabel=\"sigma_b\", xlabel=\"b_75\")\n\nx = pd.Series(hierarchical_non_centered_trace[\"b\"][:, 75], name=\"slope b_75\")\ny = pd.Series(hierarchical_non_centered_trace[\"sigma_b\"], name=\"slope group variance sigma_b\")\n\naxs[1].plot(x, y, \".\")\naxs[1].set(title=\"Non-centered\", xlabel=\"b_75\");",
"To really drive this home, let's also compare the sigma_b marginal posteriors of the two models:",
"pm.kdeplot(\n    np.stack(\n        [\n            hierarchical_centered_trace[\"sigma_b\"],\n            hierarchical_non_centered_trace[\"sigma_b\"],\n        ]\n    ).T\n)\nplt.axvline(hierarchical_centered_trace[\"sigma_b\"].mean(), color=\"b\", linestyle=\"--\")\nplt.axvline(hierarchical_non_centered_trace[\"sigma_b\"].mean(), color=\"g\", linestyle=\"--\")\nplt.legend([\"Centered\", \"Non-centered\", \"Centered posterior mean\", \"Non-centered posterior mean\"])\nplt.xlabel(\"sigma_b\")\nplt.ylabel(\"Probability Density\");",
"That's crazy -- there's a large region of very small sigma_b values that the sampler could not even explore before. In other words, our previous inferences (\"Centered\") were severely biased towards higher values of sigma_b. Indeed, if you look at the previous blog post, the sampler never visibly got stuck in that low region, which led me to believe everything was fine. These issues are hard to detect and very subtle, but they are meaningful, as demonstrated by the sizable difference in posterior mean.\nBut what does this concretely mean for our analysis? Over-estimating sigma_b means that we have a biased (=false) belief that we can tell individual slopes apart better than we actually can. There is less information in the individual slopes than what we estimated.\nWhy does the reparameterized model work better?\nTo more clearly understand why this model works better, let's look at the joint distribution of b_offset:",
"x = pd.Series(hierarchical_non_centered_trace[\"b_offset\"][:, 75], name=\"slope b_offset_75\")\ny = pd.Series(hierarchical_non_centered_trace[\"sigma_b\"], name=\"slope group variance sigma_b\")\n\nsns.jointplot(x, y, ylim=(0, 0.7))",
"This is the space the sampler sees; you can see how the funnel is flattened out. We can freely change the (relative) slope offset parameters even if the slope group variance is tiny, as it just acts as a scaling parameter.\nNote that the funnel is still there -- it's a perfectly valid property of the model -- but the sampler has a much easier time exploring it in this different parameterization.\nWhy hierarchical models are Bayesian\nFinally, I want to take the opportunity to make another point that is not directly related to hierarchical models but can be demonstrated quite well here.\nUsually when talking about the merits of Bayesian statistics we talk about priors, uncertainty, and flexibility when coding models using Probabilistic Programming. However, an even more important property is rarely mentioned because it is much harder to communicate. Ross Taylor touched on this point in his tweet:\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">It's interesting that many summarize Bayes as being about priors; but real power is its focus on integrals/expectations over maxima/modes</p>— Ross Taylor (@rosstaylor90) <a href=\"https://twitter.com/rosstaylor90/status/827263854002401281\">February 2, 2017</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n\nMichael Betancourt makes a similar point when he says \"Expectations are the only thing that make sense.\"\nBut what's wrong with maxima/modes? Aren't those really close to the posterior mean (i.e. the expectation)? Unfortunately, that's only the case for the simple models we teach to build up intuitions. In complex models, like the hierarchical one, the MAP can be far away and not be interesting or meaningful at all.\nLet's compare the posterior mode (i.e. the MAP) to the posterior mean of our hierarchical linear regression model:",
"with hierarchical_model_centered:\n mode = pm.find_MAP()\n\nmode[\"b\"]\n\nnp.exp(mode[\"sigma_b_log_\"])",
"As you can see, the slopes are all identical and the group slope variance is effectively zero. The reason is again related to the funnel. The MAP only cares about the probability density which is highest at the bottom of the funnel. \nBut if you could only choose one point in parameter space to summarize the posterior above, would this be the one you'd pick? Probably not.\nLet's instead look at the Expected Value (i.e. posterior mean) which is computed by integrating probability density and volume to provide probability mass -- the thing we really care about. Under the hood, that's the integration performed by the MCMC sampler.",
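The gap between mode and mean needs nothing hierarchical or high-dimensional to appear; as a hypothetical one-dimensional sketch, a lognormal distribution already has its density peak well below its expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=1.0, size=500_000)

# crude mode estimate: the center of the fullest histogram bin
counts, edges = np.histogram(samples, bins=200, range=(0.0, 5.0))
peak = counts.argmax()
mode_estimate = 0.5 * (edges[peak] + edges[peak + 1])

# analytically: mode = exp(-1) ~ 0.37, mean = exp(0.5) ~ 1.65
print(mode_estimate, samples.mean())
```

A point estimate at the density peak summarizes this distribution poorly, for the same reason the MAP summarizes the funnel poorly.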
"hierarchical_non_centered_trace[\"b\"].mean(axis=0)\n\nhierarchical_non_centered_trace[\"sigma_b\"].mean(axis=0)",
"Quite a difference. This also explains why it can be a bad idea to use the MAP to initialize your sampler: in certain models the MAP is not at all close to the region you want to explore (i.e. the \"typical set\"). \nThis strong divergence of the MAP and the Posterior Mean does not only happen in hierarchical models but also in high dimensional ones, where our intuitions from low-dimensional spaces get twisted in serious ways. This talk by Michael Betancourt makes the point quite nicely.\nSo why do people -- especially in Machine Learning -- still use the MAP/MLE? As we all learned in high school first hand, integration is much harder than differentiation. This is really the only reason.\nFinal disclaimer: This might give the impression that this is a property of being in a Bayesian framework, which is not true. Technically, we can talk about Expectations vs Modes irrespective of that. Bayesian statistics just happens to provide a very intuitive and flexible framework for expressing and estimating these models.\nSee here for the underlying notebook of this blog post.\nAcknowledgements\nThanks to Jon Sedar for helpful comments on an earlier draft."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
feststelltaste/software-analytics
|
cheatbooks/timeseries.ipynb
|
gpl-3.0
|
[
"timeseries\nWorking with time series in pandas is a fulfilling way to deal with time-based data.\nThis Cheatbook (Cheatsheet + Notebook) introduces you to the core functionality of pandas' time series / date handling.\nReferences\n\nAPI Reference\n\nTimestamp\nUsing pandas' time data types is fun. Pandas provides intuitive ways for working with time data.\nSingle time objects\nLet's create some Timestamps / points in time.",
"import pandas as pd\npd.Timestamp(\"today\")",
"You can put in some standard date formats. Pandas will convert them accordingly.",
"new_years_dinner = pd.Timestamp(\"2020-01-01 19:00\")\nnew_years_dinner",
"We can also create relative time information with a Timedelta.",
"time_needed_to_sober_up = pd.Timedelta(\"1 day\")\ntime_needed_to_sober_up",
"We can also do calculations with those objects.",
"completely_sober = new_years_dinner + time_needed_to_sober_up\ncompletely_sober",
"Time series\nWe can work with a list of time-based data, too. Here we use pandas' date_range method to create such a list (with freq=\"m\" for month-end frequency).",
"dates = pd.DataFrame(\n pd.date_range(\"2020-03-01\", periods=5, freq=\"m\"),\n columns=[\"day\"]\n )\ndates",
"With this, we calculate with time in a similar way as above.",
"dates[\"day_after_tomorrow\"] = dates['day'] + pd.Timedelta(\"2 days\")\ndates",
"DateTimeProperties object\nThe DatetimeProperties object in particular contains time-related data as attributes and methods that we can use.",
"dt_properties = dates['day'].dt\ndt_properties",
"Let's take a look at some of the properties.",
"# this code is just for demonstration purposes and not needed in an analysis\n[x for x in dir(dt_properties) if not x.startswith(\"_\")]",
"We can e.g. call the method day_name() on a date time series to get the name of the day for a date.",
"dt_properties.day_name()",
"Timestamp Series\nLet's work with some real data (or at least a part of it). \nExample Scenario\nThe following dataset is an excerpt from a change log of a software. We want to take a look at which hour of the day the changes are made to the software.\nFirst try\nWe can read in time-based datasets as any other dataset.",
"change_log = pd.read_csv(\"datasets/change_history.csv\")\nchange_log.head()",
"Note: if we import a dataset like this, the time data will be of the generic object data type.",
"change_log.info()",
"So we have to convert that data first into a time-based data type with pandas' to_datetime() function.",
"change_log['timestamp'] = pd.to_datetime(change_log['timestamp'])\nchange_log.info()",
"Next, we want to see at which hour of the day most changes were done. We can use the same strategies to get more detailed information as in the previous examples.",
"change_log['hour'] = change_log['timestamp'].dt.hour\nchange_log.head()",
"Let's simply count the number of changes per hour.",
"changes_per_hour = change_log['hour'].value_counts(sort=False)\nchanges_per_hour.head()",
"And create a little bar chart.",
"changes_per_hour.plot.bar();",
"At first glance, this looks pretty fine. But there is a problem: missing data. E.g., at 3am and 5am, there weren't any changes.\nWe can handle this by using the more advanced resample functionality of pandas. This allows us to determine at which frequency we summarize time-based data.\nSecond try: resampling time\nFor this, we create a time series DataFrame from the dataset again. This time, we import the dataset by additionally using the parse_dates keyword with the number of the column that contains the dates. This gives us a converted date column right from the start.",
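The gap-filling effect of resample can be sketched on a tiny synthetic series (hypothetical data, since it does not use the change_history.csv file): counting by hour only reports hours that actually occur, while resample emits an explicit row for every hour in between.

```python
import pandas as pd

# three changes, with an empty hour at 11:00 in between
timestamps = pd.to_datetime([
    "2020-01-01 10:15", "2020-01-01 10:45", "2020-01-01 12:30",
])
events = pd.DataFrame({"changes": 1}, index=timestamps)

# counting by hour-of-day silently skips the empty 11:00 slot
per_hour_sparse = events.index.hour.value_counts(sort=False)

# resample produces the empty hour as an explicit zero-count row
per_hour_full = events.resample("h")["changes"].count()
print(per_hour_sparse)
print(per_hour_full)
```

The zero rows are exactly what the bar chart above was missing.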
"change_log = pd.read_csv(\"datasets/change_history.csv\", parse_dates=[0], index_col=0)\nchange_log.head()\n\nchange_log['changes'] = 1\nchange_log.head()",
"Now we are able to apply the resample function on it with the information that we want to group our data hourly. We also have to decide how the grouped values should be aggregated - here we simply count them.",
"hourly_changes = change_log.resample(\"h\").count()\nhourly_changes.head()\n\nhourly_changes['hour'] = hourly_changes.index.hour\nhourly_changes.head()\n\nchanges_per_hour = hourly_changes.groupby(\"hour\").sum()\nchanges_per_hour.head()\n\nchanges_per_hour.plot.bar();",
"Display progressions",
"hourly_changes.head()\n\naccumulated_changes = hourly_changes[['changes']].cumsum()\naccumulated_changes.head()\n\naccumulated_changes.plot();",
"Grouping time and data\nSo far, we only grouped on time-based data. But what if we want to group, e.g., the weekly changes by each developer? Let's do this!\nOnce again, we read in the dataset that we already know. We only let pandas parse the timestamp information.",
"change_log = pd.read_csv(\"datasets/change_history.csv\", parse_dates=[0])\nchange_log.head() ",
"For this scenario, we also need some developers.",
"devs = pd.Series([\"Alice\", \"Bob\", \"John\", \"Steve\", \"Yvonne\"])\ndevs",
"Let's add some artificial ones to the changes and also mark each change with a separate column.",
"change_log['dev'] = devs.sample(len(change_log), replace=True).values\nchange_log['changes'] = 1\nchange_log.head()",
"OK, we want to group the changes per week per developer to find out the most active developer of the week (whether this makes sense is up to you to find out ;-) ).\nFor this, we use groupby with a pandas Grouper. With the Grouper, we can say which column we want to group at which frequency (seconds, minutes, ..., years and so on). In our case: weekly. Additionally, we want to track which developer did how many weekly changes. So we also include the developer column in the list of grouping keys and sum up the changes accordingly.",
"weekly_changes_per_dev = \\\n change_log.groupby([\n pd.Grouper(key='timestamp', freq='w'),\n 'dev']) \\\n .sum()\nweekly_changes_per_dev.head()",
"This gives us a DataFrame that lists the number of changes per week for each developer. We sort this list to get a kind of \"most active developer per week\" list:",
"weekly_changes_per_dev.sort_values(\n by=['timestamp', 'changes'],\n ascending=[True, False])",
"Summary\nThis Cheatbook guided you through several time series use cases. I hope you find this a good starting point for your own data analysis with time-based data!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Unidata/netcdf4-python
|
examples/writing_netCDF.ipynb
|
mit
|
[
"Writing netCDF data\nImportant Note: when running this notebook interactively in a browser, you probably will not be able to execute individual cells out of order without getting an error. Instead, choose \"Run All\" from the Cell menu after you modify a cell.",
"import netCDF4 # Note: python is case-sensitive!\nimport numpy as np",
"Opening a file, creating a new Dataset\nLet's create a new, empty netCDF file named 'data/new.nc', opened for writing.\nBe careful, opening a file with 'w' will clobber any existing data (unless clobber=False is used, in which case an exception is raised if the file already exists).\n\nmode='r' is the default.\nmode='a' opens an existing file and allows for appending (does not clobber existing data)\nformat can be one of NETCDF3_CLASSIC, NETCDF3_64BIT, NETCDF4_CLASSIC or NETCDF4 (default). NETCDF4_CLASSIC uses HDF5 for the underlying storage layer (as does NETCDF4) but enforces the classic netCDF 3 data model so data can be read with older clients.",
"try: ncfile.close() # just to be safe, make sure dataset is not already open.\nexcept: pass\nncfile = netCDF4.Dataset('data/new.nc',mode='w',format='NETCDF4_CLASSIC') \nprint(ncfile)",
"Creating dimensions\nThe ncfile object we created is a container for dimensions, variables, and attributes. First, let's create some dimensions using the createDimension method. \n\nEvery dimension has a name and a length. \nThe name is a string that is used to specify the dimension to be used when creating a variable, and as a key to access the dimension object in the ncfile.dimensions dictionary.\n\nSetting the dimension length to 0 or None makes it unlimited, so it can grow. \n\nFor NETCDF4 files, any variable's dimension can be unlimited. \nFor NETCDF4_CLASSIC and NETCDF3* files, only one per variable can be unlimited, and it must be the leftmost (slowest varying) dimension.",
"lat_dim = ncfile.createDimension('lat', 73) # latitude axis\nlon_dim = ncfile.createDimension('lon', 144) # longitude axis\ntime_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).\nfor dim in ncfile.dimensions.items():\n print(dim)",
"Creating attributes\nnetCDF attributes can be created just like you would for any python object. \n\nBest to adhere to established conventions (like the CF conventions)\nWe won't try to adhere to any specific convention here though.",
"ncfile.title='My model data'\nprint(ncfile.title)",
"Try adding some more attributes...\nCreating variables\nNow let's add some variables and store some data in them. \n\nA variable has a name, a type, a shape, and some data values. \nThe shape of a variable is specified by a tuple of dimension names. \nA variable should also have some named attributes, such as 'units', that describe the data.\n\nThe createVariable method takes 3 mandatory args.\n\nthe 1st argument is the variable name (a string). This is used as the key to access the variable object from the variables dictionary.\nthe 2nd argument is the datatype (most numpy datatypes supported). \nthe third argument is a tuple containing the dimension names (the dimensions must be created first). Unless this is a NETCDF4 file, any unlimited dimension must be the leftmost one.\nthere are lots of optional arguments (many of which are only relevant when format='NETCDF4') to control compression, chunking, fill_value, etc.",
"# Define two variables with the same names as dimensions,\n# a conventional way to define \"coordinate variables\".\nlat = ncfile.createVariable('lat', np.float32, ('lat',))\nlat.units = 'degrees_north'\nlat.long_name = 'latitude'\nlon = ncfile.createVariable('lon', np.float32, ('lon',))\nlon.units = 'degrees_east'\nlon.long_name = 'longitude'\ntime = ncfile.createVariable('time', np.float64, ('time',))\ntime.units = 'hours since 1800-01-01'\ntime.long_name = 'time'\n# Define a 3D variable to hold the data\ntemp = ncfile.createVariable('temp',np.float64,('time','lat','lon')) # note: unlimited dimension is leftmost\ntemp.units = 'K' # degrees Kelvin\ntemp.standard_name = 'air_temperature' # this is a CF standard name\nprint(temp)",
"Pre-defined variable attributes (read only)\nThe netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim. \nNote: since no data has been written yet, the length of the 'time' dimension is 0.",
"print(\"-- Some pre-defined attributes for variable temp:\")\nprint(\"temp.dimensions:\", temp.dimensions)\nprint(\"temp.shape:\", temp.shape)\nprint(\"temp.dtype:\", temp.dtype)\nprint(\"temp.ndim:\", temp.ndim)",
"Writing data\nTo write data to a netCDF variable object, just treat it like a numpy array and assign values to a slice.",
"nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3\n# Write latitudes, longitudes.\n# Note: the \":\" is necessary in these \"write\" statements\nlat[:] = -90. + (180./nlats)*np.arange(nlats) # south pole to north pole\nlon[:] = (180./nlats)*np.arange(nlons) # Greenwich meridian eastward\n# create a 3D array of random numbers\ndata_arr = np.random.uniform(low=280,high=330,size=(ntimes,nlats,nlons))\n# Write the data. This writes the whole 3D netCDF variable all at once.\ntemp[:,:,:] = data_arr # Appends data along unlimited dimension\nprint(\"-- Wrote data, temp.shape is now \", temp.shape)\n# read data back from variable (by slicing it), print min and max\nprint(\"-- Min/Max values:\", temp[:,:,:].min(), temp[:,:,:].max())",
"You can just treat a netCDF Variable object like a numpy array and assign values to it.\nVariables automatically grow along unlimited dimensions (unlike numpy arrays)\nThe above writes the whole 3D variable all at once, but you can write it a slice at a time instead.\n\nLet's add another time slice....",
"# create a 2D array of random numbers\ndata_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons))\ntemp[3,:,:] = data_slice # Appends the 4th time slice\nprint(\"-- Wrote more data, temp.shape is now \", temp.shape)",
"Note that we have not yet written any data to the time variable. It automatically grew as we appended data along the time dimension to the variable temp, but the data is missing.",
"print(time)\ntimes_arr = time[:]\nprint(type(times_arr),times_arr) # dashes indicate masked values (where data has not yet been written)",
"Let's now write some data into the time variable. \n\nGiven a set of datetime instances, use date2num to convert to numeric time values and then write that data to the variable.",
"from datetime import datetime\nfrom netCDF4 import date2num,num2date\n# 1st 4 days of October.\ndates = [datetime(2014,10,1,0),datetime(2014,10,2,0),datetime(2014,10,3,0),datetime(2014,10,4,0)]\nprint(dates)\ntimes = date2num(dates, time.units)\nprint(times, time.units) # numeric values\ntime[:] = times\n# read time data back, convert to datetime instances, check values.\nprint(num2date(time[:],time.units))",
"Closing a netCDF file\nIt's important to close a netCDF file you opened for writing:\n\nflushes buffers to make sure all data gets written\nreleases memory resources used by open netCDF files",
"# first print the Dataset object to see what we've got\nprint(ncfile)\n# close the Dataset.\nncfile.close(); print('Dataset is closed!')",
"Advanced features\nSo far we've only exercised features associated with the old netCDF version 3 data model. netCDF version 4 adds a lot of new functionality that comes with the more flexible HDF5 storage layer. \nLet's create a new file with format='NETCDF4' so we can try out some of these features.",
"ncfile = netCDF4.Dataset('data/new2.nc','w',format='NETCDF4')\nprint(ncfile)",
"Creating Groups\nnetCDF version 4 added support for organizing data in hierarchical groups.\n\nanalogous to directories in a filesystem. \nGroups serve as containers for variables, dimensions and attributes, as well as other groups. \n\nA netCDF4.Dataset creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. \n\n\ngroups are created using the createGroup method.\n\ntakes a single argument (a string, which is the name of the Group instance). This string is used as a key to access the group instances in the groups dictionary.\n\nHere we create two groups to hold data for two different model runs.",
"grp1 = ncfile.createGroup('model_run1')\ngrp2 = ncfile.createGroup('model_run2')\nfor grp in ncfile.groups.items():\n print(grp)",
"Create some dimensions in the root group.",
"lat_dim = ncfile.createDimension('lat', 73) # latitude axis\nlon_dim = ncfile.createDimension('lon', 144) # longitude axis\ntime_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to).",
"Now create a variable in grp1 and grp2. The library will search recursively upwards in the group tree to find the dimensions (which in this case are defined one level up).\n\nThese variables are created with zlib compression, another nifty feature of netCDF 4. \nThe data are automatically compressed when data is written to the file, and uncompressed when the data is read. \nThis can really save disk space, especially when used in conjunction with the least_significant_digit keyword argument, which causes the data to be quantized (truncated) before compression. This makes the compression lossy, but more efficient.",
"temp1 = grp1.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\ntemp2 = grp2.createVariable('temp',np.float64,('time','lat','lon'),zlib=True)\nfor grp in ncfile.groups.items(): # shows that each group now contains 1 variable\n print(grp)",
"Creating a variable with a compound data type\n\nCompound data types map directly to numpy structured (a.k.a. 'record') arrays. \nStructured arrays are akin to C structs, or derived types in Fortran. \nThey allow for the construction of table-like structures composed of combinations of other data types, including other compound types. \nMight be useful for representing multiple parameter values at each point on a grid, or at each time and space location for scattered (point) data. \n\nHere we create a variable with a compound data type to represent complex data (there is no native complex data type in netCDF). \n\nThe compound data type is created with the createCompoundType method.",
"# create complex128 numpy structured data type\ncomplex128 = np.dtype([('real',np.float64),('imag',np.float64)])\n# using this numpy dtype, create a netCDF compound data type object\n# the string name can be used as a key to access the datatype from the cmptypes dictionary.\ncomplex128_t = ncfile.createCompoundType(complex128,'complex128')\n# create a variable with this data type, write some data to it.\ncmplxvar = grp1.createVariable('cmplx_var',complex128_t,('time','lat','lon'))\n# write some data to this variable\n# first create some complex random data\nnlats = len(lat_dim); nlons = len(lon_dim)\ndata_arr_cmplx = np.random.uniform(size=(nlats,nlons))+1.j*np.random.uniform(size=(nlats,nlons))\n# write this complex data to a numpy complex128 structured array\ndata_arr = np.empty((nlats,nlons),complex128)\ndata_arr['real'] = data_arr_cmplx.real; data_arr['imag'] = data_arr_cmplx.imag\ncmplxvar[0] = data_arr # write the data to the variable (appending to time dimension)\nprint(cmplxvar)\ndata_out = cmplxvar[0] # read one value of data back from variable\nprint(data_out.dtype, data_out.shape, data_out[0,0])",
"Creating a variable with a variable-length (vlen) data type\nnetCDF 4 has support for variable-length or \"ragged\" arrays. These are arrays of variable length sequences having the same type. \n\nTo create a variable-length data type, use the createVLType method.\nThe numpy datatype of the variable-length sequences and the name of the new datatype must be specified.",
"vlen_t = ncfile.createVLType(np.int64, 'phony_vlen')",
"A new variable can then be created using this datatype.",
"vlvar = grp2.createVariable('phony_vlen_var', vlen_t, ('time','lat','lon'))",
"Since there is no native vlen datatype in numpy, vlen arrays are represented in python as object arrays (arrays of dtype object). \n\nThese are arrays whose elements are Python object pointers, and can contain any type of python object. \nFor this application, they must contain 1-D numpy arrays all of the same type but of varying length. \nFill with 1-D random numpy int64 arrays of random length between 1 and 10.",
"vlen_data = np.empty((nlats,nlons),object)\nfor i in range(nlons):\n for j in range(nlats):\n size = np.random.randint(1,10,size=1) # random length of sequence\n vlen_data[j,i] = np.random.randint(0,10,size=size)# generate random sequence\nvlvar[0] = vlen_data # append along unlimited dimension (time)\nprint(vlvar)\nprint('data =\\n',vlvar[:])",
"Close the Dataset and examine the contents with ncdump.",
"ncfile.close()\n!ncdump -h data/new2.nc",
"Other interesting and useful projects using netcdf4-python\n\nxarray: N-dimensional variant of the core pandas data structure that can operate on netcdf variables.\nIris: a data model to create a data abstraction layer which isolates analysis and visualisation code from data format specifics. Uses netcdf4-python to access netcdf data (can also handle GRIB).\nDask: Virtual large arrays (from netcdf variables) with lazy evaluation.\ncf-python: Implements the CF data model for the reading, writing and processing of data and metadata."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
espenhgn/LFPy
|
examples/LFPy-example-05.ipynb
|
gpl-3.0
|
[
"%matplotlib inline",
"Example plot for LFPy: Single-synapse contribution to the LFP\nCopyright (C) 2017 Computational Neuroscience Group, NMBU.\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.",
"import LFPy\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Main script, set parameters and create cell, synapse and electrode objects:",
"# Define cell parameters\ncell_parameters = {          # various cell parameters,\n    'morphology' : 'morphologies/L5_Mainen96_LFPy.hoc', # Mainen&Sejnowski, 1996\n    'cm' : 1.0,         # membrane capacitance\n    'Ra' : 150.,        # axial resistance\n    'v_init' : -65.,    # initial crossmembrane potential\n    'passive' : True,   # turn on passive mechanism for all sections\n    'passive_parameters' : {'g_pas' : 1./30000, 'e_pas' : -65}, # passive params\n    'nsegs_method' : 'lambda_f', # lambda_f method\n    'lambda_f' : 100.,  # lambda_f critical frequency\n    'dt' : 2.**-3,      # simulation time step size\n    'tstart' : 0.,      # start time of simulation, recorders start at t=0\n    'tstop' : 100.,     # stop simulation at 100 ms. These can be overridden\n                        # by setting these arguments in cell.simulate()\n}\n\n# Create cell\ncell = LFPy.Cell(**cell_parameters)\n\n# Rotate cell\ncell.set_rotation(x=4.98919, y=-4.33261, z=0.)\n\n# Define synapse parameters\nsynapse_parameters = {\n    'idx' : cell.get_closest_idx(x=0., y=0., z=900.),\n    'e' : 0.,                   # reversal potential\n    'syntype' : 'ExpSyn',       # synapse type\n    'tau' : 10.,                # syn. time constant\n    'weight' : .001,            # syn. weight\n    'record_current' : True,\n}\n\n# Create synapse and set time of synaptic input\nsynapse = LFPy.Synapse(cell, **synapse_parameters)\nsynapse.set_spike_times(np.array([20.]))\n\n# Create a grid of measurement locations, in (mum)\nX, Z = np.mgrid[-500:501:20, -400:1201:40]\nY = np.zeros(X.shape)\n\n# Define electrode parameters\nelectrode_parameters = {\n    'sigma' : 0.3,      # extracellular conductivity\n    'x' : X.flatten(),  # electrode requires 1d vector of positions\n    'y' : Y.flatten(),\n    'z' : Z.flatten()\n}\n\n# Create electrode object\nelectrode = LFPy.RecExtElectrode(cell=cell, **electrode_parameters)\n\n# Run simulation, electrode object argument in cell.simulate\ncell.simulate(probes=[electrode])",
"Plot simulation output:",
"from example_suppl import plot_ex1\nfig = plot_ex1(cell, electrode, X, Y, Z)\n# Optionally save figure (uncomment the line below)\n# fig.savefig('LFPy-example-5.pdf', dpi=300)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ddtm/dl-course
|
Seminar4/Seminar4-ru.ipynb
|
mit
|
[
"Theano, Lasagne\nand what they are good for\nWarm-up\n\nWrite a numpy function that computes the sum of squares of the numbers from 0 to N, where N is an argument\nAn array of the numbers from 0 to N: numpy.arange(N)",
"!pip install Theano\n!pip install lasagne\n\nimport numpy as np\ndef sum_squares(N):\n    return <sum of squares of the numbers from 0 to N>\n\n%%time\nsum_squares(10**8)",
"theano teaser\nHow to do the same thing",
"import theano\nimport theano.tensor as T\n\n\n\n# the future parameter of the function\nN = T.scalar(\"a dimension\",dtype='int32')\n\n\n# the recipe for computing the sum of squares\nresult = (T.arange(N)**2).sum()\n\n# compile the \"sum of squares of the numbers from 0 to N\" function\nsum_function = theano.function(inputs = [N],outputs=result)\n\n%%time\nsum_function(10**8)",
"How does it work?\n\nYou write a \"recipe\" for producing the outputs from the inputs\n\nIn fancier terms: you describe a symbolic computation graph\n\n\nThere are 2 kinds of beasts - \"inputs\" and \"transformations\"\n\n\nBoth can be numbers, arrays, matrices, tensors, etc.\n\n\nAn input is a function argument - the slot where the argument is substituted when the function is called.\n\n\nN is the input in the example above\n\n\nTransformations are recipes for computing something from inputs and constants\n\n(T.arange(N)**2).sum() - 3 consecutive transformations of N\nThey work almost exactly like vector operations in numpy\nAlmost everything that exists in numpy also exists in theano.tensor under the same name\nnp.mean -> T.mean\nnp.arange -> T.arange\nnp.cumsum -> T.cumsum\nand so on...\nVery rarely a name or the syntax differs - then ask the instructors or Google\n\nStill confused? Let's fix that now.",
"# inputs\nexample_input_integer = T.scalar(\"input - a single number (example)\",dtype='float32')\n\nexample_input_tensor = T.tensor4(\"input - a 4-dimensional tensor (example)\")\n# don't worry, we won't need the tensor\n\n\n\ninput_vector = T.vector(\"input - a vector of integers\", dtype='int32')\n\n\n# transformations\n\n# elementwise multiplication\ndouble_the_vector = input_vector*2\n\n# elementwise cosine\nelementwise_cosine = T.cos(input_vector)\n\n# difference between the square of each element and the element itself\nvector_squares = input_vector**2 - input_vector\n\n\ndouble_the_vector\n\n# now your turn:\n# create 2 vectors of float32 numbers\nmy_vector = <a float32 vector>\nmy_vector2 = <another one just like it>\n\n# write a transformation that computes\n# (vector 1)*(vector 2) / (sin(vector 1) + 1)\nmy_transformation = <the transformation>\n\nprint (my_transformation)\n# it's fine that the result is not a number",
"Compilation\n\nUp to this point we have been using \"symbolic\" variables\nwe wrote a recipe for the computation but did not compute anything\nto actually use the recipe, it has to be compiled",
"inputs = [<what the function depends on>]\noutputs = [<what the function computes (several at once as a list, or a single transformation)>]\n\n# we can compile the transformations we wrote into a function\nmy_function = theano.function(\n    inputs,outputs,\n    allow_input_downcast=True # automatically cast input types (optional)\n  )\n\n# it can be called like this:\nprint (\"using python lists:\")\nprint (my_function([1,2,3],[4,5,6]))\nprint()\n\n# or like this.\n# Note: here the float type is cast to the type of the second vector\nprint (\"using numpy arrays:\")\nprint (my_function(np.arange(10),\n                   np.linspace(5,6,10,dtype='float')))\n",
"A debugging hint\n\nIf your function is large, compilation can take a while.\nTo avoid the wait, you can evaluate an expression without compiling it\nYou save the one-time compilation cost, but the code itself runs slower",
"#a dict of values for the inputs\nmy_function_inputs = {\n    my_vector:[1,2,3],\n    my_vector2:[4,5,6]\n}\n\n#evaluate without compiling\n#if we haven't mixed anything up,\n#this should give exactly the same result as before\nprint (my_transformation.eval(my_function_inputs))\n\n\n#transformations can be evaluated on the fly\nprint (\"sum of the 2 vectors\", (my_vector + my_vector2).eval(my_function_inputs))\n\n#IMPORTANT! if a transformation depends on only some of the variables,\n#you don't have to supply the rest\nprint (\"shape of the first vector\", my_vector.shape.eval({\n        my_vector:[1,2,3]\n    }))\n",
"For debugging, it helps to scale the problem down. If you planned to feed in a vector of 10^9 samples, feed in 10-100 instead.\nIf you REALLY need to feed in a large vector, compiling the function the usual way is faster\n\nNow you try: MSE (2 pts)",
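Before writing the Theano version, it may help to pin down the math itself. Here is a plain-Python reference (the function name is mine, not part of the assignment) that your compiled function should agree with:

```python
# Reference MSE: the mean of squared elementwise differences.
def mse_reference(a, b):
    assert len(a) == len(b), "vectors must have the same length"
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

err = mse_reference([1, 2, 3], [1, 2, 5])  # (0 + 0 + 4) / 3
```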
"# Task 1: write and compile a theano function that computes the mean squared error of two input vectors\n# It should return a single number: the error itself. Nothing needs to be updated\n\n<your code: inputs and transformations>\n\ncompute_mse = <your code: compile the function>\n\n#tests\nfrom sklearn.metrics import mean_squared_error\n\nfor n in [1,5,10,10**3]:\n    \n    elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),\n             np.ones(n),np.random.random(n),np.random.randint(100,size=n)]\n    \n    for el in elems:\n        for el_2 in elems:\n            true_mse = np.array(mean_squared_error(el,el_2))\n            my_mse = compute_mse(el,el_2)\n            if not np.allclose(true_mse,my_mse):\n                print ('Wrong result:')\n                print ('mse(%s,%s)'%(el,el_2))\n                print (\"should be: %f, but your function returned %f\"%(true_mse,my_mse))\n                raise ValueError(\"Something is wrong\")\n\nprint (\"All tests passed\")\n    \n    ",
"Shared variables\n\nInputs and transformations are parts of the recipe.\n\nThey only exist while the function is being called.\n\n\nShared variables stay in memory at all times\n\ntheir value can be changed\n(though not from inside the symbolic graph; more on that later)\n\nthey can be included in the computation graph\n\n\nhint: such variables are handy for storing parameters and hyperparameters\n\nfor example, a network's weights, or the learning rate if you change it",
"#create a shared variable\nshared_vector_1 = theano.shared(np.ones(10,dtype='float64'))\n\n\n#get the variable's (numeric) value\nprint (\"initial value\",shared_vector_1.get_value())\n\n#set a new value\nshared_vector_1.set_value( np.arange(5) )\n\n#check the value\nprint (\"new value\", shared_vector_1.get_value())\n\n#Note that it used to be a vector of 10 elements and is now a vector of 5.\n#As long as the graph stays valid afterwards, this works.",
"Now you try",
"#write a recipe (transformation) that computes the (elementwise) product of shared_vector_1 and input_scalar\n#compile it into a function of input_scalar\n\ninput_scalar = T.scalar('coefficient',dtype='float32')\n\nscalar_times_shared = <recipe here>\n\n\nshared_times_n = <your code that compiles the function>\n\n\nprint (\"shared:\", shared_vector_1.get_value())\n\nprint (\"shared_times_n(5)\",shared_times_n(5))\n\nprint (\"shared_times_n(-0.5)\",shared_times_n(-0.5))\n\n\n#change the value of shared_vector_1\nshared_vector_1.set_value([-1,0,1])\nprint (\"shared:\", shared_vector_1.get_value())\n\nprint (\"shared_times_n(5)\",shared_times_n(5))\n\nprint (\"shared_times_n(-0.5)\",shared_times_n(-0.5))\n",
"T.grad: the tastiest part\n\ntheano can compute derivatives by itself; any derivative that exists.\nDerivatives are computed symbolically, not numerically\n\nLimitations\n* At a time, you can differentiate a scalar function with respect to one or several scalar or vector arguments\n* The function must have type float32 or float64 at every stage of its computation (a derivative makes no sense over the integers)",
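A symbolic derivative can always be sanity-checked numerically. A minimal central-difference sketch (pure Python; the helper name is invented, not a Theano API):

```python
# Central finite difference: f'(x) ~ (f(x+h) - f(x-h)) / (2h).
# Useful for checking that a symbolic gradient is not obviously wrong.
def numeric_grad(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

approx = numeric_grad(lambda x: x ** 2, 3.0)  # should be close to 2*3 = 6
```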
"my_scalar = T.scalar(name='input',dtype='float64')\n\nscalar_squared = T.sum(my_scalar**2)\n\n#derivative of scalar_squared with respect to my_scalar\nderivative = T.grad(scalar_squared,my_scalar)\n\nfun = theano.function([my_scalar],scalar_squared)\ngrad = theano.function([my_scalar],derivative) \n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\nx = np.linspace(-3,3)\nx_squared = list(map(fun,x))\nx_squared_der = list(map(grad,x))\n\nplt.plot(x, x_squared,label=\"x^2\")\nplt.plot(x, x_squared_der, label=\"derivative\")\nplt.legend()",
"Now you try",
"\nmy_vector = T.vector('my_vector', dtype='float64')\n\n#compute this function's derivatives with respect to my_scalar and my_vector\n#warning! don't try to find physical meaning in this function\nweird_psychotic_function = ((my_vector+my_scalar)**(1+T.var(my_vector)) +1./T.arcsinh(my_scalar)).mean()/(my_scalar**2 +1) + 0.01*T.sin(2*my_scalar**1.5)*(T.sum(my_vector)* my_scalar**2)*T.exp((my_scalar-4)**2)/(1+T.exp((my_scalar-4)**2))*(1.-(T.exp(-(my_scalar-4)**2))/(1+T.exp(-(my_scalar-4)**2)))**2\n\n\nder_by_scalar,der_by_vector = <the gradient of the function above w.r.t. the scalar and the vector (they can be passed as a list)>\n\n\ncompute_weird_function = theano.function([my_scalar,my_vector],weird_psychotic_function)\ncompute_der_by_scalar = theano.function([my_scalar,my_vector],der_by_scalar)\n\n\n#plot the function and your derivative\nvector_0 = [1,2,3]\n\nscalar_space = np.linspace(0,7)\n\ny = [compute_weird_function(x,vector_0) for x in scalar_space]\nplt.plot(scalar_space,y,label='function')\ny_der_by_scalar = [compute_der_by_scalar(x,vector_0) for x in scalar_space]\nplt.plot(scalar_space,y_der_by_scalar,label='derivative')\nplt.grid();plt.legend()\n",
"The finishing touch: updates\n\n\nupdates are a way to change the values of shared variables AT THE END of every function call\n\n\nin effect, it is a dict {shared_variable: recipe for the new value} that is attached to the function at compile time\n\n\nFor example,",
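Conceptually, a compiled function with updates behaves like a closure over mutable state that gets overwritten at the end of each call. A pure-Python analogy (not Theano; all names are invented):

```python
# Analogy: shared variable = mutable state kept between calls,
# updates = new value written into that state at the END of each call.
def make_function(shared):
    def call(scalar):
        output = [v * scalar for v in shared["value"]]  # compute the output
        shared["value"] = output                        # then apply the update
        return output
    return call

state = {"value": [1, 2, 3]}
f = make_function(state)
first = f(2)    # returns [2, 4, 6] and saves it back into state
second = f(2)   # starts from the updated state, so [4, 8, 12]
```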
"#multiply the shared vector by a number and save the new value back into that shared vector\n\ninputs = [input_scalar]\noutputs = [scalar_times_shared] #return the vector multiplied by the number\n\nmy_updates = {\n    shared_vector_1:scalar_times_shared #and write the same result into shared_vector_1\n}\n\ncompute_and_save = theano.function(inputs, outputs, updates=my_updates)\n\nshared_vector_1.set_value(np.arange(5))\n\n#initial value of shared_vector_1\nprint (\"initial shared value:\" ,shared_vector_1.get_value())\n\n# now call the function (this will change the value of shared_vector_1)\nprint (\"compute_and_save(2) returns\",compute_and_save(2))\n\n#check what's in shared_vector_1 now\nprint (\"new shared value:\" ,shared_vector_1.get_value())\n\n",
"Logistic regression\nWhat we'll need:\n* The weights are best stored in a shared variable\n* The data can be passed as inputs\n* We need 2 functions:\n * train_function(X,y) - returns the loss and moves the weights one gradient step (via updates)\n * predict_fun(X) - returns the predicted answers (\"y\") for the data",
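As a reference for the exercise below, here is one gradient step of logistic regression written in plain Python (this is not the Theano solution you are asked for; the names and the tiny dataset are invented):

```python
import math

# One gradient-descent step for logistic regression on a toy dataset.
# Model: p = sigmoid(w . x); loss = mean binary cross-entropy.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_and_grad(w, X, y):
    n = len(X)
    loss, grad = 0.0, [0.0] * len(w)
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
        loss += -(yi * math.log(p) + (1 - yi) * math.log(1 - p)) / n
        for j, xj in enumerate(xi):
            grad[j] += (p - yi) * xj / n   # d(loss)/d(w_j)
    return loss, grad

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
y = [1, 0, 1, 0]
w = [0.0, 0.0]
lr = 0.5

loss_before, grad = loss_and_grad(w, X, y)
w = [wj - lr * gj for wj, gj in zip(w, grad)]     # the update step
loss_after, _ = loss_and_grad(w, X, y)
```

In the Theano version, `grad` comes from `T.grad` and the weight update goes into the `updates` dict instead of being applied by hand.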
"from sklearn.datasets import load_digits\nmnist = load_digits(n_class=2)\n\nX,y = mnist.data, mnist.target\n\n\nprint (\"y [shape - %s]:\"%(str(y.shape)),y[:10])\n\nprint (\"X [shape - %s]:\"%(str(X.shape)))\nprint (X[:3])\n\n# variables and inputs\nshared_weights = <your code>\ninput_X = <your code>\ninput_y = <your code>\n\npredicted_y = <logistic regression prediction on input_X (the class probability)>\nloss = <logistic loss (a single number: the mean over the sample)>\n\ngrad = <gradient of loss w.r.t. the model weights>\n\n\n\nupdates = {\n    shared_weights: <new weight values after one gradient descent step>\n}\n\ntrain_function = <a function that takes X and y, returns the loss, and updates the weights>\npredict_function = <a function that takes X and computes a prediction for y>\n\nfrom sklearn.model_selection import train_test_split\nX_train,X_test,y_train,y_test = train_test_split(X,y)\n\nfrom sklearn.metrics import roc_auc_score\n\nfor i in range(5):\n    loss_i = train_function(X_train,y_train)\n    print (\"loss at iter %i:%.4f\"%(i,loss_i))\n    print (\"train auc:\",roc_auc_score(y_train,predict_function(X_train)))\n    print (\"test auc:\",roc_auc_score(y_test,predict_function(X_test)))\n\n    \nprint (\"resulting weights:\")\nplt.imshow(shared_weights.get_value().reshape(8,-1))\nplt.colorbar()",
"lasagne\n\nlasagne is a library for building neural networks of arbitrary shape on top of theano\nit is low-level; there is practically no boundary between theano and lasagne\n\nAs a demo task we'll pick the same digit recognition, but at a larger scale\n* 28x28 images\n* 10 digits",
"from mnist import load_dataset\nX_train,y_train,X_val,y_val,X_test,y_test = load_dataset()\n\nprint (X_train.shape,y_train.shape)\n\nplt.imshow(X_train[0,0])\n\nimport lasagne\n\ninput_X = T.tensor4(\"X\")\n\n#input shape (None means \"can vary\")\ninput_shape = [None,1,28,28]\n\ntarget_y = T.vector(\"target Y integer\",dtype='int32')",
"This is how the network architecture is defined",
"#input layer (auxiliary)\ninput_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)\n\n#a fully-connected layer that takes input_layer as its input and has 50 units.\n# the nonlinearity is a sigmoid, as in logistic regression\n# layers can also be given names, but that's optional\ndense_1 = lasagne.layers.DenseLayer(input_layer,num_units=50,\n                                   nonlinearity = lasagne.nonlinearities.sigmoid,\n                                   name = \"hidden_dense_layer\")\n\n#the OUTPUT fully-connected layer, which takes dense_1 as input and has 10 units: one per digit\n#the nonlinearity is softmax, so the digit probabilities sum to 1\ndense_output = lasagne.layers.DenseLayer(dense_1,num_units = 10,\n                                        nonlinearity = lasagne.nonlinearities.softmax,\n                                        name='output')\n\n\n#the network's prediction (a theano transformation)\ny_predicted = lasagne.layers.get_output(dense_output)\n\n#all the network's weights (shared variables)\nall_weights = lasagne.layers.get_all_params(dense_output)\nprint (all_weights)",
"from here you could simply\n\ndefine the loss function by hand\ncompute the loss gradient w.r.t. all_weights\nwrite the updates\nbut that takes a while, and a plain gradient step is not the best way to optimize the weights\n\nInstead, we'll use lasagne again",
"#loss function: mean cross-entropy\nloss = lasagne.objectives.categorical_crossentropy(y_predicted,target_y).mean()\n\n\naccuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean()\n\n#get the dict of updated values in one go; here rmsprop instead of a plain gradient step\nupdates_sgd = lasagne.updates.rmsprop(loss, all_weights,learning_rate=0.01)\n\n#a function that trains the network for 1 step and returns the loss and accuracy\ntrain_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd)\n\n#a function that computes the accuracy\naccuracy_fun = theano.function([input_X,target_y],accuracy)",
"That's it; let's go train it\n\nthere is a lot of data now, so it's better to train with stochastic gradient descent\nfor that, we'll write a function that splits the sample into mini-batches (in plain Python, not in theano)",
"# a helper function that returns a list of mini-batches for training the network\n\n#inputs\n# X - a tensor of images with shape (many, 1, 28, 28), e.g. X_train\n# y - a vector of numbers: the answer for each image in X; e.g. y_train\n#batchsize - a single number: the desired group size\n\n#what to do\n# 1) shuffle the data\n#    - it's important to shuffle X and y in the same way, so each image stays matched with its answer\n# 2) split the data into groups of batchsize images and answers each\n#    - if the number of images is not divisible by batchsize, one group may have a different size\n# 3) return a list (or iterator) of pairs:\n#    - (a group of images, the y answers for that group)\ndef iterate_minibatches(X, y, batchsize):\n    \n    \n    \n    \n    \n    return batches # a list of (X_batch, y_batch) pairs, or better yet a generator via yield\n    \n    \n    \n    \n    \n    \n#\n#\n#\n#\n#\n#\n#\n# Everything is terrible and you don't understand what's wanted from you?\n# look for a similar function in this example\n# https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py",
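For reference, here is a minimal pure-Python version of such an iterator over plain lists (a sketch only; the real one should work on numpy arrays, and the function name is mine so it doesn't clash with your solution):

```python
import random

# Sketch: shuffle X and y jointly, then yield (X_batch, y_batch) pairs.
def iterate_minibatches_sketch(X, y, batchsize, seed=0):
    assert len(X) == len(y)
    indices = list(range(len(X)))
    random.Random(seed).shuffle(indices)   # one permutation for both X and y
    for start in range(0, len(X), batchsize):
        batch_idx = indices[start:start + batchsize]
        yield [X[i] for i in batch_idx], [y[i] for i in batch_idx]

batches = list(iterate_minibatches_sketch(list(range(10)), list(range(10)), 4))
# 10 samples with batchsize 4 give batches of sizes 4, 4, 2
```

Shuffling an index list (rather than the arrays themselves) is what keeps each image matched with its answer.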
"The training loop",
"import time\n\nnum_epochs = 100 #number of passes over the data\n\nbatch_size = 50 #mini-batch size\n\nfor epoch in range(num_epochs):\n    # In each epoch, we do a full pass over the training data:\n    train_err = 0\n    train_acc = 0\n    train_batches = 0\n    start_time = time.time()\n    for batch in iterate_minibatches(X_train, y_train,batch_size):\n        inputs, targets = batch\n        train_err_batch, train_acc_batch= train_fun(inputs, targets)\n        train_err += train_err_batch\n        train_acc += train_acc_batch\n        train_batches += 1\n\n    # And a full pass over the validation data:\n    val_acc = 0\n    val_batches = 0\n    for batch in iterate_minibatches(X_val, y_val, batch_size):\n        inputs, targets = batch\n        val_acc += accuracy_fun(inputs, targets)\n        val_batches += 1\n\n    \n    # Then we print the results for this epoch:\n    print(\"Epoch {} of {} took {:.3f}s\".format(\n        epoch + 1, num_epochs, time.time() - start_time))\n\n    print(\" training loss (in-iteration):\\t\\t{:.6f}\".format(train_err / train_batches))\n    print(\" train accuracy:\\t\\t{:.2f} %\".format(\n        train_acc / train_batches * 100))\n    print(\" validation accuracy:\\t\\t{:.2f} %\".format(\n        val_acc / val_batches * 100))\n\ntest_acc = 0\ntest_batches = 0\nfor batch in iterate_minibatches(X_test, y_test, 500):\n    inputs, targets = batch\n    acc = accuracy_fun(inputs, targets)\n    test_acc += acc\n    test_batches += 1\nprint(\"Final results:\")\nprint(\" test accuracy:\\t\\t{:.2f} %\".format(\n    test_acc / test_batches * 100))\n\nif test_acc / test_batches * 100 > 99:\n    print (\"Achievement unlocked: level 80 warlock\")\nelse:\n    print (\"More magic needed!\")",
"The neural net of your dreams\n\nThe task: build a network that reaches 99% validation accuracy\n+1 point for every 0.1% above 99%\nThe \"is fine too\" option: 97.5%.\nThe higher, the better.\n\nThere is a mini-report at the end; it makes sense to read it first and fill it in as you go.\nWhat can be improved:\n\nnetwork size\nmore neurons,\nmore layers,\nyou almost certainly need convolutions\n\nPh'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!\n\n\nregularization, so it doesn't overfit\n\nadd some sum of squared weights to the loss function\n\nyou can do it by hand, or use http://lasagne.readthedocs.org/en/latest/modules/regularization.html\n\n\nThe optimization method: rmsprop, nesterov_momentum, adadelta, adagrad, etc.\n\nthey converge faster and sometimes to a better optimum\n\nit's worth playing with the batch size, the number of epochs, and the learning rate\n\n\nDropout, to fight overfitting\n\n\nlasagne.layers.DropoutLayer(previous_layer, p=probability_of_zeroing_out)\n\n\nConvolutional layers\n\nnetwork = lasagne.layers.Conv2DLayer(previous_layer,\n                       num_filters = number of neurons,\n                       filter_size = (square_width, square_height),\n                       nonlinearity = nonlinear_function)\n\nWARNING! these can take a long time to train on CPU\n\nstill, we recommend training at least a small convnet\n\n\n\nAny other layers and architectures\n\nhttp://lasagne.readthedocs.org/en/latest/modules/layers.html\n\nPooling, Batch Normalization, etc\n\n\nFinally, you can play with the nonlinearities in the hidden layers\n\ntanh, relu, leaky relu, etc\n\nFor convenience, there is a solution template below that you can fill in, or throw away and write your own",
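To make the dropout suggestion concrete: at train time each activation is zeroed with probability p and the survivors are rescaled, so the expected activation stays unchanged. A pure-Python sketch of inverted dropout (for illustration only; this is not how lasagne implements it internally, and all names are mine):

```python
import random

# Inverted dropout: zero each unit with probability p,
# scale the survivors by 1/(1-p) so the expectation is preserved.
def dropout(activations, p, rng=random):
    if p == 0:
        return list(activations)   # no-op at p=0 (and at test time)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

out = dropout([1.0, 2.0, 3.0, 4.0], p=0.0)  # p=0 leaves the input unchanged
```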
"from mnist import load_dataset\nX_train,y_train,X_val,y_val,X_test,y_test = load_dataset()\n\nprint (X_train.shape,y_train.shape)\n\nimport lasagne\n\ninput_X = T.tensor4(\"X\")\n\n#input shape (None means \"can vary\")\ninput_shape = [None,1,28,28]\n\ntarget_y = T.vector(\"target Y integer\",dtype='int32')\n\n#input layer (auxiliary)\ninput_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)\n\n\n<my architecture>\n\n#the OUTPUT fully-connected layer, which takes the previous layer as input and has 10 units: one per digit\n#the nonlinearity is softmax, so the digit probabilities sum to 1\ndense_output = lasagne.layers.DenseLayer(<pre-output_layer>,num_units = 10,\n                                        nonlinearity = lasagne.nonlinearities.softmax,\n                                        name='output')\n\n\n#the network's prediction (a theano transformation)\ny_predicted = lasagne.layers.get_output(dense_output)\n\n#all the network's weights (shared variables)\nall_weights = lasagne.layers.get_all_params(dense_output)\nprint (all_weights)\n\n#loss function: mean cross-entropy\nloss = lasagne.objectives.categorical_crossentropy(y_predicted,target_y).mean()\n\n#<maybe add a regularizer>\n\naccuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean()\n\n#get the dict of updated values in one go, with a gradient step, as before\nupdates_sgd = <play with the methods>\n\n#a function that trains the network for 1 step and returns the loss and accuracy\ntrain_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd)\n\n#a function that computes the accuracy\naccuracy_fun = theano.function([input_X,target_y],accuracy)\n\n#training iterations\n\nnum_epochs = <how many epochs> #number of passes over the data\n\nbatch_size = <images per mini-batch> #mini-batch size\n\nfor epoch in range(num_epochs):\n    # In each epoch, we do a full pass over the training data:\n    train_err = 0\n    train_acc = 0\n    train_batches = 0\n    start_time = time.time()\n    for batch in 
iterate_minibatches(X_train, y_train,batch_size):\n inputs, targets = batch\n train_err_batch, train_acc_batch= train_fun(inputs, targets)\n train_err += train_err_batch\n train_acc += train_acc_batch\n train_batches += 1\n\n # And a full pass over the validation data:\n val_acc = 0\n val_batches = 0\n for batch in iterate_minibatches(X_val, y_val, batch_size):\n inputs, targets = batch\n val_acc += accuracy_fun(inputs, targets)\n val_batches += 1\n\n \n # Then we print the results for this epoch:\n print(\"Epoch {} of {} took {:.3f}s\".format(\n epoch + 1, num_epochs, time.time() - start_time))\n\n print(\" training loss (in-iteration):\\t\\t{:.6f}\".format(train_err / train_batches))\n print(\" train accuracy:\\t\\t{:.2f} %\".format(\n train_acc / train_batches * 100))\n print(\" validation accuracy:\\t\\t{:.2f} %\".format(\n val_acc / val_batches * 100))\n\ntest_acc = 0\ntest_batches = 0\nfor batch in iterate_minibatches(X_test, y_test, 500):\n inputs, targets = batch\n acc = accuracy_fun(inputs, targets)\n test_acc += acc\n test_batches += 1\nprint(\"Final results:\")\nprint(\" test accuracy:\\t\\t{:.2f} %\".format(\n test_acc / test_batches * 100))\n\n",
"The report, roughly what it should look like.\nA creative approach is welcome, but we would like to learn about the following:\n* the idea\n* a brief history of revisions\n* what the network looks like and why\n* how it is trained and why\n* whether it is regularized, and how\nNobody expects rigorous mathematical derivations from you; the option\n * \"Tried this, it worked better than that, and I didn't like the name of the third option\" - _not a dream come true, but OK_\n * \"Read such-and-such papers, ran such-and-such experiments, came to such-and-such conclusion\" - _ideal_\n * \"did it this way because some guy in some demo did it this way, but I won't tell you that and will invent some science-sounding nonsense instead\" - __not OK__\nHi, I'm ___ ___, and here is my story\nOnce upon a time, when the grass was greener and there was still more than an hour until the deadline, an idea came to me:\nWhat if I build a network that\n\na bit of text\nabout what\nand how you train it,\nand why exactly that way\n\nOr so I thought.\nOne fine day, when nothing foreshadowed trouble,\nThe wretched thing finally finished training, when suddenly\n* a bit of text\n* about what came out in the end\n* whether any changes were made and why\n* if so, what they led to\nAnd so, after __ attempts, there came into the world\n\na description of the final network\n\nwhich, after so much suffering and ____ [minutes, hours, or days - to taste] of training, finally reached an accuracy of\n\naccuracy on the training set\naccuracy on the validation set\naccuracy on the test set\n\n[an optional afterword and wishes for the author of this assignment to die in terrible agony]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jrbourbeau/cr-composition
|
unfolding/pyunfold-formatting.ipynb
|
mit
|
[
"<a id='top'> </a>\nAuthor: James Bourbeau",
"%load_ext watermark\n%watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend",
"Formatting for PyUnfold use\nTable of contents\n\nDefine analysis free parameters\nData preprocessing\nFitting random forest\nFraction correctly identified\nSpectrum\nUnfolding\nFeature importance",
"from __future__ import division, print_function\nimport os\nfrom collections import defaultdict\nimport numpy as np\nfrom scipy.sparse import block_diag\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn.apionly as sns\nimport json\nfrom scipy.interpolate import UnivariateSpline\n\nfrom sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc, classification_report\nfrom sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, KFold, StratifiedKFold\n\nimport comptools as comp\nimport comptools.analysis.plotting as plotting\ncolor_dict = comp.analysis.get_color_dict()\n\n%matplotlib inline",
"Define analysis free parameters\n[ back to top ]\nWhether or not to train on 'light' and 'heavy' composition classes, or the individual compositions",
"# config = 'IC79.2010'\nconfig = 'IC86.2012'\nnum_groups = 4\ncomp_list = comp.get_comp_list(num_groups=num_groups)\n\ncomp_list",
"Get composition classifier pipeline\nDefine energy binning for this analysis",
"energybins = comp.analysis.get_energybins(config=config)",
"Data preprocessing\n[ back to top ]\n1. Load simulation/data dataframe and apply specified quality cuts\n2. Extract desired features from dataframe\n3. Get separate testing and training datasets\n4. Feature transformation",
"log_energy_min = energybins.log_energy_min\nlog_energy_max = energybins.log_energy_max\n\ndf_sim_train, df_sim_test = comp.load_sim(config=config, log_energy_min=log_energy_min, log_energy_max=log_energy_max)\n\ndf_sim_train.reco_log_energy.min(), df_sim_train.reco_log_energy.max()\n\nlog_reco_energy_sim_test = df_sim_test['reco_log_energy']\nlog_true_energy_sim_test = df_sim_test['MC_log_energy']\n\nfeature_list, feature_labels = comp.analysis.get_training_features()\n\npipeline_str = 'BDT_comp_{}_{}-groups'.format(config, num_groups)\npipeline = comp.get_pipeline(pipeline_str)\n\npipeline = pipeline.fit(df_sim_train[feature_list], df_sim_train['comp_target_{}'.format(num_groups)])",
"Load fitted effective area",
"eff_path = os.path.join(comp.paths.comp_data_dir, config, 'efficiencies', \n                        'efficiency_fit_num_groups_{}.hdf'.format(num_groups))\ndf_eff = pd.read_hdf(eff_path)\n\ndf_eff.head()\n\nfig, ax = plt.subplots()\nfor composition in comp_list:\n    ax.errorbar(energybins.log_energy_midpoints, df_eff['eff_median_{}'.format(composition)],\n                yerr=[df_eff['eff_err_low_{}'.format(composition)],\n                      df_eff['eff_err_high_{}'.format(composition)]], \n                color=color_dict[composition], label=composition, marker='.')\nax.axvline(6.4, marker='None', ls='-.', color='k')\nax.axvline(7.8, marker='None', ls='-.', color='k')\nax.set_xlabel('$\\mathrm{\\log_{10}(E_{true}/GeV)}$')\nax.set_ylabel('Detection efficiencies')\nax.grid()\nax.legend()\nax.ticklabel_format(style='sci',axis='y')\nax.yaxis.major.formatter.set_powerlimits((0,0))\nplt.show()",
"Format for PyUnfold response matrix use",
"# efficiencies, efficiencies_err = [], []\n# for idx, row in df_efficiency.iterrows():\n# for composition in comp_list:\n# efficiencies.append(row['eff_median_{}'.format(composition)])\n# efficiencies_err.append(row['eff_err_low_{}'.format(composition)])\n# efficiencies = np.asarray(efficiencies)\n# efficiencies_err = np.asarray(efficiencies_err)\n\nefficiencies, efficiencies_err = [], []\nfor idx, row in df_eff.iterrows():\n for composition in comp_list:\n efficiencies.append(row['eff_median_{}'.format(composition)])\n efficiencies_err.append(row['eff_err_low_{}'.format(composition)])\nefficiencies = np.asarray(efficiencies)\nefficiencies_err = np.asarray(efficiencies_err)\n\nefficiencies\n\ndf_data = comp.load_data(config=config, columns=feature_list,\n log_energy_min=log_energy_min, log_energy_max=log_energy_max,\n n_jobs=20, verbose=True)\n\ndf_data.shape\n\nX_data = comp.dataframe_functions.dataframe_to_array(df_data, feature_list + ['reco_log_energy'])\nlog_energy_data = X_data[:, -1]\nX_data = X_data[:, :-1]\n\nlog_energy_data.min(), log_energy_data.max()\n\ndata_predictions = pipeline.predict(X_data)\n\n# Get composition masks\ndata_labels = np.array(comp.composition_encoding.decode_composition_groups(data_predictions, num_groups=num_groups))\n\n# Get number of identified comp in each energy bin\nunfolding_df = pd.DataFrame()\nfor composition in comp_list:\n comp_mask = data_labels == composition\n unfolding_df['counts_' + composition] = np.histogram(log_energy_data[comp_mask],\n bins=energybins.log_energy_bins)[0]\n unfolding_df['counts_' + composition + '_err'] = np.sqrt(unfolding_df['counts_' + composition])\n\nunfolding_df['counts_total'] = np.histogram(log_energy_data, bins=energybins.log_energy_bins)[0]\nunfolding_df['counts_total_err'] = np.sqrt(unfolding_df['counts_total'])\n\nunfolding_df.index.rename('log_energy_bin_idx', inplace=True)\n\nunfolding_df.head()\n\nfig, ax = plt.subplots()\nfor composition in comp_list:\n 
ax.plot(unfolding_df['counts_{}'.format(composition)], color=color_dict[composition])\nax.set_yscale(\"log\", nonposy='clip')\nax.grid()\nplt.show()",
"Spectrum\n[ back to top ]\nResponse matrix",
"test_predictions = pipeline.predict(df_sim_test[feature_list])\ntrue_comp = df_sim_test['comp_group_{}'.format(num_groups)].values\npred_comp = np.array(comp.composition_encoding.decode_composition_groups(test_predictions,\n num_groups=num_groups))\n\ntrue_comp\n\ntrue_ebin_idxs = np.digitize(log_true_energy_sim_test, energybins.log_energy_bins) - 1\nreco_ebin_idxs = np.digitize(log_reco_energy_sim_test, energybins.log_energy_bins) - 1\nenergy_bin_idx = np.unique(true_ebin_idxs)\nprint(range(-1, len(energybins.log_energy_midpoints)+1))\n\nhstack_list = []\n# for true_ebin_idx in energy_bin_idx:\nfor true_ebin_idx in range(-1, len(energybins.log_energy_midpoints)+1):\n if (true_ebin_idx == -1) or (true_ebin_idx == energybins.energy_midpoints.shape[0]):\n continue\n true_ebin_mask = true_ebin_idxs == true_ebin_idx\n \n vstack_list = []\n# for reco_ebin_idx in energy_bin_idx:\n for reco_ebin_idx in range(-1, len(energybins.log_energy_midpoints)+1):\n if (reco_ebin_idx == -1) or (reco_ebin_idx == energybins.energy_midpoints.shape[0]):\n continue\n reco_ebin_mask = reco_ebin_idxs == reco_ebin_idx\n \n combined_mask = true_ebin_mask & reco_ebin_mask\n if combined_mask.sum() == 0:\n response_mat = np.zeros((num_groups, num_groups), dtype=int)\n else:\n response_mat = confusion_matrix(true_comp[true_ebin_mask & reco_ebin_mask],\n pred_comp[true_ebin_mask & reco_ebin_mask],\n labels=comp_list)\n # Transpose response matrix to get MC comp on x-axis and reco comp on y-axis\n response_mat = response_mat.T\n vstack_list.append(response_mat)\n hstack_list.append(np.vstack(vstack_list))\n \nres = np.hstack(hstack_list)\nres_err = np.sqrt(res)\n\nres.shape\n\nplt.imshow(res, origin='lower')\n\nfrom itertools import product\nnum_groups = len(comp_list)\nnum_ebins = len(energybins.log_energy_midpoints)\n\ne_bin_iter = product(range(num_ebins), range(num_ebins))\nres2 = np.zeros((num_ebins * num_groups, num_ebins * num_groups), dtype=int)\nfor true_ebin_idx, reco_ebin_idx in 
e_bin_iter:\n# print(true_ebin_idx, reco_ebin_idx)\n true_ebin_mask = true_ebin_idxs == true_ebin_idx\n reco_ebin_mask = reco_ebin_idxs == reco_ebin_idx\n ebin_mask = true_ebin_mask & reco_ebin_mask\n if ebin_mask.sum() == 0:\n continue\n else:\n response_mat = confusion_matrix(true_comp[ebin_mask],\n pred_comp[ebin_mask],\n labels=comp_list)\n # Transpose response matrix to get MC comp on x-axis\n # and reco comp on y-axis\n# response_mat = np.flipud(response_mat)\n response_mat = response_mat.T\n\n res2[num_groups * reco_ebin_idx : num_groups * (reco_ebin_idx + 1),\n num_groups * true_ebin_idx : num_groups * (true_ebin_idx + 1)] = response_mat\n\nplt.imshow(res2, origin='lower')\n\nnp.testing.assert_array_equal(res2, res)\n\nreco_ebin_idx = 4\ntrue_ebin_idx = 4\nplt.imshow(res2[num_groups * reco_ebin_idx : num_groups * (reco_ebin_idx + 1),\n num_groups * true_ebin_idx : num_groups * (true_ebin_idx + 1)],\n origin='lower')",
"Normalize response matrix column-wise (i.e. $P(E|C)$)",
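The column-wise normalization can be sketched without the analysis helpers. In the notebook, each column is divided by its sum scaled by the detection efficiency; the simplified version below (names are mine, efficiency taken as 1) just makes every true-bin column sum to 1, giving a plain $P(E|C)$ estimate:

```python
# Normalize each column of a matrix so it sums to 1.
def normalize_columns(matrix):
    n_rows, n_cols = len(matrix), len(matrix[0])
    col_sums = [sum(matrix[r][c] for r in range(n_rows)) for c in range(n_cols)]
    return [[matrix[r][c] / col_sums[c] if col_sums[c] else 0.0
             for c in range(n_cols)]
            for r in range(n_rows)]

norm = normalize_columns([[2, 1], [2, 3]])  # both columns sum to 4
```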
"res_col_sum = res.sum(axis=0)\nres_col_sum_err = np.array([np.sqrt(np.nansum(res_err[:, i]**2)) for i in range(res_err.shape[1])])\n\nnormalizations, normalizations_err = comp.analysis.ratio_error(res_col_sum, res_col_sum_err,\n efficiencies, efficiencies_err,\n nan_to_num=True)\n\nres_normalized, res_normalized_err = comp.analysis.ratio_error(res, res_err,\n normalizations, normalizations_err,\n nan_to_num=True)\n\n\nres_normalized = np.nan_to_num(res_normalized)\nres_normalized_err = np.nan_to_num(res_normalized_err)\n\nnp.testing.assert_allclose(res_normalized.sum(axis=0), efficiencies)\n\nres\n\nfig, ax = plt.subplots()\n# h = np.flipud(block_response)\nidx = 4*num_groups\nsns.heatmap(res[idx:idx+num_groups, idx:idx+num_groups], annot=True, fmt='d', ax=ax, square=True,\n xticklabels=comp_list, yticklabels=comp_list,\n cbar_kws={'label': 'Counts'}, vmin=0, cmap='viridis')\nax.invert_yaxis()\nplt.xlabel('True composition')\nplt.ylabel('Pred composition')\nplt.title('$\\mathrm{7.6 < \\log_{10}(E_{true}/GeV) < 7.7}$' + '\\n$\\mathrm{7.6 < \\log_{10}(E_{reco}/GeV) < 7.7}$')\n# res_mat_outfile = os.path.join(comp.paths.figures_dir, 'unfolding', 'response-matrix-single-energy-bin.png')\n# comp.check_output_dir(res_mat_outfile)\n# plt.savefig(res_mat_outfile)\nplt.show()\n\nplt.imshow(res, origin='lower', cmap='viridis')\nplt.plot([0, res.shape[0]-1], [0, res.shape[1]-1], marker='None', ls=':', color='C1')\n\n# ax = sns.heatmap(res, square=True, xticklabels=2, yticklabels=2, \n# ax = sns.heatmap(res, square=True, mask=res==0, xticklabels=2, yticklabels=2, \n# cbar_kws={'label': 'Counts'})\n\nax.plot([0, res.shape[0]-1], [0, res.shape[1]-1], marker='None', ls=':', color='C1')\n\n# ax.invert_yaxis()\n\nfor i in np.arange(0, res.shape[0], 2):\n plt.axvline(i-0.5, marker='None', ls='-', lw=0.5, color='gray')\n# for i in np.arange(0, res.shape[0], 2):\n# plt.axvline(i+0.5, marker='None', ls=':', color='gray')\nfor i in np.arange(0, res.shape[0], 2):\n plt.axhline(i-0.5, 
marker='None', ls='-', lw=0.5, color='gray')\n# for i in np.arange(0, res.shape[0], 2):\n# plt.axhline(i+0.5, marker='None', ls=':', color='gray')\n \nplt.xlabel('True bin')\nplt.ylabel('Reconstructed bin')\n# plt.grid()\n\n# plt.xticks(np.arange(0.5, res.shape[0], 2),\n# ['{}'.format(i+1) for i in range(res.shape[0])], \n# rotation='vertical')\n# plt.yticks(np.arange(0.5, res.shape[0], 2),\n# ['{}'.format(i+1) for i in range(res.shape[0])])\n\nplt.colorbar(label='Counts')\n\nres_mat_outfile = os.path.join(comp.paths.figures_dir, 'unfolding', 'response-statistics.png')\ncomp.check_output_dir(res_mat_outfile)\n# plt.savefig(res_mat_outfile)\nplt.show()\n\nplt.imshow(np.sqrt(res), origin='lower', cmap='viridis')\nplt.plot([0, res.shape[0]-1], [0, res.shape[1]-1], marker='None', ls=':', color='C1')\n\nfor i in np.arange(0, res.shape[0], 2):\n plt.axvline(i-0.5, marker='None', ls='-', lw=0.5, color='gray')\nfor i in np.arange(0, res.shape[0], 2):\n plt.axhline(i-0.5, marker='None', ls='-', lw=0.5, color='gray')\n \nplt.xlabel('True bin')\nplt.ylabel('Reconstructed bin')\n\nplt.colorbar(label='Count errors', format='%d')\n\nres_mat_outfile = os.path.join(comp.paths.figures_dir, 'unfolding', 'response-statistics-err.png')\ncomp.check_output_dir(res_mat_outfile)\n# plt.savefig(res_mat_outfile)\nplt.show()\n\nplt.imshow(res_normalized, origin='lower', cmap='viridis')\nplt.plot([0, res.shape[0]-1], [0, res.shape[1]-1], marker='None', ls=':', color='C1')\n\n# for i in np.arange(0, res.shape[0], 2):\n# plt.axvline(i-0.5, marker='None', ls='-', lw=0.5, color='gray')\n# for i in np.arange(0, res.shape[0], 2):\n# plt.axhline(i-0.5, marker='None', ls='-', lw=0.5, color='gray')\n \nplt.xlabel('True bin')\nplt.ylabel('Reconstructed bin')\nplt.title('Response matrix')\n\n# plt.colorbar(label='A.U.')\nplt.colorbar(label='$\\mathrm{P(E_i|C_{\\mu})}$')\n\nres_mat_outfile = os.path.join(comp.paths.figures_dir, 'unfolding', config, 'response_matrix',\n 
'response-matrix_{}-groups.png'.format(num_groups))\ncomp.check_output_dir(res_mat_outfile)\nplt.savefig(res_mat_outfile)\nplt.show()\n\nplt.imshow(res_normalized_err, origin='lower', cmap='viridis')\nplt.plot([0, res.shape[0]-1], [0, res.shape[1]-1], marker='None', ls=':', color='C1')\n\n# for i in np.arange(0, res.shape[0], 2):\n# plt.axvline(i-0.5, marker='None', ls='-', lw=0.5, color='gray')\n# for i in np.arange(0, res.shape[0], 2):\n# plt.axhline(i-0.5, marker='None', ls='-', lw=0.5, color='gray')\n \nplt.xlabel('True bin')\nplt.ylabel('Reconstructed bin')\nplt.title('Response matrix error')\n\nplt.colorbar(label='$\\mathrm{\\delta P(E_i|C_{\\mu})}$')\n\nres_mat_outfile = os.path.join(comp.paths.figures_dir, 'unfolding', 'response-matrix-err.png')\ncomp.check_output_dir(res_mat_outfile)\n# plt.savefig(res_mat_outfile)\nplt.show()\n\nres_mat_outfile = os.path.join(comp.paths.comp_data_dir, config, 'unfolding', \n 'response_{}-groups.txt'.format(num_groups))\nres_mat_err_outfile = os.path.join(comp.paths.comp_data_dir, config, 'unfolding', \n 'response_err_{}-groups.txt'.format(num_groups))\n\ncomp.check_output_dir(res_mat_outfile)\ncomp.check_output_dir(res_mat_err_outfile)\n\nnp.savetxt(res_mat_outfile, res_normalized)\nnp.savetxt(res_mat_err_outfile, res_normalized_err)",
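The column normalization performed above with comp.analysis.ratio_error can be sketched with plain NumPy. This simplified version uses hypothetical counts and omits the efficiency correction: it column-normalizes a toy response matrix so each true-bin column becomes a probability distribution, and propagates uncorrelated Poisson errors through the ratio.

```python
import numpy as np

# Toy 3x3 response matrix of raw counts: rows are reconstructed bins,
# columns are true bins (hypothetical numbers, not the notebook's data).
res = np.array([[80., 10.,  2.],
                [15., 70., 12.],
                [ 5., 20., 86.]])
res_err = np.sqrt(res)  # Poisson uncertainties on the raw counts

# Per-true-bin normalizations and their errors (sum over reconstructed bins).
col_sum = res.sum(axis=0)
col_sum_err = np.sqrt((res_err ** 2).sum(axis=0))

# Column-normalize to get P(E_i | C_mu); first-order uncorrelated ratio errors.
res_normalized = res / col_sum
res_normalized_err = res_normalized * np.sqrt((res_err / res) ** 2 +
                                              (col_sum_err / col_sum) ** 2)

# Sanity check: every true-bin column is now a probability distribution.
assert np.allclose(res_normalized.sum(axis=0), 1.0)
```

In the notebook the columns sum to the detection efficiencies rather than to one, because the normalizations are divided by `efficiencies` first; the sketch drops that step for clarity.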
"Priors array",
"from icecube.weighting.weighting import from_simprod, PDGCode, ParticleType\nfrom icecube.weighting.fluxes import GaisserH3a, GaisserH4a, Hoerandel5, Hoerandel_IT, CompiledFlux\n\ndf_sim = comp.load_sim(config=config, test_size=0, log_energy_min=6.0, log_energy_max=8.3)\ndf_sim.head()\n\np = PDGCode().values\npdg_codes = np.array([2212, 1000020040, 1000080160, 1000260560])\nparticle_names = [p[pdg_code].name for pdg_code in pdg_codes]\n\nparticle_names\n\ngroup_names = np.array(comp.composition_encoding.composition_group_labels(particle_names, num_groups=num_groups))\ngroup_names\n\ncomp_to_pdg_list = {composition: pdg_codes[group_names == composition] for composition in comp_list}\n\ncomp_to_pdg_list\n\n# Replace O16Nucleus with N14Nucleus + Al27Nucleus\nfor composition, pdg_list in comp_to_pdg_list.iteritems():\n if 1000080160 in pdg_list:\n pdg_list = pdg_list[pdg_list != 1000080160]\n comp_to_pdg_list[composition] = np.append(pdg_list, [1000070140, 1000130270])\n else:\n continue\n\ncomp_to_pdg_list\n\npriors_list = ['H3a', 'H4a', 'Polygonato']\n\n# priors_list = ['h3a', 'h4a', 'antih3a', 'Hoerandel5', 'antiHoerandel5']\n# # priors_list = ['h3a', 'h4a', 'antih3a', 'Hoerandel5', 'antiHoerandel5', 'uniform', 'alllight', 'allheavy']\n# model_ptypes = {}\n# model_ptypes['h3a'] = {'light': [2212, 1000020040], 'heavy': [1000070140, 1000130270, 1000260560]}\n# model_ptypes['h4a'] = {'light': [2212, 1000020040], 'heavy': [1000070140, 1000130270, 1000260560]}\n# model_ptypes['Hoerandel5'] = {'light': [2212, 1000020040], 'heavy': [1000070140, 1000130270, 1000260560]}\n\nfig, ax = plt.subplots()\nfor flux, name, marker in zip([GaisserH3a(), GaisserH4a(), Hoerandel5()],\n priors_list,\n '.^*o'):\n for composition in comp_list:\n comp_flux = []\n for energy_mid in energybins.energy_midpoints:\n flux_energy_mid = flux(energy_mid, comp_to_pdg_list[composition]).sum()\n comp_flux.append(flux_energy_mid)\n # Normalize flux in each energy bin to a probability\n comp_flux = 
np.asarray(comp_flux)\n prior_key = '{}_flux_{}'.format(name, composition)\n unfolding_df[prior_key] = comp_flux\n \n # Plot result\n ax.plot(energybins.log_energy_midpoints, energybins.energy_midpoints**2.7*comp_flux,\n color=color_dict[composition], alpha=0.75, marker=marker, ls=':',\n label='{} ({})'.format(name, composition))\nax.set_yscale(\"log\", nonposy='clip')\nax.set_xlabel('$\\mathrm{\\log_{10}(E/GeV)}$')\nax.set_ylabel('$\\mathrm{ E^{2.7} \\ J(E) \\ [GeV^{1.7} m^{-2} sr^{-1} s^{-1}]}$')\nax.grid()\nax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False)\npriors_outfile = os.path.join(comp.paths.figures_dir, 'unfolding',\n 'priors_flux_{}-groups.png'.format(num_groups))\ncomp.check_output_dir(priors_outfile)\nplt.savefig(priors_outfile)\nplt.show()\n\nunfolding_df.head()\n\n# unfolding_df_outfile = os.path.join(comp.paths.comp_data_dir, config, 'unfolding',\n# 'unfolding_{}-groups.hdf'.format(num_groups))\n# comp.check_output_dir(unfolding_df_outfile)\n# unfolding_df.to_hdf(unfolding_df_outfile, 'dataframe', format='table')",
"Formatting for PyUnfold use",
"formatted_df = pd.DataFrame()\n\ncounts_formatted = []\npriors_formatted = defaultdict(list)\nfor index, row in unfolding_df.iterrows():\n for composition in comp_list:\n counts_formatted.append(row['counts_{}'.format(composition)])\n for priors_name in priors_list:\n priors_formatted[priors_name].append(row['{}_flux_{}'.format(priors_name, composition)])\n \nformatted_df['counts'] = counts_formatted\nformatted_df['counts_err'] = np.sqrt(counts_formatted)\n\nformatted_df['efficiencies'] = efficiencies\nformatted_df['efficiencies_err'] = efficiencies_err\n\n\nfor key, value in priors_formatted.iteritems():\n formatted_df[key+'_flux'] = value\n formatted_df[key+'_prior'] = formatted_df[key+'_flux'] / formatted_df[key+'_flux'].sum()\n\nformatted_df.index.rename('log_energy_bin_idx', inplace=True)\n\nformatted_df.head()\n\nprior_sums = formatted_df[[col for col in formatted_df.columns if 'prior' in col]].sum()\nnp.testing.assert_allclose(prior_sums, np.ones_like(prior_sums))",
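The formatting above makes two implicit choices worth making explicit: count errors are taken as Poisson (sqrt-N), and each model flux is turned into a prior by normalizing it to a probability distribution. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical per-bin observed counts with Poisson (sqrt-N) uncertainties,
# mirroring the counts / counts_err columns built above.
counts = np.array([120., 80., 40., 10.])
counts_err = np.sqrt(counts)

# A model flux becomes a prior by normalizing to a probability distribution,
# as done for each '<model>_prior' column.
flux = np.array([5.0, 3.0, 1.5, 0.5])
prior = flux / flux.sum()

assert np.isclose(prior.sum(), 1.0)          # priors must sum to one
assert np.allclose(counts_err ** 2, counts)  # Poisson variance equals the mean
```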
"Save formatted DataFrame to disk",
"formatted_df_outfile = os.path.join(comp.paths.comp_data_dir, config, 'unfolding', \n 'unfolding-df_{}-groups.hdf'.format(num_groups))\ncomp.check_output_dir(formatted_df_outfile)\nformatted_df.to_hdf(formatted_df_outfile, 'dataframe', format='table')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maxis42/ML-DA-Coursera-Yandex-MIPT
|
3 Unsupervised learning/Homework/5 text themes postnauka BigARTM/PostnaukaPeerReview.ipynb
|
mit
|
[
"Topic Model of PostNauka\nPeer Review (optional)\nIn this assignment we will apply topic modeling to a collection of text transcripts of video lectures downloaded from the PostNauka website. We will visualize the model and build a prototype of a topic navigator over the collection. The collection contains 1728 documents; the vocabulary size is 38467 words. The words are lemmatized, i.e. reduced to their base form, with the mystem program, and the collection is stored in the vowpal wabbit format. In each line, the part before the first pipe holds information about the document (a link to the lecture page); the document description follows the first pipe. Two modalities are used - a text modality (\"text\") and an author modality (\"author\"); each document has a single author.\nYou will need the BigARTM library for this assignment. The demonstration shows an example with library version 0.7.4, while the website offers version 0.8.0 for download. The new version changes how dictionaries are handled: they are moved into a separate class (see the example in the Release Notes). The model is built and its parameters are extracted in the same way as shown in the demonstration. You may use either the previous release or the new one, at your discretion.\nYou can look up the specifications of all functions on the Python API page.",
"import artm\n\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nsns.set_style(\"whitegrid\", {'axes.grid' : False})\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.externals import joblib\n\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"",
"Reading the data\nCreate an artm.BatchVectorizer object that points to the directory with the data batches. So that the library can convert the text file into batches, create an empty directory and pass its name in the target_folder parameter. For small collections like ours the batch size does not matter; you may specify any value.",
"# Your code\nbatch_vectorizer = artm.BatchVectorizer(data_path='lectures.txt', data_format='vowpal_wabbit',\n target_folder='lectures_batches', batch_size=250)",
"Model initialization\nCreate an artm.Model object with 30 topics, the topic names listed below, and unit weights for both modalities. The number of topics is deliberately small so that the topics are easier to work with. A larger number of topics could be built on this collection; they would then be more narrowly specialized.",
"T = 30 # number of topics\ntopic_names=[\"sbj\"+str(i) for i in range(T-1)]+[\"bcg\"]\n# Your code\nmodel = artm.ARTM(num_topics=T, topic_names=topic_names, num_processors=2, class_ids={'text':1, 'author':1},\n reuse_theta=True, cache_theta=True)",
"We will build 29 subject topics and one background topic. \nAssemble the dictionary with the gather_dictionary method and initialize the model with random_seed=1. Be sure to give the dictionary a name of your own; it will be needed when adding regularizers.",
"# Your code\nnp.random.seed(1)\ndictionary = artm.Dictionary('dict')\ndictionary.gather(batch_vectorizer.data_path)\nmodel.initialize(dictionary=dictionary)",
"Adding scores\nCreate two artm.TopTokensScore quality measures - one per modality - with 15 tokens each. Choose the score names yourself.",
"# Your code\nmodel.scores.add(artm.TopTokensScore(name='top_tokens_score_mod1', class_id='text', num_tokens=15))\nmodel.scores.add(artm.TopTokensScore(name='top_tokens_score_mod2', class_id='author', num_tokens=15))",
"Building the model\nWe will build the model in two stages: first we add a smoothing regularizer for the background topic and fit the model parameters; then we add a sparsing regularizer for the subject topics and run a few more iterations. This way the subject topics come out cleanest of background words. The smoothing and the sparsing regularizer are specified by the same class, artm.SmoothSparsePhiRegularizer: with a positive coefficient tau the regularizer smooths, with a negative one it sparsifies.\nIf you want to understand in more detail how regularization of a topic model works in BigARTM, you can read the paper, section 4.\nAdd a smoothing regularizer with coefficient tau = 1e5, passing your dictionary name in dictionary, the text modality in class_ids and the \"bcg\" topic in topic_names.",
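The effect of the sign of tau can be illustrated with a simplified sketch of the regularized M-step. This mirrors the underlying ARTM math (phi ∝ max(n + tau, 0)) with made-up numbers; it is not the BigARTM API.

```python
import numpy as np

def m_step(n_wt, tau):
    # Simplified ARTM-style M-step for one topic: phi_wt ∝ max(n_wt + tau, 0).
    p = np.maximum(n_wt + tau, 0.0)
    return p / p.sum()

counts = np.array([50., 30., 15., 5.])  # hypothetical word counts in one topic
smooth = m_step(counts, tau=20.0)       # tau > 0 pulls the topic toward uniform
sparse = m_step(counts, tau=-10.0)      # tau < 0 zeroes out low-count words

assert sparse[-1] == 0.0                             # the rarest word is pruned
assert smooth.std() < (counts / counts.sum()).std()  # flatter than the raw ratios
```

This is why the background topic gets a large positive tau (it should absorb common words smoothly) while the subject topics get a negative tau (they should shed background words).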
"# Your code\nmodel.regularizers.add(artm.SmoothSparsePhiRegularizer(tau=1e5, class_ids='text', dictionary='dict', topic_names='bcg'))",
"Run 30 passes over the collection (num_collection_passes), with the number of inner iterations set to 1. Use the model's fit_offline method.",
"# Your code\nmodel.num_document_passes = 1\nmodel.fit_offline(batch_vectorizer=batch_vectorizer, num_collection_passes=30)",
"Add a sparsing regularizer with coefficient tau=-1e5, passing your dictionary name, the text modality in class_ids and all the \"sbjX\" topics in topic_names.",
"# Your code\n# list.remove() mutates in place and returns None, so build the filtered list instead\ntopic_names_cleared = [name for name in topic_names if name != 'bcg']\nmodel.regularizers.add(artm.SmoothSparsePhiRegularizer(tau=-1e5, class_ids='text', dictionary='dict',\n topic_names=topic_names_cleared))",
"Run 15 more passes over the collection.",
"# Your code\nmodel.fit_offline(batch_vectorizer=batch_vectorizer, num_collection_passes=15)",
"Topic interpretation\nUsing the scores you created, print the top words and the top authors of each topic. It is most convenient to print each topic's top words on a new line, starting the line with the topic name, and likewise for the authors.",
"# Your code\ntokens = model.score_tracker['top_tokens_score_mod1'].last_tokens\nfor topic_name in model.topic_names:\n print topic_name + ': ',\n for word in tokens[topic_name]: \n print word,\n print\n\n# Your code\nauthors = model.score_tracker['top_tokens_score_mod2'].last_tokens\nfor topic_name in model.topic_names:\n print topic_name + ': ',\n for author in authors[topic_name]: \n print author,\n print",
"The last topic, \"bcg\", should contain common everyday words.\nAn important step in working with a topic model, when visualization or a topic navigator is the goal, is naming the topics. You can tell what a topic is about from its list of top words. For example, the topic\nparticle interaction physics quark symmetry elementary neutrino standard matter proton boson charge mass accelerator weak\n\ncould be named \"Elementary particle physics\". \nGive names to the 29 subject topics. If you do not know what to call a topic, name it after the first noun occurring in it, although with that approach the navigator will be less informative. Collect the topic names into a list of 29 strings and store it in the variable sbj_topic_labels. The variable topic_labels will hold the names of all topics, including the background one.",
"sbj_topic_labels = [] # put the topic names into this list\nfor topic_name in model.topic_names[:29]:\n sbj_topic_labels.append(tokens[topic_name][0])\n\ntopic_labels = sbj_topic_labels + [u\"Background topic\"]",
"Topic analysis\nNext we will work with the distributions of topics over documents (the $\\Theta$ matrix) and of authors over topics (one of the two $\\Phi$ matrices, the one for the author modality). \nCreate variables holding these two matrices using the model's get_phi and get_theta methods. Name the variables theta and phi_a. Print the shapes of both matrices to see which axes hold the topics.",
"model.theta_columns_naming = \"title\" # name Theta columns by their document titles (links) rather than internal ids \n# Your code\ntheta = model.get_theta()\nprint('Theta shape: %s' % str(theta.shape))\nphi_a = model.get_phi(class_ids='author')\nprint('Phi_a shape: %s' % str(phi_a.shape))",
"Let us visualize a fragment of the $\\Theta$ matrix - the first 100 documents (this is the simplest way to visually assess how topics are distributed across documents). Use the seaborn.heatmap method to display the theta fragment as an image. Recommendation: create a pyplot figure of size (20, 10).",
"# Your code\ntheta.iloc[:,:100]\n\nplt.figure(figsize=(20,10))\nplt.title('Theta matrix for the first 100 documents')\nsns.heatmap(theta.iloc[:,:100], cmap='YlGnBu', xticklabels=False)\nplt.show();",
"You should see that the background topic has a high probability in almost every document, which is natural. In addition, there is one more topic that occurs in documents more often than the others. It apparently contains many words about science in general, and every document (video) in our collection is related to science. You may (optionally) name this topic \"Science\".\nApart from these two topics, the background and the general-science one, each document is characterized by a small number of other topics.\nLet us estimate $p(t)$ - the share of each topic in the whole collection. By the law of total probability these quantities should be computed as\n$p(t) = \\sum_d p(t|d) p(d)$. According to the probabilistic model, $p(d)$ is proportional to the length of document d. We take a simpler route: assume all documents are equiprobable. Then $p(t)$ can be estimated by summing $p(t|d)$ over all documents and dividing the resulting vector by its sum. \nCreate a DataFrame variable with T rows, indexed by the topic names, and 1 column holding the $p(t)$ estimates. Print the DataFrame.",
"# Your code\nprob_theme_data = [np.sum(theta.iloc[i]) for i in range(theta.shape[0])]\nprob_theme_data_normed = prob_theme_data / np.sum(prob_theme_data)\nprob_theme = pd.DataFrame(data=prob_theme_data_normed, index=topic_labels, columns=['prob'])\nprob_theme\n\nprob_theme_max = prob_theme\nprob_theme_min = prob_theme\n\nprint('Max 5 probabilities:')\nfor i in range(5):\n max_value = prob_theme_max.max()[0]\n print(prob_theme_max[prob_theme_max.values == max_value].index[0])\n prob_theme_max = prob_theme_max[prob_theme_max.values != max_value]\n\nprint('\\nMin 3 probabilities:')\nfor i in range(3):\n min_value = prob_theme_min.min()[0]\n print(prob_theme_min[prob_theme_min.values == min_value].index[0])\n prob_theme_min = prob_theme_min[prob_theme_min.values != min_value]",
"Find the 5 most widespread and the 3 least covered topics in the collection (the largest and the smallest $p(t)$, respectively), not counting the background and the general-science topic. Give the names you assigned to these topics.\nVisualize the $\\Phi$ matrix of the author modality as an image. Recommendation: set yticklabels=False in heatmap.",
"# Your code\nplt.figure(figsize=(20,10))\nplt.title('Phi matrix (author modality)')\nsns.heatmap(phi_a.iloc[:100], cmap='YlGnBu', yticklabels=False)\nplt.show();",
"Each topic has a fairly small number of associated authors - the matrix is quite sparse. Moreover, some topics have a dominant author $a$ with a high probability $p(a|t)$ - the author who has recorded the most lectures on the topic. \nWe will say that author $a$ is significant in a topic if $p(a|t) > 0.01$. For each author, count the number of topics in which they are significant. Find the record-holders who are significant in (and therefore have lectured on) >= 3 topics.",
"phi_a\n\nfor i in range(phi_a.shape[0]):\n num_significant_topics = 0\n for val in phi_a.iloc[i]:\n if val > 0.01:\n num_significant_topics += 1\n if num_significant_topics >= 3:\n print(i),\n print(phi_a.index[i])\n\nprint(phi_a.iloc[184])",
"Most authors are significant in 1 topic, which is natural.\nBuilding a topic map of the authors\nIn essence, the $\\Phi$ matrix of the author modality encodes topical clusters of authors. For any author we can assemble their topical circle - the authors versed in the same topic as the given one. Interested listeners can try this procedure for PostNauka lecturers they know (for instance, PostNauka has lectures by K. V. Vorontsov - the lecturer of the current module :)\nLet us build a map of author similarity by research topics. To do so, we apply the dimensionality reduction method MDS to the authors' topical profiles.\nTo obtain an author's topical profile, the distribution $p(t|a)$, use Bayes' rule: \n$p(t|a) = \\frac {p(a|t) p(t)} {\\sum_{t'} p(a|t') p(t')}$. All the quantities needed for this are available to you in the variables phi and pt. \nPass the matrix of author topical profiles, written row by row, to the MDS method with n_components=2. Use the cosine metric (it is well suited for distances between vectors whose components have a fixed sum).",
"from sklearn.manifold import MDS\nfrom sklearn.metrics import pairwise_distances\n\nprob_theme_author = np.empty(phi_a.shape)\nfor i in range(prob_theme_author.shape[0]):\n for j in range(prob_theme_author.shape[1]):\n prob_theme_author[i,j] = phi_a.iloc[i,j] * prob_theme.iloc[j,:] / np.sum(phi_a.iloc[i,:] * prob_theme.prob.values)\n\n# Your code\nsimilarities = pairwise_distances(prob_theme_author, metric='cosine')\nmds = MDS(n_components=2, dissimilarity='precomputed', random_state=42)\npos = mds.fit_transform(similarities)",
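The element-wise double loop that computes the author profiles can be written as one vectorized Bayes-rule step; a toy sketch with hypothetical numbers (rows are authors, columns are topics):

```python
import numpy as np

phi_a = np.array([[0.6, 0.1],
                  [0.3, 0.2],
                  [0.1, 0.7]])   # p(a|t): each column sums to 1
p_t = np.array([0.4, 0.6])       # p(t)

# Bayes' rule: p(t|a) = p(a|t) p(t) / sum_t' p(a|t') p(t')
p_ta = phi_a * p_t                            # joint, up to normalization
p_ta = p_ta / p_ta.sum(axis=1, keepdims=True)  # normalize each author's row

# Every author's topical profile is now a probability distribution.
assert np.allclose(p_ta.sum(axis=1), 1.0)
```

Broadcasting multiplies each column of `phi_a` by the matching `p_t` entry, so the loop body above collapses to two array operations.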
"Visualize the resulting two-dimensional embeddings with the scatter function.",
"# Your code\nplt.figure(figsize=(10,5))\nplt.scatter(pos[:,0], pos[:,1])\nplt.show();",
"You should find that some groups of authors form clusters, which can be regarded as topical groups of authors.\nLet us color the points as follows: for each author pick their most probable topic ($\\max_t p(t|a)$) and assign a color to each topic. In addition, add the authors' names to the map; this can be done in a loop over all points with the plt.annotate function, passing the point's label as the first argument and its coordinates in the xy argument. It is recommended to make the figure large, in which case the point markers should be enlarged too (s=100 in plt.scatter). Display the author map and save it to a pdf file with the plt.savefig function. \nThe author labels will overlap. It would be very good if you can find a way to avoid this.",
"import matplotlib.cm as cm\ncolors = cm.rainbow(np.linspace(0, 1, T)) # colors for the topics\n# Your code\nmax_theme_prob_for_colors = [np.argmax(author) for author in prob_theme_author]\nplt.figure(figsize=(15,10))\nplt.axis('off')\nplt.scatter(pos[:,0], pos[:,1], s=100, c=colors[max_theme_prob_for_colors])\nfor i, author in enumerate(phi_a.index):\n plt.annotate(author, pos[i])\nplt.savefig('authors_map.pdf', dpi=200, format='pdf')\nplt.show();",
"Building a simple topic navigator for PostNauka\nFor each topic, our topic navigator will show its word list together with the documents relevant to the topic. \nWe will need the distributions $p(d|t)$. By Bayes' rule $p(d|t) = \\frac{p(t|d)p(d)}{\\sum_{d'}p(t|d')p(d')}$, but since we treat documents as equiprobable, it suffices to divide each row of $\\Theta$ by its sum to estimate the distribution. \nSort the matrix $p(d|t)$ in decreasing order of $p(d|t)$ within each topic (that is, row-wise). We will need the indices of the most probable documents in each topic, so use the argsort function.",
"# Your code\nprob_doc_theme = theta.values / np.array([np.sum(theme) for theme in theta.values])[:, np.newaxis]\nprob_doc_theme_sorted_indices = prob_doc_theme.argsort(axis=1)[:,::-1]\nprob_doc_theme_sorted_indices",
"We will build the navigator right inside the jupyter notebook: this is possible because a printed link automatically turns into a hyperlink.",
"print \"http://yandex.ru\" # this renders as a clickable link",
"Moreover, by importing the IPython.core.display module you can use html markup in the output. For example:",
"from IPython.core.display import display, HTML\ndisplay(HTML(u\"<h1>Heading</h1>\")) # also <h2>, <h3>\ndisplay(HTML(u\"<ul><li>Item 1</li><li>Item 2</li></ul>\"))\ndisplay(HTML(u'<font color=\"green\">Green!</font>'))\ndisplay(HTML(u'<a href=\"http://yandex.ru\">Another way to print a link</a>'))",
"In a loop over the topics, print each topic's title, on the next line its top-10 words, and then, as a list, links to the 10 documents most relevant to the topic (by $p(d|t)$). Use html markup. Creativity is welcome :)",
"# Your code\nfor i, theme in enumerate(topic_labels):\n display(HTML(\"<h3>%s</h3>\" % theme))\n for j in range(10):\n print(tokens[model.topic_names[i]][j]),\n print('')\n for k in range(10):\n print(theta.columns[prob_doc_theme_sorted_indices[i,k]])",
"Conclusion\nIn this Peer Review we got acquainted with the basic capabilities of the BigARTM library and with methods for visualizing topic models. Topic model visualization is a broad and actively developing area of research; we covered only the simplest techniques. Those interested can try applying Serendip, developed at the University of Wisconsin-Madison, to the model they built. This library characterizes topics as fully as possible and is written for the python language. \nHaving finished the assignment, you can pick the topic that interests you most in the navigator and watch the video lectures :) PostNauka has plenty of interesting material."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mayank-johri/LearnSeleniumUsingPython
|
Section 1 - Core Python/Chapter 02 - Basics/2.3. Maths Operators.ipynb
|
gpl-3.0
|
[
"Maths Operators\n\nPython supports most common maths operations. The table below lists the maths operators supported.\n| Syntax | Math | Operation Name |\n|-------------- |------------------------------------------- |------------------------------------------------------------------ |\n| a+b | a+b | addition |\n| a-b | a-b | subtraction |\n| a\\*b | a×b | multiplication |\n| a/b | a÷b | division (see note below) |\n| a//b | ⌊a/b⌋ | floor division (e.g. 5//2 == 2) |\n| a%b | a mod b | modulo |\n| -a | −a | negation |\n| a < b | a < b | less-than |\n| a > b | a > b | greater-than |\n| a <= b | a ≤ b | less-than-or-equal |\n| a >= b | a ≥ b | greater-than-or-equal |\n| abs(a) | \\|a\\| | absolute value |\n| a\\*\\*b | a^b | exponent |\n| math.sqrt(a) | √a | square root |\n\nNote:\nIn order to use the math.sqrt() function, you must explicitly load the math module by adding import math at the top of your file, where all the other module imports are defined.",
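The operators in the table can be checked interactively; a few quick examples (Python 3 semantics):

```python
import math

assert 5 // 2 == 2            # floor division rounds toward negative infinity
assert 5 % 2 == 1             # modulo: the remainder of the division
assert abs(-7) == 7           # absolute value
assert 2 ** 10 == 1024        # exponent
assert math.sqrt(16) == 4.0   # square root (requires `import math`)
assert 7 / 2 == 3.5           # true division always returns a float in Python 3
```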
"# Sample Code\n# Say Cheese\nx = 34 - 23\ny = \"!!! Say\" \nz = 3.45\nprint(id(x), id(y), id(z))\n\nprint(x, y, z)\nx = x + 1\ny = y + \" Cheese !!!\"\nprint(\"x = \" + str(x))\nprint(y, id(y))\nprint(\"Is x > z\", x > z ,\"and y is\", y, \"and x =\", x)\nprint(\"x - z =\", x - z)\n\nprint(\"~^\" * 30)\nprint(30 * \"~_\")\nprint(id(x), id(y), id(z))\n\nprint((30 * \"~_\") * 2)\n\nprint(30 * \"~_\" * 2)\n\n# print(30 * \"~_\" * \"#\")  # TypeError: a string cannot be multiplied by a string\n\nprint(30 * \"~_\" + \"#\")\n\nt = x > z\nprint(\"x = \" + str(x) + \" and z = \" + str(z) + \" : \" + str(t))\nprint(\"x =\", x, \"and z =\", z, \":\", t)\nprint(x, z)\nprint(\"x % z =\", x % z )\nprint(\"x <= z\", x <= z)\n\nmass_kg = int(input(\"What is your mass in kilograms?\" ))\nmass_stone = mass_kg * 1.1 / 7\nprint(\"You weigh\", mass_stone, \"stone.\")",
"Order of Operations\n\nPython uses the standard order of operations as taught in Algebra and Geometry classes. That is, mathematical expressions are evaluated in the following order (memorized by many as PEMDAS or BODMAS {Brackets, Orders or pOwers, Division, Multiplication, Addition, Subtraction}).\n(Note that operations which share a table row are performed from left to right. That is, a division to the left of a multiplication, with no parentheses between them, is performed before the multiplication simply because it is to the left.)\n| Name | Syntax | Description | PEMDAS Mnemonic |\n|---------------------------- |---------- |---------------------------------------------------------------------------------------------------------------------------------------- |----------------- |\n| Parentheses | ( ... ) | Before operating on anything else, Python must evaluate all parentheticals, starting at the innermost level. (This includes functions.) | Please |\n| Exponents | ** | As an exponent is simply shorthand for repeated multiplication, it is evaluated before multiplication and division. | Excuse |\n| Multiplication and Division | * / // % | Again, multiplication is rapid addition and must, therefore, happen first. | My Dear |\n| Addition and Subtraction | + - | Addition and subtraction are performed last, from left to right. | Aunt Sally |\nFormatting output\n\nround()",
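The precedence rules above can be verified directly, including the left-to-right rule for operators that share a row:

```python
assert 2 + 3 * 4 == 14     # multiplication before addition
assert (2 + 3) * 4 == 20   # parentheses first
assert 8 / 4 * 2 == 4.0    # same precedence: evaluated left to right
assert 2 ** 3 ** 2 == 512  # exponentiation associates right: 2 ** (3 ** 2)
```

Note the last line: exponentiation is the one arithmetic operator that groups right to left, so `2 ** 3 ** 2` is 2 to the 9th power, not 8 squared.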
"print (round(3.14159265, 2))",
"Reference, Recommendation, Remarks & Thanks\n\nhttps://en.wikibooks.org/wiki/Python_Programming/Operators"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mtasende/Machine-Learning-Nanodegree-Capstone
|
notebooks/prod/n08_simple_q_learner_1000_states_4_actions_full_training.ipynb
|
mit
|
[
"In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.",
"# Basic imports\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime as dt\nimport scipy.optimize as spo\nimport sys\nfrom time import time\nfrom sklearn.metrics import r2_score, median_absolute_error\nfrom multiprocessing import Pool\n\n%matplotlib inline\n\n%pylab inline\npylab.rcParams['figure.figsize'] = (20.0, 10.0)\n\n%load_ext autoreload\n%autoreload 2\n\nsys.path.append('../../')\n\nimport recommender.simulator as sim\nfrom utils.analysis import value_eval\nfrom recommender.agent import Agent\nfrom functools import partial\n\nNUM_THREADS = 1\nLOOKBACK = -1 # 252*4 + 28\nSTARTING_DAYS_AHEAD = 252\nPOSSIBLE_FRACTIONS = [0.0, 0.25, 0.5, 1.0]\n\n# Get the data\nSYMBOL = 'SPY'\ntotal_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')\ndata_train_df = total_data_train_df[SYMBOL].unstack()\ntotal_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')\ndata_test_df = total_data_test_df[SYMBOL].unstack()\nif LOOKBACK == -1:\n total_data_in_df = total_data_train_df\n data_in_df = data_train_df\nelse:\n data_in_df = data_train_df.iloc[-LOOKBACK:]\n total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]\n\n# Create many agents\nindex = np.arange(NUM_THREADS).tolist()\nenv, num_states, num_actions = sim.initialize_env(total_data_in_df, \n SYMBOL, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n n_levels=10)\nagents = [Agent(num_states=num_states, \n num_actions=num_actions, \n random_actions_rate=0.98, \n random_actions_decrease=0.9999,\n dyna_iterations=0,\n name='Agent_{}'.format(i)) for i in index]\n\ndef show_results(results_list, data_in_df, graph=False):\n for values in results_list:\n total_value = values.sum(axis=1)\n print('Sharpe ratio: {}\\nCum. 
Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))\n print('-'*100)\n initial_date = total_value.index[0]\n compare_results = data_in_df.loc[initial_date:, 'Close'].copy()\n compare_results.name = SYMBOL\n compare_results_df = pd.DataFrame(compare_results)\n compare_results_df['portfolio'] = total_value\n std_comp_df = compare_results_df / compare_results_df.iloc[0]\n if graph:\n plt.figure()\n std_comp_df.plot()",
"Let's show the symbols data, to see how good the recommender has to be.",
"print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))\n\n# Simulate (with new envs, each time)\nn_epochs = 7\n\nfor i in range(n_epochs):\n tic = time()\n env.reset(STARTING_DAYS_AHEAD)\n results_list = sim.simulate_period(total_data_in_df, \n SYMBOL,\n agents[0],\n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\n toc = time()\n print('Epoch: {}'.format(i))\n print('Elapsed time: {} seconds.'.format((toc-tic)))\n print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\n show_results([results_list], data_in_df)\n\nenv.reset(STARTING_DAYS_AHEAD)\nresults_list = sim.simulate_period(total_data_in_df, \n SYMBOL, agents[0], \n learn=False, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\nshow_results([results_list], data_in_df, graph=True)",
"Let's run the trained agent, with the test set\nFirst a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).",
"TEST_DAYS_AHEAD = 20\n\nenv.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=False,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)",
"And now a \"realistic\" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).",
"env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=True,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)",
"What are the metrics for \"holding the position\"?",
"print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))\n\nimport pickle\nwith open('../../data/simple_q_learner_1000_states_4_actions_full_training.pkl', 'wb') as best_agent:\n pickle.dump(agents[0], best_agent)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
karlstroetmann/Artificial-Intelligence
|
Python/2 Constraint Solver/N-Queens-Problem-CSP.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open('../style.css') as f:\n css = f.read()\nHTML(css)",
"The N-Queens-Problem as a CSP\nThe function create_csp(n) takes a natural number n as argument and returns\na constraint satisfaction problem that encodes the \nn-queens puzzle.\nA constraint satisfaction problem $\\mathcal{P}$ is a triple of the form\n$$ \\mathcal{P} = \\langle \\mathtt{Vars}, \\mathtt{Values}, \\mathtt{Constraints} \\rangle $$\nwhere \n- Vars is a set of strings which serve as variables.\nThe idea is that $V_i$ specifies the column of the queen that is placed in row $i$.\n\nValues is a set of values that can be assigned \n to the variables in $\\mathtt{Vars}$.\n\nIn the 8-queens-problem we will have $\\texttt{Values} = {1,\\cdots,8}$.\n- Constraints is a set of formulas from first order logic.\n Each of these formulas is called a constraint of $\\mathcal{P}$.\n There are two different types of constraints.\n * We have constraints that express that no two queens that are positioned in different rows share the same\n column. To capture these constraints, we define\n $$\\texttt{DifferentCol} := \\bigl{ \\texttt{V}_i \\not= \\texttt{V}_j \\bigm| i \\in {1,\\cdots,8} \\wedge j \\in {1,\\cdots,8} \\wedge j < i \\bigr}.$$\n Here the condition $j < i$ ensures that, for example, while we have the constraint\n $\\texttt{V}_2 \\not= \\texttt{V}_1$ we do not also have the constraint $\\texttt{V}_1 \\not= \\texttt{V}_2$, as the latter \n constraint would be redundant if the former constraint had already been established.\n * We have constraints that express that no two queens positioned in different rows share the same \n diagonal. The queens in row $i$ and row $j$ share the same diagonal iff the equation\n $$ |i - j| = |V_i - V_j| $$\n holds. The expression $|i-j|$ is the absolute value of the difference of the rows of the queens in row\n $i$ and row $j$, while the expression $|V_i - V_j|$ is the absolute value of the difference of the\n columns of these queens. 
To capture these constraints, we define\n $$ \\texttt{DifferentDiag} := \\bigl{ |i - j| \\not= |\\texttt{V}_i - \\texttt{V}_j| \\bigm| i \\in {1,\\cdots,8} \\wedge j \\in {1,\\cdots,8} \\wedge j < i \\bigr}. $$",
"def create_csp(n):\n S = range(1, n+1) \n Variables = { f'V{i}' for i in S }\n Values = set(S)\n DifferentCols = { f'V{i} != V{j}' for i in S\n for j in S\n if i < j \n }\n DifferentDiags = { f'abs(V{j} - V{i}) != {j - i}' for i in S\n for j in S \n if i < j \n }\n return Variables, Values, DifferentCols | DifferentDiags",
"The function main() creates a CSP representing the 4-queens puzzle and prints the CSP.\nIt is included for testing purposes.",
"def main():\n Vars, Values, Constraints = create_csp(4)\n print('Variables: ', Vars)\n print('Values: ', Values)\n print('Constraints:')\n for c in Constraints:\n print(' ', c)\n\nmain()",
"Displaying the Solution",
"import chess",
"The function show_solution(Solution) takes a dictionary that contains a variable assignment that represents a solution to the 8-queens puzzle. It displays this Solution on a chess board.",
"def show_solution(Solution):\n board = chess.Board(None) # create empty chess board\n queen = chess.Piece(chess.QUEEN, True)\n for row in range(1, 8+1):\n col = Solution['V'+str(row)]\n field_number = (row - 1) * 8 + col - 1\n board.set_piece_at(field_number, queen)\n display(board)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
darioizzo/optimal_landing
|
examples/1 - Generate optimal trajectories - Direct Method.ipynb
|
lgpl-3.0
|
[
"Data generation\n@cesans",
"import matplotlib as plt\n%matplotlib inline\n\nimport sys\nsys.path.append('..')\nimport numpy as np\nimport deep_control as dc\n",
"dc.data.get_trajectory can be used to get an optimal trajectory for some initial conditions",
"conditions = {'x0': 200, 'z0': 1000, 'vx0':-30, 'vz0': 0, 'theta0': 0, 'm0': 10000}\n\ncol_names = ['t', 'm', 'x', 'vx', 'z' , 'vz',' theta', 'u1', 'u2']\n\ntraj = dc.data.get_trajectory('../SpaceAMPL/lander/hs/main_rw_mass.mod', conditions, col_names=col_names)",
"The trajectory can be visualized (xy) with dc.vis.vis_trajectory",
"dc.vis.vis_trajectory(traj)",
"Or all the variables and control with dc.vis.vis_control",
"dc.vis.vis_control(traj,2)",
"Several random trajectories can be generated (in parallell) using a direct method with dc.data.generate_data",
"params = {'x0': (-1000,1000), 'z0': (500,2000), 'vx0': (-100,100), 'vz0': (-30,10), 'theta0': (-np.pi/20,np.pi/20), 'm0': (8000,12000)}\n\ndc.data.generate_data('../SpaceAMPL/lander/hs/main_thrusters.mod', params, 100,10)",
"All trajectories can then be loaded with dc.data.load_trajectories",
"col_names = ['t', 'm', 'x', 'vx', 'z', 'vz', 'theta', 'vtheta', 'u1', 'uR', 'uL']\n\ntrajs = dc.data.load_trajectories('data/main_thrusters/', col_names = col_names)\n\ntrajs[0].head(5)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NlGG/Projects
|
不動産/research02.ipynb
|
mit
|
[
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\n\n# 統計用ツール\nimport statsmodels.api as sm\nimport statsmodels.tsa.api as tsa\nfrom patsy import dmatrices\n\n# 自作の空間統計用ツール\nfrom spatialstat import *\n\n#描画\nimport matplotlib.pyplot as plt\nfrom pandas.tools.plotting import autocorrelation_plot\nimport seaborn as sns\nsns.set(font=['IPAmincho'])\n\n#深層学習\nimport chainer\nfrom chainer import cuda, Function, gradient_check, Variable, optimizers, serializers, utils\nfrom chainer import Link, Chain, ChainList\nimport chainer.functions as F\nimport chainer.links as L\n\nimport pyper\n\ndata = pd.read_csv(\"TokyoSingle.csv\")\ndata = data.dropna()\nCITY_NAME = data['CITY_CODE'].copy()\n\nCITY_NAME[CITY_NAME == 13101] = '01千代田区'\nCITY_NAME[CITY_NAME == 13102] = \"02中央区\"\nCITY_NAME[CITY_NAME == 13103] = \"03港区\"\nCITY_NAME[CITY_NAME == 13104] = \"04新宿区\"\nCITY_NAME[CITY_NAME == 13105] = \"05文京区\"\nCITY_NAME[CITY_NAME == 13106] = \"06台東区\"\nCITY_NAME[CITY_NAME == 13107] = \"07墨田区\"\nCITY_NAME[CITY_NAME == 13108] = \"08江東区\"\nCITY_NAME[CITY_NAME == 13109] = \"09品川区\"\nCITY_NAME[CITY_NAME == 13110] = \"10目黒区\"\nCITY_NAME[CITY_NAME == 13111] = \"11大田区\"\nCITY_NAME[CITY_NAME == 13112] = \"12世田谷区\"\nCITY_NAME[CITY_NAME == 13113] = \"13渋谷区\"\nCITY_NAME[CITY_NAME == 13114] = \"14中野区\"\nCITY_NAME[CITY_NAME == 13115] = \"15杉並区\"\nCITY_NAME[CITY_NAME == 13116] = \"16豊島区\"\nCITY_NAME[CITY_NAME == 13117] = \"17北区\"\nCITY_NAME[CITY_NAME == 13118] = \"18荒川区\"\nCITY_NAME[CITY_NAME == 13119] = \"19板橋区\"\nCITY_NAME[CITY_NAME == 13120] = \"20練馬区\"\nCITY_NAME[CITY_NAME == 13121] = \"21足立区\"\nCITY_NAME[CITY_NAME == 13122] = \"22葛飾区\"\nCITY_NAME[CITY_NAME == 13123] = \"23江戸川区\"\n\n#Make Japanese Block name\nBLOCK = data[\"CITY_CODE\"].copy()\nBLOCK[BLOCK == 13101] = \"01都心・城南\"\nBLOCK[BLOCK == 13102] = \"01都心・城南\"\nBLOCK[BLOCK == 13103] = \"01都心・城南\"\nBLOCK[BLOCK == 13104] = \"01都心・城南\"\nBLOCK[BLOCK == 13109] = \"01都心・城南\"\nBLOCK[BLOCK == 13110] = \"01都心・城南\"\nBLOCK[BLOCK == 
13111] = \"01都心・城南\"\nBLOCK[BLOCK == 13112] = \"01都心・城南\"\nBLOCK[BLOCK == 13113] = \"01都心・城南\"\nBLOCK[BLOCK == 13114] = \"02城西・城北\"\nBLOCK[BLOCK == 13115] = \"02城西・城北\"\nBLOCK[BLOCK == 13105] = \"02城西・城北\"\nBLOCK[BLOCK == 13106] = \"02城西・城北\"\nBLOCK[BLOCK == 13116] = \"02城西・城北\"\nBLOCK[BLOCK == 13117] = \"02城西・城北\"\nBLOCK[BLOCK == 13119] = \"02城西・城北\"\nBLOCK[BLOCK == 13120] = \"02城西・城北\"\nBLOCK[BLOCK == 13107] = \"03城東\"\nBLOCK[BLOCK == 13108] = \"03城東\"\nBLOCK[BLOCK == 13118] = \"03城東\"\nBLOCK[BLOCK == 13121] = \"03城東\"\nBLOCK[BLOCK == 13122] = \"03城東\"\nBLOCK[BLOCK == 13123] = \"03城東\"\n\nnames = list(data.columns) + ['CITY_NAME', 'BLOCK']\ndata = pd.concat((data, CITY_NAME, BLOCK), axis = 1)\ndata.columns = names",
"変数名とデータの内容メモ\nCENSUS: 市区町村コード(9桁)\nP: 成約価格\nS: 専有面積\nL: 土地面積\nR: 部屋数\nRW: 前面道路幅員\nCY: 建築年\nA: 建築後年数(成約時)\nTS: 最寄駅までの距離\nTT: 東京駅までの時間\nACC: ターミナル駅までの時間\nWOOD: 木造ダミー\nSOUTH: 南向きダミー\nRSD: 住居系地域ダミー\nCMD: 商業系地域ダミー\nIDD: 工業系地域ダミー\nFAR: 建ぺい率\nFLR: 容積率\nTDQ: 成約時点(四半期)\nX: 緯度\nY: 経度\nCITY_CODE: 市区町村コード(5桁)\nCITY_NAME: 市区町村名\nBLOCK: 地域ブロック名\n\n市区町村別の件数を集計",
"print(data['CITY_NAME'].value_counts()) ",
"成約時点別×市区町村別の件数を集計",
"print(data.pivot_table(index=['TDQ'], columns=['CITY_NAME'])) ",
"成約時点別×地域ブロック別の件数を集計",
"print(data.pivot_table(index=['TDQ'], columns=['BLOCK'])) ",
"Histogram\n価格(真数)",
"data['P'].hist() ",
"価格(自然対数)",
"(np.log(data['P'])).hist() ",
"建築後年数",
"data['A'].hist() \n\nplt.figure(figsize=(20,8))\n\nplt.subplot(4, 2, 1)\ndata['P'].hist()\nplt.title(u\"成約価格\")\n\nplt.subplot(4, 2, 2)\ndata['S'].hist()\nplt.title(\"専有面積\")\n\nplt.subplot(4, 2, 3)\ndata['L'].hist()\nplt.title(\"土地面積\")\n\nplt.subplot(4, 2, 4)\ndata['R'].hist()\nplt.title(\"部屋数\")\n\nplt.subplot(4, 2, 5)\ndata['A'].hist()\nplt.title(\"建築後年数\")\n\nplt.subplot(4, 2, 6)\ndata['RW'].hist()\nplt.title(\"前面道路幅員\")\n\nplt.subplot(4, 2, 7)\ndata['TS'].hist()\nplt.title(\"最寄駅までの距離\")\n\nplt.subplot(4, 2, 8)\ndata['TT'].hist()\nplt.title(u\"東京駅までの時間\")",
"Plot\n件数の推移",
"plt.figure(figsize=(20,8))\ndata['TDQ'].value_counts().plot(kind='bar') \n\nplt.figure(figsize=(20,8))\ndata['CITY_NAME'].value_counts().plot(kind='bar') #市区町村別の件数",
"Main Analysis\nOLS part",
"vars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']\neq = fml_build(vars)\n\ny, X = dmatrices(eq, data=data, return_type='dataframe')\n\nCITY_NAME = pd.get_dummies(data['CITY_NAME'])\nTDQ = pd.get_dummies(data['TDQ'])\n\nX = pd.concat((X, CITY_NAME, TDQ), axis=1)\n\ndatas = pd.concat((y, X), axis=1)\n\ndatas = datas[datas['12世田谷区'] == 1][0:5000]\n\ndatas.head()\n\nvars = ['S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR']\n#vars += vars + list(TDQ.columns)\n\nclass CAR(Chain):\n def __init__(self, unit1, unit2, unit3, col_num):\n self.unit1 = unit1\n self.unit2 = unit2\n self.unit3 = unit3\n super(CAR, self).__init__(\n l1 = L.Linear(col_num, unit1),\n l2 = L.Linear(self.unit1, self.unit1),\n l3 = L.Linear(self.unit1, self.unit2),\n l4 = L.Linear(self.unit2, self.unit3),\n l5 = L.Linear(self.unit3, self.unit3),\n l6 = L.Linear(self.unit3, 1),\n )\n \n def __call__(self, x, y):\n fv = self.fwd(x, y)\n loss = F.mean_squared_error(fv, y)\n return loss\n \n def fwd(self, x, y):\n h1 = F.sigmoid(self.l1(x))\n h2 = F.sigmoid(self.l2(h1))\n h3 = F.sigmoid(self.l3(h2))\n h4 = F.sigmoid(self.l4(h3))\n h5 = F.sigmoid(self.l5(h4))\n h6 = self.l6(h5)\n return h6\n\nclass OLS_DLmodel(object):\n def __init__(self, data, vars, bs=200, n=1000):\n self.vars = vars\n eq = fml_build(vars)\n y, X = dmatrices(eq, data=datas, return_type='dataframe')\n self.y_in = y[:-n]\n self.X_in = X[:-n]\n self.y_ex = y[-n:]\n self.X_ex = X[-n:]\n \n self.logy_in = np.log(self.y_in)\n self.logy_ex = np.log(self.y_ex)\n \n self.bs = bs\n\n def OLS(self):\n X_in = self.X_in\n X_in = X_in.drop(['X', 'Y'], axis=1)\n model = sm.OLS(self.logy_in, X_in, intercept=False)\n self.reg = model.fit()\n print(self.reg.summary())\n df = (pd.DataFrame(self.reg.params)).T\n df['X'] = 0\n df['Y'] = 0\n self.reg.params = pd.Series((df.T)[0])\n \n def directDL(self, ite=100, bs=200, add=False):\n logy_in = np.array(self.logy_in, dtype='float32')\n 
X_in = np.array(self.X_in, dtype='float32')\n\n y = Variable(logy_in)\n x = Variable(X_in)\n\n num, col_num = X_in.shape\n \n if add is False:\n self.model1 = CAR(15, 15, 5, col_num)\n \n optimizer = optimizers.SGD()\n optimizer.setup(self.model1)\n\n for j in range(ite):\n sffindx = np.random.permutation(num)\n for i in range(0, num, bs):\n x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])\n y = Variable(logy_in[sffindx[i:(i+bs) if (i+bs) < num else num]])\n self.model1.zerograds()\n loss = self.model1(x, y)\n loss.backward()\n optimizer.update()\n if j % 1000 == 0:\n loss_val = loss.data\n print('epoch:', j)\n print('train mean loss={}'.format(loss_val))\n print(' - - - - - - - - - ')\n \n y_ex = np.array(self.y_ex, dtype='float32').reshape(len(self.y_ex))\n X_ex = np.array(self.X_ex, dtype='float32')\n X_ex = Variable(X_ex)\n\n logy_pred = self.model1.fwd(X_ex, X_ex).data\n y_pred = np.exp(logy_pred)\n error = y_ex - y_pred.reshape(len(y_pred),)\n plt.hist(error[:])\n \n def DL(self, ite=100, bs=200, add=False):\n y_in = np.array(self.y_in, dtype='float32').reshape(len(self.y_in))\n \n resid = y_in - np.exp(self.reg.predict())\n resid = np.array(resid, dtype='float32').reshape(len(resid),1)\n \n X_in = np.array(self.X_in, dtype='float32')\n\n y = Variable(resid)\n x = Variable(X_in)\n\n num, col_num = X_in.shape\n \n if add is False:\n self.model1 = CAR(10, 10, 3, col_num)\n \n optimizer = optimizers.Adam()\n optimizer.setup(self.model1)\n\n for j in range(ite):\n sffindx = np.random.permutation(num)\n for i in range(0, num, bs):\n x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])\n y = Variable(resid[sffindx[i:(i+bs) if (i+bs) < num else num]])\n self.model1.zerograds()\n loss = self.model1(x, y)\n loss.backward()\n optimizer.update()\n if j % 1000 == 0:\n loss_val = loss.data\n print('epoch:', j)\n print('train mean loss={}'.format(loss_val))\n print(' - - - - - - - - - ')\n \n def predict(self):\n y_ex = np.array(self.y_ex, 
dtype='float32').reshape(len(self.y_ex))\n \n X_ex = np.array(self.X_ex, dtype='float32')\n X_ex = Variable(X_ex)\n resid_pred = self.model1.fwd(X_ex, X_ex).data \n print(resid_pred[:10])\n \n self.logy_pred = np.matrix(self.X_ex)*np.matrix(self.reg.params).T\n self.error1 = np.array(y_ex - np.exp(self.logy_pred.reshape(len(self.logy_pred),)))[0]\n \n self.pred = np.exp(self.logy_pred) + resid_pred\n self.error2 = np.array(y_ex - self.pred.reshape(len(self.pred),))[0]\n \n def compare(self):\n plt.hist(self.error1)\n plt.hist(self.error2)\n\nvars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']\n#vars += vars + list(TDQ.columns)\n\nmodel = OLS_DLmodel(datas, vars)\n\nmodel.OLS()\n\nmodel.DL(ite=10, bs=200)\n\nmodel.predict()\n\nmodel.DL(ite=20000, bs=200, add=True)\n\nmodel.DL(ite=10000, bs=200, add=True)\n\nmodel.predict()",
"青がOLSの誤差、緑がOLSと深層学習を組み合わせた誤差。",
"model.compare()\n\nprint(np.mean(model.error1))\nprint(np.mean(model.error2))\n\nprint(np.mean(np.abs(model.error1)))\nprint(np.mean(np.abs(model.error2)))\n\nprint(max(np.abs(model.error1)))\nprint(max(np.abs(model.error2)))\n\nprint(np.var(model.error1))\nprint(np.var(model.error2))\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nerrors = [model.error1, model.error2]\n\nbp = ax.boxplot(errors)\n\nplt.grid()\nplt.ylim([-5000,5000])\n\nplt.title('分布の箱ひげ図')\n\nplt.show()\n\nX = model.X_ex['X'].values\nY = model.X_ex['Y'].values\n\ne = model.error2\n\nimport numpy\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\n\nfig=plt.figure()\nax=Axes3D(fig)\n \nax.scatter3D(X, Y, e)\nplt.show()\n\nt\n\nplt.hist(Xs)\n\nimport numpy as np\nfrom scipy.stats import gaussian_kde\nimport matplotlib.pyplot as plt\n\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nXs = np.linspace(min(X),max(X),10)\nYs = np.linspace(min(Y),max(Y),10)\n\nerror = model.error1\nXgrid, Ygrid = np.meshgrid(Xs, Ys)\nZ = LL(X, Y, Xs, Ys, error)\n\nfig = plt.figure()\nax = Axes3D(fig)\nax.plot_wireframe(Xgrid,Ygrid,Z) #<---ここでplot\n\nplt.show()\n\nfig = plt.figure()\nax = Axes3D(fig)\nax.set_zlim(-100, 500)\nax.plot_surface(Xgrid,Ygrid,Z) #<---ここでplot\n\nplt.show()\n\nh = 10\n(0.9375*(1-((X-1)/h)**2)**2)*(0.9375*(1-((Y-2)/h)**2)**2)\n\ndef LL(X, Y, Xs, Ys, error): \n n = len(X)\n h = 0.1\n error = model.error2\n mean_of_error = np.zeros((len(Xs), len(Ys)))\n for i in range(len(Xs)):\n for j in range(len(Ys)):\n u1 = ((X-Xs[i])/h)**2 \n u2 = ((Y-Ys[j])/h)**2\n k = (0.9375*(1-((X-Xs[i])/h)**2)**2)*(0.9375*(1-((Y-Ys[j])/h)**2)**2)\n K = np.diag(k)\n indep = np.matrix(np.array([np.ones(n), X - Xs[i], Y-Ys[j]]).T)\n dep = np.matrix(np.array([error]).T)\n gls_model = sm.GLS(dep, indep, sigma=K)\n gls_results = gls_model.fit()\n mean_of_error[i, j] = gls_results.params[0]\n return mean_of_error\n\nh = 200\nu1 = ((X-30)/h)**2 
\n\nu1\n\nu1[u1 < 0] = 0\n\nfor x in range(lXs[:2]):\n print(x)\n\nmean_of_error\n\nplt.plot(gaussian_kde(Y, 0.1)(Ys))\n\n\n\nN = 5\n\nmeans = np.random.randn(N,2) * 10 + np.array([100, 200])\nstdev = np.random.randn(N,2) * 10 + 30\ncount = np.int64(np.int64(np.random.randn(N,2) * 10000 + 50000))\n\na = [\n np.hstack([\n np.random.randn(count[i,j]) * stdev[i,j] + means[i,j]\n for j in range(2)])\n for i in range(N)]\n\nfor x in Xs:\n for y in Ys:\n \n\ndef loclinearc(points,x,y,h):\n n = len(points[,1])\n const = matrix(1, nrow=length(x), ncol=1)\n bhat = matrix(0, nrow=3, ncol=n)\n b1 = matrix(0, n, n)\n predict = matrix(0, n, 1)\n\n for (j in 1:n) {\n\n for (i in 1:n) {\n a <- -.5*sign( abs( (points[i, 1]*const - x[,1])/h ) -1 ) + .5\t\n #get the right data points, (K(x) ~=0)\n b <- -.5*sign( abs( (points[j, 2]*const - x[,2])/h ) -1 ) + .5\n\n x1andy <- nonzmat(cbind((x[,1]*a*b), (y*a*b)))\n x2andy <- nonzmat(cbind((x[,2]*a*b), (y*a*b)))\n ztheta1 <- x1andy[,1]\n ztheta2 <- x2andy[,1]\n yuse <- x1andy[,2]\n q1 <- (ztheta1 - points[i,1]);\n q2 <- (ztheta2 - points[j,2]);\n nt1 <- ( (ztheta1- points[i,1])/h )\n nt2 <- ( (ztheta2- points[j,2])/h )\n #q2 = ((ztheta - points(i,1)).^2)/2;\n weights <- diag(c((15/16)%*%( 1-(nt1^2))^2*((15/16)%*%( 1-(nt2^2))^2)))\n #Biweight Kernel\n tempp3 <- cbind(matrix(1, nrow=length(ztheta1), ncol=1), q1, q2)\n bhat[,i] <- solve(t(tempp3)%*%weights%*%tempp3)%*%t(tempp3)%*%weights%*%yuse\n }\n b1[,j] <- t(bhat[1,])\n }\n return(b1)\n}\n\n\nnonzmat(x):\n #This function computes nonzeros of a MATRIX when certain ROWS of the \n #matrix are zero. 
This function returns a matrix with the \n #zero rows deleted\n\n m, k = x.shape\n xtemp = matrix(np.zeros(m, k))\n\n for (i in 1:m) {\n xtemp[i,] <- ifelse(x[i,] == matrix(0, nrow=1, ncol=k), 99999*matrix(1, nrow=1, ncol=k), x[i,])\n }\n\n xtemp <- xtemp - 99999\n\n if (length(which(xtemp !=0,arr.ind = T)) == 0) {\n a <- matrix(-99999, nrow=1, ncol=k)\n } else {\n a <- xtemp[which(xtemp !=0,arr.ind = T)]\n }\n a <- a + 99999\n n1 <- length(a)\n rowlen <- n1/k\n collen <- k\n\n out = matrix(a, nrow=rowlen, ncol=collen)\n return(out)\n }\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.tri as mtri\n\n\n\n#============\n# First plot\n#============\n# Plot the surface. The triangles in parameter space determine which x, y, z\n# points are connected by an edge.\nax = fig.add_subplot(1, 2, 1, projection='3d')\nax.plot_trisurf(X, Y, e)\nax.set_zlim(-1, 1)\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gfeiden/Notebook
|
Projects/mlt_calib/float_Y_float_alpha.ipynb
|
mit
|
[
"Float $Y_i$ & Float $\\alpha_{MLT}$\nFirst, we load the appropriate libraries and data file. MCMC trials where all quantities are permitted to float happened during Run 05. Note that the metallicity uncertainty for those measurements where no observational uncertainties were provided, are assumed to be ±0.2 dex.",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.genfromtxt('data/run05_kde_props_tmp3.txt')\ndata = np.array([x for x in data if x[30] > -0.5]) # remove stars that our outside of the model grid",
"As before, we can confirm that distances were accurately recovered and we can check on how well metallicities were recovered compared to the measured value.",
"fig, ax = plt.subplots(1, 2, figsize=(10, 5))\n\n# distance recovery diagnostic\ndistance_limits = (0.0, 20.0)\nax[0].set_xlabel('Observed Distance (pc)', fontsize=22.)\nax[0].set_ylabel('Inferred Distance (pc)', fontsize=22.)\nax[0].set_xlim(distance_limits)\nax[0].set_ylim(distance_limits)\nax[0].plot(distance_limits, distance_limits, '--', lw=3, color=\"#444444\")\nax[0].plot(1.0/data[:, 20], data[:, 4], 'o', markersize=9.0, color='#4682B4')\n\n# metallicity recovery diagnostic\nquoted_err = np.array([x for x in data if x[31] > 0.0])\n\nFeH_limits = (-0.5, 0.5)\nax[1].set_xlabel('Observed [Fe/H] (dex)', fontsize=22.)\nax[1].set_ylabel('Inferred [M/H] (dex)', fontsize=22.)\nax[1].set_xlim(FeH_limits)\nax[1].set_ylim(FeH_limits)\nax[1].plot(FeH_limits, FeH_limits, '--', lw=3, color=\"#444444\")\nax[1].plot(data[:, 30], data[:, 1], 'o', markersize=9.0, color='#4682B4')\nax[1].plot(quoted_err[:, 30], quoted_err[:, 1], 'o', markersize=9.0, color='#800000')\n\n# auto-adjust subplot spacing\nfig.tight_layout()",
"Distance are well recovered, as anticipated. Metallicities are scattered about the zero-point with perhaps a tendency for predicting systematically higher metallicities between $-0.40$ and $-0.20$ dex. Typical scatter appears to be around $\\pm0.2$ dex, consistent with the assumed metallicities uncertainty. Points in red are those that have a measured metallicity uncertainty. In general, those metallicities are better recovered, perhaps owing to tighter constraints.\nDefine relative errors and errors normalized to observational unceratinties.",
"# relative errors\ndTheta = (data[:,18] - data[:,8])/data[:,18]\ndTeff = (data[:,24] - 10**data[:,6])/data[:,24]\ndFbol = (data[:,22] - 10**(data[:,7]+ 8.0))/data[:,22]\n\n# uncertainty normalized errors\ndTheta_sigma = (data[:,18] - data[:,8])/data[:,19]\ndTeff_sigma = (data[:,24] - 10**data[:,6])/data[:,25]\ndFbol_sigma = (data[:,22] - 10**(data[:,7] + 8.0))/data[:,23]",
"Recovery of observed fundamental properties.",
"from matplotlib.patches import Ellipse\nfig, ax = plt.subplots(1, 1, figsize=(8, 8))\n\n# set axis labels\nax.set_xlabel('$\\\\Delta F_{\\\\rm bol} / \\\\sigma$', fontsize=22.)\nax.set_ylabel('$\\\\Delta \\\\Theta / \\\\sigma$', fontsize=22.)\nax.set_xlim(-3.0, 3.0)\nax.set_ylim(-4.0, 4.0)\n\n# plot 68% and 99% confidence intervals\nells = [Ellipse(xy=(0.0, 0.0), width=2.*x, height=2.*x, angle=0.0, lw=3, fill=False, \n linestyle='dashed', edgecolor='#333333') for x in [1.0, 3.0]]\nfor e in ells:\n ax.add_artist(e)\n\n# plot recovery diagnostics (uncertainty normalized errors)\nax.plot([-3.0, 3.0], [ 0.0, 0.0], '--', lw=2, color=\"#444444\")\nax.plot([ 0.0, 0.0], [-4.0, 4.0], '--', lw=2, color=\"#444444\")\nax.plot(dFbol_sigma, dTheta_sigma, 'o', markersize=9.0, color='#4682B4')",
"There is considerably better recovery of stellar fundamental properties once variations in helium abundance and the convective mixing length parameter are permitted. All points, with the exception of one, lie within the 99% confidence interval. We can explore whether systematic errors still remain in the sample, although from the above figure we can gather that such systematic effects may be small.\nFirst as a function of bolometric flux and angular diameter,",
"fig, ax = plt.subplots(2, 2, figsize=(10, 8), sharex=False, sharey=True)\n\nax[1, 0].set_xlabel('Bolometric Flux (erg s$^{-1}$ cm$^{-2}$)', fontsize=20.)\nax[1, 1].set_xlabel('Angular Diameter (mas)', fontsize=20.)\nax[1, 0].set_ylabel('$\\\\Delta \\\\Theta / \\\\sigma$', fontsize=20.)\nax[0, 0].set_ylabel('$\\\\Delta F_{\\\\rm bol} / \\\\sigma$', fontsize=20.)\n\n# vs bolometric flux\nax[0, 0].semilogx([0.1, 1.0e3], [0.0, 0.0], '--', lw=2, color='#444444')\nax[1, 0].semilogx([0.1, 1.0e3], [0.0, 0.0], '--', lw=2, color='#444444')\nax[0, 0].semilogx(data[:, 22], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')\nax[1, 0].semilogx(data[:, 22], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')\n\n# vs angular diameter\nax[0, 1].plot([0.0, 7.0], [0.0, 0.0], '--', lw=2, color='#444444')\nax[1, 1].plot([0.0, 7.0], [0.0, 0.0], '--', lw=2, color='#444444')\nax[0, 1].plot(data[:, 18], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')\nax[1, 1].plot(data[:, 18], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')\n\nfig.tight_layout()",
"There is a rise of the average error as one moves toward lower bolometric fluxes and smaller angular diameters. These points are effectively all M dwarfs. This illustrates, quite well, that problems for the lowest mass stars are most resiliant to variations in stellar model input parameters, thus preserving the trends present in the data where $\\alpha_{MLT}$ and $Y_i$ are fixed. The growth of the trend is therefore attributable to the model's steadily increasing resistance to change resulting from modifications to input parameters.\nAs a function of stellar mass and effective temperature,",
"fig, ax = plt.subplots(2, 2, figsize=(10, 8), sharex=False, sharey=True)\n\nax[1, 0].set_xlabel('Mass (M$_{\\\\odot}$)', fontsize=20.)\nax[1, 1].set_xlabel('Effective Temperature (K)', fontsize=20.)\nax[1, 0].set_ylabel('$\\\\Delta \\\\Theta / \\\\sigma$', fontsize=20.)\nax[0, 0].set_ylabel('$\\\\Delta F_{\\\\rm bol} / \\\\sigma$', fontsize=20.)\n\n# vs mass\nax[0, 0].plot([0.0, 1.0], [0.0, 0.0], '--', lw=2, color='#444444')\nax[1, 0].plot([0.0, 1.0], [0.0, 0.0], '--', lw=2, color='#444444')\nax[0, 0].plot(data[:, 0], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')\nax[1, 0].plot(data[:, 0], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')\n\n# vs effective temperature\nax[0, 1].plot([2500., 6000.], [0.0, 0.0], '--', lw=2, color='#444444')\nax[1, 1].plot([2500., 6000.], [0.0, 0.0], '--', lw=2, color='#444444')\nax[0, 1].plot(data[:,24], dFbol_sigma, 'o', markersize=9.0, color='#4682B4')\nax[1, 1].plot(data[:,24], dTheta_sigma, 'o', markersize=9.0, color='#4682B4')\n\nfig.tight_layout()",
"These figures nicely illustrate the aforementioned phenomenon that models grow increasingly resiliant to variations in input parameters as stellar mass (and effective temperature) decreases. Note that it should be possible to quantify the significance of any potential rising trend toward lower masses or temperatures with statistical tests, if one so desires.\nFor the moment, we can look how the tunable parameters vary. Starting with helium abundance,",
"fig, ax = plt.subplots(2, 2, figsize=(12, 12), sharey=True)\n\nax[0, 0].set_xlabel('Mass (M$_{\\\\odot}$)', fontsize=20.)\nax[0, 1].set_xlabel('Effective Temperature (K)', fontsize=20.)\nax[1, 0].set_xlabel('Heavy Element Mass Fraction, $Z_i$', fontsize=20.)\nax[1, 1].set_xlabel('Mixing Length Parameter', fontsize=20.)\nax[1, 0].set_ylabel('Helium Mass Fraction, $Y_i$', fontsize=20.)\nax[0, 0].set_ylabel('Helium Mass Fraction, $Y_i$', fontsize=20.)\n\nfor x in ax:\n for y in x:\n y.tick_params(which='major', axis='both', length=10., labelsize=16.)\n\nZ_init = (1.0 - data[:, 2])/(10.0**(-1.0*(data[:, 1] + np.log10(0.026579))) + 1.0) \n\n# Helium abundance variation\nax[0, 0].plot(data[:, 0], data[:, 2], 'o', markersize=9.0, color='#4682B4')\nax[1, 0].plot(Z_init, data[:, 2], 'o', markersize=9.0, color='#4682B4')\nax[0, 1].plot(data[:,24], data[:, 2], 'o', markersize=9.0, color='#4682B4')\nax[1, 1].plot(data[:, 5], data[:, 2], 'o', markersize=9.0, color='#4682B4')\n\nfig.tight_layout()",
"Should probably provide some analysis.\nNow we can look at the mixing length parameter,",
"fig, ax = plt.subplots(2, 2, figsize=(12, 12), sharey=True)\n\nax[0, 0].set_xlabel('Mass (M$_{\\\\odot}$)', fontsize=20.)\nax[0, 1].set_xlabel('Effective Temperature (K)', fontsize=20.)\nax[1, 0].set_xlabel('Metallicity, [M/H] (dex)', fontsize=20.)\nax[1, 1].set_xlabel('$\\\\log (g)$', fontsize=20.)\nax[1, 0].set_ylabel('Mixing Length Parameter', fontsize=20.)\nax[0, 0].set_ylabel('Mixing Length Parameter', fontsize=20.)\n\nfor x in ax:\n for y in x:\n y.tick_params(which='major', axis='both', length=10., labelsize=16.)\n\nLog_g = np.log10(6.67e-8*data[:,0]*1.989e33/(data[:,26]*6.956e10)**2)\n\n# mixing length parameter variation\nax[0, 0].plot(data[:, 0], data[:, 5], 'o', markersize=9.0, color='#4682B4')\nax[1, 0].plot(data[:, 1], data[:, 5], 'o', markersize=9.0, color='#4682B4')\nax[0, 1].plot(10**data[:, 6], data[:, 5], 'o', markersize=9.0, color='#4682B4')\nax[1, 1].plot(Log_g, data[:, 5], 'o', markersize=9.0, color='#4682B4')\n\n# points of reference (Sun, HD 189733)\nax[0, 0].plot([1.0, 0.80], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')\nax[0, 1].plot([5778., 4883.], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')\nax[1, 0].plot([0.01876, 0.01614], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')\nax[1, 1].plot([4.43, 4.54], [1.884, 1.231], '*', markersize=15.0, color='#DC143C')\n\nfig.tight_layout()",
"NOTE: values for the Sun plotted above are drawn from our solar-calibrated model.\nWe can also compare how the inferred mixing length compares to those that are the result of extrapolating the Bonaca et al. (2012, ApJL, 755, L12) relation. The latter is valid for stars with $3.8 \\le \\log(g) \\le 4.5$, $5000 \\le T_{\\rm eff} \\le 6700$ K, and $-0.65 \\le \\textrm{[Fe/H]} \\le +0.35$, but here we extrapolate to see how the mixing length parameter might evolve toward cooler temperatures.",
"B12_coeffs = [-12.77, 0.54, 3.18, 0.52] # from Table 1: Trilinear analysis\nB12_alphas = B12_coeffs[0] + B12_coeffs[1]*Log_g + B12_coeffs[2]*data[:,6] + B12_coeffs[3]*data[:,1]",
"Now plot the Bonaca et al values against those we derived,",
"fig, ax = plt.subplots(1, 1, figsize=(8, 8))\n\nax.set_xlabel('Bonaca et al. $\\\\alpha_{\\\\rm MLT}$', fontsize=20.)\nax.set_ylabel('This work, $\\\\alpha_{\\\\rm MLT}$', fontsize=20.)\nax.tick_params(which='major', axis='both', length=10., labelsize=16.)\n\n# one-to-one relation\nax.plot([1.0, 1.59], [1.0, 1.59], '--', lw=2, color='#444444')\n\n# compare values\nax.errorbar(B12_alphas, data[:,5], yerr=data[:,14], fmt='o', lw=2, markersize=9.0, color='#4682B4')",
"Quite surprisingly, there is some agreement in the range of $\\alpha_{\\rm MLT} \\approx 1.5$, but below that value, our models tend to prefer slightly lower mixing lengths. There is then a sharp decrease for the lowest mass stars, as the mixing length parameter plummets. Once one includes errors (roughly $\\pm 0.4$), there is rough agreement for most stars with temperatures above 4000 K. Stars for which we find $\\alpha_{\\rm MLT} \\sim 0.5$ actually represent upper limits, with models prefering to push below what is permitted by the model grid.\nStars for which we derive $\\alpha_{\\rm MLT} \\sim 3.0$ may actually require lower values, in reality. Many of these tend to have a fairly flat probability distribution, with some small local maximums. Indeed, it is often just as likely that those stars have $\\alpha_{\\rm MLT} \\sim 0.5$, consistent with other similar stars. We do not assign great confidence to the derivation of $\\alpha_{\\rm MLT} \\sim 3.0$, but for a single star, where there is a clear peak in the posterior distribution."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
supergis/git_notebook
|
geospatial/giscript/giscript_quickstart.ipynb
|
gpl-3.0
|
[
"GIScript-开放地理空间信息处理与分析Python库\nGIScript是一个开放的地理空间心处理与分析Python框架,GIS内核采用SuperMap UGC封装,集成多种开源软件,也可以使用其它的商业软件引擎。\n by wangerqi@supermap.com, 2016-05-03。\n本文档介绍GIScript的安装和配置,并进行简单的运行测试,以确认安装的软件正常运行。\n\n本教程基于Anaconda3+python3.5.1科学计算环境,请参考:http://www.anaconda.org 。\n本Notebook在Ubuntu 14.04/15.10/16.04运行通过,在本地服务器和阿里云服务器都可以运行。\n可以在NBViewer上直接访问和下载本文档。\n\n(一)安装与配置\nGIScript的安装包括<font color=\"blue\">系统库的设置、UGC Runtime设置和Python库</font>的设置,通过编写一个启动脚本,可以在给定环境下载入相应的运行库的路径。\n1、下载GIScript支持库:\ncd /home/supermap/GISpark\ngit clone https://github.com/supergis/GIScriptLib.git\n2、UGC系统库的版本适配。\n由于GIScript的几个编译库版本较新,在默认使用系统老版本库时部分函数找不到会引起调用失败,因此需要将这几个的系统调用指向到GIScript编译使用的的新版本。在Ubuntu上,具体操作包括:\ncd ~/anaconda3/envs/GISpark/lib\nmv libstdc++.so libstdc++.so.x\nmv libstdc++.so.6 libstdc++.so.6.x\nmv libsqlite3.so.0 libsqlite3.so.0.x\nmv libsqlite3.so libsqlite3.so.x\nmv libgomp.so.1.0.0 libgomp.so.1.0.0.x\nmv libgomp.so.1 libgomp.so.1.x\nmv libgomp.so libgomp.so.x\n* 可以运行GIScriptLib/lib-giscript-x86-linux64/下的setup-giscript.sh来自动处理(请根据自己的目录布局修改路径)。\n* 由于不同系统安装的软件和版本不同,如果还有其它的动态库冲突,可以使用ldd *.so来查看库的依赖关系,按照上述办法解决。\n3、安装Python的支持库。\nGIScript的Python封装库,默认存放在系统目录:/usr/lib/python3/dist-packages/PyUGC\n使用Anaconda时,存在相应的env的目录下,如:[/home/supermap/Anaconda3]/envs/GISpark/lib/python3.5/site-packages \n 安装方法一:链接。在[...]/python3.5/site-packages下建立PyUGC的软连接。注意,原文件不可删除,否则就找不到了。\nln -s -f /home/supermap/GISpark/GIScriptLib/lib-giscript-x86-linux64/lib ~/anaconda3/envs/GISpark/lib/python3.5/site-packages/PyUGC\n* 安装方法二:复制。*将lib-giscript-x86-linux64/lib(Python的UGC封装库)复制为Python下的site-packages/PyUGC目录,如下: \ncd /home/supermap/GISpark/GIScriptLib\ncp -r lib-giscript-x86-linux64/lib ~/anaconda3/envs/GISpark/lib/python3.5/site-packages/PyUGC\n4、Jupyter启动之前,设置GIScript运行时 library 载入的路径:\n\n编写脚本,启动前设置GIScript的运行时动态库路径,内容如下: \n\n```\necho \"Config GIScript2016...\"\n使用GIScript2015的开发工程目录,配置示例:\nexport SUPERMAP_HOME=/home/supermap/GIScript/GIScript2015/Linux64-gcc4.9\n使用GIScriptLib运行时动态库,配置如下:\nexport 
SUPERMAP_HOME=/home/supermap/GISpark/GIScriptLib/lib-giscript-x86-linux64\nexport LD_LIBRARY_PATH=$SUPERMAP_HOME/Bin:$LD_LIBRARY_PATH\necho \"Config: LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH\n```\n\n将上面的内容与Jupyter启动命令放到start.sh脚本中,如下:\n\n```\necho \"Activate conda enviroment GISpark ...\"\nsource activate GISpark\necho \"Config GIScript 2016 for Jupyter ...\"\nexport SUPERMAP_HOME=/home/supermap/GISpark/GIScriptLib/lib-giscript-x86-linux64\nexport LD_LIBRARY_PATH=$SUPERMAP_HOME/bin:/usr/lib/x86_64-linux-gnu/:$LD_LIBRARY_PATH\necho \"Config: LD_LIBRARY_PATH=\"$LD_LIBRARY_PATH\necho \"Start Jupyter notebook\"\njupyter notebook\n```\n\n修改start.sh执行权限,运行Jupyter Notebook。\nsudo chmod +x start.sh\n./start.sh\n\n默认配置下,将会自动打开浏览器,就可以开始使用Jupyter Notebook并调用GIScript的库了。 \n如果通过服务器使用,需要使用`jupyter notebook --generate-config`创建配置文件,然后进去修改参数,这里不再详述。\n(二)运行测试,导入一些数据。\n1、导入GIScript的Python库。",
"from PyUGC import * \nfrom PyUGC.Stream import UGC \nfrom PyUGC.Base import OGDC \nfrom PyUGC import Engine \nfrom PyUGC import FileParser \nfrom PyUGC import DataExchange \n\nimport datasource",
"2、使用Python的help(...)查看库的元数据信息获得帮助。",
"#help(UGC)\n#help(OGDC)\n#help(datasource)",
"3、设置测试数据目录。",
"import os\n\nbasepath = os.path.join(os.getcwd(),\"../data\")\nprint(\"Data path: \", basepath)\n\nfile1 = basepath + u\"/Shape/countries.shp\"\nprint(\"Data file: \", file1)\n\nfile2 = basepath + u\"/Raster/astronaut(CMYK)_32.tif\"\nprint(\"Data file: \", file2)\n\nfile3 = basepath + u\"/Grid/grid_Int32.grd\"\nprint(\"Data file: \", file3)\n\ndatapath_out = basepath + u\"/GIScript_Test.udb\"\nprint(\"Output UDB: \",datapath_out)",
"4、导入数据的测试函数。",
"def Import_Test():\n print(\"Export to UDB: \",datapath_out)\n ds = datasource.CreateDatasource(UGC.UDB,datapath_out)\n datasource.ImportVector(file1,ds)\n datasource.ImportRaster(file2,ds)\n datasource.ImportGrid(file3,ds)\n ds.Close()\n del ds\n print(\"Finished.\")",
"5、运行这个测试。",
"try:\n Import_Test()\nexcept Exception as ex:\n print(ex)\n ",
"(三)查看生成的数据源文件UDB。\n下面使用了<font color=\"green\">IPython的Magic操作符 !</font>,可以直接运行操作系统的Shell命令行。",
"!ls -l -h ../data/GIScript_Test.*",
"<font color=\"red\">删除生成的测试文件。注意,不要误删其它文件!</font>\n如果重复运行上面的Import_Test()将会发现GIScript_Test.udb和GIScript_Test.udd文件会不断增大。\n但是打开UDB文件却只有一份数据,为什么呢?\n* 因为UDB文件是增量存储的,不用的存储块需要使用SQLlite的存储空间紧缩处理才能回收。",
"!rm ../data/GIScript_Test.*",
"再次查看目录,文件是否存在。",
"!ls -l -h ../data/GIScript_Test.*"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
4DGenome/Chromosomal-Conformation-Course
|
Notebooks/A1-Preparation_reference_genome.ipynb
|
gpl-3.0
|
[
"import os",
"Search for a reference genome\nHomo sapiens's reference genome sequence\nWe would need two reference genomes. One as a fasta file with each chromosome, and one that we will use exclusively for the mapping that would contain all contigs.\nThe use of contigs in the reference genome increases the mapping specificity.",
"species = 'Homo sapiens'\ntaxid = '9606'\ngenome = 'GRCh38.p10'\nrefseq, dir1, dir2, dir3 = 'GCF', '000', '001', '405' \ngenbank = 'GCF_000001405.36'\n\nsumurl = 'ftp://ftp.ncbi.nlm.nih.gov/genomes/all/{0}/{1}/{2}/{3}/{4}_{5}/{4}_{5}_assembly_report.txt'.format(\n refseq, dir1, dir2, dir3, genbank, genome)\n\ncrmurl = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=nuccore&id=%s&rettype=fasta&retmode=text'",
"Download from the NCBI the list of chromosome/contigs",
"! wget -q $sumurl -O chromosome_list.txt\n\n! head chromosome_list.txt\n\ndirname = 'genome/'\n! mkdir -p $dirname",
"For each contig/chromosome download the corresponding FASTA file from NCBI",
"contig = []\nfor line in open('chromosome_list.txt'):\n if line.startswith('#'):\n continue\n seq_name, seq_role, assigned_molecule, _, genbank, _, refseq, _ = line.split(None, 7)\n if seq_role == 'assembled-molecule':\n name = 'chr%s.fasta' % assigned_molecule\n else:\n name = 'chr%s_%s.fasta' % (assigned_molecule, seq_name.replace('/', '-'))\n contig.append(name)\n\n outfile = os.path.join(dirname, name)\n if os.path.exists(outfile) and os.path.getsize(outfile) > 10:\n continue\n error_code = os.system('wget \"%s\" --no-check-certificate -O %s' % (crmurl % (genbank), outfile))\n if error_code:\n error_code = os.system('wget \"%s\" --no-check-certificate -O %s' % (crmurl % (refseq), outfile))\n if error_code:\n print genbank",
"Concatenate all contigs/chromosomes into a single file",
"contig_file = open('genome/Homo_sapiens_contigs.fa','w')\nfor molecule in contig:\n for line in open('genome/' + molecule):\n # replace the header of the sequence in the fasta file\n if line == '\\n':\n continue\n if line.startswith('>'):\n line = '>' + molecule[3:].replace('.fasta', '') + '\\n'\n contig_file.write(line)\ncontig_file.close()",
"Remove all the other files (with single chromosome/contig)",
"! rm -f genome/*.fasta",
"Creation of an index file for GEM mapper",
"! gem-indexer -t 8 -i genome/Homo_sapiens_contigs.fa -o genome/Homo_sapiens_contigs",
"The index file will be: genome/Homo_sapiens_contigs.gem\nWARNING: in more recent versions of GEM the \"-t\" flag should be \"-T\""
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
M-R-Houghton/euroscipy_2015
|
scikit_image/lectures/adv5_blob_segmentation.v3.ipynb
|
mit
|
[
"Image segmentation: extracting objects from images\nDuring this part of the tutorial, we will illustrate a task of image processing frequently encountered in natural or material science, that is the extraction and labeling of pixels belonging to objects of interest. Such an operation is called image segmentation.\nImage segmentation typically requires to perform a succession of different operations on the image of interest, therefore this second part of the tutorial will bring the opportunity to use concepts introduced during the first part of the tutorial, such as the manipulation of numpy arrays, or the filtering of images.\nAs an example, we will use a scanning electron microscopy image of a multiphase glass. Let us start by opening the image.",
"from __future__ import division, print_function\n%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt, cm\n\nfrom skimage import io\nfrom skimage import img_as_float\n\nim = io.imread('../images/phase_separation.png')\n\nplt.imshow(im, cmap='gray')\n\nim.dtype, im.shape",
"For the sake of convenience, one first removes the information bar at the bottom, in order to retain only the region of the image with the blobs of interest. This operation is just an array slicing removing the last rows, for which we can leverage the nice syntax of NumPy's slicing. \nIn order to determine how many rows to remove, it is possible to use either visual inspection, or a more advanced and robust way relying on NumPy machinery in order to determine the first completely dark row.",
"phase_separation = im[:947]\nplt.imshow(phase_separation, cmap='gray')\n\nnp.nonzero(np.all(im < 0.1 * im.max(), axis=1))[0][0]",
"Image contrast, histogram and thresholding\nIn order to separate blobs from the background, a simple idea is to use the gray values of pixels: blobs are typically darker than the background. \nIn order to check this impression, let us look at the histogram of pixel values of the image.",
"from skimage import exposure\n\nhistogram = exposure.histogram(phase_separation)\nplt.plot(histogram[1], histogram[0])\nplt.xlabel('gray value')\nplt.ylabel('number of pixels')\nplt.title('Histogram of gray values')",
"Two peaks are clearly visible in the histogram, but they have a strong overlap. What happens if we try to threshold the image at a value that separates the two peaks?\nFor an automatic computation of the thresholding values, we use Otsu's thresholding, an operation that chooses the threshold in order to have a good separation between gray values of background and foreground.",
"from skimage import filters\n\nthreshold = filters.threshold_otsu(phase_separation)\nprint(threshold)\n\nfig, ax = plt.subplots(ncols=2, figsize=(12, 8))\nax[0].imshow(phase_separation, cmap='gray')\nax[0].contour(phase_separation, [threshold])\nax[1].imshow(phase_separation < threshold, cmap='gray')",
"Image denoising\nIn order to improve the thresholding, we will try first to filter the image so that gray values are more uniform inside the two phases, and more separated. Filters used to this aim are called denoising filters, since their action amounts to reducing the intensity of the noise on the image.\nZooming on a part of the image that should be uniform illustrates well the concept of noise: the image has random variations of gray levels that originate from the imaging process. Noise can be due to low photon-counting, or to electronic noise on the sensor, although other sources of noise are possible as well.",
"plt.imshow(phase_separation[390:410, 820:840], cmap='gray', \n interpolation='nearest')\nplt.colorbar()\nprint(phase_separation[390:410, 820:840].std())",
"Several denoising filters average together pixels that are close to each other. If the noise is not spatially correlated, random noise fluctuations will be strongly attenuated by this averaging. \nOne of the most common denoising filters is called the median filter: it replaces the value of a pixel by the median gray value inside a neighbourhood of the pixel. Taking the median gray value preserves edges much better than taking the mean gray value.\nHere we use a square neighbourhood of size 7x7: the larger the window size, the larger the attenuation of the noise, but this may come at the expense of precision for the location of boundaries. Choosing a window size therefore represents a trade-off between denoising and accuracy.",
"from skimage import restoration\nfrom skimage import filters\n\nmedian_filtered = filters.median(phase_separation, np.ones((7, 7)))\n\nplt.imshow(median_filtered, cmap='gray')\n\nplt.imshow(median_filtered[390:410, 820:840], cmap='gray', \n interpolation='nearest')\nplt.colorbar()\nprint(median_filtered[390:410, 820:840].std())",
"Variations of gray levels inside zones that should be uniform are now smaller in range, and also spatially smoother.\nPlotting the histogram of the denoised image shows that the gray levels of the two phases are now better separated.",
"histo_median = exposure.histogram(median_filtered)\nplt.plot(histo_median[1], histo_median[0])",
"As a consequence, Otsu thresholding now results in a much better segmentation.",
"plt.imshow(phase_separation[:300, :300], cmap='gray')\nplt.contour(median_filtered[:300, :300], \n [filters.threshold_otsu(median_filtered)])",
"Going further: Otsu thresholding with adaptative threshold. For images with non-uniform illumination, it is possible to extend Otsu's method to the case for which different thresholds are used in different regions of space.",
"binary_image = median_filtered < filters.threshold_otsu(median_filtered)\n\nplt.imshow(binary_image, cmap='gray')",
"Exercise: try other denoising filters\nSeveral other denoising filters are available in scikit-image.\n\n\nThe bilateral filter uses similar ideas as for the median filter or the average filter: it averages a pixel with other pixels in a neighbourhood, but gives more weight to pixels for which the gray value is close to the one of the central pixel. The bilateral filter is very efficient at preserving edges.\n\n\nThe total variation filter results in images that are piecewise-constant. This filter optimizes a trade-off between the closeness to the original image, and the (L1) norm of the gradient, the latter part resulting in picewise-constant regions. \n\n\nGoing further: in addition to trying different denoising filters on the phase separation image, do the same on a synthetic image of a square, corrupted by artificial noise.\nFurther reading on denoising with scikit-image: see the Gallery example on denoising\nAn another approach: more advanced segmentation algorithms\nOur approach above consisted in filtering the image so that it was as binary as possible, and then to threshold it. Other methods are possible, that do not threshold the image according only to gray values, but also use spatial information: they tend to attribute the same label to neighbouring pixels. A famous algorithm in order to segment binary images is called the graph cuts algorithm. Although graph cut is not available yet in scikit-image, other algorithms using spatial information are available as well, such as the watershed algorithm, or the random walker algorithm.",
"blob_markers = median_filtered < 110\nbg_markers = median_filtered > 160\nmarkers = np.zeros_like(phase_separation)\nmarkers[blob_markers] = 2\nmarkers[bg_markers] = 1\nfrom skimage import morphology\nwatershed = morphology.watershed(filters.sobel(median_filtered), markers)\nplt.imshow(watershed, cmap='gray')",
"Image cleaning\nIf we use the denoising + thresholding approach, the result of the thresholding is not completely what we want: small objects are detected, and small holes exist in the objects. Such defects of the segmentation can be amended, using the knowledge that no small holes should exist, and that blobs have a minimal size.\nUtility functions to modify binary images are found in the morphology submodule. Although mathematical morphology encompasses a large set of possible operations, we will only see here how to remove small objects.",
"from skimage import morphology\n\nonly_large_blobs = morphology.remove_small_objects(binary_image, \n min_size=300)\nplt.imshow(only_large_blobs, cmap='gray')\n\nonly_large = np.logical_not(morphology.remove_small_objects(\n np.logical_not(only_large_blobs), \n min_size=300))\nplt.imshow(only_large, cmap='gray')",
"Measuring region properties\nThe segmentation of foreground (objects) and background results in a binary image. In order to measure the properties of the different blobs, one must first attribute a different label to each blob (identified as a connected component of the foreground phase). Then, the utility function measure.regionprops can be used to compute several properties of the labeled regions.\nProperties of the regions can be used for classifying the objects, for example with scikit-learn.",
"from skimage import measure\n\nlabels = measure.label(only_large)\nplt.imshow(labels, cmap='spectral')\n\nprops = measure.regionprops(labels, phase_separation)\n\nareas = np.array([prop.area for prop in props])\nperimeters = np.array([prop.perimeter for prop in props])\n\nplt.plot(np.sort(perimeters**2./areas), 'o')",
"Other examples\nPlotting labels on an image\nMeasuring region properties\nExercise: visualize an image where the color of a blob encodes its size (blobs of similar size have a similar color). \nExercise: visualize an image where only the most circular blobs are represented. Hint: this involves some manipulations of NumPy arrays.\nProcessing batches of images\nIf one wishes to process a single image, a lot of trial and error is possible, using interactive sessions and intermediate visualizations. Such workflow typically allows to optimize over parameter values, such as the size of the filtering window for denoising the image, or the area of small spurious objects to be removed.\nIn a time of cheap CCD sensors, it is also frequent to deal with collections of images, for which one cannot afford to process each image individually. In such a case, the workflow has to be adapted.\n\n\nfunction parameters need to be set in a more robust manner, using statistical information like the typical noise of the image, or the typical size of objects in the image.\n\n\nit is a good practice to divide the different array manipulations into several functions. Outside an Ipython notebook, such functions would typically be found in a dedicated module, that could be imported from a script.\n\n\napplying the same operations to a collection of (independent) images is a typical example of embarassingly parallel workflow, that calls for multiprocessing computation. The joblib module provides a simple helper function for using multiprocessing on embarassingly parallel for loops. \n\n\nLet us first define two functions with a more robust handling of parameters.",
"def remove_information_bar(image, value=0.1):\n value *= image.max()\n row_index = np.nonzero(np.all(image < value, axis=1))[0][0]\n return image[:row_index]\n\nfrom scipy import stats\ndef clean_image(binary_image):\n labels = measure.label(binary_image)\n props = measure.regionprops(labels)\n areas = np.array([prop.area for prop in props])\n large_area = stats.scoreatpercentile(areas, 90)\n remove_small = morphology.remove_small_objects(binary_image, \n large_area / 20)\n remove_holes = np.logical_not(morphology.remove_small_objects(\n np.logical_not(remove_small), \n large_area / 20))\n return remove_holes\n\ndef process_blob_image(image):\n image = remove_information_bar(image)\n image = filters.median(image, np.ones((7, 7)))\n binary_im = image < filters.threshold_otsu(image)\n binary_im = clean_image(binary_im)\n return binary_im",
"The glob module is very handy to retrieve lists of image file names using wildcard patterns.",
"from glob import glob\nfilelist = glob('../images/phase_separation*.png')\nfilelist.sort()\nprint(filelist)\n\nfig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 8))\nfor index, filename in enumerate(filelist[1:]):\n print(filename)\n im = io.imread(filename)\n binary_im = process_blob_image(im)\n i, j = np.unravel_index(index, (2, 2))\n ax[i, j].imshow(binary_im, cmap='gray')\n ax[i, j].axis('off')",
"Pipeline approach and order of operations\nIt is quite uncommon to perform a successful segmentation in only one or two operations: typical image require some pre- and post-processing. However, a large number of image processing steps, each using some hand-tuning of parameters, can result in disasters, since the processing pipeline will not work as well for a different image.\nAlso, the order in which the operations are performed is important.",
"crude_segmentation = phase_separation < filters.threshold_otsu(phase_separation)\n\nclean_crude = morphology.remove_small_objects(crude_segmentation, 300)\nclean_crude = np.logical_not(morphology.remove_small_objects(\n np.logical_not(clean_crude), 300))\nplt.imshow(clean_crude[:200, :200], cmap='gray')",
"It would be possible to filter the image to smoothen the boundary, and then threshold again. However, it is more satisfying to first filter the image so that it is as binary as possible (which corresponds better to our prior information on the materials), and then to threshold the image.\nGoing further: want to know more about image segmentation with scikit-image, or see different examples?\n\nthe tutorial on image segmentation in the user documentation \nthe chapter on scikit-image of the SciPy lecture notes.\na tutorial on chromosome segmentation with uneven illumination"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
g-weatherill/notebooks
|
hmtk/Geology.ipynb
|
agpl-3.0
|
[
"HMTK Geological Tools Demonstration\nThis notepad demonstrates the use of the HMTK geological tools for preparing fault source models for input into OpenQuake\nConstruction of the Geological Input File\nAn active fault model input file contains two sections:\n1) A tectonic regionalisation - this can provide a container for a set of properties that may be assigned to multiple faults by virtue of a common tectonic region\n2) A set of active faults\nTectonic Regionalisation Representation in the Fault Source File\nIn the tectonic regionalisation information each of the three properties can be represented according to a set of weighted values.\nFor example, in the case below faults in an arbitrarily named tectonic region (called here \"GEM Region 1\") will share the same set\nof magnitude scaling relations and shear moduli, unless over-written by the specific fault. Those faults assigned to \"GEM Region 2\"\nwill have the magnitude scaling relation fixed as WC1994 and the shear modulus of 30 GPa\nActive Fault Model\nA set of active faults will be defined with a common ID and name. \nAn active fault set containing a single fault is shown below:\nFault Geometry Representations - Example 1: Simple Fault\nFault Geometry Representations - Example 2: Complex Fault\nRupture Properties\nThe rupture requires characterisation of the rake (using the Aki & Richards 2002 convention), the slip-type, the slip completeness factor\n(an integer constraining the quality of the slip information with 1 being the hights quality), the range of slip values and their \ncorresponding weights, and the aseismic slip coefficient (the proportion of slip released aseismically, 1.0 - coupling coefficient)\nThe Magnitude Frequency Distributions",
"#Import tools\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom hmtk.plotting.faults.geology_mfd_plot import plot_recurrence_models\nfrom openquake.hazardlib.scalerel.wc1994 import WC1994 # In all the following examples the Wells & Coppersmith (1994) Scaling Relation is Used",
"The following examples refer to a fault with the following properties:\nLength (Along-strike) = 100 km,\nWidth (Down-Dip) = 20 km,\nSlip = 10.0 mm/yr,\nRake = 0. (Strike Slip),\nMagnitude Scaling Relation = Wells & Coppersmith (1994),\nShear Modulus = 30.0 GPa",
"# Set up fault parameters\nslip = 10.0 # Slip rate in mm/yr\n\n# Area = along-strike length (km) * down-dip with (km)\narea = 100.0 * 20.0\n\n# Rake = 0.\nrake = 0.\n\n# Magnitude Scaling Relation\nmsr = WC1994()",
"Anderson & Luco (Arbitrary)\nThis describes a set of distributons where the maximum magnitude is assumed to rupture the whole fault surface",
"#Magnitude Frequency Distribution Example\n\nanderson_luco_config1 = {'Model_Name': 'AndersonLucoArbitrary',\n 'Model_Type': 'First',\n 'Model_Weight': 1.0, # Weight is a required key - normally weights should sum to 1.0 - current example is simply illustrative! \n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Minimum_Magnitude': 4.5,\n 'b_value': [0.8, 0.05]}\nanderson_luco_config2 = {'Model_Name': 'AndersonLucoArbitrary',\n 'Model_Type': 'Second',\n 'Model_Weight': 1.0,\n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Minimum_Magnitude': 4.5,\n 'b_value': [0.8, 0.05]}\nanderson_luco_config3 = {'Model_Name': 'AndersonLucoArbitrary',\n 'Model_Type': 'Third',\n 'Model_Weight': 1.0, \n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Minimum_Magnitude': 4.5,\n 'b_value': [0.8, 0.05]}\n# Create a list of the configurations\nanderson_luco_arb = [anderson_luco_config1, anderson_luco_config2, anderson_luco_config3]\n\n# View the corresponding magnitude recurrence model\nplot_recurrence_models(anderson_luco_arb, area, slip, msr, rake, msr_sigma=0.0)",
"Anderson & Luco (Area - MMax)\nThis describes a set of distributons where the maximum rupture extent is limited to only part of the fault surface",
"anderson_luco_config1 = {'Model_Name': 'AndersonLucoAreaMmax',\n 'Model_Type': 'First',\n 'Model_Weight': 1.0, # Weight is a required key - normally weights should sum to 1.0 - current example is simply illustrative! \n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Minimum_Magnitude': 4.5,\n 'b_value': [0.8, 0.05]}\nanderson_luco_config2 = {'Model_Name': 'AndersonLucoAreaMmax',\n 'Model_Type': 'Second',\n 'Model_Weight': 1.0,\n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Minimum_Magnitude': 4.5,\n 'b_value': [0.8, 0.05]}\nanderson_luco_config3 = {'Model_Name': 'AndersonLucoAreaMmax',\n 'Model_Type': 'Third',\n 'Model_Weight': 1.0, \n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Minimum_Magnitude': 4.5,\n 'b_value': [0.8, 0.05]}\n\n# For these models a displacement to length ratio is needed\ndisp_length_ratio = 1.25E-5\n\n# Create a list of the configurations\nanderson_luco_area_mmax = [anderson_luco_config1, anderson_luco_config2, anderson_luco_config3]\n\n# View the corresponding magnitude recurrence model\nplot_recurrence_models(anderson_luco_area_mmax, area, slip, msr, rake, msr_sigma=0.0)\n",
"Characteristic Earthquake\nThe following example illustrates a \"Characteristic\" Model, represented by a Truncated Gaussian Distribution",
"characteristic = [{'Model_Name': 'Characteristic',\n 'MFD_spacing': 0.05,\n 'Model_Weight': 1.0,\n 'Maximum_Magnitude': None,\n 'Sigma': 0.15, # Standard Deviation of Distribution (in Magnitude Units) - omit for fixed value\n 'Lower_Bound': -3.0, # Bounds of the distribution correspond to the number of sigma for truncation\n 'Upper_Bound': 3.0}]\n\n# View the corresponding magnitude recurrence model\nplot_recurrence_models(characteristic, area, slip, msr, rake, msr_sigma=0.0)",
"Youngs & Coppersmith (1985) Models\nThe following describes the recurrence from two distributions presented by Youngs & Coppersmith (1985): 1) Exponential Distribution, 2) Hybrid Exponential-Characteristic Distribution",
"exponential = {'Model_Name': 'YoungsCoppersmithExponential',\n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Maximum_Magnitude_Uncertainty': None,\n 'Minimum_Magnitude': 5.0,\n 'Model_Weight': 1.0,\n 'b_value': [0.8, 0.1]}\n\nhybrid = {'Model_Name': 'YoungsCoppersmithCharacteristic',\n 'MFD_spacing': 0.1,\n 'Maximum_Magnitude': None,\n 'Maximum_Magnitude_Uncertainty': None,\n 'Minimum_Magnitude': 5.0,\n 'Model_Weight': 1.0,\n 'b_value': [0.8, 0.1],\n 'delta_m': None}\n\nyoungs_coppersmith = [exponential, hybrid]\n\n# View the corresponding magnitude recurrence model\nplot_recurrence_models(youngs_coppersmith, area, slip, msr, rake, msr_sigma=0.0)\n",
"Epistemic Uncertainty Examples\nThis example considers the fault defined at the top of the page. This fault defines two values of slip rate and two different magnitude frequency distributions",
"def show_file_contents(filename):\n \"\"\"\n Shows the file contents\n \"\"\"\n fid = open(filename, 'r')\n for row in fid.readlines():\n print row\n fid.close()\n\ninput_file = 'input_data/simple_fault_example_4branch.yml'\nshow_file_contents(input_file)\n",
"Example 1 - Full Enumeration\nIn this example each individual MFD for each branch is determined. In the resulting file the fault is duplicated n_branches number of times, with the\ncorresponding MFD multiplied by the end-branch weight",
"# Import the Parser\nfrom hmtk.parsers.faults.fault_yaml_parser import FaultYmltoSource\n\n# Fault mesh discretization step\nmesh_spacing = 1.0 # (km)\n\n# Read in the fault model\nreader = FaultYmltoSource(input_file)\nfault_model, tectonic_region = reader.read_file(mesh_spacing)\n\n# Construct the fault source model (this is really running the MFD calculation code)\nfault_model.build_fault_model()\n\n# Write to an output NRML file\noutput_file_1 = 'output_data/fault_example_enumerated.xml'\nfault_model.source_model.serialise_to_nrml(output_file_1)\n\nshow_file_contents(output_file_1)",
"Example 2: Collapsed Branches\nIn the following example we implement the same model, this time collapsing the branched. This means that the MFD is discretised and the incremental rate\nin each magnitude bin is the weighted sum of the rates in that bin from all the end branches of the logic tree.\nWhen collapsing the branches, however, it is necessary to define a single Magnitude Scaling Relation that will need to be assigned to the fault for\nuse in OpenQuake.",
"# Read in the fault model\nreader = FaultYmltoSource(input_file)\nfault_model, tectonic_region = reader.read_file(mesh_spacing)\n\n# Scaling relation for export\noutput_msr = WC1994()\n\n# Construct the fault source model - collapsing the branches\nfault_model.build_fault_model(collapse=True, rendered_msr=output_msr)\n\n\n# Write to an output NRML file\noutput_file_2 = 'output_data/fault_example_collapsed.xml'\nfault_model.source_model.serialise_to_nrml(output_file_2)\n\nshow_file_contents(output_file_2)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
massie/notebooks
|
Physio.ipynb
|
apache-2.0
|
[
"Physiology\n1) Using the ion concentrations of interstitial and intracellular compartments and the Nernst equation, calculate the equilibrium potentials for Na+, K+, and Cl-",
"from math import log\n\n# RT/F = 26.73 at room temperature\nrt_div_f = 26.73\nnernst = lambda xO, xI, z: rt_div_f/z * log(1.0 * xO / xI)\n\nNa_Eq = nernst(145, 15, 1)\nK_Eq = nernst(4.5, 120, 1)\nCl_Eq = nernst(116, 20, -1)\n\nprint \"Na+ equilibrium potential is %.2f mV\" % (Na_Eq)\nprint \"K+ equilibrium potential is %.2f mV\" % (K_Eq)\nprint \"Cl- equilibrium potential is %.2f mV\" % (Cl_Eq)",
"2) Assuming the resting potential for the plasma membrane is -70mV, explain whether each of the ions in question 1 would be expected to move into or out of the cell. Use an I-V plot to support your answer.",
"# Values from Table 3.1 p57 in syllabus\nG_Na = 1\nG_K = 100\nG_Cl = 25\n\ngoldman = lambda Na_Out, Na_In, K_Out, K_In, Cl_Out, Cl_In: \\\nrt_div_f * log((G_Na * Na_Out + G_K * K_Out + G_Cl * Cl_In)/\\\n(1.0 * G_Na * Na_In + G_K * K_In + G_Cl * Cl_Out))\n\nprint \"Potential at equalibrium is %.2f mV\" % goldman(150, 15, 5, 150, 100, 10)",
"IV graph",
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(20,20))\n\nx = np.arange(-100, 60, 0.1);\n\niv_line = lambda G_val, E_x: G_val * x + ((0.0 - E_x) * G_val)\n\nK_line = iv_line(G_K, K_Eq)\nNa_line = iv_line(G_Na, Na_Eq)\nCl_line = iv_line(G_Cl, Cl_Eq)\nSum_line = K_line + Na_line + Cl_line\nplt.grid(True)\nK, = plt.plot(x, K_line, label=\"K\")\nNa, = plt.plot(x, Na_line, label=\"Na\")\nCl, = plt.plot(x, Cl_line, label=\"Cl\")\nEm, = plt.plot(x, Sum_line, label=\"Em\")\nplt.legend(handles=[K, Na, Cl, Em])\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/jax
|
docs/notebooks/Custom_derivative_rules_for_Python_code.ipynb
|
apache-2.0
|
[
"Custom derivative rules for JAX-transformable Python functions\n\nmattjj@ Mar 19 2020, last updated Oct 14 2020\nThere are two ways to define differentiation rules in JAX:\n\nusing jax.custom_jvp and jax.custom_vjp to define custom differentiation rules for Python functions that are already JAX-transformable; and\ndefining new core.Primitive instances along with all their transformation rules, for example to call into functions from other systems like solvers, simulators, or general numerical computing systems.\n\nThis notebook is about #1. To read instead about #2, see the notebook on adding primitives.\nFor an introduction to JAX's automatic differentiation API, see The Autodiff Cookbook. This notebook assumes some familiarity with jax.jvp and jax.grad, and the mathematical meaning of JVPs and VJPs.\nTL;DR\nCustom JVPs with jax.custom_jvp",
"import jax.numpy as jnp\nfrom jax import custom_jvp\n\n@custom_jvp\ndef f(x, y):\n return jnp.sin(x) * y\n\n@f.defjvp\ndef f_jvp(primals, tangents):\n x, y = primals\n x_dot, y_dot = tangents\n primal_out = f(x, y)\n tangent_out = jnp.cos(x) * x_dot * y + jnp.sin(x) * y_dot\n return primal_out, tangent_out\n\nfrom jax import jvp, grad\n\nprint(f(2., 3.))\ny, y_dot = jvp(f, (2., 3.), (1., 0.))\nprint(y)\nprint(y_dot)\nprint(grad(f)(2., 3.))\n\n# Equivalent alternative using the defjvps convenience wrapper\n\n@custom_jvp\ndef f(x, y):\n return jnp.sin(x) * y\n\nf.defjvps(lambda x_dot, primal_out, x, y: jnp.cos(x) * x_dot * y,\n lambda y_dot, primal_out, x, y: jnp.sin(x) * y_dot)\n\nprint(f(2., 3.))\ny, y_dot = jvp(f, (2., 3.), (1., 0.))\nprint(y)\nprint(y_dot)\nprint(grad(f)(2., 3.))",
"Custom VJPs with jax.custom_vjp",
"from jax import custom_vjp\n\n@custom_vjp\ndef f(x, y):\n return jnp.sin(x) * y\n\ndef f_fwd(x, y):\n# Returns primal output and residuals to be used in backward pass by f_bwd.\n return f(x, y), (jnp.cos(x), jnp.sin(x), y)\n\ndef f_bwd(res, g):\n cos_x, sin_x, y = res # Gets residuals computed in f_fwd\n return (cos_x * g * y, sin_x * g)\n\nf.defvjp(f_fwd, f_bwd)\n\nprint(grad(f)(2., 3.))",
"Example problems\nTo get an idea of what problems jax.custom_jvp and jax.custom_vjp are meant to solve, let's go over a few examples. A more thorough introduction to the jax.custom_jvp and jax.custom_vjp APIs is in the next section.\nNumerical stability\nOne application of jax.custom_jvp is to improve the numerical stability of differentiation.\nSay we want to write a function called log1pexp, which computes $x \\mapsto \\log ( 1 + e^x )$. We can write that using jax.numpy:",
"import jax.numpy as jnp\n\ndef log1pexp(x):\n return jnp.log(1. + jnp.exp(x))\n\nlog1pexp(3.)",
"Since it's written in terms of jax.numpy, it's JAX-transformable:",
"from jax import jit, grad, vmap\n\nprint(jit(log1pexp)(3.))\nprint(jit(grad(log1pexp))(3.))\nprint(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))",
"But there's a numerical stability problem lurking here:",
"print(grad(log1pexp)(100.))",
"That doesn't seem right! After all, the derivative of $x \\mapsto \\log (1 + e^x)$ is $x \\mapsto \\frac{e^x}{1 + e^x}$, and so for large values of $x$ we'd expect the value to be about 1.\nWe can get a bit more insight into what's going on by looking at the jaxpr for the gradient computation:",
"from jax import make_jaxpr\n\nmake_jaxpr(grad(log1pexp))(100.)",
"Stepping through how the jaxpr would be evaluated, we can see that the last line would involve multiplying values that floating point math will round to 0 and $\\infty$, respectively, which is never a good idea. That is, we're effectively evaluating lambda x: (1 / (1 + jnp.exp(x))) * jnp.exp(x) for large x, which turns into 0. * jnp.inf and hence nan.\nInstead of generating such large and small values, hoping for a cancellation that floats can't always provide, we'd rather express the derivative function as a more numerically stable program. In particular, we can write a program that evaluates the mathematically equivalent expression $1 - \\frac{1}{1 + e^x}$, with no cancellation in sight.\nThis problem is interesting because even though our definition of log1pexp could already be JAX-differentiated (and transformed with jit, vmap, ...), we're not happy with the result of applying standard autodiff rules to the primitives comprising log1pexp and composing the result. Instead, we'd like to specify how the whole function log1pexp should be differentiated, as a unit, and thus arrange those exponentials better.\nThis is one application of custom derivative rules for Python functions that are already JAX transformable: specifying how a composite function should be differentiated, while still using its original Python definition for other transformations (like jit, vmap, ...).\nHere's a solution using jax.custom_jvp:",
"from jax import custom_jvp\n\n@custom_jvp\ndef log1pexp(x):\n return jnp.log(1. + jnp.exp(x))\n\n@log1pexp.defjvp\ndef log1pexp_jvp(primals, tangents):\n x, = primals\n x_dot, = tangents\n ans = log1pexp(x)\n ans_dot = (1 - 1/(1 + jnp.exp(x))) * x_dot\n return ans, ans_dot\n\nprint(grad(log1pexp)(100.))\n\nprint(jit(log1pexp)(3.))\nprint(jit(grad(log1pexp))(3.))\nprint(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))",
"Here's a defjvps convenience wrapper to express the same thing:",
"@custom_jvp\ndef log1pexp(x):\n return jnp.log(1. + jnp.exp(x))\n\nlog1pexp.defjvps(lambda t, ans, x: (1 - 1/(1 + jnp.exp(x))) * t)\n\nprint(grad(log1pexp)(100.))\nprint(jit(log1pexp)(3.))\nprint(jit(grad(log1pexp))(3.))\nprint(vmap(jit(grad(log1pexp)))(jnp.arange(3.)))",
"Enforcing a differentiation convention\nA related application is to enforce a differentiation convention, perhaps at a boundary.\nConsider the function $f : \\mathbb{R}_+ \\to \\mathbb{R}_+$ with $f(x) = \\frac{x}{1 + \\sqrt{x}}$, where we take $\\mathbb{R}_+ = [0, \\infty)$. We might implement $f$ as a program like this:",
"def f(x):\n return x / (1 + jnp.sqrt(x))",
"As a mathematical function on $\\mathbb{R}$ (the full real line), $f$ is not differentiable at zero (because the limit defining the derivative doesn't exist from the left). Correspondingly, autodiff produces a nan value:",
"print(grad(f)(0.))",
"But mathematically if we think of $f$ as a function on $\\mathbb{R}_+$ then it is differentiable at 0 [Rudin's Principles of Mathematical Analysis Definition 5.1, or Tao's Analysis I 3rd ed. Definition 10.1.1 and Example 10.1.6]. Alternatively, we might say as a convention we want to consider the directional derivative from the right. So there is a sensible value for the Python function grad(f) to return at 0.0, namely 1.0. By default, JAX's machinery for differentiation assumes all functions are defined over $\\mathbb{R}$ and thus doesn't produce 1.0 here.\nWe can use a custom JVP rule! In particular, we can define the JVP rule in terms of the derivative function $x \\mapsto \\frac{\\sqrt{x} + 2}{2(\\sqrt{x} + 1)^2}$ on $\\mathbb{R}_+$,",
"@custom_jvp\ndef f(x):\n return x / (1 + jnp.sqrt(x))\n\n@f.defjvp\ndef f_jvp(primals, tangents):\n x, = primals\n x_dot, = tangents\n ans = f(x)\n ans_dot = ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * x_dot\n return ans, ans_dot\n\nprint(grad(f)(0.))",
"Here's the convenience wrapper version:",
"@custom_jvp\ndef f(x):\n return x / (1 + jnp.sqrt(x))\n\nf.defjvps(lambda t, ans, x: ((jnp.sqrt(x) + 2) / (2 * (jnp.sqrt(x) + 1)**2)) * t)\n\nprint(grad(f)(0.))",
"Gradient clipping\nWhile in some cases we want to express a mathematical differentiation computation, in other cases we may even want to take a step away from mathematics to adjust the computation autodiff performs. One canonical example is reverse-mode gradient clipping.\nFor gradient clipping, we can use jnp.clip together with a jax.custom_vjp reverse-mode-only rule:",
"from functools import partial\nfrom jax import custom_vjp\n\n@custom_vjp\ndef clip_gradient(lo, hi, x):\n return x # identity function\n\ndef clip_gradient_fwd(lo, hi, x):\n return x, (lo, hi) # save bounds as residuals\n\ndef clip_gradient_bwd(res, g):\n lo, hi = res\n return (None, None, jnp.clip(g, lo, hi)) # use None to indicate zero cotangents for lo and hi\n\nclip_gradient.defvjp(clip_gradient_fwd, clip_gradient_bwd)\n\nimport matplotlib.pyplot as plt\nfrom jax import vmap\n\nt = jnp.linspace(0, 10, 1000)\n\nplt.plot(jnp.sin(t))\nplt.plot(vmap(grad(jnp.sin))(t))\n\ndef clip_sin(x):\n x = clip_gradient(-0.75, 0.75, x)\n return jnp.sin(x)\n\nplt.plot(clip_sin(t))\nplt.plot(vmap(grad(clip_sin))(t))",
"Python debugging\nAnother application that is motivated by development workflow rather than numerics is to set a pdb debugger trace in the backward pass of reverse-mode autodiff.\nWhen trying to track down the source of a nan runtime error, or just examine carefully the cotangent (gradient) values being propagated, it can be useful to insert a debugger at a point in the backward pass that corresponds to a specific point in the primal computation. You can do that with jax.custom_vjp.\nWe'll defer an example until the next section.\nImplicit function differentiation of iterative implementations\nThis example gets pretty deep in the mathematical weeds!\nAnother application for jax.custom_vjp is reverse-mode differentiation of functions that are JAX-transformable (by jit, vmap, ...) but not efficiently JAX-differentiable for some reason, perhaps because they involve lax.while_loop. (It's not possible to produce an XLA HLO program that efficiently computes the reverse-mode derivative of an XLA HLO While loop because that would require a program with unbounded memory use, which isn't possible to express in XLA HLO, at least without side-effecting interactions through infeed/outfeed.)\nFor example, consider this fixed_point routine which computes a fixed point by iteratively applying a function in a while_loop:",
"from jax.lax import while_loop\n\ndef fixed_point(f, a, x_guess):\n def cond_fun(carry):\n x_prev, x = carry\n return jnp.abs(x_prev - x) > 1e-6\n\n def body_fun(carry):\n _, x = carry\n return x, f(a, x)\n\n _, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))\n return x_star",
"This is an iterative procedure for numerically solving the equation $x = f(a, x)$ for $x$, by iterating $x_{t+1} = f(a, x_t)$ until $x_{t+1}$ is sufficiently close to $x_t$. The result $x^*$ depends on the parameter $a$, and so we can think of there being a function $a \\mapsto x^*(a)$ that is implicitly defined by the equation $x = f(a, x)$.\nWe can use fixed_point to run iterative procedures to convergence, for example running Newton's method to calculate square roots while only executing adds, multiplies, and divides:",
"def newton_sqrt(a):\n update = lambda a, x: 0.5 * (x + a / x)\n return fixed_point(update, a, a)\n\nprint(newton_sqrt(2.))",
"We can vmap or jit the function as well:",
"print(jit(vmap(newton_sqrt))(jnp.array([1., 2., 3., 4.])))",
"We can't apply reverse-mode automatic differentiation because of the while_loop, but it turns out we wouldn't want to anyway: instead of differentiating through the implementation of fixed_point and all its iterations, we can exploit the mathematical structure to do something that is much more memory-efficient (and FLOP-efficient in this case, too!). We can instead use the implicit function theorem [Prop A.25 of Bertsekas's Nonlinear Programming, 2nd ed.], which guarantees (under some conditions) the existence of the mathematical objects we're about to use. In essence, we linearize at the solution and solve those linear equations iteratively to compute the derivatives we want.\nConsider again the equation $x = f(a, x)$ and the function $x^*$. We want to evaluate vector-Jacobian products like $v^\\mathsf{T} \\mapsto v^\\mathsf{T} \\partial x^*(a_0)$.\nAt least in an open neighborhood around the point $a_0$ at which we want to differentiate, let's assume that the equation $x^*(a) = f(a, x^*(a))$ holds for all $a$. Since the two sides are equal as functions of $a$, their derivatives must be equal as well, so let's differentiate both sides:\n$\\qquad \\partial x^*(a) = \\partial_0 f(a, x^*(a)) + \\partial_1 f(a, x^*(a)) \\partial x^*(a)$.\nSetting $A = \\partial_1 f(a_0, x^*(a_0))$ and $B = \\partial_0 f(a_0, x^*(a_0))$, we can write the quantity we're after more simply as\n$\\qquad \\partial x^*(a_0) = B + A \\partial x^*(a_0)$,\nor, by rearranging,\n$\\qquad \\partial x^*(a_0) = (I - A)^{-1} B$.\nThat means we can evaluate vector-Jacobian products like\n$\\qquad v^\\mathsf{T} \\partial x^*(a_0) = v^\\mathsf{T} (I - A)^{-1} B = w^\\mathsf{T} B$,\nwhere $w^\\mathsf{T} = v^\\mathsf{T} (I - A)^{-1}$, or equivalently $w^\\mathsf{T} = v^\\mathsf{T} + w^\\mathsf{T} A$, or equivalently $w^\\mathsf{T}$ is the fixed point of the map $u^\\mathsf{T} \\mapsto v^\\mathsf{T} + u^\\mathsf{T} A$. That last characterization gives us a way to write the VJP for fixed_point in terms of a call to fixed_point! Moreover, after expanding $A$ and $B$ back out, we can see we need only to evaluate VJPs of $f$ at $(a_0, x^*(a_0))$.\nHere's the upshot:",
"from jax import vjp\n\n@partial(custom_vjp, nondiff_argnums=(0,))\ndef fixed_point(f, a, x_guess):\n def cond_fun(carry):\n x_prev, x = carry\n return jnp.abs(x_prev - x) > 1e-6\n\n def body_fun(carry):\n _, x = carry\n return x, f(a, x)\n\n _, x_star = while_loop(cond_fun, body_fun, (x_guess, f(a, x_guess)))\n return x_star\n\ndef fixed_point_fwd(f, a, x_init):\n x_star = fixed_point(f, a, x_init)\n return x_star, (a, x_star)\n\ndef fixed_point_rev(f, res, x_star_bar):\n a, x_star = res\n _, vjp_a = vjp(lambda a: f(a, x_star), a)\n a_bar, = vjp_a(fixed_point(partial(rev_iter, f),\n (a, x_star, x_star_bar),\n x_star_bar))\n return a_bar, jnp.zeros_like(x_star)\n \ndef rev_iter(f, packed, u):\n a, x_star, x_star_bar = packed\n _, vjp_x = vjp(lambda x: f(a, x), x_star)\n return x_star_bar + vjp_x(u)[0]\n\nfixed_point.defvjp(fixed_point_fwd, fixed_point_rev)\n\nprint(newton_sqrt(2.))\n\nprint(grad(newton_sqrt)(2.))\nprint(grad(grad(newton_sqrt))(2.))",
"We can check our answers by differentiating jnp.sqrt, which uses a totally different implementation:",
"print(grad(jnp.sqrt)(2.))\nprint(grad(grad(jnp.sqrt))(2.))",
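As a further sanity check on the derivation above, in this scalar case we can evaluate $A$ and $B$ directly and confirm that $(I - A)^{-1} B$ matches the derivative of $\sqrt{a}$. This is a quick illustrative sketch (the variable names are ours, not part of the fixed_point code):

```python
import jax.numpy as jnp
from jax import grad

# Scalar check of the implicit-function formula for Newton's sqrt iteration:
# f(a, x) = 0.5 * (x + a / x), with fixed point x* = sqrt(a).
f = lambda a, x: 0.5 * (x + a / x)

a0 = 2.
x_star = jnp.sqrt(a0)

A = grad(f, 1)(a0, x_star)  # partial_1 f at the fixed point (here exactly 0)
B = grad(f, 0)(a0, x_star)  # partial_0 f at the fixed point

dx_star = B / (1. - A)      # scalar version of (I - A)^{-1} B
print(dx_star)              # compare with grad(jnp.sqrt)(2.) = 1 / (2 * sqrt(2))
```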
"A limitation to this approach is that the argument f can't close over any values involved in differentiation. That is, you might notice that we kept the parameter a explicit in the argument list of fixed_point. For this use case, consider using the low-level primitive lax.custom_root, which allows for derivatives with respect to closed-over variables with custom root-finding functions.\nBasic usage of jax.custom_jvp and jax.custom_vjp APIs\nUse jax.custom_jvp to define forward-mode (and, indirectly, reverse-mode) rules\nHere's a canonical basic example of using jax.custom_jvp, where the comments use Haskell-like type signatures:",
"from jax import custom_jvp\nimport jax.numpy as jnp\n\n# f :: a -> b\n@custom_jvp\ndef f(x):\n return jnp.sin(x)\n\n# f_jvp :: (a, T a) -> (b, T b)\ndef f_jvp(primals, tangents):\n x, = primals\n t, = tangents\n return f(x), jnp.cos(x) * t\n\nf.defjvp(f_jvp)\n\nfrom jax import jvp\n\nprint(f(3.))\n\ny, y_dot = jvp(f, (3.,), (1.,))\nprint(y)\nprint(y_dot)",
"In words, we start with a primal function f that takes inputs of type a and produces outputs of type b. We associate with it a JVP rule function f_jvp that takes a pair of inputs representing the primal inputs of type a and the corresponding tangent inputs of type T a, and produces a pair of outputs representing the primal outputs of type b and tangent outputs of type T b. The tangent outputs should be a linear function of the tangent inputs.\nYou can also use f.defjvp as a decorator, as in\n```python\n@custom_jvp\ndef f(x):\n ...\n@f.defjvp\ndef f_jvp(primals, tangents):\n ...\n```\nEven though we defined only a JVP rule and no VJP rule, we can use both forward- and reverse-mode differentiation on f. JAX will automatically transpose the linear computation on tangent values from our custom JVP rule, computing the VJP as efficiently as if we had written the rule by hand:",
"from jax import grad\n\nprint(grad(f)(3.))\nprint(grad(grad(f))(3.))",
"For automatic transposition to work, the JVP rule's output tangents must be linear as a function of the input tangents. Otherwise a transposition error is raised.\nMultiple arguments work like this:",
"@custom_jvp\ndef f(x, y):\n return x ** 2 * y\n\n@f.defjvp\ndef f_jvp(primals, tangents):\n x, y = primals\n x_dot, y_dot = tangents\n primal_out = f(x, y)\n tangent_out = 2 * x * y * x_dot + x ** 2 * y_dot\n return primal_out, tangent_out\n\nprint(grad(f)(2., 3.))",
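The linearity requirement above can be seen with a small sketch: a rule whose tangent output is nonlinear in the input tangent still supports forward mode, but reverse mode fails when JAX tries to transpose it. (The exact exception type raised by transposition is an implementation detail, so we catch a generic Exception here.)

```python
import jax.numpy as jnp
from jax import custom_jvp, jvp, grad

@custom_jvp
def g(x):
    return jnp.sin(x)

@g.defjvp
def g_jvp(primals, tangents):
    x, = primals
    t, = tangents
    return g(x), jnp.cos(x) * t ** 2  # nonlinear in t: not a valid JVP

# Forward mode just runs the rule as written:
print(jvp(g, (3.,), (1.,)))

# Reverse mode requires transposing the tangent computation, which fails:
try:
    grad(g)(3.)
except Exception as e:
    print('transposition failed:', type(e).__name__)
```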
"The defjvps convenience wrapper lets us define a JVP for each argument separately, and the results are computed separately then summed:",
"@custom_jvp\ndef f(x):\n return jnp.sin(x)\n\nf.defjvps(lambda t, ans, x: jnp.cos(x) * t)\n\nprint(grad(f)(3.))",
"Here's a defjvps example with multiple arguments:",
"@custom_jvp\ndef f(x, y):\n return x ** 2 * y\n\nf.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,\n lambda y_dot, primal_out, x, y: x ** 2 * y_dot)\n\nprint(grad(f)(2., 3.))\nprint(grad(f, 0)(2., 3.)) # same as above\nprint(grad(f, 1)(2., 3.))",
"As a shorthand, with defjvps you can pass a None value to indicate that the JVP for a particular argument is zero:",
"@custom_jvp\ndef f(x, y):\n return x ** 2 * y\n\nf.defjvps(lambda x_dot, primal_out, x, y: 2 * x * y * x_dot,\n None)\n\nprint(grad(f)(2., 3.))\nprint(grad(f, 0)(2., 3.)) # same as above\nprint(grad(f, 1)(2., 3.))",
"Calling a jax.custom_jvp function with keyword arguments, or writing a jax.custom_jvp function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library inspect.signature mechanism.\nWhen you're not performing differentiation, the function f is called just as if it weren't decorated by jax.custom_jvp:",
"@custom_jvp\ndef f(x):\n print('called f!') # a harmless side-effect\n return jnp.sin(x)\n\n@f.defjvp\ndef f_jvp(primals, tangents):\n print('called f_jvp!') # a harmless side-effect\n x, = primals\n t, = tangents\n return f(x), jnp.cos(x) * t\n\nfrom jax import vmap, jit\n\nprint(f(3.))\n\nprint(vmap(f)(jnp.arange(3.)))\nprint(jit(f)(3.))",
"The custom JVP rule is invoked during differentiation, whether forward or reverse:",
"y, y_dot = jvp(f, (3.,), (1.,))\nprint(y_dot)\n\nprint(grad(f)(3.))",
"Notice that f_jvp calls f to compute the primal outputs. In the context of higher-order differentiation, each application of a differentiation transform will use the custom JVP rule if and only if the rule calls the original f to compute the primal outputs. (This represents a kind of fundamental tradeoff, where we can't make use of intermediate values from the evaluation of f in our rule and also have the rule apply in all orders of higher-order differentiation.)",
"grad(grad(f))(3.)",
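Returning to the keyword-argument support mentioned above, here's a minimal sketch (the function and its names are illustrative): keyword calls and default arguments are resolved to positional form via inspect.signature before the custom rule runs, so the rule always sees plain positional primals and tangents.

```python
import jax.numpy as jnp
from jax import custom_jvp, grad

@custom_jvp
def scale_sin(x, y=2.):
    return jnp.sin(x) * y

@scale_sin.defjvp
def scale_sin_jvp(primals, tangents):
    x, y = primals  # keyword/default arguments arrive here positionally
    x_dot, y_dot = tangents
    return scale_sin(x, y), jnp.cos(x) * x_dot * y + jnp.sin(x) * y_dot

# Keyword call resolves against the signature of scale_sin:
print(scale_sin(2., y=3.))
print(grad(scale_sin)(2., y=3.))  # jnp.cos(2.) * 3.
```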
"You can use Python control flow with jax.custom_jvp:",
"@custom_jvp\ndef f(x):\n if x > 0:\n return jnp.sin(x)\n else:\n return jnp.cos(x)\n\n@f.defjvp\ndef f_jvp(primals, tangents):\n x, = primals\n x_dot, = tangents\n ans = f(x)\n if x > 0:\n return ans, 2 * x_dot\n else:\n return ans, 3 * x_dot\n\nprint(grad(f)(1.))\nprint(grad(f)(-1.))",
"Use jax.custom_vjp to define custom reverse-mode-only rules\nWhile jax.custom_jvp suffices for controlling both forward- and, via JAX's automatic transposition, reverse-mode differentiation behavior, in some cases we may want to directly control a VJP rule, for example in the latter two example problems presented above. We can do that with jax.custom_vjp:",
"from jax import custom_vjp\nimport jax.numpy as jnp\n\n# f :: a -> b\n@custom_vjp\ndef f(x):\n return jnp.sin(x)\n\n# f_fwd :: a -> (b, c)\ndef f_fwd(x):\n return f(x), jnp.cos(x)\n\n# f_bwd :: (c, CT b) -> CT a\ndef f_bwd(cos_x, y_bar):\n return (cos_x * y_bar,)\n\nf.defvjp(f_fwd, f_bwd)\n\nfrom jax import grad\n\nprint(f(3.))\nprint(grad(f)(3.))",
"In words, we again start with a primal function f that takes inputs of type a and produces outputs of type b. We associate with it two functions, f_fwd and f_bwd, which describe how to perform the forward- and backward-passes of reverse-mode autodiff, respectively.\nThe function f_fwd describes the forward pass, not only the primal computation but also what values to save for use on the backward pass. Its input signature is just like that of the primal function f, in that it takes a primal input of type a. But as output it produces a pair, where the first element is the primal output b and the second element is any \"residual\" data of type c to be stored for use by the backward pass. (This second output is analogous to PyTorch's save_for_backward mechanism.)\nThe function f_bwd describes the backward pass. It takes two inputs, where the first is the residual data of type c produced by f_fwd and the second is the output cotangents of type CT b corresponding to the output of the primal function. It produces an output of type CT a representing the cotangents corresponding to the input of the primal function. In particular, the output of f_bwd must be a sequence (e.g. a tuple) of length equal to the number of arguments to the primal function.\nSo multiple arguments work like this:",
"from jax import custom_vjp\n\n@custom_vjp\ndef f(x, y):\n return jnp.sin(x) * y\n\ndef f_fwd(x, y):\n return f(x, y), (jnp.cos(x), jnp.sin(x), y)\n\ndef f_bwd(res, g):\n cos_x, sin_x, y = res\n return (cos_x * g * y, -sin_x * g)\n\nf.defvjp(f_fwd, f_bwd)\n\nprint(grad(f)(2., 3.))",
"Calling a jax.custom_vjp function with keyword arguments, or writing a jax.custom_vjp function definition with default arguments, are both allowed so long as they can be unambiguously mapped to positional arguments based on the function signature retrieved by the standard library inspect.signature mechanism.\nAs with jax.custom_jvp, the custom VJP rule made up of f_fwd and f_bwd is not invoked if differentiation is not applied. If the function is evaluated, or transformed with jit, vmap, or other non-differentiation transformations, then only f is called.",
"@custom_vjp\ndef f(x):\n print(\"called f!\")\n return jnp.sin(x)\n\ndef f_fwd(x):\n print(\"called f_fwd!\")\n return f(x), jnp.cos(x)\n\ndef f_bwd(cos_x, y_bar):\n print(\"called f_bwd!\")\n return (cos_x * y_bar,)\n\nf.defvjp(f_fwd, f_bwd)\n\nprint(f(3.))\n\nprint(grad(f)(3.))\n\nfrom jax import vjp\n\ny, f_vjp = vjp(f, 3.)\nprint(y)\n\nprint(f_vjp(1.))",
"Forward-mode autodiff cannot be used on the jax.custom_vjp function and will raise an error:",
"from jax import jvp\n\ntry:\n jvp(f, (3.,), (1.,))\nexcept TypeError as e:\n print('ERROR! {}'.format(e))",
"If you want to use both forward- and reverse-mode, use jax.custom_jvp instead.\nWe can use jax.custom_vjp together with pdb to insert a debugger trace in the backward pass:",
"import pdb\n\n@custom_vjp\ndef debug(x):\n return x # acts like identity\n\ndef debug_fwd(x):\n return x, x\n\ndef debug_bwd(x, g):\n import pdb; pdb.set_trace()\n return g\n\ndebug.defvjp(debug_fwd, debug_bwd)\n\ndef foo(x):\n y = x ** 2\n y = debug(y) # insert pdb in corresponding backward pass step\n return jnp.sin(y)",
"```python\njax.grad(foo)(3.)\n\n<ipython-input-113-b19a2dc1abf7>(12)debug_bwd()\n-> return g\n(Pdb) p x\nDeviceArray(9., dtype=float32)\n(Pdb) p g\nDeviceArray(-0.91113025, dtype=float32)\n(Pdb) q\n```\n\nMore features and details\nWorking with list / tuple / dict containers (and other pytrees)\nYou should expect standard Python containers like lists, tuples, namedtuples, and dicts to just work, along with nested versions of those. In general, any pytrees are permissible, so long as their structures are consistent according to the type constraints. \nHere's a contrived example with jax.custom_jvp:",
"from collections import namedtuple\nPoint = namedtuple(\"Point\", [\"x\", \"y\"])\n\n@custom_jvp\ndef f(pt):\n x, y = pt.x, pt.y\n return {'a': x ** 2,\n 'b': (jnp.sin(x), jnp.cos(y))}\n\n@f.defjvp\ndef f_jvp(primals, tangents):\n pt, = primals\n pt_dot, = tangents\n ans = f(pt)\n ans_dot = {'a': 2 * pt.x * pt_dot.x,\n 'b': (jnp.cos(pt.x) * pt_dot.x, -jnp.sin(pt.y) * pt_dot.y)}\n return ans, ans_dot\n\ndef fun(pt):\n dct = f(pt)\n return dct['a'] + dct['b'][0]\n\npt = Point(1., 2.)\n\nprint(f(pt))\n\nprint(grad(fun)(pt))",
"And an analogous contrived example with jax.custom_vjp:",
"@custom_vjp\ndef f(pt):\n x, y = pt.x, pt.y\n return {'a': x ** 2,\n 'b': (jnp.sin(x), jnp.cos(y))}\n\ndef f_fwd(pt):\n return f(pt), pt\n\ndef f_bwd(pt, g):\n a_bar, (b0_bar, b1_bar) = g['a'], g['b']\n x_bar = 2 * pt.x * a_bar + jnp.cos(pt.x) * b0_bar\n y_bar = -jnp.sin(pt.y) * b1_bar\n return (Point(x_bar, y_bar),)\n\nf.defvjp(f_fwd, f_bwd)\n\ndef fun(pt):\n dct = f(pt)\n return dct['a'] + dct['b'][0]\n\npt = Point(1., 2.)\n\nprint(f(pt))\n\nprint(grad(fun)(pt))",
"Handling non-differentiable arguments\nSome use cases, like the final example problem, call for non-differentiable arguments like function-valued arguments to be passed to functions with custom differentiation rules, and for those arguments to also be passed to the rules themselves. In the case of fixed_point, the function argument f was such a non-differentiable argument. A similar situation arises with jax.experimental.odeint.\njax.custom_jvp with nondiff_argnums\nUse the optional nondiff_argnums parameter to jax.custom_jvp to indicate arguments like these. Here's an example with jax.custom_jvp:",
"from functools import partial\n\n@partial(custom_jvp, nondiff_argnums=(0,))\ndef app(f, x):\n return f(x)\n\n@app.defjvp\ndef app_jvp(f, primals, tangents):\n x, = primals\n x_dot, = tangents\n return f(x), 2. * x_dot\n\nprint(app(lambda x: x ** 3, 3.))\n\nprint(grad(app, 1)(lambda x: x ** 3, 3.))",
"Notice the gotcha here: no matter where in the argument list these parameters appear, they're placed at the start of the signature of the corresponding JVP rule. Here's another example:",
"@partial(custom_jvp, nondiff_argnums=(0, 2))\ndef app2(f, x, g):\n return f(g(x))\n\n@app2.defjvp\ndef app2_jvp(f, g, primals, tangents):\n x, = primals\n x_dot, = tangents\n return f(g(x)), 3. * x_dot\n\nprint(app2(lambda x: x ** 3, 3., lambda y: 5 * y))\n\nprint(grad(app2, 1)(lambda x: x ** 3, 3., lambda y: 5 * y))",
"jax.custom_vjp with nondiff_argnums\nA similar option exists for jax.custom_vjp, and, similarly, the convention is that the non-differentiable arguments are passed as the first arguments to the _bwd rule, no matter where they appear in the signature of the original function. The signature of the _fwd rule remains unchanged - it is the same as the signature of the primal function. Here's an example:",
"@partial(custom_vjp, nondiff_argnums=(0,))\ndef app(f, x):\n return f(x)\n\ndef app_fwd(f, x):\n return f(x), x\n\ndef app_bwd(f, x, g):\n return (5 * g,)\n\napp.defvjp(app_fwd, app_bwd)\n\nprint(app(lambda x: x ** 2, 4.))\n\nprint(grad(app, 1)(lambda x: x ** 2, 4.))",
"See fixed_point above for another usage example.\nYou don't need to use nondiff_argnums with array-valued arguments, for example ones with integer dtype. Instead, nondiff_argnums should only be used for argument values that don't correspond to JAX types (essentially don't correspond to array types), like Python callables or strings. If JAX detects that an argument indicated by nondiff_argnums contains a JAX Tracer, then an error is raised. The clip_gradient function above is a good example of not using nondiff_argnums for integer-dtype array arguments."
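As a sketch of that recommendation (the function and its names are illustrative, not from the examples above), an integer-dtype flag can be passed as a regular argument, with a None returned in the backward pass to indicate a zero cotangent for it, just as clip_gradient did for its bounds:

```python
import jax.numpy as jnp
from jax import custom_vjp, grad

@custom_vjp
def scale_if(flag, x):
    # flag is integer-dtype; keep it as a regular argument, not nondiff_argnums
    return jnp.where(flag == 1, 2. * x, x)

def scale_if_fwd(flag, x):
    return scale_if(flag, x), flag  # save the flag as a residual

def scale_if_bwd(flag, g):
    # None marks a zero cotangent for the non-differentiable integer flag
    return (None, jnp.where(flag == 1, 2. * g, g))

scale_if.defvjp(scale_if_fwd, scale_if_bwd)

print(grad(scale_if, argnums=1)(jnp.int32(1), 3.))  # 2.0
print(grad(scale_if, argnums=1)(jnp.int32(0), 3.))  # 1.0
```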
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/bcc/cmip6/models/bcc-esm1/ocean.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: BCC\nSource ID: BCC-ESM1\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:39\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bcc', 'bcc-esm1', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momemtum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean*\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean*\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean*\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean*\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean*\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient (schema and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embedded in the ocean model (instead of levitating) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ktaneishi/deepchem
|
examples/tutorials/Uncertainty.ipynb
|
mit
|
[
"Tutorial Part 4: Uncertainty in Deep Learning\nA common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't give enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5.372 for some quantity, should you assume the true value is between 5.371 and 5.373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it.\nDeepChem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it—not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, and predict the output on the test set.",
"import deepchem as dc\nimport numpy as np\nimport matplotlib.pyplot as plot\n\ntasks, datasets, transformers = dc.molnet.load_sampl()\ntrain_dataset, valid_dataset, test_dataset = datasets\n\nmodel = dc.models.MultitaskRegressor(len(tasks), 1024, uncertainty=True)\nmodel.fit(train_dataset, nb_epoch=200)\ny_pred, y_std = model.predict_uncertainty(test_dataset)",
"All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?\nOf course, it isn't really that simple at all. DeepChem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https://arxiv.org/abs/1703.04977)\nTo begin with, what does \"uncertainty\" mean? Intuitively, it is a measure of how much we can trust the predictions. More formally, we expect that the true value of whatever we are trying to predict should usually be within a few standard deviations of the predicted value. But uncertainty comes from many sources, ranging from noisy training data to bad modelling choices, and different sources behave in different ways. It turns out there are two fundamental types of uncertainty we need to take into account.\nAleatoric Uncertainty\nConsider the following graph. It shows the best fit linear regression to a set of ten data points.",
"# Generate some fake data and plot a regression line.\n\nx = np.linspace(0, 5, 10)\ny = 0.15*x + np.random.random(10)\nplot.scatter(x, y)\nfit = np.polyfit(x, y, 1)\nline_x = np.linspace(-1, 6, 2)\nplot.plot(line_x, np.poly1d(fit)(line_x))\nplot.show()",
"The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.\nHow can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time.\nEpistemic Uncertainty\nNow consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials.",
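The "second output" idea described above can be sketched with the standard heteroscedastic Gaussian negative log-likelihood. This is a minimal illustration of the principle, not DeepChem's exact loss; the function name and toy numbers here are invented for the example.

```python
import numpy as np

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Gaussian negative log-likelihood per sample (up to a constant).

    The model emits both a prediction and log(sigma^2); minimizing this
    loss trains the second output to track the squared error of the first.
    """
    return 0.5 * (np.exp(-log_var) * (y_true - y_pred) ** 2 + log_var)

# A well-fit sample is cheapest with a small predicted variance...
good = heteroscedastic_loss(1.0, 1.05, log_var=-4.0)
# ...while a poorly fit sample is cheaper with a large predicted variance.
bad_small_var = heteroscedastic_loss(1.0, 3.0, log_var=-4.0)
bad_large_var = heteroscedastic_loss(1.0, 3.0, log_var=2.0)
print(good, bad_small_var, bad_large_var)
```

Minimizing this loss rewards the model for predicting a small variance where its predictions are accurate and a large variance where they are not, which is how the aleatoric-uncertainty output gets trained alongside the regular one.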
"plot.figure(figsize=(12, 3))\nline_x = np.linspace(0, 5, 50)\nfor i in range(3):\n plot.subplot(1, 3, i+1)\n plot.scatter(x, y)\n fit = np.polyfit(np.concatenate([x, [3]]), np.concatenate([y, [i]]), 10)\n plot.plot(line_x, np.poly1d(fit)(line_x))\nplot.show()",
"Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions.\nThe ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way: by using dropout.\nRecall that when you train a model with dropout, you are effectively training a huge ensemble of different models all at once. Each training sample is evaluated with a different dropout mask, corresponding to a different random subset of the connections in the full model. Usually we only perform dropout during training and use a single averaged mask for prediction. But instead, let's use dropout for prediction too. We can compute the output for lots of different dropout masks, then see how much the predictions vary. This turns out to give a reasonable estimate of the epistemic uncertainty in the outputs.\nUncertain Uncertainty?\nNow we can combine the two types of uncertainty to compute an overall estimate of the error in each output:\n$$\\sigma_\\text{total} = \\sqrt{\\sigma_\\text{aleatoric}^2 + \\sigma_\\text{epistemic}^2}$$\nThis is the value DeepChem reports. But how much can you trust it? Remember how I started this tutorial: deep learning models should not be used as black boxes. We want to know how reliable the outputs are. Adding uncertainty estimates does not completely eliminate the problem; it just adds a layer of indirection. Now we have estimates of how reliable the outputs are, but no guarantees that those estimates are themselves reliable.\nLet's go back to the example we started with. We trained a model on the SAMPL training set, then generated predictions and uncertainties for the test set. Since we know the correct outputs for all the test samples, we can evaluate how well we did. Here is a plot of the absolute error in the predicted output versus the predicted uncertainty.",
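The combination step is just the root-sum-of-squares of the two uncertainties. A rough numerical sketch (the "dropout predictions" here are simulated random numbers standing in for real model outputs, and the aleatoric value is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated predictions of one input under 50 different dropout masks.
dropout_preds = 5.0 + 0.1 * rng.standard_normal(50)

epistemic = dropout_preds.std()   # spread across the dropout masks
aleatoric = 0.25                  # the model's learned noise output (made up)

# Combine the two sources of uncertainty into a single estimate.
total = np.sqrt(aleatoric**2 + epistemic**2)
print(f"epistemic={epistemic:.3f}, aleatoric={aleatoric:.3f}, total={total:.3f}")
```

Note that the total is always at least as large as either component, so ignoring one source of uncertainty always leads to overconfidence.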
"abs_error = np.abs(y_pred.flatten()-test_dataset.y.flatten())\nplot.scatter(y_std.flatten(), abs_error)\nplot.xlabel('Standard Deviation')\nplot.ylabel('Absolute Error')\nplot.show()",
"The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors.\nNow let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect 95% of the values to be within two standard deviations, and 99% to be within three standard deviations. Here is a histogram of errors as measured in standard deviations.",
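The coverage check described here is easy to compute directly. A sketch with synthetic, perfectly calibrated errors shows the fractions we expect (real model errors will deviate from these):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: if errors really are normal with the predicted
# std, about 95% should fall within 2 sigma and 99% within 3 sigma.
std = np.full(10000, 0.5)
errors = np.abs(rng.normal(0.0, std))

within_2 = np.mean(errors < 2 * std)
within_3 = np.mean(errors < 3 * std)
print(f"within 2 sigma: {within_2:.3f}, within 3 sigma: {within_3:.3f}")
```

Applying the same two lines to `abs_error` and `y_std` from the cells above gives the empirical coverage of the model's own uncertainty estimates.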
"plot.hist(abs_error/y_std.flatten(), 20)\nplot.show()",
"Most of the values are in the expected range, but there are a handful of outliers at much larger values. Perhaps this indicates the errors are not normally distributed, but it may also mean a few of the uncertainties are too low. This is an important reminder: the uncertainties are just estimates, not rigorous measurements. Most of them are pretty good, but you should not put too much confidence in any single value.\nCongratulations on finishing the series! Time to join the Community!\nCongratulations on completing this tutorial notebook! This is currently the last tutorial in the DeepChem introductory tutorial series. If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to join the DeepChem Community and get involved:\nStar DeepChem on GitHub\nStarring DeepChem on GitHub helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\nJoin the DeepChem Gitter\nThe DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fja05680/pinkfish
|
examples/300.micro-futures/strategy.ipynb
|
mit
|
[
"Futures Trend Following Portfolio\n1. If the Security closes with the 50/100 ma difference > 0, buy.\n2. If the Security closes with the 50/100 ma difference < 0, sell your long position.\n\n(For a Portfolio of futures.)\n\nNOTE: pinkfish does not yet have full support for futures backtesting, and\nthe futures data from Yahoo Finance isn't very good.",
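The actual entry/exit logic lives in the accompanying `strategy.py` module; as a reference point, the dual moving-average rule described above can be sketched in a few lines of pandas (the function and its defaults here are illustrative, not pinkfish's API):

```python
import pandas as pd

def ma_crossover_signal(close, fast=50, slow=100):
    # Long (1) while the fast SMA is above the slow SMA, flat (0) otherwise.
    sma_fast = close.rolling(fast).mean()
    sma_slow = close.rolling(slow).mean()
    signal = (sma_fast > sma_slow).astype(int)
    # Entries are 0 -> 1 flips (+1), exits are 1 -> 0 flips (-1).
    trades = signal.diff().fillna(0)
    return signal, trades
```

Note that the `options` dict in the cells below uses `sma_timeperiod_fast=10` and `sma_timeperiod_slow=50` rather than the 50/100 pair named in the rule.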
"import datetime\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport pinkfish as pf\nimport strategy\n\n# Format price data.\npd.options.display.float_format = '{:0.2f}'.format\npd.set_option('display.max_rows', None)\n\n%matplotlib inline\n\n# Set size of inline plots\n'''note: rcParams can't be in same cell as import matplotlib\n or %matplotlib inline\n \n %matplotlib notebook: will lead to interactive plots embedded within\n the notebook, you can zoom and resize the figure\n \n %matplotlib inline: only draw static images in the notebook\n'''\nplt.rcParams[\"figure.figsize\"] = (10, 7)",
"MICRO FUTURES",
"# symbol: description\n\n\nmicro_futures = {\n 'MES=F': 'Micro E-mini S&P 500 Index Futures',\n 'MNQ=F': 'Micro E-mini Nasdaq-100 Index Futures',\n 'M2K=F': 'Micro E-mini Russell 2000 Index Futures',\n 'MYM=F': 'Micro E-mini Dow Jones Futures',\n 'MGC=F': 'Micro Gold Futures',\n 'SIL=F': 'Micro Silver Futures',\n 'M6A=F': 'Micro AUD/USD Futures',\n 'MSF=F': 'Micro CHF/USD Futures',\n 'MCD=F': 'Micro CAD/USD Futures',\n 'M6E=F': 'Micro EUR/USD Futures',\n 'M6B=F': 'Micro GBP/USD Futures',\n 'MIR=F': 'Micro INR/USD Futures'\n}\n\nsymbols = list(micro_futures)\n#symbols = ['MES=F']\ncapital = 100_000\nstart = datetime.datetime(1900, 1, 1)\nend = datetime.datetime.now()\n\noptions = {\n 'use_adj' : False,\n 'use_cache' : True,\n 'sell_short' : False,\n 'force_stock_market_calendar' : True,\n 'margin' : 2,\n 'sma_timeperiod_slow': 50,\n 'sma_timeperiod_fast': 10,\n 'use_vola_weight' : True\n}",
"Run Strategy",
"s = strategy.Strategy(symbols, capital, start, end, options=options)\ns.run()",
"View log DataFrames: raw trade log, trade log, and daily balance",
"s.rlog.head()\n\ns.tlog.head()\n\ns.dbal.tail()",
"Generate strategy stats - display all available stats",
"pf.print_full(s.stats)",
"View Performance by Symbol",
"weights = {symbol: 1 / len(symbols) for symbol in symbols}\ntotals = s.portfolio.performance_per_symbol(weights=weights)\ntotals\n\ncorr_df = s.portfolio.correlation_map(s.ts)\ncorr_df",
"Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats",
"benchmark = pf.Benchmark('SPY', s.capital, s.start, s.end, use_adj=True)\nbenchmark.run()",
"Plot Equity Curves: Strategy vs Benchmark",
"pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)",
"Bar Graph: Strategy vs Benchmark",
"df = pf.plot_bar_graph(s.stats, benchmark.stats)\ndf",
"Analysis: Kelly Criterion",
"kelly = pf.kelly_criterion(s.stats, benchmark.stats)\nkelly"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/sandbox-3/atmoschem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: SANDBOX-3\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:27\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmospheric chemistry grid match the atmosphere grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an \"other method\"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric gas phase chemistry\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
benkamphaus/remote-sensing-notebooks
|
new_horizons/NHRalphLEISA_Jupiter.ipynb
|
epl-1.0
|
[
"NH Ralph LEISA Data from Jupiter\nBen Kamphaus, PhD\nThis time we hack around and get lost a little at the beginning, but finally get somewhere - getting spectral information from Jupiter!",
"import pyfits\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\npyfits.info(\"/Users/bkamphaus/data/fits/20070224_003466/lsb_0034663919_0x53c_sci_1.fit\")\n\nleisa_file = \"/Users/bkamphaus/data/fits/20070224_003466/lsb_0034663919_0x53c_sci_1.fit\"\nimg_cube, data_hdr = pyfits.getdata(leisa_file, 0, header=True)\nimg_cube = np.float32(img_cube)\nwl = pyfits.getdata(leisa_file, 1)\ng_o = pyfits.getdata(leisa_file, 4)\n\ndef cal(DN, gain, offset):\n return gain*DN + offset\n\ndef cal_all(img_cube, gain_offset):\n out = np.zeros_like(img_cube, dtype=np.float128)\n for i in range(np.shape(out)[0]):\n out[i] = cal(img_cube[i,:,:], gain_offset[1,:,:], gain_offset[0,:,:])\n return out\n\ncal_cube = cal_all(img_cube, g_o)",
"Hmm, this worked fine for our previous dataset. What could be wrong?",
"np.max(img_cube)\n\n# Noooo! Do not want!\ni_nan = np.where(np.isnan(img_cube))\nimg_cube[i_nan] = 0\ncal_cube = cal_all(img_cube, g_o)\n\nnp.max(img_cube) # ok, this error was due to a really large value that we can't multiply for this dataset type\n(np.max(img_cube), np.max(g_o), np.min(g_o))\n\ni_max = np.where(img_cube == np.max(img_cube))\nimg_cube[i_max] = 0\nnp.max(img_cube)",
"It's really doubtful that these high values are meaningful, but there are probably several bright pixels due to particles hitting the sensor or glitched pixels. I can try applying the bad data mask.",
"qf = pyfits.getdata(leisa_file, 6)\n(np.shape(qf))\n(np.max(img_cube[0, qf]), np.max(img_cube[0,:,:]))",
"Well, high values are outside of the bad pixels.",
"%matplotlib inline\nplt.hist(np.ravel(img_cube[0, qf]), bins=np.arange(1e13,2e16,1e14))\nplt.show()\n\nplt.hist(np.ravel(img_cube[0, qf]), bins=np.arange(1e13,2e16,1e13))\nplt.show()\n\n# Yeah, let's just play fast and loose right now and kill those high values\ni_bright = np.where(img_cube[0,:,:] >= 2e16)\nmap(len, i_bright)\n\nimg_cube = np.float64(img_cube)\nimg_cube_small = img_cube/1e10\n\nplt.hist(np.ravel(img_cube_small))\n\ni_nan = np.where(np.isnan(img_cube_small))\nimg_cube_small[i_nan] = 0\nplt.hist(np.ravel(img_cube_small[0,:,:]), np.arange(0.0,1e5,1e3))\nplt.show()\n\nplt.imshow(img_cube_small[0,:,:], clim=(0.0,55000.), cmap='bone')",
"OK, this finally looks interesting. We're out of real units with the small image, so let's see if we can apply gains and offsets to the large image using quad-precision floating point.",
"img128 = np.float128(img_cube)\ni_nan = np.where(np.isnan(img128))\nimg128[i_nan] = 0.\n\ngo_i_nan = np.where(np.isnan(g_o))\ng_o[go_i_nan] = 0.\ng_o = np.float128(g_o) \n\nnp.max(img128)\n\nnp.shape(g_o)\n\n(np.max(g_o[0,:,:]), np.min(g_o[0,:,:]))\n\n(np.max(g_o[1,:,:]), np.min(g_o[1,:,:]))",
"Well, I thought I had the calibration correct in the previous notebook, but I suppose not. An across-the-board gain of 0.0 makes no sense -- an across-the-board offset of 0.0 is reasonable, though, if the values are already in some kind of meaningful units. That could be the case here, since the values are very high -- much higher than you would expect if an offset were intended to be applied later as part of calibration.",
"# since we don't have to add an offset, we can go back to our smaller img_cube_small vals\n(np.shape(img_cube_small), np.max(img_cube_small), np.min(img_cube_small))\n\ni_inf = np.where(np.isinf(img_cube_small))\nimg_cube_small[i_inf] = 0.\n\ndef apply_gains(cube, gains):\n out = np.zeros_like(cube, dtype=np.float64)\n for i in range(np.shape(out)[0]):\n out[i,:,:] = out[i,:,:] * gains\n return out\n\nimg_cube_small_cal = apply_gains(img_cube_small, g_o[0,:,:])\n\nplt.hist(np.ravel(img_cube_small_cal[10,:,:]), bins=np.arange(0., 0.1, 0.01))\nplt.show()\n\n(np.max(img_cube_small_cal), np.min(img_cube_small_cal))",
"At this point, I looked a few cells above where I defined apply_gains and realized I assigned the result of multiplying out and gains, not cube and gains. Oops!",
"def apply_gains(cube, gains):\n out = np.zeros_like(cube, dtype=np.float64)\n for i in range(np.shape(out)[0]):\n out[i,:,:] = cube[i,:,:] * gains\n return out\n\nimg_cube_small_cal = apply_gains(img_cube_small, g_o[0,:,:])\n\nplt.hist(np.ravel(img_cube_small_cal[:,:,100]), bins=np.arange(0, 0.5e5, 1.5e3))\nplt.show()",
"Whew, finally a reasonable looking histogram!",
"plt.imshow(img_cube_small_cal[:,:,100], clim=(0.,3.5e5), cmap='hot')\n\nplt.imshow(img_cube_small_cal[:,100,:], clim=(500,1e5), cmap='bone')\n\nplt.figure(figsize=(6,6))\nplt.imshow(img_cube_small_cal[:,50,:], clim=(500,1e4), cmap='bone')",
"That's right -- this is BIL, so the spectral information is in the second dimension (dimension indexed by 1).\nThese images won't be pretty -- that's what LORRI and MVIC are for. What we want, in this case, are image reference points to let us know where we can pull meaningful spectral information from the planetary body.",
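The BIL layout described above (bands interleaved by line, so the spectral axis is dimension 1) can be sanity-checked with a toy NumPy cube; the array shapes here are illustrative, not the actual LEISA dimensions:

```python
import numpy as np

# Toy BIL cube: (lines, bands, samples) -- the spectral axis is dimension 1
bil = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# The spectrum at line 0, sample 2 runs along axis 1
spectrum = bil[0, :, 2]

# Transposing to BIP (lines, samples, bands) puts bands last, which is
# the layout imshow expects for an RGB-style display
bip = np.transpose(bil, (0, 2, 1))
assert np.array_equal(bip[0, 2, :], spectrum)
```

The same `np.transpose(cube, (0, 2, 1))` reordering is what the later display code relies on.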
"plt.plot(wl[0,:,200], img_cube_small_cal[375,:,200])\nplt.show()",
"These wavelengths look correct, but this is the same issue we ran into previously where the wavelength sort is off.",
"sort_i = np.argsort(wl[0,:,200])\nplt.plot(wl[0,sort_i,200], img_cube_small_cal[375,sort_i,200])\nplt.show()\n\nsort_i = np.argsort(wl[0,:,200])\nplt.plot(wl[0,sort_i,200], img_cube_small_cal[375,sort_i,200], color='blue')\nplt.plot(wl[0, sort_i,220], img_cube_small_cal[550,sort_i,220], color='red')\nplt.plot(wl[0, sort_i,220], img_cube_small_cal[300,sort_i,220], color='teal')\nplt.ylim(0,3e5)\nplt.show()",
"We can now spot absorption, reflectance, and (maybe?) emissivity features! We definitely have some noisy and incorrect looking spectral information, though. One thing we forgot to do was filter down the data to only capture known good pixels. It's possible we may be able to kill some of the bad bands and only see valuable spectral information.",
"def apply_mask(img, qf):\n img_copy = np.copy(img)\n mult = np.int16(np.logical_not(qf))\n for i in range(np.shape(img_copy)[0]):\n img_copy[i,:,:] *= mult\n return img_copy\n\nspec_cube = apply_mask(img_cube_small_cal, qf)\n\nplt.plot(wl[0, sort_i, 200], spec_cube[375, sort_i, 200], color='blue')\nplt.plot(wl[0, sort_i, 220], spec_cube[550, sort_i, 220], color='red')\nplt.plot(wl[0, sort_i, 220], spec_cube[300, sort_i, 220], color='teal')\nplt.show()",
"Much improved! 0's aren't a perfect \"no data\" value, but really everything is kind of crap: 0, arbitrary numbers like -9999.9, and NaN all have their problems. Here we can at least filter out 0's on the fly when necessary and be pretty sure they're not a real, empirical 0.\nLook at the plot above -- notice the \"peaks\" in two of the spectra around 1.3, 1.6, and 1.9-ish?",
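One way to "filter out 0's on the fly" without committing to a sentinel value is NumPy's masked arrays; this is a minimal sketch with made-up values, not the LEISA data:

```python
import numpy as np

# Hypothetical spectrum where 0.0 marks masked-out bad pixels
spectrum = np.array([0.0, 1.2e4, 3.4e4, 0.0, 2.1e4])

# Mask the zeros on the fly; the underlying data is untouched
masked = np.ma.masked_equal(spectrum, 0.0)

# Statistics now ignore the masked entries (3 valid values remain)
mean_valid = masked.mean()
```

Most NumPy reductions (`mean`, `max`, `std`) respect the mask, so there's no need to bake a no-data value into the cube itself.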
"def get_wl_match(wl_array, target_wl):\n \"\"\"\n Pass a spectrum and a wl, returns the index of the closest wavelength to the target\n wavelength value.\n \"\"\"\n i_match = np.argmin(np.abs(wl_array - target_wl))\n return i_match\n\n# We're just eyeballing it here, but we'd really want to look at, e.g., local min/max\n# from numerical derivative, or just min/max across a range.\n(get_wl_match(np.ravel(wl[0, :, 220]), 1.3),\n get_wl_match(np.ravel(wl[0, :, 220]), 1.6),\n get_wl_match(np.ravel(wl[0, :, 220]), 1.9))\n\n# need to get color interleave in right dims/shape for imshow\nbip = np.transpose(spec_cube, (0, 2, 1))\nbip_disp = bip[:,:,[181,124,76]]\n\nplt.imshow(bip_disp)\nplt.show()\n\nplt.imshow(bip_disp[:,:,0], clim=(0, 2e5), cmap='bone')\nplt.show()\n\nplt.imshow(bip_disp[:,:,1], clim=(0, 2.5e5), cmap='bone')\nplt.show()\n\nplt.imshow(bip_disp[:,:,2], clim=(0, 1e5), cmap='bone')\nplt.show()\n\n# Yikes, no good. Let's figure out image stretch.\nfrom scipy.misc import bytescale\n\nplt.hist(np.ravel(bip_disp[:,:,0]), bins=np.arange(0, 2e5, 1e4), color='red', alpha=0.5)\nplt.hist(np.ravel(bip_disp[:,:,1]), bins=np.arange(0, 2e5, 1e4), color='blue', alpha=0.5)\nplt.hist(np.ravel(bip_disp[:,:,2]), bins=np.arange(0, 2e5, 1e4), color='green', alpha=0.5)\nplt.show()\n\nplt.imshow(np.hstack((bip_disp[:,:,0], bip_disp[:,:,1], bip_disp[:,:,2])))\nplt.show()\n\nplt.imshow(np.hstack((spec_cube[:,230,:],spec_cube[:,60,:])),\n vmin=0, vmax=3e4, cmap='bone')",
"I can tell from the images above that the spectral dimension of the data is not well aligned with Jupiter's position. That is, a spectral slice of the data samples different materials as Jupiter drifts vertically across the frame while we move through the data's spectral axis.",
"plt.plot(wl[0, sort_i, 220], spec_cube[200, sort_i, 220], color='teal')\nplt.plot(wl[0, sort_i, 220], spec_cube[450, sort_i, 220], color='red')\nplt.xlabel(\"Wavelength ($\\mu$m)\")\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/structured/labs/4a_sample_babyweight.ipynb
|
apache-2.0
|
[
"LAB 4a: Creating a Sampled Dataset.\nLearning Objectives\n\nSet up the environment.\nSample the natality dataset to create train/eval/test sets.\nPreprocess the data in Pandas dataframe.\n\nIntroduction\nIn this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample.\nWe will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nSet up environment variables and load necessary libraries\nCheck that the Google BigQuery library is installed and if not, install it.",
"%%bash\npip freeze | grep google-cloud-bigquery==1.6.1 || \\\npip install google-cloud-bigquery==1.6.1",
"Import necessary libraries.",
"from google.cloud import bigquery\nimport pandas as pd",
"Lab Task #1: Set environment variables.\nSet environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.",
"%%bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT\n\n# TODO: Change environment variables\nPROJECT = \"cloud-training-demos\" # Replace with your PROJECT",
"Create ML datasets by sampling using BigQuery\nWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.",
"bq = bigquery.Client(project = PROJECT)",
"We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.",
"modulo_divisor = 100\ntrain_percent = 80.0\neval_percent = 10.0\n\ntrain_buckets = int(modulo_divisor * train_percent / 100.0)\neval_buckets = int(modulo_divisor * eval_percent / 100.0)",
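The bucketing arithmetic above can be sketched outside BigQuery. `FARM_FINGERPRINT` isn't available in plain Python, so this sketch substitutes MD5 as the deterministic stand-in hash; the splitting logic (hash modulo into buckets, then bucket ranges per split) is the same:

```python
import hashlib

modulo_divisor = 100
train_buckets = 80  # 80% train
eval_buckets = 10   # 10% eval; the remainder is test

def bucket_of(key, modulo_divisor=100):
    """Deterministically map a record key to a hash bucket.

    MD5 is used here only as a stand-in for FARM_FINGERPRINT;
    any stable hash works for repeatable splitting."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % modulo_divisor

def split_of(key):
    """Assign a record key to train/eval/test by bucket range."""
    b = bucket_of(key)
    if b < train_buckets:
        return "train"
    elif b < train_buckets + eval_buckets:
        return "eval"
    return "test"

# The same key always lands in the same split, so re-running the
# pipeline never moves a record between train and eval
assert split_of("2005-01-CA") == split_of("2005-01-CA")
```

The key property is repeatability: the split depends only on the record's hashed columns, never on row order or random state.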
"We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.",
"def display_dataframe_head_from_query(query, count=10):\n \"\"\"Displays count rows from dataframe head from query.\n \n Args:\n query: str, query to be run on BigQuery, results stored in dataframe.\n count: int, number of results from head of dataframe to display.\n Returns:\n Dataframe head with count number of results.\n \"\"\"\n df = bq.query(\n query + \" LIMIT {limit}\".format(\n limit=count)).to_dataframe()\n\n return df.head(count)",
"For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try fewer or more columns in the hash and see how it changes your results.",
"# Get label, features, and columns to hash and split into buckets\nhash_cols_fixed_query = \"\"\"\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n year,\n month,\n CASE\n WHEN day IS NULL THEN\n CASE\n WHEN wday IS NULL THEN 0\n ELSE wday\n END\n ELSE day\n END AS date,\n IFNULL(state, \"Unknown\") AS state,\n IFNULL(mother_birth_state, \"Unknown\") AS mother_birth_state\nFROM\n publicdata.samples.natality\nWHERE\n year > 2000\n AND weight_pounds > 0\n AND mother_age > 0\n AND plurality > 0\n AND gestation_weeks > 0\n\"\"\"\n\ndisplay_dataframe_head_from_query(hash_cols_fixed_query)",
"Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise, the CASE WHEN would become COALESCE(day, wday, 0) AS date. You can read more about it here.\nThe next query will combine our hash columns and will leave us with just our label, features, and our hash values.",
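A plain-Python analogue of COALESCE makes the equivalence with the nested CASE WHEN easy to check (day takes precedence over wday, with 0 as the final fallback); this helper is illustrative only, not part of the lab code:

```python
def coalesce(*values):
    """Return the first non-None argument, mirroring SQL COALESCE."""
    for v in values:
        if v is not None:
            return v
    return None

# Equivalent to the nested CASE WHEN in the query above:
# day if present, else wday, else 0
assert coalesce(31, None, 0) == 31    # day present
assert coalesce(None, 3, 0) == 3      # day NULL, wday present
assert coalesce(None, None, 0) == 0   # both NULL
```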
"data_query = \"\"\"\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n FARM_FINGERPRINT(\n CONCAT(\n CAST(year AS STRING),\n CAST(month AS STRING),\n CAST(date AS STRING),\n CAST(state AS STRING),\n CAST(mother_birth_state AS STRING)\n )\n ) AS hash_values\nFROM\n ({CTE_hash_cols_fixed})\n\"\"\".format(CTE_hash_cols_fixed=hash_cols_fixed_query)\n\ndisplay_dataframe_head_from_query(data_query)",
"The next query is going to find the counts of each of the 657,484 unique hash_values. This will be our first step toward making actual hash buckets for our split via the GROUP BY.",
"# Get the counts of each of the unique hashs of our splitting column\nfirst_bucketing_query = \"\"\"\nSELECT\n hash_values,\n COUNT(*) AS num_records\nFROM\n ({CTE_data})\nGROUP BY\n hash_values\n\"\"\".format(CTE_data=data_query)\n\ndisplay_dataframe_head_from_query(first_bucketing_query)",
"The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.",
"# Get the number of records in each of the hash buckets\nsecond_bucketing_query = \"\"\"\nSELECT\n ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,\n SUM(num_records) AS num_records\nFROM\n ({CTE_first_bucketing})\nGROUP BY\n ABS(MOD(hash_values, {modulo_divisor}))\n\"\"\".format(\n CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)\n\ndisplay_dataframe_head_from_query(second_bucketing_query)",
"The number of records is hard for us to easily understand the split, so we will normalize the count into percentage of the data in each of the hash buckets in the next query.",
"# Calculate the overall percentages\npercentages_query = \"\"\"\nSELECT\n bucket_index,\n num_records,\n CAST(num_records AS FLOAT64) / (\n SELECT\n SUM(num_records)\n FROM\n ({CTE_second_bucketing})) AS percent_records\nFROM\n ({CTE_second_bucketing})\n\"\"\".format(CTE_second_bucketing=second_bucketing_query)\n\ndisplay_dataframe_head_from_query(percentages_query)",
"We'll now select the range of buckets to be used in training.",
"# Choose hash buckets for training and pull in their statistics\ntrain_query = \"\"\"\nSELECT\n *,\n \"train\" AS dataset_name\nFROM\n ({CTE_percentages})\nWHERE\n bucket_index >= 0\n AND bucket_index < {train_buckets}\n\"\"\".format(\n CTE_percentages=percentages_query,\n train_buckets=train_buckets)\n\ndisplay_dataframe_head_from_query(train_query)",
"We'll do the same by selecting the range of buckets to be used for evaluation.",
"# Choose hash buckets for validation and pull in their statistics\neval_query = \"\"\"\nSELECT\n *,\n \"eval\" AS dataset_name\nFROM\n ({CTE_percentages})\nWHERE\n bucket_index >= {train_buckets}\n AND bucket_index < {cum_eval_buckets}\n\"\"\".format(\n CTE_percentages=percentages_query,\n train_buckets=train_buckets,\n cum_eval_buckets=train_buckets + eval_buckets)\n\ndisplay_dataframe_head_from_query(eval_query)",
"Lastly, we'll select the hash buckets to be used for the test split.",
"# Choose hash buckets for testing and pull in their statistics\ntest_query = \"\"\"\nSELECT\n *,\n \"test\" AS dataset_name\nFROM\n ({CTE_percentages})\nWHERE\n bucket_index >= {cum_eval_buckets}\n AND bucket_index < {modulo_divisor}\n\"\"\".format(\n CTE_percentages=percentages_query,\n cum_eval_buckets=train_buckets + eval_buckets,\n modulo_divisor=modulo_divisor)\n\ndisplay_dataframe_head_from_query(test_query)",
"In the query below, we'll UNION ALL all of the datasets together so that all three sets of hash buckets will be within one table. We added dataset_id so that we can sort on it in the next query.",
"# Union the training, validation, and testing dataset statistics\nunion_query = \"\"\"\nSELECT\n 0 AS dataset_id,\n *\nFROM\n ({CTE_train})\nUNION ALL\nSELECT\n 1 AS dataset_id,\n *\nFROM\n ({CTE_eval})\nUNION ALL\nSELECT\n 2 AS dataset_id,\n *\nFROM\n ({CTE_test})\n\"\"\".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)\n\ndisplay_dataframe_head_from_query(union_query)",
"Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.",
"# Show final splitting and associated statistics\nsplit_query = \"\"\"\nSELECT\n dataset_id,\n dataset_name,\n SUM(num_records) AS num_records,\n SUM(percent_records) AS percent_records\nFROM\n ({CTE_union})\nGROUP BY\n dataset_id,\n dataset_name\nORDER BY\n dataset_id\n\"\"\".format(CTE_union=union_query)\n\ndisplay_dataframe_head_from_query(split_query)",
"Lab Task #1: Sample BigQuery dataset.\nSample the BigQuery result set (above) so that you have approximately 8,000 training examples and 1,000 evaluation examples.\nThe training and evaluation datasets have to be well-distributed (not all the babies are born in Jan 2005, for example)\nand should not overlap (no baby is part of both training and evaluation datasets).\nNow that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data such that the train/eval/test sets do not overlap, taking a subsample of our global splits.",
"# every_n allows us to subsample from each of the hash values\n# This helps us get approximately the record counts we want\nevery_n = # TODO: Experiment with values to get close to target counts\n\n# TODO: Replace FUNC with correct function to split with\n# TODO: Replace COLUMN with correct column to split on\nsplitting_string = \"ABS(FUNC(COLUMN, {0} * {1}))\".format(every_n, modulo_divisor)\n\ndef create_data_split_sample_df(query_string, splitting_string, lo, up):\n \"\"\"Creates a dataframe with a sample of a data split.\n\n Args:\n query_string: str, query to run to generate splits.\n splitting_string: str, modulo string to split by.\n lo: float, lower bound for bucket filtering for split.\n up: float, upper bound for bucket filtering for split.\n Returns:\n Dataframe containing data split sample.\n \"\"\"\n query = \"SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}\".format(\n query_string, splitting_string, int(lo), int(up))\n\n df = bq.query(query).to_dataframe()\n\n return df\n\ntrain_df = create_data_split_sample_df(\n data_query, splitting_string,\n lo=0, up=train_percent)\n\neval_df = create_data_split_sample_df(\n data_query, splitting_string,\n lo=train_percent, up=train_percent + eval_percent)\n\ntest_df = create_data_split_sample_df(\n data_query, splitting_string,\n lo=train_percent + eval_percent, up=modulo_divisor)\n\nprint(\"There are {} examples in the train dataset.\".format(len(train_df)))\nprint(\"There are {} examples in the validation dataset.\".format(len(eval_df)))\nprint(\"There are {} examples in the test dataset.\".format(len(test_df)))",
"Preprocess data using Pandas\nWe'll perform a few preprocessing steps to the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than child we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. \nLet's start by examining the training dataset as is.",
"train_df.head()",
"Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)",
"train_df.describe()",
"It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.\nLab Task #2: Pandas preprocessing.\nUse Pandas to:\n* Clean up the data to remove rows that are missing any of the fields.\n* Simulate the lack of ultrasound.\n* Change the plurality column to be a string.\nHint (highlight to see): <p>\nFiltering:\n<pre style=\"color:white\">\ndf = df[df.weight_pounds > 0]\n</pre>\nModify plurality to be a string:\n<pre style=\"color:white\">\ntwins_etc = dict(zip([1,2,3,4,5],\n [\"Single(1)\", \"Twins(2)\", \"Triplets(3)\", \"Quadruplets(4)\", \"Quintuplets(5)\"]))\ndf[\"plurality\"].replace(twins_etc, inplace=True)\n</pre>\nLack of ultrasound:\n<pre style=\"color:white\">\nno_ultrasound = df.copy(deep=True)\nno_ultrasound[\"is_male\"] = \"Unknown\"\n</pre>\n</p>",
"def preprocess(df):\n \"\"\" Preprocess pandas dataframe for augmented babyweight data.\n \n Args:\n df: Dataframe containing raw babyweight data.\n Returns:\n Pandas dataframe containing preprocessed raw babyweight data as well\n as simulated no ultrasound data masking some of the original data.\n \"\"\"\n # Clean up raw data\n # TODO: Filter out what we don\"t want to use for training\n\n\n # TODO: Modify plurality field to be a string\n\n\n # TODO: Clone data and mask certain columns to simulate lack of ultrasound\n\n # TODO: Modify is_male\n \n # TODO: Modify plurality\n\n # Concatenate both datasets together and shuffle\n return pd.concat(\n [df, no_ultrasound]).sample(frac=1).reset_index(drop=True)",
"Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:",
"train_df = preprocess(train_df)\neval_df = preprocess(eval_df)\ntest_df = preprocess(test_df)\n\ntrain_df.head()\n\ntrain_df.tail()",
"Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up.",
"train_df.describe()",
"Write to .csv files\nIn the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.",
"# Define columns\ncolumns = [\"weight_pounds\",\n \"is_male\",\n \"mother_age\",\n \"plurality\",\n \"gestation_weeks\"]\n\n# Write out CSV files\ntrain_df.to_csv(\n path_or_buf=\"train.csv\", columns=columns, header=False, index=False)\neval_df.to_csv(\n path_or_buf=\"eval.csv\", columns=columns, header=False, index=False)\ntest_df.to_csv(\n path_or_buf=\"test.csv\", columns=columns, header=False, index=False)\n\n%%bash\nwc -l *.csv\n\n%%bash\nhead *.csv\n\n%%bash\ntail *.csv",
"Lab Summary:\nIn this lab, we set up the environment, sampled the natality dataset to create train/eval/test splits, and preprocessed the data in a Pandas dataframe.\nCopyright 2022 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mavillan/SciProg
|
04_jit/04_jit.ipynb
|
gpl-3.0
|
[
"<h1 align=\"center\">Scientific Programming in Python</h1>\n<h2 align=\"center\">Topic 4: Just in Time Compilation: Numba and NumExpr </h2>\n\nNotebook created by Martín Villanueva - martin.villanueva@usm.cl - DI UTFSM - April 2017.",
"%matplotlib inline\n%load_ext memory_profiler\n\nimport numpy as np\nimport numexpr as ne\nimport numba\nimport math\nimport random\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport sys",
"Table of Contents\n\n1.- Just in Time Compilation\n2.- Numba\n3.- Applications\n4.- NumExp\n\n<div id='jit' />\n1.- Just in Time Compilation\nA JIT compiler runs after the program has started and compiles the code (usually bytecode) on the fly (or just-in-time, as it's called) into a form that's usually faster, typically the host CPU's native instruction set. A JIT has access to dynamic runtime information whereas a standard compiler doesn't and can make better optimizations like inlining functions that are used frequently.\nThis is in contrast to a traditional compiler (AOT, Ahead Of Time compilation) that compiles all the code to machine language before the program is first run.\n<div id='numba' />\n2.- Numba\n\nNumba takes pure Python code and translates it automatically (just-in-time) into optimized machine code, thanks to the LLVM (Low Level Virtual Machine) compiler architecture.\nWe can write a non-vectorized function in pure Python, using for loops, and have this function vectorized automatically by using a single decorator.\nPerformance speedups when compared to pure Python code can reach several orders of magnitude and may even outmatch manually-vectorized NumPy code.\n\nWhy NumPy is not sufficient?\n\nSometimes it is difficult to visualize a vectorized implementation of an algorithm (difficult to understand also),\nAnd sometimes it is just imposible to implement a vectorized solution (Typical case: When we need the result from from one iteration to perform the next).\n\n(Some kind of) Rule of thumb: When a loop in an algorithm needs the result from one iteration to perform the next iteration, then it is not possible to implement it in a vectorized way.\nLets introduce its usage with a naive example:\nArray sum\nThe function below is a naive sum function that sums all the elements of a given array.",
"def sum_array(inp):\n I,J = inp.shape\n mysum = 0\n for i in range(I):\n for j in range(J):\n mysum += inp[i, j]\n return mysum\n\narr = np.random.random((500,500))\n\nsum_array(arr)\n\nnaive = %timeit -o sum_array(arr)\n\n#lazzy compilation\n@numba.jit\ndef sum_array_numba(inp):\n I,J = inp.shape\n mysum = 0\n for i in range(I):\n for j in range(J):\n mysum += inp[i, j]\n return mysum\n\nsum_array_numba(arr)\n\njitted = %timeit -o sum_array_numba(arr)\n\nprint(\"Improvement: {0} times faster\".format(naive.best/jitted.best))\n\n%timeit np.sum(arr)",
"Some important notes: \n* The first time we invoke a JITted function, it is translated to native machine code.\n* The very first time you run a numba compiled function, there will be a little bit of overhead for the compilation step to take place.\n* As an optimizing compiler, Numba needs to decide on the type of each variable to generate efficient machine code.\n* When no argument are passed to the numba.jit decorator, this will try to automatically detect the types of input, output and intermediate variables.\n* Additionally we can explicitily pass the signature of the function to the decorator, to make the work easier for Numba :).",
"#single signature\n@numba.jit('float64[:] (float64[:], float64[:])')\ndef sum1(a,b):\n return a+b\n\na = np.arange(10, dtype=np.float64)\nb = np.arange(10, dtype=np.float64)\nprint(sum1(a,b))\n\n#multiple signatures (polymorphism)\nsignatures = ['int32[:] (int32[:], int32[:])', 'int64[:] (int64[:], int64[:])', \\\n 'float32[:] (float32[:], float32[:])', 'float64[:] (float64[:], float64[:])']\n@numba.jit(signatures)\ndef sum2(a,b):\n return a+b\n\na = np.arange(10, dtype=np.int64)\nb = np.arange(10, dtype=np.int64)\n#print(sum1(a,b))\nprint(sum2(a,b))",
"For a full reference of the signature types supported by Numba see the documentation.\nNow that we've run sum1 and sum2 once, they are now compiled and we can check out what's happened behind the scenes. Use the inspect_types method to see how Numba translated the functions.",
"sum1.inspect_types()",
"nopython mode\nNumba can compile a Python function in two modes:\n1. python mode. In Python mode, the compiled code relies on the CPython interpreter. (More flexible, but slow).\n2. nopython mode. The code is compiled to standalone 100% machine code that doesn't rely on CPython, i.e, when we call the function, it doesn't pass through the CPython interpreter in any time (Limited, but fast).\nCitting the official documentation:\nNumba is aware of NumPy arrays as typed memory regions and so can speedup code using NumPy arrays. Other, less well-typed code will be translated to Python C-API calls effectively removing the \"interpreter\" but not removing the dynamic indirection.\n\nRule of thumb: In order to successfully use the nopython mode, we need to ensure that no native and unsupported Python features are being used inside the fuction to be jitted.",
"@numba.jit('float64[:] (float64[:,:], float64[:])', nopython=True)\ndef dot_numba1(A,b):\n    m,n = A.shape\n    c = np.empty(m)\n    for i in range(m):\n        c[i] = np.dot(A[i],b)\n    return c\n\nA = np.random.random((1000,1000))\nb = np.random.random(1000)\n\n%timeit dot_numba1(A,b)\n\n@numba.jit('float64[:] (float64[:,:], float64[:])', nopython=True)\ndef dot_numba2(A,b):\n    m,n = A.shape\n    c = []\n    for i in range(m):\n        c.append( np.dot(A[i],b) )\n    return np.array(c)\n\n# Not so long ago, this wouldn't have worked!\n\n%timeit dot_numba2(A,b)",
"Lets create a silly example where Numba fails at nopython mode. Prepare for a long error...",
"@numba.jit('float64[:] (float64[:,:], float64[:])', nopython=True)\ndef dot_numba2(A,b):\n m,n = A.shape\n c = dict()\n for i in range(m):\n c[i] = np.dot(A[i],b) \n return np.array(c.values)",
"for a full list of the supported native Python features see here. \nNumba and NumPy\n\nOne objective of Numba is having a seamless integration with NumPy.\nNumba excels at generating code that executes on top of NumPy arrays.\nNumba understands calls to (some, almost all) NumPy features, and is able to generate equivalent native code for many of them.\n\nTo see all the feature of NumPy actually supported by Numba, please see the documentation.",
"#row mean example\n@numba.jit('float64[:] (float64[:,:])', nopython=True)\ndef row_mean(A):\n m,n = A.shape\n mean = np.empty(m)\n for i in range(m):\n mean[i] = np.sum(A[i])/n\n return mean\n\nA = np.random.random((10,10))\nprint( row_mean(A) )",
"Ahead of time Compilation\nNumba also provides a facility for Ahead-of-Time compilation (AOT), which has the following beneficts:\n1. AOT compilation produces a compiled extension module which does not depend on Numba.\n2. There is no compilation overhead at runtime.\nBut it is much more restrictive than JIT functionality. For more info see the documentation.\n<div id='app' />\n3.-Applications\nRandom Walks\nWe will simulate random walk with jums: \n1. A particle is on the real line, starting at 0. \n2. At every time step, the particle makes a step to the right or to the left.\n3. If the particle crosses a threshold, it is reset at its initial position.\nThis type of stochastic model is notably used in neuroscience.\nWe first create a function that returns a -1 or +1 value:",
"def step():\n if random.random()>.5: return 1.\n else: return -1.",
"and create the simulation in pure Python, where the function walk() takes a number of steps as input:",
"def walk(n):\n x = np.zeros(n)\n dx = 1./n\n for i in range(n-1):\n x_new = x[i] + dx * step()\n if abs(x_new) > 5e-3:\n x[i+1] = 0.\n else:\n x[i+1] = x_new\n return x",
"Now we create a random walk, plot it and %timeit its execution:",
"n = 100000\nx = walk(n)\n\nplt.figure(figsize=(8,8))\nplt.plot(x)\nplt.show()\n\npython_t = %timeit -o walk(n)",
"Now, let's JIT-compile this function with Numba:",
"@numba.jit(nopython=True)\ndef step_numba():\n if random.random()>.5: return 1.\n else: return -1.\n\n@numba.jit(nopython=True)\ndef walk_numba(n):\n x = np.zeros(n)\n dx = 1./n\n for i in range(n-1):\n x_new = x[i] + dx * step_numba()\n if abs(x_new) > 5e-3:\n x[i+1] = 0.\n else:\n x[i+1] = x_new\n return x\n\nnumba_t = %timeit -o walk_numba(n)\n\nprint(\"Improvement: {0} times faster\".format(python_t.best/numba_t.best))",
"Mandelbrot fractal\nNow we will create a Mandelbrot fractal (which is a task that cannot be vectorized) using native Python and Numba... \nIt basically consist in the next iteration, with starting point $z_0 = 0$:\n$$ z_{i+1} = z_{i}^2 + c $$\nfor which the values of the sequence remains bounded.",
"size = 200\niterations = 100\n\ndef mandelbrot_python(m, size, iterations):\n for i in range(size):\n for j in range(size):\n c = -2 + 3./size*j + 1j*(1.5-3./size*i)\n z= 0\n for n in range(iterations):\n if np.abs(z) <= 10:\n z = z*z + c\n m[i, j] = n\n else:\n break\n\nm = np.zeros((size,size))\nmandelbrot_python(m, size, iterations)\n\nplt.figure(figsize=(7,7))\nplt.imshow(np.log(m), cmap=plt.cm.hot)\nplt.axis('off')",
"Now we evaluate the time taken by this function:",
"%%timeit m = np.zeros((size,size))\nmandelbrot_python(m,size,iterations)",
"Next, we add the numba.jit decorator and let Numba infer the types of all the variables (Lazzy Compilation):",
"@numba.jit\ndef mandelbrot_numba(m, size, iterations):\n for i in range(size):\n for j in range(size):\n c = -2 + 3./size*j + 1j*(1.5-3./size*i)\n z= 0\n for n in range(iterations):\n if np.abs(z) <= 10:\n z = z*z + c\n m[i, j] = n\n else:\n break\n\n%%timeit m = np.zeros((size,size))\nmandelbrot_python(m,size,iterations)",
"<div id='numexpr' />\n4.- NumExpr\nThe problem...\nAs mention in previous lectures, NumPy is good (fast and efficient) at doing vector operations. Moreover, it has some problems when trying to evaluate to complex expressions:",
"def test_func(a,b,c):\n \"\"\"\n Consider that a, b and c are 1D ndarrys\n \"\"\"\n return np.sin(a**2 + np.exp(b)) + np.cos(b**2 + np.exp(c)) + np.tan(a**2+b**2+c**2)\n\nn = 1000000\na = np.random.random(n)\nb = np.random.random(n)\nc = np.random.random(n)\n\n%timeit test_func(a,b,c)",
"Let's create now a Numba function that performs the same operations but iteratively:",
"@numba.jit('float64[:] (float64[:], float64[:], float64[:])', nopython=True)\ndef test_func_numba(a,b,c):\n n = len(a)\n res = np.empty(n)\n for i in range(n):\n res[i] = np.sin(a[i]**2 + np.exp(b[i])) + np.cos(b[i]**2 + np.exp(c[i])) + np.tan(a[i]**2+b[i]**2+c[i]**2)\n return res\n\n%timeit test_func_numba(a,b,c)",
"Then, what is the problem with NumPy:\n1. Implicit copy operations.\n2. A lot of iterations for over the same arrays.\n3. Bad usage of CPU registers...\nThe solution: NumExpr\n\nNumexpr is a fast numerical expression evaluator that use less memory than doing the same calculation.\nWith its multi-threaded capabilities can make use of all your cores.\nIt make use of Intel's VML (Vector Math Library, normally integrated in its Math Kernel Library, or MKL).\nIt follows (more or less) the JIT paradigm: Numexpr parses expressions into its own op-codes that are then used by an integrated computing virtual machine.\nNumexpr works best with large arrays.\n\nWhy it is faster than NumPy?\n\nIt avoids allocating memory for intermediate results, with better cache utilization and reduced memory access in general.\nThe array operands are split into small chunks that easily fit in the cache of the CPU.\n\nSuppose we want to perform the next operation: 2*a+3*b. In NumPy you will need 3 temporary arrays, and doesn't make an efficient use of the cache memory: The results of 2*a and 3*b won't be in cache when you do the add.\nA possible solution it to iterate over a and b computing the operation element by element (We test this above with Numba). But here is the approach the NumExpr follows:\nArrays are handled as chunks (of 256 elements) at a time, using a register machine. As Python code, it looks something like this:\npython\nfor i in xrange(0, len(a), 256):\n r0 = a[i:i+256]\n r1 = b[i:i+256]\n multiply(r0, 2, r2)\n multiply(r1, 3, r3)\n add(r2, r3, r2)\n c[i:i+128] = r2\nLet's use it!",
"# Change to size of the arrays to see the differences\nm = 10000\nn = 5000\nA = np.random.random((m,n))\nB = np.random.random((m,n))\nC = np.random.random((m,n))\n\nnp_t = %timeit -o test_func(A,B,C)\n\nne_t = %timeit -o ne.evaluate('sin(a**2 + exp(b)) + cos(b**2 + exp(c)) + tan(a**2+b**2+c**2)')\n\nprint(\"Improvement: {0} times\".format(np_t.best/ne_t.best))",
"Additionally we can explicitly specify the number of threads that NumExpr can use to evaluate the expression with the set_num_threads() function:",
"n_threads = 4\nfor i in range(1, n_threads+1):\n ne.set_num_threads(i)\n %timeit ne.evaluate('sin(a**2 + exp(b)) + cos(b**2 + exp(c)) + tan(a**2+b**2+c**2)')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
KorosecN13/Potovanje
|
analiza_London.ipynb
|
mit
|
[
"Potovanje\nProjekt pri predmetu Programiranje 1, ko smo se učili obdelave podatkov. Podatke sem zajemala iz te spletne strani. Za vsak vikend v obdobju od 3. februarja do 31. decembra 2017 sem poiskala 200 najugodnejših ponudb za 2 nočitvi za 2 odrasli osebi.\nZajeti podatki\n\nID hotela\nIme hotela\nOddaljenost od centra\nOcena gostov\nMožnost brezplačnega preklica\nKratek opis hotela\nDatum začetka vikenda - petkov datum\nCena za 2 nočitvi za 2 odrasli osebi\n\nAnaliza\nV analizi bom najprej prikazala datoteke, iz katerih bom potem črpala informacije, da bom lahko odgovorila na naslednja vprašanja:\n* Kdaj se najbolj splača prespati v Londonu in koliko denarja je potrebno za to odšteti?\n* Kako se v povprečju spreminjajo cene skozi celo leto?\n* Ali so hoteli z boljšo oceno tudi bliže centru?\n* Kolikšna je razlika v ceni med najbolj in najmanj luksuznimi namestitvami?\n* Ali obstaja korelacija med ceno in možnostjo brezplačnega preklica rezervacije?\n* Kakšna je cena namestitve v odvisnosti od razdalje do centra?\nUvodne vrstice za delo s orodjem pandas.",
"# naložimo paket\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\npd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier\nplt.rcParams['figure.figsize'] = (15, 5)\nplt.rcParams['font.family'] = 'sans-serif'\n\npd.set_option('display.width', 5000) \npd.set_option('display.max_columns', 20)\n\n# naložimo datoteke, ki jih bomo uporabljali pri analizi\nlondon = pd.read_csv('LONDON.csv')\nhoteli = pd.read_csv('LONDON_Hoteli.csv')",
"Zdaj pa zares! \nV datoteki spodaj so zajeti naslednji podatki : datum začetka vikenda (petek), ID hotela in ceno namestitve za 2 noči za 2 odrasli osebi.",
"london[:10]",
"V datoteki spodaj so zajeti podatki o hotelih: ID hotela, ime, razdalja do središča, ocena gostov, ali je možnost brezplačne prekinitve rezervacije in kratek opis.",
"hoteli[:3]",
"Spodaj pa si lahko ogledamo združeno tabelo, kjer so prikazane vse namestitve z dodanimi podatki o hotelih.",
"merged = london.merge(hoteli, on= 'hotelId')\nmerged[:3]",
"Kdaj se najbolj splača prespati v Londonu in koliko denarja je potrebno za to odšteti?",
"urejeni_po_ceni = london.sort_values('price') \nurejeni_po_ceni[:3] ############popravi na prvih 20 vrstic!!!",
"Odgovor: Najcenejša nočitev je 3. novembra 2017 in stane samo 62 dolarjev. \nČe pogledamo malo širše: najcenejših 20 ponudb se giblje med 62 in 74 dolarji. Koliko je to evrov, lahko preverite tukaj. Če pogledamo datume, lahko opazimo, da so prisotni predvsem meseci iz druge polovice leta. Morda je takrat še bolj deževno in vetrovno kot ponavadi, kdo bi vedel. :)\nKako se v povprečju spreminjajo cene skozi celo leto?",
"london_po_datumih = london.groupby(\"friday\")\nlondon_po_datumih[\"price\"].mean().plot()",
"Cene kar precej poskakujejo, vendar pa lahko zaznamo naraščanje v prvi polovici leta in potem zopet padanje v drugi polovici leta. Najdražje nočitve so v povprečju junija, najcenejše pa marca.\nAli so hoteli z boljšo oceno tudi bliže centru?",
"razdalje = merged['proximityDistance']\nocene = merged['guestRating']\nrazdalje, ocene = zip(*sorted(zip(razdalje,ocene), key=lambda x: x[0]))\nplt.plot(razdalje, ocene)",
"Iz grafa se težko razbere kakšno smiselno povezavo med oddaljenostjo od centra in oceno gostov. Verjetno k temu pripomorejo dejavniki, ki jih v analizi nisem zajela, npr možnost uporabe in kvaliteta wifija ali kaj podobnega.\nOpazka: Graf od 6 milje naprej ni reprezentativen. Če bi bil, bi lahko posplošili, da je je ocena vseh hotelov med 7,5 in 8 miljami tako nizka, pa temu gotovo ni tako. To pripisujem premajhnemu številu hotelov na tej razdalji. Po pogledu v osnovno tabelo, sem ugotovila, da se na razdalji 7.53 milje nahaja hotel Apartment Wharf - Discovery Dock West, ki ima oceno 0, kar pomeni, da nima sploh ocene.\nKolikšna je razlika v ceni med najbolj in najmanj luksuznimi namestitvami?",
"urejeni_po_ceni_skupno = merged.sort_values('price')\nnajcenejsi = urejeni_po_ceni_skupno[:200]\nnajdrazji = urejeni_po_ceni_skupno[-200:]\nurejeni_po_ceni_skupno[:10]\n\nurejeni_po_ceni_skupno[-10:]\n\nnajcenejsi.mean()\n\nnajdrazji.mean()",
"Za vzorec sem vzela 200 najcenejših in 200 najdražjih ponudb. Razlika v povprečni ceni je kar 677,395 dolarjev. Koliko je to evrov, lahko preverite tukaj. Razlika med najcenejšo in najdražjo ponudbo pa je kar 1789 dolarjev. \nAli obstaja korelacija med ceno in možnostjo brezplačnega preklica rezervacije?\nIz zgornjih dveh izračunov povprečnih vrednosti za najcenejše in najdražje namestitve lahko razberemo tudi ta podatek. V najdražjih hotelih je možnost, da je prekinitev rezervacije brezplačna kar 99%, medtem ko je v hotelih, ki nudijo najcenejšo namestitev ta možnost zgolj 12,5 %.\nKakšna je cena namestitve v odvisnosti od razdalje do centra?",
"razdalje = merged['proximityDistance']\ncene = merged['price']\nrazdalje, cene = zip(*sorted(zip(razdalje,cene), key=lambda x: x[0]))\nplt.plot(razdalje, cene)",
"Preden sem naredila analizo, sem postavila hipotezo, da cena z razdaljo pada. Graf to hipotezo potrdi z nekaj izjemami, ki jih pripisujem dejstvu, da je lahko tudi kakšen bolj udobno opremljen hotel, ki ni ravno v centru, pa ima vseeno malo višjo ceno.\nTako! Z odgovorom na še zadnje zastavljeno vprašanje zaključujem svojo analizo. Tako analizo bi lahko naredili za poljubno mesto. Vse, kar je treba spremeniti, je naslednje: v tej datoteki nastaviti drugo mesto, pognati program, si vmes skuhati eno kavo ali pa 2, in še enkrat zagnati vse celice v tem jupyter notebooku in analiza se lahko prične. Obilico veselja!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
raschuetz/foundations-homework
|
Data_and_Databases_homework/04/homework_4_schuetz_graded.ipynb
|
mit
|
[
"Grade: 11.25 / 11\nHomework #4\nThese problem sets focus on list comprehensions, string operations and regular expressions.\nProblem set #1: List slices and list comprehensions\nLet's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:",
"numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'",
"In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').",
"numbers = [int(number) for number in numbers_str.split(',')]\nmax(numbers)",
"Great! We'll be using the numbers list you created above in the next few problems.\nIn the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:\n[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]\n\n(Hint: use a slice.)",
"sorted(numbers)[-10:]",
"In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:\n[120, 171, 258, 279, 528, 699, 804, 855]",
"[number for number in sorted(numbers) if number % 3 == 0]",
"Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:\n[2.6457513110645907, 8.06225774829855, 8.246211251235321]\n\n(These outputs might vary slightly depending on your platform.)",
"from math import sqrt\n[sqrt(number) for number in sorted(numbers) if number < 100]",
"Problem set #2: Still more list comprehensions\nStill looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.",
"planets = [\n {'diameter': 0.382,\n 'mass': 0.06,\n 'moons': 0,\n 'name': 'Mercury',\n 'orbital_period': 0.24,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.949,\n 'mass': 0.82,\n 'moons': 0,\n 'name': 'Venus',\n 'orbital_period': 0.62,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 1.00,\n 'mass': 1.00,\n 'moons': 1,\n 'name': 'Earth',\n 'orbital_period': 1.00,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.532,\n 'mass': 0.11,\n 'moons': 2,\n 'name': 'Mars',\n 'orbital_period': 1.88,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 11.209,\n 'mass': 317.8,\n 'moons': 67,\n 'name': 'Jupiter',\n 'orbital_period': 11.86,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 9.449,\n 'mass': 95.2,\n 'moons': 62,\n 'name': 'Saturn',\n 'orbital_period': 29.46,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 4.007,\n 'mass': 14.6,\n 'moons': 27,\n 'name': 'Uranus',\n 'orbital_period': 84.01,\n 'rings': 'yes',\n 'type': 'ice giant'},\n {'diameter': 3.883,\n 'mass': 17.2,\n 'moons': 14,\n 'name': 'Neptune',\n 'orbital_period': 164.8,\n 'rings': 'yes',\n 'type': 'ice giant'}]",
"Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:\n['Jupiter', 'Saturn', 'Uranus']",
"# I think that the question has a typo. This is for planets that have a diameter greater than four Earth DIAMETERS\n[planet['name'] for planet in planets if planet['diameter'] > 4 * planets[2]['diameter']]",
"In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79",
"sum([planet['mass'] for planet in planets])",
"Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:\n['Jupiter', 'Saturn', 'Uranus', 'Neptune']",
"[planet['name'] for planet in planets if planet['type'].find('giant') > -1]",
"EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:\nfor ['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']",
"# TA-COMMENT: (+0.25) You were almost there! Just had to adjust your list comprehension: \n# [planet['name'] for planet in newlist]\n\nnewlist = sorted(planets, key=lambda k: k['moons'])\n[planetfor planet in newlist\n# sorted_moons = sorted([planet['moons'] for planet in planets])\n# sorted([planet['name'] for planet in planets], key = [planet['moons'] for planet in planets])\n# sorted(planets, key=[planet['moons'] for planet in planets].sort()) ",
"Problem set #3: Regular expressions\nIn the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.",
"import re\npoem_lines = ['Two roads diverged in a yellow wood,',\n 'And sorry I could not travel both',\n 'And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'To where it bent in the undergrowth;',\n '',\n 'Then took the other, as just as fair,',\n 'And having perhaps the better claim,',\n 'Because it was grassy and wanted wear;',\n 'Though as for that the passing there',\n 'Had worn them really about the same,',\n '',\n 'And both that morning equally lay',\n 'In leaves no step had trodden black.',\n 'Oh, I kept the first for another day!',\n 'Yet knowing how way leads on to way,',\n 'I doubted if I should ever come back.',\n '',\n 'I shall be telling this with a sigh',\n 'Somewhere ages and ages hence:',\n 'Two roads diverged in a wood, and I---',\n 'I took the one less travelled by,',\n 'And that has made all the difference.']",
"In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.\nIn the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \\b anchor. Don't overthink the \"two words in a row\" requirement.)\nExpected result:\n['Then took the other, as just as fair,',\n 'Had worn them really about the same,',\n 'And both that morning equally lay',\n 'I doubted if I should ever come back.',\n 'I shall be telling this with a sigh']",
"[line for line in poem_lines if re.search(r'\\b\\w{4} \\b\\w{4}\\b', line)]",
"Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:\n['And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'And having perhaps the better claim,',\n 'Though as for that the passing there',\n 'In leaves no step had trodden black.',\n 'Somewhere ages and ages hence:']",
"[line for line in poem_lines if re.search(r'\\b\\w{5}\\W?$', line)]",
"Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.",
"all_lines = \" \".join(poem_lines)",
"Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:\n['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']",
"re.findall(r'\\bI \\b(\\w+)\\b', all_lines)",
"Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.",
"entrees = [\n \"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95\",\n \"Lavender and Pepperoni Sandwich $8.49\",\n \"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v\",\n \"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v\",\n \"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95\",\n \"Rutabaga And Cucumber Wrap $8.49 - v\"\n]",
"You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.\nExpected output:\n[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',\n 'price': 10.95,\n 'vegetarian': False},\n {'name': 'Lavender and Pepperoni Sandwich ',\n 'price': 8.49,\n 'vegetarian': False},\n {'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',\n 'price': 12.95,\n 'vegetarian': True},\n {'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',\n 'price': 9.95,\n 'vegetarian': True},\n {'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',\n 'price': 19.95,\n 'vegetarian': False},\n {'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]",
"# Way 1\nmenu = []\nfor item in entrees:\n food_entry = {}\n for x in re.findall(r'(^.+) \\$', item):\n food_entry['name'] = str(x)\n for x in re.findall(r'\\$(\\d+.\\d{2})', item):\n food_entry['price'] = float(x)\n if re.search(r'- v$', item):\n food_entry['vegetarian'] = True\n else: \n food_entry['vegetarian'] = False\n menu.append(food_entry)\nmenu\n\n# Way 2\nmenu = []\nfor item in entrees:\n food_entry = {}\n match = re.search(r'(^.+) \\$(\\d+.\\d{2}) ?(-? ?v?$)', item)\n food_entry['name'] = match.group(1)\n food_entry['price'] = float(match.group(2))\n if match.group(3):\n food_entry['vegetarian'] = True\n else:\n food_entry['vegetarian'] = False\n menu.append(food_entry)\nmenu",
"Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
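As a cross-check on the menu exercise in the notebook above, here is a condensed, standalone sketch of the parsing loop. It is not the official solution: a single pattern captures name, price, and the optional "- v" flag, and the decimal point in the price is escaped so it cannot match an arbitrary character. The two sample dishes are copied from the entrees list.

```python
import re

# Two entries from the menu above: one plain, one vegetarian ("- v").
entrees = [
    "Lavender and Pepperoni Sandwich $8.49",
    "Rutabaga And Cucumber Wrap $8.49 - v",
]

menu = []
for item in entrees:
    # Non-greedy name, literal "$", escaped decimal point,
    # and an optional " - v" vegetarian marker at the end.
    match = re.search(r'^(.*?) \$(\d+\.\d{2})( - v)?$', item)
    menu.append({
        'name': match.group(1),
        'price': float(match.group(2)),
        'vegetarian': match.group(3) is not None,
    })

print(menu)
```

Note that with the name group closed before the space preceding `$`, the names come out without the trailing space shown in the expected output above; that detail depends on where the group boundary is drawn.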
kubeflow/kfserving-lts
|
docs/samples/drift-detection/alibi-detect/cifar10/cifar10_drift.ipynb
|
apache-2.0
|
[
"Cifar10 Drift Detection\nIn this example we will deploy an image classification model along with a drift detector trained on the same dataset. For in depth details on creating a drift detection model for your own dataset see the alibi-detect project and associated documentation. You can find details for this CIFAR10 example in their documentation as well.\nPrequisites:\n\nRunning cluster with \nkfserving installed\nKnative eventing installed >= 0.18\n\n\n\nTested on GKE and Kind with Knative Eventing 0.18 and Istio 1.7.3",
"!pip install -r requirements_notebook.txt",
"Setup Resources",
"!kubectl create namespace cifar10\n\n%%writefile broker.yaml\napiVersion: eventing.knative.dev/v1\nkind: broker\nmetadata:\n name: default\n namespace: cifar10\n\n!kubectl create -f broker.yaml\n\n%%writefile event-display.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: hello-display\n namespace: cifar10\nspec:\n replicas: 1\n selector:\n matchLabels: &labels\n app: hello-display\n template:\n metadata:\n labels: *labels\n spec:\n containers:\n - name: event-display\n image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display\n\n---\n\nkind: Service\napiVersion: v1\nmetadata:\n name: hello-display\n namespace: cifar10\nspec:\n selector:\n app: hello-display\n ports:\n - protocol: TCP\n port: 80\n targetPort: 8080\n\n!kubectl apply -f event-display.yaml",
"Create the Kfserving image classification model for Cifar10. We add in a logger for requests - the default destination is the namespace Knative Broker.",
"%%writefile cifar10.yaml\napiVersion: \"serving.kubeflow.org/v1alpha2\"\nkind: \"InferenceService\"\nmetadata:\n name: \"tfserving-cifar10\"\n namespace: cifar10\nspec:\n default:\n predictor:\n tensorflow:\n storageUri: \"gs://kfserving-samples/tfserving/cifar10/resnet32\"\n logger:\n mode: all\n url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default\n\n!kubectl apply -f cifar10.yaml",
"Create the pretrained Drift Detector. We forward replies to the message-dumper we started. Notice the drift_batch_size. The drift detector will wait until drift_batch_size number of requests are received before making a drift prediction.",
"%%writefile cifar10cd.yaml\napiVersion: serving.knative.dev/v1\nkind: Service\nmetadata:\n name: drift-detector\n namespace: cifar10\nspec:\n template:\n metadata:\n annotations:\n autoscaling.knative.dev/minScale: \"1\"\n spec:\n containers:\n - image: seldonio/alibi-detect-server:0.0.2\n imagePullPolicy: IfNotPresent\n args:\n - --model_name\n - cifar10cd\n - --http_port\n - '8080'\n - --protocol\n - tensorflow.http\n - --storage_uri\n - gs://seldon-models/alibi-detect/cd/ks/cifar10\n - --reply_url\n - http://hello-display.cifar10\n - --event_type\n - org.kubeflow.serving.inference.outlier\n - --event_source\n - org.kubeflow.serving.cifar10cd\n - DriftDetector\n - --drift_batch_size\n - '5000'\n\n\n!kubectl apply -f cifar10cd.yaml",
"Create a Knative trigger to forward logging events to our Outlier Detector.",
"%%writefile trigger.yaml\napiVersion: eventing.knative.dev/v1\nkind: Trigger\nmetadata:\n name: drift-trigger\n namespace: cifar10\nspec:\n broker: default\n filter:\n attributes:\n type: org.kubeflow.serving.inference.request\n subscriber:\n ref:\n apiVersion: serving.knative.dev/v1\n kind: Service\n name: drift-detector\n namespace: cifar10\n\n!kubectl apply -f trigger.yaml",
"Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.",
"CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\nCLUSTER_IP=CLUSTER_IPS[0]\nprint(CLUSTER_IP)",
"If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following cell.\nINGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector=\"app=istio-ingressgateway\" --output jsonpath='{.items[0].metadata.name}')\nkubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80",
"#CLUSTER_IP=\"localhost:8080\"\n\nSERVICE_HOSTNAMES=!(kubectl get inferenceservice -n cifar10 tfserving-cifar10 -o jsonpath='{.status.url}' | cut -d \"/\" -f 3)\nSERVICE_HOSTNAME_CIFAR10=SERVICE_HOSTNAMES[0]\nprint(SERVICE_HOSTNAME_CIFAR10)\n\nSERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 drift-detector -o jsonpath='{.status.url}' | cut -d \"/\" -f 3)\nSERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]\nprint(SERVICE_HOSTNAME_VAEOD)\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport requests\nimport json\nimport tensorflow as tf\ntf.keras.backend.clear_session()\n\ntrain, test = tf.keras.datasets.cifar10.load_data()\nX_train, y_train = train\nX_test, y_test = test\n\nX_train = X_train.astype('float32') / 255\nX_test = X_test.astype('float32') / 255\nprint(X_train.shape, y_train.shape, X_test.shape, y_test.shape)\nclasses = ('plane', 'car', 'bird', 'cat',\n 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')\n\ndef show(X):\n plt.imshow(X.reshape(32, 32, 3))\n plt.axis('off')\n plt.show()\n\ndef predict(X):\n formData = {\n 'instances': X.tolist()\n }\n headers = {}\n headers[\"Host\"] = SERVICE_HOSTNAME_CIFAR10\n res = requests.post('http://'+CLUSTER_IP+'/v1/models/tfserving-cifar10:predict', json=formData, headers=headers)\n if res.status_code == 200:\n j = res.json()\n if len(j[\"predictions\"]) == 1:\n return classes[np.array(j[\"predictions\"])[0].argmax()]\n else:\n print(\"Failed with \",res.status_code)\n return []\n \ndef drift(X):\n formData = {\n 'instances': X.tolist()\n }\n headers = { \"ce-namespace\": \"default\",\"ce-modelid\":\"cifar10drift\",\"ce-type\":\"io.seldon.serving.inference.request\", \\\n \"ce-id\":\"1234\",\"ce-source\":\"localhost\",\"ce-specversion\":\"1.0\"}\n headers[\"Host\"] = SERVICE_HOSTNAME_VAEOD \n res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)\n if res.status_code == 200:\n od = res.json()\n return od\n else:\n print(\"Failed with \",res.status_code)\n return []",
"Normal Prediction",
"idx = 1\nX = X_train[idx:idx+1]\nshow(X)\npredict(X)",
"Test Drift\nWe need to accumulate a large enough batch size so no drift will be tested as yet.",
"!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}') ",
"We will now send 5000 requests to the model in batches. The drift detector will run at the end of this as we set the drift_batch_size to 5000 in our yaml above.",
"from tqdm.notebook import tqdm\nfor i in tqdm(range(0,5000,100)):\n X = X_train[i:i+100]\n predict(X)",
"Let's check the message dumper and extract the first drift result.",
"res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}') \ndata= []\nfor i in range(0,len(res)):\n if res[i] == 'Data,':\n data.append(res[i+1])\nj = json.loads(json.loads(data[0]))\nprint(\"Drift\",j[\"data\"][\"is_drift\"]==1)",
"Now, let's create some CIFAR10 examples with motion blur.",
"from alibi_detect.datasets import fetch_cifar10c, corruption_types_cifar10c\ncorruption = ['motion_blur']\nX_corr, y_corr = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)\nX_corr = X_corr.astype('float32') / 255\n\nshow(X_corr[0])\nshow(X_corr[1])\nshow(X_corr[2])",
"Send these examples to the predictor.",
"for i in tqdm(range(0,5000,100)):\n X = X_corr[i:i+100]\n predict(X)",
"Now when we check the message dump we should find a new drift response.",
"res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}') \ndata= []\nfor i in range(0,len(res)):\n if res[i] == 'Data,':\n data.append(res[i+1])\nj = json.loads(json.loads(data[-1]))\nprint(\"Drift\",j[\"data\"][\"is_drift\"]==1)",
"Tear Down",
"!kubectl delete ns cifar10"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
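The drift_batch_size buffering described in the notebook above can be illustrated with a toy detector. This is a hypothetical sketch, not the alibi-detect implementation: it buffers 1-D values until a full batch accumulates, then compares the batch against a reference sample using a hand-rolled two-sample Kolmogorov-Smirnov statistic (the real K-S detector works feature-wise with a multiple-testing correction, omitted here).

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample K-S statistic: the maximum gap between the
    empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side='right') / len(a)
    cdf_b = np.searchsorted(b, grid, side='right') / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

class BatchDriftDetector:
    """Toy detector mimicking the batching behaviour above: no
    prediction is made until drift_batch_size points have arrived."""

    def __init__(self, reference, drift_batch_size=5000, threshold=0.1):
        self.reference = np.asarray(reference, dtype=float).ravel()
        self.drift_batch_size = drift_batch_size
        self.threshold = threshold
        self.buffer = []

    def feed(self, batch):
        self.buffer.extend(np.asarray(batch, dtype=float).ravel())
        if len(self.buffer) < self.drift_batch_size:
            return None  # batch not full yet, like the detector above
        sample = np.array(self.buffer[:self.drift_batch_size])
        self.buffer = self.buffer[self.drift_batch_size:]
        stat = ks_statistic(self.reference, sample)
        return {"is_drift": int(stat > self.threshold), "distance": stat}

rng = np.random.RandomState(0)
ref = rng.normal(0, 1, 1000)
det = BatchDriftDetector(ref, drift_batch_size=1000)
print(det.feed(ref))        # same data as reference: no drift
print(det.feed(ref + 3.0))  # shifted distribution: drift flagged
```

The threshold here stands in for the p-value test a real detector would apply; it is only meant to show why 5000 requests had to be sent before any drift result appeared in the message dumper.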
myfunprograms/machine-learning
|
boston_housing/boston_housing_original.ipynb
|
apache-2.0
|
[
"Machine Learning Engineer Nanodegree\nModel Evaluation & Validation\nProject: Predicting Boston Housing Prices\nWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.\nThe dataset for this project originates from the UCI Machine Learning Repository. 
The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:\n- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.\n- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.\n- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.\n- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.\nRun the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.",
"# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom sklearn.cross_validation import ShuffleSplit\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the Boston housing dataset\ndata = pd.read_csv('housing.csv')\nprices = data['MEDV']\nfeatures = data.drop('MEDV', axis = 1)\n \n# Success\nprint \"Boston housing dataset has {} data points with {} variables each.\".format(*data.shape)",
"Data Exploration\nIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.\nSince the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.\nImplementation: Calculate Statistics\nFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.\nIn the code cell below, you will need to implement the following:\n- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.\n - Store each calculation in their respective variable.",
"# TODO: Minimum price of the data\nminimum_price = None\n\n# TODO: Maximum price of the data\nmaximum_price = None\n\n# TODO: Mean price of the data\nmean_price = None\n\n# TODO: Median price of the data\nmedian_price = None\n\n# TODO: Standard deviation of prices of the data\nstd_price = None\n\n# Show the calculated statistics\nprint \"Statistics for Boston housing dataset:\\n\"\nprint \"Minimum price: ${:,.2f}\".format(minimum_price)\nprint \"Maximum price: ${:,.2f}\".format(maximum_price)\nprint \"Mean price: ${:,.2f}\".format(mean_price)\nprint \"Median price ${:,.2f}\".format(median_price)\nprint \"Standard deviation of prices: ${:,.2f}\".format(std_price)",
"Question 1 - Feature Observation\nAs a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):\n- 'RM' is the average number of rooms among homes in the neighborhood.\n- 'LSTAT' is the percentage of homeowners in the neighborhood considered \"lower class\" (working poor).\n- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.\nUsing your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.\nHint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?\nAnswer: \n\nDeveloping a Model\nIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.\nImplementation: Define a Performance Metric\nIt is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how \"good\" that model is at making predictions. \nThe values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. 
A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.\nFor the performance_metric function in the code cell below, you will need to implement the following:\n- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.\n- Assign the performance score to the score variable.",
"# TODO: Import 'r2_score'\n\ndef performance_metric(y_true, y_predict):\n \"\"\" Calculates and returns the performance score between \n true and predicted values based on the metric chosen. \"\"\"\n \n # TODO: Calculate the performance score between 'y_true' and 'y_predict'\n score = None\n \n # Return the score\n return score",
"Question 2 - Goodness of Fit\nAssume that a dataset contains five data points and a model made the following predictions for the target variable:\n| True Value | Prediction |\n| :-------------: | :--------: |\n| 3.0 | 2.5 |\n| -0.5 | 0.0 |\n| 2.0 | 2.1 |\n| 7.0 | 7.8 |\n| 4.2 | 5.3 |\nWould you consider this model to have successfully captured the variation of the target variable? Why or why not? \nRun the code cell below to use the performance_metric function and calculate this model's coefficient of determination.",
"# Calculate the performance of this model\nscore = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])\nprint \"Model has a coefficient of determination, R^2, of {:.3f}.\".format(score)",
"Answer:\nImplementation: Shuffle and Split Data\nYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.\nFor the code cell below, you will need to implement the following:\n- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.\n - Split the data into 80% training and 20% testing.\n - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.\n- Assign the train and testing splits to X_train, X_test, y_train, and y_test.",
"# TODO: Import 'train_test_split'\n\n# TODO: Shuffle and split the data into training and testing subsets\nX_train, X_test, y_train, y_test = (None, None, None, None)\n\n# Success\nprint \"Training and testing split was successful.\"",
"Question 3 - Training and Testing\nWhat is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?\nHint: What could go wrong with not having a way to test your model?\nAnswer: \n\nAnalyzing Model Performance\nIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.\nLearning Curves\nThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. \nRun the code cell below and use these graphs to answer the following question.",
"# Produce learning curves for varying training set sizes and maximum depths\nvs.ModelLearning(features, prices)",
"Question 4 - Learning the Data\nChoose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?\nHint: Are the learning curves converging to particular scores?\nAnswer: \nComplexity Curves\nThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. \nRun the code cell below and use this graph to answer the following two questions.",
"vs.ModelComplexity(X_train, y_train)",
"Question 5 - Bias-Variance Tradeoff\nWhen the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?\nHint: How do you know when a model is suffering from high bias or high variance?\nAnswer: \nQuestion 6 - Best-Guess Optimal Model\nWhich maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?\nAnswer: \n\nEvaluating Model Performance\nIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.\nQuestion 7 - Grid Search\nWhat is the grid search technique and how it can be applied to optimize a learning algorithm?\nAnswer: \nQuestion 8 - Cross-Validation\nWhat is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?\nHint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?\nAnswer: \nImplementation: Fitting a Model\nYour final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.\nIn addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). 
While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful! The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.\nPlease note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.\nFor the fit_model function in the code cell below, you will need to implement the following:\n- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.\n - Assign this object to the 'regressor' variable.\n- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.\n- Use make_scorer from sklearn.metrics to create a scoring function object.\n - Pass the performance_metric function as a parameter to the object.\n - Assign this scoring function to the 'scoring_fnc' variable.\n- Use GridSearchCV from sklearn.grid_search to create a grid search object.\n - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.\n - Assign the GridSearchCV object to the 'grid' variable.",
"# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'\n\ndef fit_model(X, y):\n \"\"\" Performs grid search over the 'max_depth' parameter for a \n decision tree regressor trained on the input data [X, y]. \"\"\"\n \n # Create cross-validation sets from the training data\n cv_sets = ShuffleSplit(X.shape[0], n_splits = 10, test_size = 0.20, random_state = 0)\n\n # TODO: Create a decision tree regressor object\n regressor = None\n\n # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10\n params = {}\n\n # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' \n scoring_fnc = None\n\n # TODO: Create the grid search object\n grid = None\n\n # Fit the grid search object to the data to compute the optimal model\n grid = grid.fit(X, y)\n\n # Return the optimal model after fitting the data\n return grid.best_estimator_",
"Making Predictions\nOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.\nQuestion 9 - Optimal Model\nWhat maximum depth does the optimal model have? How does this result compare to your guess in Question 6? \nRun the code block below to fit the decision tree regressor to the training data and produce an optimal model.",
"# Fit the training data to the model using grid search\nreg = fit_model(X_train, y_train)\n\n# Produce the value for 'max_depth'\nprint \"Parameter 'max_depth' is {} for the optimal model.\".format(reg.get_params()['max_depth'])",
"Answer: \nQuestion 10 - Predicting Selling Prices\nImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:\n| Feature | Client 1 | Client 2 | Client 3 |\n| :---: | :---: | :---: | :---: |\n| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |\n| Neighborhood poverty level (as %) | 17% | 32% | 3% |\n| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |\nWhat price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?\nHint: Use the statistics you calculated in the Data Exploration section to help justify your response. \nRun the code block below to have your optimized model make predictions for each client's home.",
"# Produce a matrix for client data\nclient_data = [[5, 17, 15], # Client 1\n [4, 32, 22], # Client 2\n [8, 3, 12]] # Client 3\n\n# Show predictions\nfor i, price in enumerate(reg.predict(client_data)):\n print \"Predicted selling price for Client {}'s home: ${:,.2f}\".format(i+1, price)",
"Answer: \nSensitivity\nAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.",
"vs.PredictTrials(features, prices, fit_model, client_data)",
"Question 11 - Applicability\nIn a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.\nHint: Some questions to answering:\n- How relevant today is data that was collected from 1978?\n- Are the features present in the data sufficient to describe a home?\n- Is the model robust enough to make consistent predictions?\n- Would data collected in an urban city like Boston be applicable in a rural city?\nAnswer: \n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ZwickyTransientFacility/ztf_sim
|
notebooks/ztf_sim_introduction.ipynb
|
bsd-3-clause
|
[
"ztf_sim_introduction\nThis notebook illustrates basic use of the ztf_sim modules.",
"# hack to get the path right\nimport sys\nsys.path.append('..')\n\nimport ztf_sim\nfrom astropy.time import Time\nimport pandas as pd\nimport numpy as np\nimport astropy.units as u\nimport pylab as plt",
"First we'll generate a test field grid. You only need to do this the first time you run the simulator.",
"ztf_sim.fields.generate_test_field_grid()",
"Let's load the Fields object with the default field grid. Fields is a thin wrapper around a pandas DataFrame containing the field information.",
"f = ztf_sim.fields.Fields()",
"The raw fieldid and coordinates are stored as a pandas Dataframe in the .fields attribute:",
"f.fields.head()",
"Now let's calculate their altitude and azimuth at a specific time using the astropy.time.Time object:",
"f.alt_az(Time.now()).head()",
"Demonstrating low-level access to fields by the fieldid index (usually not required):",
"f.fields.loc[853]",
"We can select fields with conditionals:",
"f.fields['dec'] > -30.",
"It's easier to use the select_fields convenience function, though. It returns a boolean Series indexed by fieldid that we can use to do calculations on subsets of the field grid.",
"cuts = f.select_fields(dec_range=[0,10],gridid=0,ecliptic_lat_range=[-5,5])\ncuts.head()",
"Calculate the current altitude and azimuth of the selected fields:",
"f.alt_az(Time.now(),cuts=cuts)",
"Calculating the overhead time (max of ha, dec, dome slews and readout time):",
"f.overhead_time(853,Time.now())\n\nf = ztf_sim.fields.Fields()\nExposure_time = 60*u.second\nNight_length=9*u.h\n\n\ntime0 = Time('2015-09-10 20:00:00') + 7*u.h\ntime = time0\nf.fields = f.fields.join(pd.DataFrame(np.zeros(len(f.fields)),columns=['observed']))\nf.fields = f.fields.join(pd.DataFrame(np.zeros(len(f.fields)),columns=['possibleToObserve']))\n\ndef observe(f, nightStart):\n time=nightStart\n goodAltitude = f.alt_az(time)['alt'] > 20\n shouldObserve = f.fields['observed'] == 0\n good = goodAltitude & shouldObserve #& f.alt_az(time+1*u.h)['alt'] < 20 # start with a field which won't be observable later\n \n if np.all(good) is False:\n good = goodAltitude & shouldObserve\n \n fid = f.fields[good].iloc[0].name\n f.fields['observed'][fid]+=1\n f.fields['possibleToObserve'][goodAltitude] = 1\n time += Exposure_time\n\n while time < nightStart + Night_length:\n goodAltitude = f.alt_az(time)['alt'] > 20\n shouldObserve = f.fields['observed'] == 0\n good = goodAltitude & shouldObserve\n f.fields['possibleToObserve'][goodAltitude] = 1\n \n if not np.any(good):\n time += 60*u.s\n continue\n \n slewTime = f.overhead_time(fid,time)[good]\n fid = int(slewTime.idxmin())\n # print slewTime['overhead_time'][fid]\n time += Exposure_time + slewTime['overhead_time'][fid]*u.second\n f.fields['observed'][fid]+=1\n # print time-7*u.h\n \n\n# First night\nobserve(f,time)\nfieldsPossible = np.sum(f.fields['possibleToObserve'])\nprint fieldsPossible\nfieldsObserved = np.sum(f.fields['observed'])\nprint fieldsObserved\nmeanTime = (Night_length.to(u.s)-fieldsObserved*Exposure_time)/(fieldsObserved-1)\nprint meanTime\n\n# Second night\ntime=time0+24*u.h\nobserve(f,time)\nfieldsPossible = np.sum(f.fields['possibleToObserve'])\nprint fieldsPossible\nfieldsObserved = np.sum(f.fields['observed'])\nprint fieldsObserved\nmeanTime = (2*Night_length.to(u.s)-fieldsObserved*Exposure_time)/(fieldsObserved-1)\nprint meanTime\n\nfor dec in np.append(np.linspace(-90,90,10),0):\n 
ra=np.linspace(0, 360,1000)\n x,y = raDec2xy(ra,dec)\n plt.plot(x,y,'k')\n \nfor ra in np.linspace(0,360,10):\n dec=np.linspace(-90, 90,1000)\n x,y = raDec2xy(ra,dec)\n plt.plot(x,y,'k')\n \nx,y = raDec2xy(f.fields['ra'],f.fields['dec'])\nplt.plot(x,y,'o',color=(.8,.8,.8)) \nplt.show()\n\ndef raDec2xy(ra,dec):\n # Using Aitoff projections (from Wiki) returns x-y coordinates on a plane of RA and Dec\n theta = np.deg2rad(dec)\n phi = np.deg2rad(ra)-np.pi #the range is [-pi,pi]\n alpha=np.arccos(np.cos(theta)*np.cos(phi/2))\n x=2*np.cos(theta)*np.sin(phi/2)/np.sinc(alpha/np.pi) # The python's sinc is normalazid, hence the /pi\n y=np.sin(theta)/np.sinc(alpha/np.pi)\n return x,y"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mtasende/Machine-Learning-Nanodegree-Capstone
|
notebooks/prod/n10_dyna_q_with_predictor_full_training.ipynb
|
mit
|
[
"In this notebook a Q learner with dyna and a custom predictor will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value).",
"# Basic imports\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime as dt\nimport scipy.optimize as spo\nimport sys\nfrom time import time\nfrom sklearn.metrics import r2_score, median_absolute_error\nfrom multiprocessing import Pool\nimport pickle\n\n%matplotlib inline\n\n%pylab inline\npylab.rcParams['figure.figsize'] = (20.0, 10.0)\n\n%load_ext autoreload\n%autoreload 2\n\nsys.path.append('../../')\n\nimport recommender.simulator as sim\nfrom utils.analysis import value_eval\nfrom recommender.agent_predictor import AgentPredictor\nfrom functools import partial\nfrom sklearn.externals import joblib\n\nNUM_THREADS = 1\nLOOKBACK = -1\nSTARTING_DAYS_AHEAD = 252\nPOSSIBLE_FRACTIONS = [0.0, 1.0]\nDYNA = 20\nBASE_DAYS = 112\n\n# Get the data\nSYMBOL = 'SPY'\ntotal_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')\ndata_train_df = total_data_train_df[SYMBOL].unstack()\ntotal_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')\ndata_test_df = total_data_test_df[SYMBOL].unstack()\nif LOOKBACK == -1:\n total_data_in_df = total_data_train_df\n data_in_df = data_train_df\nelse:\n data_in_df = data_train_df.iloc[-LOOKBACK:]\n total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]\n \n# Crop the final days of the test set as a workaround to make dyna work \n# (the env, only has the market calendar up to a certain time)\ndata_test_df = data_test_df.iloc[:-DYNA]\ntotal_data_test_df = total_data_test_df.loc[:data_test_df.index[-1]]\n\n# Create many agents\nindex = np.arange(NUM_THREADS).tolist()\nenv, num_states, num_actions = sim.initialize_env(total_data_in_df, \n SYMBOL, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS)\n\nestimator_close = joblib.load('../../data/best_predictor.pkl')\nestimator_volume = joblib.load('../../data/best_volume_predictor.pkl')\n\nagents = [AgentPredictor(num_states=num_states, \n 
num_actions=num_actions, \n random_actions_rate=0.98, \n random_actions_decrease=0.999,\n dyna_iterations=DYNA,\n name='Agent_{}'.format(i),\n estimator_close=estimator_close,\n estimator_volume=estimator_volume,\n env=env,\n prediction_window=BASE_DAYS) for i in index]\n\ndef show_results(results_list, data_in_df, graph=False):\n for values in results_list:\n total_value = values.sum(axis=1)\n print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))\n print('-'*100)\n initial_date = total_value.index[0]\n compare_results = data_in_df.loc[initial_date:, 'Close'].copy()\n compare_results.name = SYMBOL\n compare_results_df = pd.DataFrame(compare_results)\n compare_results_df['portfolio'] = total_value\n std_comp_df = compare_results_df / compare_results_df.iloc[0]\n if graph:\n plt.figure()\n std_comp_df.plot()",
"Let's show the symbols data, to see how good the recommender has to be.",
"print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))\n\n# Simulate (with new envs, each time)\nn_epochs = 4\n\nfor i in range(n_epochs):\n tic = time()\n env.reset(STARTING_DAYS_AHEAD)\n results_list = sim.simulate_period(total_data_in_df, \n SYMBOL,\n agents[0],\n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\n toc = time()\n print('Epoch: {}'.format(i))\n print('Elapsed time: {} seconds.'.format((toc-tic)))\n print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\n show_results([results_list], data_in_df)\n\nenv.reset(STARTING_DAYS_AHEAD)\nresults_list = sim.simulate_period(total_data_in_df, \n SYMBOL, agents[0], \n learn=False, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n other_env=env)\nshow_results([results_list], data_in_df, graph=True)\n\nimport pickle\nwith open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:\n pickle.dump(agents[0], best_agent)",
"Let's run the trained agent on the test set\nFirst, a non-learning test: this scenario is worse than what is achievable (in fact, the Q-learner can learn from past samples in the test set without compromising causality).",
"TEST_DAYS_AHEAD = 112\n\nenv.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=False,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)",
"And now a \"realistic\" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).",
"env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=True,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)",
"What are the metrics for \"holding the position\"?",
"print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))",
"Conclusion:"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jbpoline/newpower
|
peakdistribution/find_peakdistr_2.ipynb
|
mit
|
[
"Find distribution of local maxima in a Gaussian Random Field\nCode formula of Cheng&Schwartzman\n\nBelow I defined the formulae of Cheng&Schwartzman in arXiv:1503.01328v1. On page 3.3 the density functions are displayed for 1D, 2D and 3D. \nConsequently, I apply these formulae to a range of x-values, which reproduces Figure 1.",
"% matplotlib inline\nimport numpy as np\nimport math\nimport nibabel as nib\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\nfrom nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset\nimport palettable.colorbrewer as cb\nfrom nipype.interfaces import fsl\nimport os\nimport pandas as pd",
"Define formulae",
"def peakdens3D(x,k):\n fd1 = 144*stats.norm.pdf(x)/(29*6**(0.5)-36)\n fd211 = k**2.*((1.-k**2.)**3. + 6.*(1.-k**2.)**2. + 12.*(1.-k**2.)+24.)*x**2. / (4.*(3.-k**2.)**2.)\n fd212 = (2.*(1.-k**2.)**3. + 3.*(1.-k**2.)**2.+6.*(1.-k**2.)) / (4.*(3.-k**2.))\n fd213 = 3./2.\n fd21 = (fd211 + fd212 + fd213)\n fd22 = np.exp(-k**2.*x**2./(2.*(3.-k**2.))) / (2.*(3.-k**2.))**(0.5)\n fd23 = stats.norm.cdf(2.*k*x / ((3.-k**2.)*(5.-3.*k**2.))**(0.5))\n fd2 = fd21*fd22*fd23\n fd31 = (k**2.*(2.-k**2.))/4.*x**2. - k**2.*(1.-k**2.)/2. - 1.\n fd32 = np.exp(-k**2.*x**2./(2.*(2.-k**2.))) / (2.*(2.-k**2.))**(0.5)\n fd33 = stats.norm.cdf(k*x / ((2.-k**2.)*(5.-3.*k**2.))**(0.5))\n fd3 = fd31 * fd32 * fd33\n fd41 = (7.-k**2.) + (1-k**2)*(3.*(1.-k**2.)**2. + 12.*(1.-k**2.) + 28.)/(2.*(3.-k**2.))\n fd42 = k*x / (4.*math.pi**(0.5)*(3.-k**2.)*(5.-3.*k**2)**0.5)\n fd43 = np.exp(-3.*k**2.*x**2/(2.*(5-3.*k**2.)))\n fd4 = fd41*fd42 * fd43\n fd51 = math.pi**0.5*k**3./4.*x*(x**2.-3.)\n f521low = np.array([-10.,-10.])\n f521up = np.array([0.,k*x/2.**(0.5)])\n f521mu = np.array([0.,0.])\n f521sigma = np.array([[3./2., -1.],[-1.,(3.-k**2.)/2.]])\n fd521,i = stats.mvn.mvnun(f521low,f521up,f521mu,f521sigma) \n f522low = np.array([-10.,-10.])\n f522up = np.array([0.,k*x/2.**(0.5)])\n f522mu = np.array([0.,0.])\n f522sigma = np.array([[3./2., -1./2.],[-1./2.,(2.-k**2.)/2.]])\n fd522,i = stats.mvn.mvnun(f522low,f522up,f522mu,f522sigma) \n fd5 = fd51*(fd521+fd522)\n out = fd1*(fd2+fd3+fd4+fd5)\n return out",
"Apply formulae to a range of x-values",
"xs = np.arange(-4,4,0.01).tolist()\nys_3d_k01 = []\nys_3d_k05 = []\nys_3d_k1 = []\nys_2d_k01 = []\nys_2d_k05 = []\nys_2d_k1 = []\nys_1d_k01 = []\nys_1d_k05 = []\nys_1d_k1 = []\n\n\nfor x in xs:\n ys_1d_k01.append(peakdens1D(x,0.1))\n ys_1d_k05.append(peakdens1D(x,0.5))\n ys_1d_k1.append(peakdens1D(x,1))\n ys_2d_k01.append(peakdens2D(x,0.1))\n ys_2d_k05.append(peakdens2D(x,0.5))\n ys_2d_k1.append(peakdens2D(x,1))\n ys_3d_k01.append(peakdens3D(x,0.1))\n ys_3d_k05.append(peakdens3D(x,0.5))\n ys_3d_k1.append(peakdens3D(x,1))\n",
"Figure 1 from paper",
"plt.figure(figsize=(7,5))\nplt.plot(xs,ys_1d_k01,color=\"black\",ls=\":\",lw=2)\nplt.plot(xs,ys_1d_k05,color=\"black\",ls=\"--\",lw=2)\nplt.plot(xs,ys_1d_k1,color=\"black\",ls=\"-\",lw=2)\nplt.plot(xs,ys_2d_k01,color=\"blue\",ls=\":\",lw=2)\nplt.plot(xs,ys_2d_k05,color=\"blue\",ls=\"--\",lw=2)\nplt.plot(xs,ys_2d_k1,color=\"blue\",ls=\"-\",lw=2)\nplt.plot(xs,ys_3d_k01,color=\"red\",ls=\":\",lw=2)\nplt.plot(xs,ys_3d_k05,color=\"red\",ls=\"--\",lw=2)\nplt.plot(xs,ys_3d_k1,color=\"red\",ls=\"-\",lw=2)\nplt.ylim([-0.1,0.55])\nplt.show()",
"Apply the distribution to simulated data\nI now simulate data, extract peaks and compare these simulated peaks with the theoretical distribution.",
"os.chdir(\"/Users/Joke/Documents/Onderzoek/Studie_7_newpower/WORKDIR/\")\n\nsm=1\nsmooth_FWHM = 3\nsmooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))\ndata = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1)\nminimum = data.min()\nnewdata = data - minimum #little trick because fsl.model.Cluster ignores negative values\nimg=nib.Nifti1Image(newdata,np.eye(4))\nimg.to_filename(os.path.join(\"RF_\"+str(sm)+\".nii.gz\"))\ncl=fsl.model.Cluster()\ncl.inputs.threshold = 0\ncl.inputs.in_file=os.path.join(\"RF_\"+str(sm)+\".nii.gz\")\ncl.inputs.out_localmax_txt_file=os.path.join(\"locmax_\"+str(sm)+\".txt\")\ncl.inputs.num_maxima=10000000\ncl.inputs.connectivity=26\ncl.inputs.terminal_output='none'\ncl.run()\n\n\npeaks = pd.read_csv(\"locmax_\"+str(1)+\".txt\",sep=\"\\t\").drop('Unnamed: 5',1)\npeaks.Value = peaks.Value + minimum\n\nxn = np.arange(-10,10,0.01)\nyn = []\nfor x in xn:\n yn.append(peakdens3D(x,1))\n\n\nplt.figure(figsize=(7,5))\nplt.hist(peaks.Value,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(-5,5,0.1),label=\"observed distribution\")\nplt.xlim([-2,5])\nplt.ylim([0,0.6])\nplt.plot(xn,yn,color=twocol[1],lw=3,label=\"theoretical distribution\")\nplt.title(\"histogram\")\nplt.xlabel(\"peak height\")\nplt.ylabel(\"density\")\nplt.legend(loc=\"upper left\",frameon=False)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Ziqi-Li/bknqgis
|
pandas/doc/source/style.ipynb
|
gpl-2.0
|
[
"Styling\nNew in version 0.17.1\n<span style=\"color: red\">Provisional: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span>\nThis document is written as a Jupyter Notebook, and can be viewed or downloaded here.\nYou can apply conditional formatting, the visual styling of a DataFrame\ndepending on the data within, by using the DataFrame.style property.\nThis is a property that returns a Styler object, which has\nuseful methods for formatting and displaying DataFrames.\nThe styling is accomplished using CSS.\nYou write \"style functions\" that take scalars, DataFrames or Series, and return like-indexed DataFrames or Series with CSS \"attribute: value\" pairs for the values.\nThese functions can be incrementally passed to the Styler which collects the styles before rendering.\nBuilding Styles\nPass your style functions into one of the following methods:\n\nStyler.applymap: elementwise\nStyler.apply: column-/row-/table-wise\n\nBoth of those methods take a function (and some other keyword arguments) and applies your function to the DataFrame in a certain way.\nStyler.applymap works through the DataFrame elementwise.\nStyler.apply passes each column or row into your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument.\nFor columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None.\nFor Styler.applymap your function should take a scalar and return a single string with the CSS attribute-value pair.\nFor Styler.apply your function should take a Series or DataFrame (depending on the axis parameter), and return a Series or DataFrame with an identical shape where each value is a string with a CSS attribute-value pair.\nLet's see some examples.",
"import matplotlib.pyplot\n# We have this here to trigger matplotlib's font cache stuff.\n# This cell is hidden from the output\n\nimport pandas as pd\nimport numpy as np\n\nnp.random.seed(24)\ndf = pd.DataFrame({'A': np.linspace(1, 10, 10)})\ndf = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],\n axis=1)\ndf.iloc[0, 2] = np.nan",
"Here's a boring example of rendering a DataFrame, without any (visible) styles:",
"df.style",
"Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_ method defined on it so they are rendered automatically. If you want the actual HTML back for further processing or for writing to file call the .render() method which returns a string.\nThe above output looks very similar to the standard DataFrame HTML representation. But we've done some work behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method.",
"df.style.highlight_null().render().split('\\n')[:10]",
"The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).\nWhen writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches those up with the CSS classes that identify each cell.\nLet's write a simple style function that will color negative numbers red and positive numbers black.",
"def color_negative_red(val):\n \"\"\"\n Takes a scalar and returns a string with\n the css property `'color: red'` for negative\n strings, black otherwise.\n \"\"\"\n color = 'red' if val < 0 else 'black'\n return 'color: %s' % color",
"In this case, the cell's style depends only on it's own value.\nThat means we should use the Styler.applymap method which works elementwise.",
"s = df.style.applymap(color_negative_red)\ns",
"Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to resuse your existing knowledge of how to interact with DataFrames.\nNotice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a <style> tag. This will be a common theme.\nFinally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output.\nNow suppose you wanted to highlight the maximum value in each column.\nWe can't use .applymap anymore since that operated elementwise.\nInstead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself.",
"def highlight_max(s):\n '''\n highlight the maximum in a Series yellow.\n '''\n is_max = s == s.max()\n return ['background-color: yellow' if v else '' for v in is_max]\n\ndf.style.apply(highlight_max)",
"In this case the input is a Series, one column at a time.\nNotice that the output shape of highlight_max matches the input shape, an array with len(s) items.\nWe encourage you to use method chains to build up a style piecewise, before finally rending at the end of the chain.",
"df.style.\\\n applymap(color_negative_red).\\\n apply(highlight_max)",
"Above we used Styler.apply to pass in each column one at a time.\n<span style=\"background-color: #DEDEBE\">Debugging Tip: If you're having trouble writing your style function, try just passing it into <code style=\"background-color: #DEDEBE\">DataFrame.apply</code>. Internally, <code style=\"background-color: #DEDEBE\">Styler.apply</code> uses <code style=\"background-color: #DEDEBE\">DataFrame.apply</code> so the result should be the same.</span>\nWhat if you wanted to highlight just the maximum value in the entire table?\nUse .apply(function, axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let's try that next.\nWe'll rewrite our highlight-max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from .apply(axis=None)). We'll also allow the color to be adjustable, to demonstrate that .apply, and .applymap pass along keyword arguments.",
"def highlight_max(data, color='yellow'):\n '''\n highlight the maximum in a Series or DataFrame\n '''\n attr = 'background-color: {}'.format(color)\n if data.ndim == 1: # Series from .apply(axis=0) or axis=1\n is_max = data == data.max()\n return [attr if v else '' for v in is_max]\n else: # from .apply(axis=None)\n is_max = data == data.max().max()\n return pd.DataFrame(np.where(is_max, attr, ''),\n index=data.index, columns=data.columns)",
"When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.",
"df.style.apply(highlight_max, color='darkorange', axis=None)",
"Building Styles Summary\nStyle functions should return strings with one or more CSS attribute: value delimited by semicolons. Use\n\nStyler.applymap(func) for elementwise styles\nStyler.apply(func, axis=0) for columnwise styles\nStyler.apply(func, axis=1) for rowwise styles\nStyler.apply(func, axis=None) for tablewise styles\n\nAnd crucially the input and output shapes of func must match. If x is the input then func(x).shape == x.shape.\nFiner Control: Slicing\nBoth Styler.apply, and Styler.applymap accept a subset keyword.\nThis allows you to apply styles to specific rows or columns, without having to code that logic into your style function.\nThe value passed to subset behaves simlar to slicing a DataFrame.\n\nA scalar is treated as a column label\nA list (or series or numpy array)\nA tuple is treated as (row_indexer, column_indexer)\n\nConsider using pd.IndexSlice to construct the tuple for the last one.",
"df.style.apply(highlight_max, subset=['B', 'C', 'D'])",
"For row and column slicing, any valid indexer to .loc will work.",
"df.style.applymap(color_negative_red,\n subset=pd.IndexSlice[2:5, ['B', 'D']])",
"Only label-based slicing is supported right now, not positional.\nIf your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.\npython\nmy_func2 = functools.partial(my_func, subset=42)\nFiner Control: Display Values\nWe distinguish the display value from the actual value in Styler.\nTo control the display value, the text is printed in each cell, use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a single value and returns a string.",
"df.style.format(\"{:.2%}\")",
"Use a dictionary to format specific columns.",
"df.style.format({'B': \"{:0<4.0f}\", 'D': '{:+.2f}'})",
"Or pass in a callable (or dictionary of callables) for more flexible handling.",
"df.style.format({\"B\": lambda x: \"±{:.2f}\".format(abs(x))})",
"Builtin Styles\nFinally, we expect certain styling functions to be common enough that we've included a few \"built-in\" to the Styler, so you don't have to write them yourself.",
"df.style.highlight_null(null_color='red')",
"You can create \"heatmaps\" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.",
"import seaborn as sns\n\ncm = sns.light_palette(\"green\", as_cmap=True)\n\ns = df.style.background_gradient(cmap=cm)\ns",
"Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.",
"# Uses the full color range\ndf.loc[:4].style.background_gradient(cmap='viridis')\n\n# Compress the color range\n(df.loc[:4]\n .style\n .background_gradient(cmap='viridis', low=.5, high=0)\n .highlight_null('red'))",
"There's also .highlight_min and .highlight_max.",
"df.style.highlight_max(axis=0)",
"Use Styler.set_properties when the style doesn't actually depend on the values.",
"df.style.set_properties(**{'background-color': 'black',\n 'color': 'lawngreen',\n 'border-color': 'white'})",
"Bar charts\nYou can include \"bar charts\" in your DataFrame.",
"df.style.bar(subset=['A', 'B'], color='#d65f5f')",
"New in version 0.20.0 is the ability to customize further the bar chart: You can now have the df.style.bar be centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of the cell), and you can pass a list of [color_negative, color_positive].\nHere's how you can change the above with the new align='mid' option:",
"df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])",
"The following example aims to give a highlight of the behavior of the new align options:",
"import pandas as pd\nfrom IPython.display import HTML\n\n# Test series\ntest1 = pd.Series([-100,-60,-30,-20], name='All Negative')\ntest2 = pd.Series([10,20,50,100], name='All Positive')\ntest3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')\n\nhead = \"\"\"\n<table>\n <thead>\n <th>Align</th>\n <th>All Negative</th>\n <th>All Positive</th>\n <th>Both Neg and Pos</th>\n </thead>\n </tbody>\n\n\"\"\"\n\naligns = ['left','zero','mid']\nfor align in aligns:\n row = \"<tr><th>{}</th>\".format(align)\n for serie in [test1,test2,test3]:\n s = serie.copy()\n s.name=''\n row += \"<td>{}</td>\".format(s.to_frame().style.bar(align=align, \n color=['#d65f5f', '#5fba7d'], \n width=100).render()) #testn['width']\n row += '</tr>'\n head += row\n \nhead+= \"\"\"\n</tbody>\n</table>\"\"\"\n \n\nHTML(head)",
"Sharing Styles\nSay you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set",
"df2 = -df\nstyle1 = df.style.applymap(color_negative_red)\nstyle1\n\nstyle2 = df2.style\nstyle2.use(style1.export())\nstyle2",
"Notice that you're able share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.\nOther Options\nYou've seen a few methods for data-driven styling.\nStyler also provides a few other options for styles that don't depend on the data.\n\nprecision\ncaptions\ntable-wide styles\n\nEach of these can be specified in two ways:\n\nA keyword argument to Styler.__init__\nA call to one of the .set_ methods, e.g. .set_caption\n\nThe best method to use depends on the context. Use the Styler constructor when building many styled DataFrames that should all share the same properties. For interactive use, the.set_ methods are more convenient.\nPrecision\nYou can control the precision of floats using pandas' regular display.precision option.",
"with pd.option_context('display.precision', 2):\n html = (df.style\n .applymap(color_negative_red)\n .apply(highlight_max))\nhtml",
"Or through a set_precision method.",
"df.style\\\n .applymap(color_negative_red)\\\n .apply(highlight_max)\\\n .set_precision(2)",
"Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.\nCaptions\nRegular table captions can be added in a few ways.",
"df.style.set_caption('Colormaps, with a caption.')\\\n .background_gradient(cmap=cm)",
"Table Styles\nThe next option you have are \"table styles\".\nThese are styles that apply to the table as a whole, but don't look at the data.\nCertain sytlings, including pseudo-selectors like :hover can only be used this way.",
"from IPython.display import HTML\n\ndef hover(hover_color=\"#ffff99\"):\n return dict(selector=\"tr:hover\",\n props=[(\"background-color\", \"%s\" % hover_color)])\n\nstyles = [\n hover(),\n dict(selector=\"th\", props=[(\"font-size\", \"150%\"),\n (\"text-align\", \"center\")]),\n dict(selector=\"caption\", props=[(\"caption-side\", \"bottom\")])\n]\nhtml = (df.style.set_table_styles(styles)\n .set_caption(\"Hover to highlight.\"))\nhtml",
"table_styles should be a list of dictionaries.\nEach dictionary should have the selector and props keys.\nThe value for selector should be a valid CSS selector.\nRecall that all the styles are already attached to an id, unique to\neach Styler. This selector is in addition to that id.\nThe value for props should be a list of tuples of ('attribute', 'value').\ntable_styles are extremely flexible, but not as fun to type out by hand.\nWe hope to collect some useful ones either in pandas, or preferably in a new package that builds on top of the tools here.\nCSS Classes\nCertain CSS classes are attached to cells.\n\nIndex and Column names include index_name and level<k> where k is its level in a MultiIndex\nIndex label cells include\nrow_heading\nrow<n> where n is the numeric position of the row\nlevel<k> where k is the level in a MultiIndex\nColumn label cells include\ncol_heading\ncol<n> where n is the numeric position of the column\nlevel<k> where k is the level in a MultiIndex\nBlank cells include blank\nData cells include data\n\nLimitations\n\nDataFrame only (use Series.to_frame().style)\nThe index and columns must be unique\nNo large repr, and performance isn't great; this is intended for summary DataFrames\nYou can only style the values, not the index or columns\nYou can only apply styles, you can't insert new HTML entities\n\nSome of these will be addressed in the future.\nTerms\n\nStyle function: a function that's passed into Styler.apply or Styler.applymap and returns values like 'css attribute: value'\nBuiltin style functions: style functions that are methods on Styler\ntable style: a dictionary with the two keys selector and props. selector is the CSS selector that props will apply to. props is a list of (attribute, value) tuples. A list of table styles passed into Styler.\n\nFun stuff\nHere are a few interesting examples.\nStyler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.",
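Since each table_styles entry is just a dict with selector and props keys, the list is easy to build programmatically. A minimal sketch of that structure (the style_rule helper and the class names used here are illustrative, not part of the pandas API):

```python
def style_rule(selector, **props):
    # build one table_styles entry: a dict with "selector" and "props" keys,
    # where props is a list of ('attribute', 'value') tuples
    return {"selector": selector,
            "props": [(k.replace("_", "-"), v) for k, v in props.items()]}

styles = [
    style_rule("tr:hover", background_color="#ffff99"),
    style_rule("th", font_size="150%", text_align="center"),
]
# a list like this can then be handed to df.style.set_table_styles(styles)
```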
"from IPython.html import widgets\n@widgets.interact\ndef f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):\n return df.style.background_gradient(\n cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,\n as_cmap=True)\n )\n\ndef magnify():\n return [dict(selector=\"th\",\n props=[(\"font-size\", \"4pt\")]),\n dict(selector=\"td\",\n props=[('padding', \"0em 0em\")]),\n dict(selector=\"th:hover\",\n props=[(\"font-size\", \"12pt\")]),\n dict(selector=\"tr:hover td:hover\",\n props=[('max-width', '200px'),\n ('font-size', '12pt')])\n]\n\nnp.random.seed(25)\ncmap = cmap=sns.diverging_palette(5, 250, as_cmap=True)\nbigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()\n\nbigdf.style.background_gradient(cmap, axis=1)\\\n .set_properties(**{'max-width': '80px', 'font-size': '1pt'})\\\n .set_caption(\"Hover to magnify\")\\\n .set_precision(2)\\\n .set_table_styles(magnify())",
"Export to Excel\nNew in version 0.20.0\n<span style=\"color: red\">Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span>\nSome support is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL engine. CSS2.2 properties handled include:\n\nbackground-color\nborder-style, border-width, border-color and their {top, right, bottom, left variants}\ncolor\nfont-family\nfont-style\nfont-weight\ntext-align\ntext-decoration\nvertical-align\nwhite-space: nowrap\n\nOnly CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.",
"df.style.\\\n applymap(color_negative_red).\\\n apply(highlight_max).\\\n to_excel('styled.xlsx', engine='openpyxl')",
"A screenshot of the output:\n\nExtensibility\nThe core of pandas is, and will remain, its \"high-performance, easy-to-use data structures\".\nWith that in mind, we hope that DataFrame.style accomplishes two goals\n\nProvide an API that is pleasing to use interactively and is \"good enough\" for many tasks\nProvide the foundations for dedicated libraries to build on\n\nIf you build a great library on top of this, let us know and we'll link to it.\nSubclassing\nIf the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template.\nWe'll show an example of extending the default template to insert a custom header before each table.",
"from jinja2 import Environment, ChoiceLoader, FileSystemLoader\nfrom IPython.display import HTML\nfrom pandas.io.formats.style import Styler\n\n%mkdir templates",
"This next cell writes the custom template.\nWe extend the template html.tpl, which comes with pandas.",
"%%file templates/myhtml.tpl\n{% extends \"html.tpl\" %}\n{% block table %}\n<h1>{{ table_title|default(\"My Table\") }}</h1>\n{{ super() }}\n{% endblock table %}",
"Now that we've created a template, we need to set up a subclass of Styler that\nknows about it.",
"class MyStyler(Styler):\n env = Environment(\n loader=ChoiceLoader([\n FileSystemLoader(\"templates\"), # contains ours\n Styler.loader, # the default\n ])\n )\n template = env.get_template(\"myhtml.tpl\")",
"Notice that we include the original loader in our environment's loader.\nThat's because we extend the original template, so the Jinja environment needs\nto be able to find it.\nNow we can use that custom styler. Its __init__ takes a DataFrame.",
"MyStyler(df)",
"Our custom template accepts a table_title keyword. We can provide the value in the .render method.",
"HTML(MyStyler(df).render(table_title=\"Extending Example\"))",
"For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.",
"EasyStyler = Styler.from_custom_template(\"templates\", \"myhtml.tpl\")\nEasyStyler(df)",
"Here's the template structure:",
"with open(\"template_structure.html\") as f:\n structure = f.read()\n \nHTML(structure)",
"See the template in the GitHub repo for more details.",
"# Hack to get the same style in the notebook as the\n# main site. This is hidden in the docs.\nfrom IPython.display import HTML\nwith open(\"themes/nature_with_gtoc/static/nature.css_t\") as f:\n css = f.read()\n \nHTML('<style>{}</style>'.format(css))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tgsmith61591/skutil
|
doc/examples/h2o/h2o_example.ipynb
|
bsd-3-clause
|
[
"<br/><br/>\n\nskutil\nSkutil brings the best of both worlds to H2O and sklearn, delivering an easy transition into the world of distributed computing that H2O offers, while providing the same, familiar interface that sklearn users have come to know and love. This notebook will give an example of how to use skutil preprocessors with H2OEstimators and H2OFrames.\nAuthor: Taylor G Smith\nContact: tgsmith61591@gmail.com\nPython packages you will need:\n - python 2.7\n - numpy >= 1.6\n - scipy >= 0.17\n - scikit-learn >= 0.16\n - pandas >= 0.18\n - cython >= 0.22\n - h2o >= 3.8.2.9\nMisc. requirements (for compiling Fortran a la f2py):\n - gfortran\n - gcc\n - Note that the El Capitan Apple Developer tool upgrade necessitates upgrading this! Use:\n brew upgrade gcc\n\nThis notebook is intended for an audience with a working understanding of machine learning principles and a background in Python development, ideally sklearn or H2O users. Note that this notebook is not designed to teach machine learning, but to demonstrate use of the skutil package.\nProcession of events:\n\nData split—always the first step!\nPreprocessing:\nBalance response classes in train set\nRemove near-zero variance features\nRemove multicollinear features\n\n\nModeling\nFormulate pipeline\nGrid search\n\n\nModel selection\n... (not shown here, but other models built)\nAll models finally evaluated against holdout\n\n\nModel persistence",
"from __future__ import print_function, division, absolute_import\nimport warnings\nimport skutil\nimport sklearn\nimport h2o\nimport pandas as pd\nimport numpy as np\n\n# we'll be plotting inline...\n%matplotlib inline\n\nprint('Skutil version: %s' % skutil.__version__)\nprint('H2O version: %s' % h2o.__version__)\nprint('Numpy version: %s' % np.__version__)\nprint('Sklearn version: %s' % sklearn.__version__)\nprint('Pandas version: %s' % pd.__version__)",
"Initialize H2O\nFirst, we'll start our H2O cluster...",
"with warnings.catch_warnings():\n warnings.simplefilter('ignore')\n \n # I started this cluster up via CLI with:\n # $ java -Xmx2g -jar /anaconda/h2o_jar/h2o.jar\n h2o.init(ip='10.7.187.84', port=54321, start_h2o=False)",
"Load data\nWe'll load sklearn's breast cancer data. Using skutil's from_pandas method, we can upload a Pandas frame to the H2O cloud.",
"from sklearn.datasets import load_breast_cancer\nfrom skutil.h2o.util import from_pandas\n\n# import data, load into pandas\nbc = load_breast_cancer()\nX = pd.DataFrame.from_records(data=bc.data, columns=bc.feature_names)\nX['target'] = bc.target\n\n# push to h2o cloud\nX = from_pandas(X)\nprint(X.shape)\nX.head()\n\n# Here are our feature names:\nx = list(bc.feature_names)\ny = 'target'",
"train/test split\nSklearn provides a great mechanism for splitting data into a train and validation set. Skutil provides the same mechanism for h2o frames. This cell does the following:\n\nMakes the response variable an enum\nCreates two splits:\nX_train: 75%\nX_val: 25%",
"from skutil.h2o import h2o_train_test_split\n\n# first, let's make sure our target is a factor\nX[y] = X[y].asfactor()\n\n# we'll use 75% of the data for training, 25% for validation\nX_train, X_val = h2o_train_test_split(X, train_size=0.75, random_state=42)\n\n# make sure we did it right...\n# assert X.shape[0] == (X_train.shape[0] + X_val.shape[0])",
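The split itself is just a seeded shuffle of row indices cut at the train fraction. A pure-Python sketch of that mechanic (h2o_train_test_split operates on distributed H2OFrames; this only illustrates the logic, and the helper name is made up):

```python
import random

def train_test_indices(n_rows, train_size=0.75, random_state=42):
    # shuffle all row indices reproducibly, then cut at the train fraction
    idx = list(range(n_rows))
    random.Random(random_state).shuffle(idx)
    n_train = int(round(n_rows * train_size))
    return idx[:n_train], idx[n_train:]

# the breast cancer set has 569 rows
train_idx, val_idx = train_test_indices(569)
```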
"preprocessing with skutil.h2o\nSkutil provides an h2o module which delivers some skutil feature_selection classes that can operate on an H2OFrame. Each BaseH2OTransformer has the following __init__ signature:\nBaseH2OTransformer(self, feature_names=None, target_feature=None)\n\nThe selector will only operate on the feature_names (if provided—else it will operate on all features) and will always exclude the target_feature.\nThe first step would be to ensure our data is balanced, as we don't want imbalanced minority/majority classes. The problem of class imbalance is well-documented, and many solutions have been proposed. Skutil provides a mechanism by which we could over-sample the minority class using the H2OOversamplingClassBalancer, or under-sample the majority class using the H2OUndersamplingClassBalancer.\nFortunately for us, the classes in this dataset are fairly balanced, so we can move on to the next piece.\nHandling near-zero variance\nSome predictors contain few unique values and are considered \"near-zero variance\" predictors. For many parametric models, this may cause the fit to be unstable. Skutil's NearZeroVarianceFilterer and H2ONearZeroVarianceFilterer drop features with variance below a given threshold (based on caret's preprocessor).\nNote: sklearn added this in 0.18 (released last week) under VarianceThreshold",
"from skutil.h2o import H2ONearZeroVarianceFilterer\n\n# Let's determine whether we're at risk for any near-zero variance\nnzv = H2ONearZeroVarianceFilterer(feature_names=x, target_feature=y, threshold=1e-4)\nnzv.fit(X_train)\n\n# let's see if anything was dropped...\nnzv.drop_\n\nnzv.var_",
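The near-zero variance rule itself is simple: compute each feature's variance and drop those below the threshold. A hedged sketch of that logic (the H2O filterer computes variances in the cluster; the function and feature names here are illustrative):

```python
from statistics import pvariance

def near_zero_variance(columns, threshold=1e-4):
    # columns: dict mapping feature name -> list of values
    # returns (kept, dropped) feature-name lists
    dropped = [name for name, vals in columns.items()
               if pvariance(vals) < threshold]
    kept = [name for name in columns if name not in dropped]
    return kept, dropped

kept, dropped = near_zero_variance({
    "informative": [0.1, 0.9, 0.4, 0.7],
    "constant":    [1.0, 1.0, 1.0, 1.0],   # zero variance -> dropped
})
```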
"Multicollinearity\nMulticollinearity (MC) can be detrimental to the fit of parametric models (for our example, we're going to use a tree-based model, which is non-parametric, but the demo is still useful), and can cause confounding results in some models' variable importances. With skutil, we can filter out features that are correlated beyond a certain absolute threshold. When a violating correlation is identified, the feature with the highest mean absolute correlation is removed (see also).\nBefore filtering out collinear features, let's take a look at the correlation matrix.",
"from skutil.h2o import h2o_corr_plot\n\n# note that we want to exclude the target!!\nh2o_corr_plot(X_train[x], xticklabels=x, yticklabels=x)\n\nfrom skutil.h2o import H2OMulticollinearityFilterer\n\n# Are we at risk of any multicollinearity?\nmcf = H2OMulticollinearityFilterer(feature_names=x, target_feature=y, threshold=0.90)\nmcf.fit(X_train)\n\n# we can look at the dropped features\nmcf.correlations_",
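The dropping heuristic can be sketched without H2O: among pairs correlated beyond the threshold, remove the member with the highest mean absolute correlation, and repeat until no violating pair remains. This only illustrates the heuristic, not the H2OMulticollinearityFilterer internals:

```python
def filter_collinear(corr, names, threshold=0.9):
    # corr: symmetric list-of-lists of pairwise correlations
    keep = list(range(len(names)))
    while True:
        # find remaining pairs whose |correlation| violates the threshold
        pairs = [(abs(corr[i][j]), i, j)
                 for ai, i in enumerate(keep) for j in keep[ai + 1:]
                 if abs(corr[i][j]) > threshold]
        if not pairs:
            break
        _, i, j = max(pairs)

        def mean_abs_corr(k):
            others = [x for x in keep if x != k]
            return sum(abs(corr[k][o]) for o in others) / len(others)

        # drop the member of the worst pair with the highest mean |correlation|
        drop = i if mean_abs_corr(i) >= mean_abs_corr(j) else j
        keep.remove(drop)
    return [names[k] for k in keep]

# 'a' and 'b' violate the 0.9 threshold; 'a' has the higher mean |correlation|
survivors = filter_collinear(
    [[1.0, 0.95, 0.5], [0.95, 1.0, 0.2], [0.5, 0.2, 1.0]],
    ["a", "b", "c"], threshold=0.9)
```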
"Dropping features\nAs you'll see in the next section (Pipelines), where certain preprocessing steps take place matters. If there are a subset of features on which you don't want to model or process, you can drop them out. Sometimes this is more effective than creating a list of potentially thousands of feature names to pass as the feature_names parameter.",
"from skutil.h2o import H2OFeatureDropper\n\n# maybe I don't like 'mean fractal dimension'\ndropper = H2OFeatureDropper(feature_names=['mean fractal dimension'], target_feature=y)\ntransformed = dropper.fit_transform(X_train)\n\n# we can ensure it's not there\nassert not 'mean fractal dimension' in transformed.columns",
"skutil.h2o modeling\nSkutil's h2o module allows us to form the Pipeline objects we're familiar with from sklearn. This permits us to string a series of preprocessors together, with an optional H2OEstimator as the last step. Like sklearn Pipelines, the first argument is a single list of length-two tuples (where the first arg is the name of the step, and the second is the Estimator/Transformer), however the H2OPipeline takes two more arguments: feature_names and target_feature.\nNote that the feature_names arg is the names the first preprocessor will operate on; after that, all remaining feature names (i.e., not the target) will be passed to the next processor.",
"from skutil.h2o import H2OPipeline\nfrom h2o.estimators import H2ORandomForestEstimator\nfrom skutil.h2o.metrics import h2o_accuracy_score # same as sklearn's, but with H2OFrames\n\n# let's fit a pipeline with our estimator...\npipe = H2OPipeline([\n ('nzv', H2ONearZeroVarianceFilterer(threshold=1e-1)),\n ('mcf', H2OMulticollinearityFilterer(threshold=0.95)),\n ('rf' , H2ORandomForestEstimator(ntrees=50, max_depth=8, min_rows=5))\n ], \n \n # feature_names is the set of features the first transformer\n # will operate on. The remaining features will be passed\n # to the next step\n feature_names=x, \n target_feature=y)\n\n\n# fit...\npipe = pipe.fit(X_train)\n\n\n# eval accuracy on validation set\npred = pipe.predict(X_val)\nactual = X_val[y]\npred = pred['predict']\nprint('Validation accuracy: %.5f' % h2o_accuracy_score(actual, pred))",
"Which features were retained?\nWe can see which features were modeled on with the training_cols_ attribute of the fitted pipe.",
"pipe.training_cols_",
"Hyperparameter optimization\nWith relatively little effort, we got > 93% accuracy on our validation set! Can we improve that? We can use sklearn-esque grid searches, which also allow us to search over preprocessor objects to optimize a set of hyperparameters.",
"from skutil.h2o import H2ORandomizedSearchCV\nfrom skutil.h2o import H2OKFold\nfrom scipy.stats import uniform, randint\n\n# define our random state\nrand_state = 2016\n\n# we have the option to choose the model that maximizes CV scores,\n# or the model that minimizes std deviations between CV scores.\n# let's choose the former for this example\nminimize = 'bias'\n\n# let's redefine our pipeline\npipe = H2OPipeline([\n ('nzv', H2ONearZeroVarianceFilterer()),\n ('mcf', H2OMulticollinearityFilterer()),\n ('rf' , H2ORandomForestEstimator(seed=rand_state))\n ])\n\n# our hyperparameters over which to search...\nhyper = {\n 'nzv__threshold' : uniform(1e-4,1e-1), # see scipy.stats.uniform:\n 'mcf__threshold' : uniform(0.7, 0.29), # uniform in range (0.7 + 0.29)\n 'rf__ntrees' : randint(50, 100),\n 'rf__max_depth' : randint(10, 12),\n 'rf__min_rows' : randint(25, 50)\n}\n\n# define our grid search\nsearch = H2ORandomizedSearchCV(\n estimator=pipe,\n param_grid=hyper,\n feature_names=x,\n target_feature=y,\n n_iter=2, # keep it small for our demo...\n random_state=rand_state,\n scoring='accuracy_score',\n cv=H2OKFold(n_folds=3, shuffle=True, random_state=rand_state),\n verbose=3,\n minimize=minimize\n )\n\n# fit\nsearch.fit(X_train)",
"Model evaluation\nBeyond merely observing our validation set score, we can dig into the cross validation scores of each model in our H2O grid search, and select the model that has not only the best mean score, but the model that minimizes variability in the CV scores.",
"from skutil.utils import report_grid_score_detail\n\n# now let's look deeper...\nsort_by = 'std' if minimize == 'variance' else 'score'\nreport_grid_score_detail(search, charts=True, sort_results=True, \n ascending=minimize=='variance',\n sort_by=sort_by)",
"Variable importance\nWe can easily extract the best model's variable importances like so:",
"search.varimp()",
"Model evaluation—introduce the validation set\nSo our best estimator achieves a mean cross validation accuracy of 93%! We can predict on our best estimator as follows:",
"val_preds = search.predict(X_val)\n\n# print accuracy\nprint('Validation accuracy: %.5f' % h2o_accuracy_score(actual, val_preds['predict']))\nval_preds.head()",
"Model selection\n(Not shown: other models we built and evaluated against the validation set (once!)—we only introduce the holdout set at the very end)\nIn a real situation, you probably will have a holdout set, and will have built several models. After you have a collection of models and you'd like to select one, you introduce the holdout set only once!\nModel persistence\nWhen we find a model that performs well, we can save it to disk for later use:",
"import os\n\n# get absolute path\ncwd = os.getcwd()\nmodel_path = os.path.join(cwd, 'grid.pkl')\n\n# save -- it's that easy!!!\nsearch.save(location=model_path, warn_if_exists=False)",
"Loading and making predictions",
"search = H2ORandomizedSearchCV.load(model_path)\nnew_predictions = search.predict(X_val)\nnew_predictions.head()",
"Cleanup\nAlways make sure to shut down your cluster...",
"h2o.shutdown(prompt=False) # shutdown cluster\nos.unlink(model_path) # remove the pickle file..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SIMEXP/Projects
|
metaad/network_level_meta.ipynb
|
mit
|
[
"# AUTHOR Christian Dansereau 2016\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport pandas as pd\nimport scipy.io\nimport os\nimport nibabel as nib\nfrom nibabel.affines import apply_affine\nfrom nilearn import plotting\nimport numpy.linalg as npl",
"Load data",
"#seed_data = pd.read_csv('20160128_AD_Decrease_Meta_Christian.csv')\n\ntemplate_036= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale036.nii.gz')\ntemplate_020= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale020.nii.gz')\ntemplate_012= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale012.nii.gz')\ntemplate_007= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale007.nii.gz')\n\nscale = '7'\n\nif scale == '7':\n template = template_007\nelse:\n template = template_036\n\n#seed_data = pd.read_csv('20160205_AD_Decrease_Meta_Final.csv')\n#seed_data = pd.read_csv('20160129_AD_Increase_Meta_Final.csv')\n\n#seed_data = pd.read_csv('20160205_MCI_Decrease_Meta_Final.csv')\n#seed_data = pd.read_csv('20160204_MCI_Increase_Meta_Final.csv')\n\nseed_data = pd.read_csv('20160205_ADMCI_Decrease_Meta_Final.csv')\n#seed_data = pd.read_csv('20160129_ADMCI_Increase_Meta_Final.csv')\n\n# ******************* #\n\n#output_stats = 'AD_decrease_scale'+scale+'_stats.mat'\n#output_vol = 'AD_decrease_ratio'+scale+'_vol.nii.gz'\n#output_stats = 'AD_increase_scale'+scale+'_stats.mat'\n#output_vol = 'AD_increase_ratio_scale'+scale+'_vol.nii.gz'\n\n#output_stats = 'MCI_decrease_scale'+scale+'_stats.mat'\n#output_vol = 'MCI_decrease_ratio_scale'+scale+'_vol.nii.gz'\n#output_stats = 'MCI_increase_scale'+scale+'_stats.mat'\n#output_vol = 'MCI_increase_ratio_scale'+scale+'_vol.nii.gz'\n\noutput_stats = 'ADMCI_decrease_scale'+scale+'_stats.mat'\noutput_vol = 'ADMCI_decrease_ratio_scale'+scale+'_vol.nii.gz'\n#output_stats = 'ADMCI_increase_scale'+scale+'_stats.mat'\n#output_vol = 'ADMCI_increase_ratio_scale'+scale+'_vol.nii.gz'\n\nseed_data",
"Get the number of coordinates reported for each network",
"from numpy.linalg import norm\n# find the closest network to the coordinate\ndef get_nearest_net(template,world_coor):\n list_coord = np.array(np.where(template.get_data()>0))\n mni_coord = apply_affine(template.get_affine(),list_coord.T)\n distances = norm(mni_coord-np.array(world_coor),axis=1)\n #print distances.shape\n idx_nearest_net = np.where(distances == np.min(distances))[0][0]\n return int(template.get_data()[list_coord[:,idx_nearest_net][0],list_coord[:,idx_nearest_net][1],list_coord[:,idx_nearest_net][2]])\n\ndef get_nearest_voxel(template,world_coor):\n list_coord = np.array(np.where(template.get_data()>0))\n mni_coord = apply_affine(template.get_affine(),list_coord.T)\n distances = norm(mni_coord-np.array(world_coor),axis=1)\n #print distances.shape\n idx_nearest_net = np.where(distances == np.min(distances))[0][0]\n #return int(template.get_data()[list_coord[:,idx_nearest_net][0],list_coord[:,idx_nearest_net][1],list_coord[:,idx_nearest_net][2]])\n #return mni_coord[:,idx_nearest_net][0],mni_coord[:,idx_nearest_net][1],mni_coord[:,idx_nearest_net][2]\n return mni_coord[idx_nearest_net,:]\n\n\n#get_nearest_net(template,[-15,-10,-10])\n# Convert from world MNI space to the EPI voxel space\ndef get_world2vox(template, mni_coord):\n return np.round(apply_affine(npl.inv(template.get_affine()),mni_coord)+[1])\n \nnetwork_votes = np.zeros((np.max(template.get_data().flatten()),1))[:,0]\nnetwork_votes\n\n# get the voxel coordinates of the MNI seeds\nmni_space_targets = seed_data[['x','y','z']].values\nvox_corrd = get_world2vox(template,mni_space_targets)\nvotes = []\nn_outofbrain=0\nfor i in range(vox_corrd.shape[0]):\n net_class = template.get_data()[vox_corrd[i,0],vox_corrd[i,1],vox_corrd[i,2]]\n if net_class==0:\n n_outofbrain+=1\n votes.append(get_nearest_net(template,[mni_space_targets[i,0],mni_space_targets[i,1],mni_space_targets[i,2]]))\n else:\n votes.append(net_class)\n\nprint('Out of brain coordinates: '+ str(n_outofbrain))\nvotes = np.array(votes)\n\n# take one vote for each study only\nuni_pmid = np.unique(seed_data['PMID'])\nvotes.shape\nfrequency_votes=np.zeros((len(uni_pmid),len(network_votes)))\n#for i in range(len(uni_pmid)):\n# frequency_votes = np.hstack((frequency_votes,np.unique(votes[(seed_data['PMID']==uni_pmid[i]).values])))\nfor i in range(len(uni_pmid)):\n aa = votes[(seed_data['PMID']==uni_pmid[i]).values]\n for j in aa:\n frequency_votes[i,j-1] = (aa == j).sum()/float(len(aa))\nprint frequency_votes\n\n\n# compile the stats for each network\n#for i in range(1,len(network_votes)+1):\n# network_votes[i-1] = np.mean(frequency_votes==i)\nnetwork_votes = np.mean(frequency_votes,axis=0)\nprint network_votes \n#vox_corrd[np.array(votes)==5,:]\n\nget_nearest_net(template,[-24,-10, 22])\n\nget_nearest_voxel(template,[-24,-10, 22])\n\nprint '#AD<HC'\nprint '#15: 48.0 -12.0 66.0'\nprint get_nearest_voxel(template,[48,-12, 66])\nprint '#34: 50.0 0 61.0'\nprint get_nearest_voxel(template,[50,0, 61])\nprint '#35: 48.0 -12.0 66.0'\nprint get_nearest_voxel(template,[48,-12, 66])\nprint '#46: 0 -96.0 28.0'\nprint get_nearest_voxel(template,[0,-96, 28])\nprint '#52: -4.510000228881836 13.970000267028809 -30.84000015258789'\nprint get_nearest_voxel(template,[-4.510000228881836,13.970000267028809, -30.84000015258789])\n\nprint 'AD>HC'\nprint '#5: 57.0 42.0 24.0'\nprint get_nearest_voxel(template,[57,42, 24])\nprint '#105: 45.0 55.0 31.0'\nprint get_nearest_voxel(template,[45,55, 31])\n\nprint 'AD decrease'\nprint '#10: 6.0 55.0 47.0'\nprint 'MNI coord',get_nearest_voxel(template,[6.0, 55.0, 47.0])\nprint '#20: -22.0 -70.0 60.0'\nprint 'MNI coord',get_nearest_voxel(template,[-22.0, -70.0, 60.0])\nprint '#108: 30.0 -94.0 20.0'\nprint 'MNI coord',get_nearest_voxel(template,[30.0, -94.0, 20.0])\nprint 'AD increase'\nprint '#2: -45.0 33.0 48.0'\nprint 'MNI coord',get_nearest_voxel(template,[-45.0, 33.0, 48.0])\nprint '#19: 37.0 -85.0 30.0'\nprint 'MNI coord',get_nearest_voxel(template,[37.0, -85.0, 30.0])\nprint '#20: -52.0 -61.0 45.0'\nprint 'MNI coord',get_nearest_voxel(template,[-52.0, -61.0, 45.0])\nprint '#31: 7.909999847412109 -102.08000183105469 8.859999656677246'\nprint 'MNI coord',get_nearest_voxel(template,[7.909999847412109, -102.08000183105469, 8.859999656677246])\nprint 'MCI decrease'\nprint '#6: 45.0 3.0 57.0'\nprint 'MNI coord',get_nearest_voxel(template,[45.0, 3.0, 57.0])\nprint '#7: 54.0 9.0 45.0'\nprint 'MNI coord',get_nearest_voxel(template,[54.0, 9.0, 45.0])\nprint '#14: 57.0 -21.0 51.0'\nprint 'MNI coord',get_nearest_voxel(template,[57.0, -21.0, 51.0])\nprint '#32: 5.769999980926514 -97.69000244140625 10.75'\nprint 'MNI coord',get_nearest_voxel(template,[5.769999980926514, -97.69000244140625, 10.75])\nprint '#80: 42.0 -78.0 36.0'\nprint 'MNI coord',get_nearest_voxel(template,[42.0, -78.0, 36.0])\nprint '#82: -54.0 -72.0 18.0'\nprint 'MNI coord',get_nearest_voxel(template,[-54.0, -72.0, 18.0])\nprint '#105: 50.0 -58.0 50.0'\nprint 'MNI coord',get_nearest_voxel(template,[50.0, -58.0, 50.0])\nprint '#107: 50.0 -58.0 50.0'\nprint 'MNI coord',get_nearest_voxel(template,[50.0, -58.0, 50.0])\nprint 'MCI increase'\nprint '#18: -36.0 57.0 21.0'\nprint 'MNI coord',get_nearest_voxel(template,[-36.0, 57.0, 21.0])\n\ndef gen1perm(n_seeds,proba):\n ratio_votes_1study = np.zeros_like(proba)\n perm_votes = np.random.choice(range(0,len(proba)),size=(n_seeds,1),p=proba)\n for j in perm_votes:\n ratio_votes_1study[j] = (perm_votes == j).sum()/float(len(perm_votes))\n return ratio_votes_1study\n\n# check if the proba is respected \n#print proba_networks\n#gen1perm(10000,proba_networks)\n#range(0,len(proba_networks))",
"Generate random coordinates\nRandom coordinates are assigned to each network with a probability proportional to its volume relative to the total volume of the brain.",
"'''\nfrom numpy.random import permutation\ndef permute_table(frequency_votes,n_iter):\n h0_results = []\n for n in range(n_iter):\n perm_freq = frequency_votes.copy()\n #print perm_freq\n for i in range(perm_freq.shape[0]):\n perm_freq[i,:] = permutation(perm_freq[i,:])\n #print perm_freq\n h0_results.append(np.mean(perm_freq,axis=0))\n return np.array(h0_results).T\n'''\ndef compute_freq(votes,data_ratio_votes,seed_data,proba):\n # take one vote for each study only\n uni_pmid = np.unique(seed_data['PMID'])\n ratio_votes=np.zeros((data_ratio_votes.shape[0],data_ratio_votes.shape[1],10000))\n for idx_perm in range(ratio_votes.shape[-1]):\n # frequency_votes = np.hstack((frequency_votes,np.unique(votes[(seed_data['PMID']==uni_pmid[i]).values])))\n for i in range(len(uni_pmid)):\n aa = votes[(seed_data['PMID']==uni_pmid[i]).values]\n n_seeds = len(aa)\n ratio_votes[i,:,idx_perm] = gen1perm(n_seeds,proba)\n #print ratio_votes.shape\n # compute the frequency\n freq_data = np.mean(ratio_votes,axis=0)\n \n for i in range(freq_data.shape[0]):\n freq_data[i,:] = np.sort(freq_data[i,:])[::-1]\n \n return freq_data\n\n# Total volume of the brain\ntotal_volume = np.sum(template.get_data()>0)\n\n# compute the proba of each network\nproba_networks=[]\nfor i in range(1,len(network_votes)+1):\n proba_networks.append(np.sum(template.get_data()==i)/(total_volume*1.))\nproba_networks = np.array(proba_networks)\nprint np.sum(proba_networks)\nprint proba_networks\n\n# generate random values \n'''\ndef gen_rnd_hits(proba,n_seeds):\n results_h0 = np.random.choice(range(0,len(proba)),size=(n_seeds,1000),p=proba)\n #results_h0 = permute_table(frequency_votes,1000)\n print results_h0.shape\n ditributions = []\n for i in range(frequency_votes.shape[1]):\n results_h0[i,:] = np.sort(results_h0[i,:])[::-1]\n #ditributions.append(one_way_pdf) \n #return ditributions\n return results_h0\n'''\n#dist_data = gen_rnd_hits(proba_networks,np.sum(network_votes))\ndist_data = compute_freq(votes,frequency_votes,seed_data,proba_networks)\n\nplt.figure()\nplt.hist(dist_data[0],bins=np.arange(0,1,.01))\nplt.figure()\nplt.plot(dist_data[0].T)",
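The null coordinates above are drawn so that each network is hit with probability proportional to its voxel count. A small sketch of that sampling step (the volumes below are made-up numbers; the notebook derives the real ones from the template):

```python
import random
from collections import Counter

rng = random.Random(0)
volumes = [120, 60, 20]                      # hypothetical voxel counts per network
proba = [v / sum(volumes) for v in volumes]  # 0.6, 0.3, 0.1
# draw 10000 network labels, weighted by relative volume
draws = rng.choices(range(1, len(volumes) + 1), weights=proba, k=10000)
counts = Counter(draws)
```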
"Generate the p-values for each network",
"def getpval_old(nhit,dist_data):\n distribution_val = np.histogram(dist_data,bins=np.arange(0,1,0.01))\n idx_bin = np.where((distribution_val[1]>=round(nhit,2)) & (distribution_val[1]<=round(nhit,2)))[0][0]\n #print distribution_val[1]\n return (np.sum(distribution_val[0][idx_bin:-1])+1)/(dist_data.shape[0]+1.)\n\ndef getpval(target,dist_data):\n dist_sorted = np.sort(np.copy(dist_data))\n b = np.sum(dist_sorted > target)\n #print b\n #print dist_data.shape[0]\n #print distribution_val[1]\n return ((b+1.)/(dist_data.shape[0]+1.))\n\nprint network_votes\n\npval_results=[]\nfor i in range(0,len(dist_data)):\n pval_results.append(getpval(network_votes[i],dist_data[i,:]))\n \nprint pval_results\nplt.figure()\nplt.bar(np.arange(1,len(pval_results)+1),pval_results,width=0.5,align='center')\nplt.xlabel('Networks')\nplt.ylabel('p-value')",
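The getpval function above implements the standard permutation p-value, with a +1 in numerator and denominator so the estimate is never exactly zero. In isolation, the formula is:

```python
def perm_pval(observed, null_dist):
    # fraction of null draws strictly exceeding the observed statistic,
    # with the +1 correction in numerator and denominator
    b = sum(1 for v in null_dist if v > observed)
    return (b + 1.0) / (len(null_dist) + 1.0)

# one of four null values exceeds 0.35, so p = (1 + 1) / (4 + 1) = 0.4
p = perm_pval(0.35, [0.1, 0.2, 0.3, 0.4])
```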
"Map the p-values to the template",
"from proteus.matrix import tseries as ts\nhitfreq_vol = ts.vec2map(network_votes,template)\npval_vol = ts.vec2map(1-np.array(pval_results),template)\nplt.figure()\nplotting.plot_stat_map(hitfreq_vol,cut_coords=(0,0,0),draw_cross=False)\nplt.figure()\nplotting.plot_stat_map(pval_vol,cut_coords=(0,0,0),draw_cross=False)\n",
"FDR correction of the p-values",
"# correct for FDR\nfrom statsmodels.sandbox.stats.multicomp import fdrcorrection0\n\nfdr_test,fdr_pval=fdrcorrection0(pval_results,alpha=0.05)\nprint network_votes\nprint fdr_test\nprint fdr_pval\n\n# save the results\n\npath_output = '/home/cdansereau/git/Projects/metaad/maps_results/'\nstats_results = {'Hits':network_votes ,'pvalues':pval_results,'fdr_test':fdr_test,'fdr_pval':fdr_pval,'n_outofbrain':n_outofbrain}\nscipy.io.savemat(path_output + output_stats, stats_results)\nhitfreq_vol.to_filename(os.path.join(path_output,output_vol))\n#hitfreq_vol.to_filename(os.path.join('/home/cdansereau/git/Projects/metaad/maps_results/','AD_pval_vol.nii.gz'))"
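fdrcorrection0 applies the Benjamini-Hochberg step-up procedure. A self-contained sketch of the same idea (my own minimal implementation, not the statsmodels code):

```python
def fdr_bh(pvals, alpha=0.05):
    # Benjamini-Hochberg: adjusted p = p * n / rank, made monotone
    # from the largest rank downward, then compared to alpha
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adj[i] = running_min
    reject = [a <= alpha for a in adj]
    return reject, adj

reject, adj = fdr_bh([0.01, 0.04, 0.03, 0.5])
```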
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
YihaoLu/statsmodels
|
examples/notebooks/statespace_varmax.ipynb
|
bsd-3-clause
|
[
"VARMAX models\nThis is a notebook stub for VARMAX models. Full development will be done after impulse response functions are available.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\ndta = sm.datasets.webuse('lutkepohl2', 'http://www.stata-press.com/data/r12/')\ndta.index = dta.qtr\nendog = dta.ix['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]",
"Model specification\nThe VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).\nExample 1: VAR\nBelow is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.",
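The parameter count quoted above can be reproduced with simple arithmetic. A rough tally for a VARX(p) model (the conventions below are mine, and the exact statsmodels parameterization may differ slightly): under these conventions, a 3-variable VAR(2) with a constant and an unstructured covariance gives the 27 parameters mentioned.

```python
def varx_param_count(k, p, trend=True, n_exog=0, error_cov="unstructured"):
    # p lag coefficient matrices, each of shape (k, k)
    n = k * k * p
    if trend:
        n += k                     # one intercept per equation
    n += k * n_exog                # exogenous regressor coefficients
    if error_cov == "unstructured":
        n += k * (k + 1) // 2      # free elements of the innovation covariance
    else:
        n += k                     # diagonal innovation covariance
    return n

# the 3-variable VAR(2) with a constant: 18 + 3 + 6 = 27 parameters
full_var = varx_param_count(k=3, p=2)
# the VARX(2) fit below: 2 endog series, no trend, 1 exog regressor
small_varx = varx_param_count(k=2, p=2, trend=False, n_exog=1)
```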
"# exog = pd.Series(np.arange(len(endog)), index=endog.index, name='trend')\nexog = endog['dln_consump']\nmod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog)\nres = mod.fit(maxiter=1000)\nprint res.summary()",
"Example 2: VMA\nA vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.",
"mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')\nres = mod.fit(maxiter=1000)\nprint res.summary()",
"Caution: VARMA(p,q) specifications\nAlthough the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with caution (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.",
"mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))\nres = mod.fit(maxiter=1000)\nprint(res.summary())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_tf_dics.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Time-frequency beamforming using DICS\nCompute DICS source power in a grid of time-frequency windows and display\nresults.\nThe original reference is:\nDalal et al. Five-dimensional neuroimaging: Localization of the time-frequency\ndynamics of cortical activity. NeuroImage (2008) vol. 40 (4) pp. 1686-1700",
"# Author: Roman Goj <roman.goj@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.event import make_fixed_length_events\nfrom mne.datasets import sample\nfrom mne.time_frequency import csd_epochs\nfrom mne.beamformer import tf_dics\nfrom mne.viz import plot_source_spectrogram\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nnoise_fname = data_path + '/MEG/sample/ernoise_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nsubjects_dir = data_path + '/subjects'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name",
"Read raw data",
"raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.info['bads'] = ['MEG 2443']  # 1 bad MEG channel\n\n# Pick a selection of magnetometer channels. A subset of all channels was used\n# to speed up the example. For a solution based on all MEG channels use\n# meg=True, selection=None and add mag=4e-12 to the reject dictionary.\nleft_temporal_channels = mne.read_selection('Left-temporal')\npicks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,\n stim=False, exclude='bads',\n selection=left_temporal_channels)\nraw.pick_channels([raw.ch_names[pick] for pick in picks])\nreject = dict(mag=4e-12)\n# Re-normalize our empty-room projectors, which should be fine after\n# subselection\nraw.info.normalize_proj()\n\n# Setting time windows. Note that tmin and tmax are set so that time-frequency\n# beamforming will be performed for a wider range of time points than will\n# later be displayed on the final spectrogram. This ensures that all time bins\n# displayed represent an average of an equal number of time windows.\ntmin, tmax, tstep = -0.55, 0.75, 0.05  # s\ntmin_plot, tmax_plot = -0.3, 0.5  # s\n\n# Read epochs\nevent_id = 1\nevents = mne.read_events(event_fname)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n baseline=None, preload=True, proj=True, reject=reject)\n\n# Read empty room noise raw data\nraw_noise = mne.io.read_raw_fif(noise_fname, preload=True)\nraw_noise.info['bads'] = ['MEG 2443']  # 1 bad MEG channel\nraw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])\nraw_noise.info.normalize_proj()\n\n# Create noise epochs and make sure the number of noise epochs corresponds to\n# the number of data epochs\nevents_noise = make_fixed_length_events(raw_noise, event_id)\nepochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,\n tmax_plot, baseline=None, preload=True, proj=True,\n reject=reject)\nepochs_noise.info.normalize_proj()\nepochs_noise.apply_proj()\n# then make sure the number of epochs is the same\nepochs_noise = epochs_noise[:len(epochs.events)]\n\n# Read forward operator\nforward = mne.read_forward_solution(fname_fwd, surf_ori=True)\n\n# Read label\nlabel = mne.read_label(fname_label)",
"Time-frequency beamforming based on DICS",
"# Setting frequency bins as in Dalal et al. 2008\nfreq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz\nwin_lengths = [0.3, 0.2, 0.15, 0.1] # s\n# Then set FFTs length for each frequency range.\n# Should be a power of 2 to be faster.\nn_ffts = [256, 128, 128, 128]\n\n# Subtract evoked response prior to computation?\nsubtract_evoked = False\n\n# Calculating noise cross-spectral density from empty room noise for each\n# frequency bin and the corresponding time window length. To calculate noise\n# from the baseline period in the data, change epochs_noise to epochs\nnoise_csds = []\nfor freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):\n noise_csd = csd_epochs(epochs_noise, mode='fourier',\n fmin=freq_bin[0], fmax=freq_bin[1],\n fsum=True, tmin=-win_length, tmax=0,\n n_fft=n_fft)\n noise_csds.append(noise_csd)\n\n# Computing DICS solutions for time-frequency windows in a label in source\n# space for faster computation, use label=None for full solution\nstcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,\n freq_bins=freq_bins, subtract_evoked=subtract_evoked,\n n_ffts=n_ffts, reg=0.001, label=label)\n\n# Plotting source spectrogram for source with maximum activity\n# Note that tmin and tmax are set to display a time range that is smaller than\n# the one for which beamforming estimates were calculated. This ensures that\n# all time bins shown are a result of smoothing across an identical number of\n# time windows.\nplot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,\n source_index=None, colorbar=True)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
minesh1291/Practicing-Kaggle
|
MNIST_2017/dump_/women_2018_gridsearchCV.ipynb
|
gpl-3.0
|
[
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\n\nfrom subprocess import check_output\nprint(check_output([\"ls\", \"../input\"]).decode(\"utf8\"))\n\n# Any results you write to the current directory are saved as output.",
"First we import some datasets of interest",
"#the seed information\ndf_seeds = pd.read_csv('../input/WNCAATourneySeeds_SampleTourney2018.csv')\n\n#tour information\ndf_tour = pd.read_csv('../input/WRegularSeasonCompactResults_PrelimData2018.csv')",
"Now we separate the winners from the losers and organize our dataset",
"df_seeds['seed_int'] = df_seeds['Seed'].apply( lambda x : int(x[1:3]) )\n\ndf_winseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'WTeamID', 'seed_int':'WSeed'})\ndf_lossseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'LTeamID', 'seed_int':'LSeed'})\ndf_dummy = pd.merge(left=df_tour, right=df_winseeds, how='left', on=['Season', 'WTeamID'])\ndf_concat = pd.merge(left=df_dummy, right=df_lossseeds, on=['Season', 'LTeamID'])",
"Now we match the detailed results to the merged dataset above",
"df_concat['DiffSeed'] = df_concat[['LSeed', 'WSeed']].apply(lambda x : 0 if x[0] == x[1] else 1, axis = 1)",
"Here we get our submission info",
"#prepares sample submission\ndf_sample_sub = pd.read_csv('../input/WSampleSubmissionStage2.csv')\n\ndf_sample_sub['Season'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[0]) )\ndf_sample_sub['TeamID1'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[1]) )\ndf_sample_sub['TeamID2'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[2]) )",
"Training Data Creation",
"winners = df_concat.rename( columns = { 'WTeamID' : 'TeamID1', \n 'LTeamID' : 'TeamID2',\n 'WScore' : 'Team1_Score',\n 'LScore' : 'Team2_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)\nwinners['Result'] = 1.0\n\nlosers = df_concat.rename( columns = { 'WTeamID' : 'TeamID2', \n 'LTeamID' : 'TeamID1',\n 'WScore' : 'Team2_Score',\n 'LScore' : 'Team1_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)\n\nlosers['Result'] = 0.0\n\ntrain = pd.concat( [winners, losers], axis = 0).reset_index(drop = True)\n\ntrain['Score_Ratio'] = train['Team1_Score'] / train['Team2_Score']\ntrain['Score_Total'] = train['Team1_Score'] + train['Team2_Score']\ntrain['Score_Pct'] = train['Team1_Score'] / train['Score_Total']",
"We will only consider years relevant to our test submission",
"df_sample_sub['Season'].unique()",
"Now let's join the training games with the submission pairs so we can look at both teams' info, including TeamID2, the second team.",
"train_test_inner = pd.merge( train.loc[ train['Season'].isin([2018]), : ].reset_index(drop = True), \n df_sample_sub.drop(['ID', 'Pred'], axis = 1), \n on = ['Season', 'TeamID1', 'TeamID2'], how = 'inner' )\n\ntrain_test_inner.head()",
"From the inner join, we will create data per team id to estimate the parameters we are missing that are independent of the year. Essentially, we are trying to estimate the average behavior of the team across the year.",
"team1d_num_ot = train_test_inner.groupby(['Season', 'TeamID1'])['NumOT'].median().reset_index()\\\n.set_index('Season').rename(columns = {'NumOT' : 'NumOT1'})\nteam2d_num_ot = train_test_inner.groupby(['Season', 'TeamID2'])['NumOT'].median().reset_index()\\\n.set_index('Season').rename(columns = {'NumOT' : 'NumOT2'})\n\nnum_ot = team1d_num_ot.join(team2d_num_ot).reset_index()\n\n#sum the median number of OT periods from the two groupings and round\nnum_ot['NumOT'] = num_ot[['NumOT1', 'NumOT2']].apply(lambda x : round( x.sum() ), axis = 1 )\n\nnum_ot.head()",
"Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.",
"team1d_score_spread = train_test_inner.groupby(['Season', 'TeamID1'])[['Score_Ratio', 'Score_Pct']].median().reset_index()\\\n.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio1', 'Score_Pct' : 'Score_Pct1'})\nteam2d_score_spread = train_test_inner.groupby(['Season', 'TeamID2'])[['Score_Ratio', 'Score_Pct']].median().reset_index()\\\n.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio2', 'Score_Pct' : 'Score_Pct2'})\n\nscore_spread = team1d_score_spread.join(team2d_score_spread).reset_index()\n\n#geometric mean of score ratio of team 1 and inverse of team 2\nscore_spread['Score_Ratio'] = score_spread[['Score_Ratio1', 'Score_Ratio2']].apply(lambda x : ( x[0] * ( x[1] ** -1.0) ), axis = 1 ) ** 0.5\n\n#harmonic mean of score pct\nscore_spread['Score_Pct'] = score_spread[['Score_Pct1', 'Score_Pct2']].apply(lambda x : 0.5*( x[0] ** -1.0 ) + 0.5*( 1.0 - x[1] ) ** -1.0, axis = 1 ) ** -1.0\n\nscore_spread.head()",
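The `Score_Ratio` combination above is a geometric mean of a team's score ratio as TeamID1 and the inverse of the ratio observed when it appears as TeamID2. A small numeric check (the ratio values are hypothetical):

```python
# A team's median score ratio when listed as TeamID1, and the ratio observed
# from its games when it is listed as TeamID2 (hypothetical values).
ratio_as_team1 = 1.2
ratio_as_team2 = 0.8  # invert this one so both numbers face the same way

# Geometric mean of ratio_as_team1 and 1/ratio_as_team2,
# mirroring score_spread['Score_Ratio'] above.
combined = (ratio_as_team1 * ratio_as_team2 ** -1.0) ** 0.5
print(combined)  # sqrt(1.2 / 0.8) = sqrt(1.5) ≈ 1.2247
```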
"Now let's create a model based solely on the inner group and predict those probabilities. \nWe will get the teams with the missing result.",
"X_train = train_test_inner.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]\ntrain_labels = train_test_inner['Result']\n\ntrain_test_outer = pd.merge( train.loc[ train['Season'].isin([2014, 2015, 2016, 2017]), : ].reset_index(drop = True), \n df_sample_sub.drop(['ID', 'Pred'], axis = 1), \n on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer' )\n\ntrain_test_outer = train_test_outer.loc[ train_test_outer['Result'].isnull(), \n ['TeamID1', 'TeamID2', 'Season']]\n\ntrain_test_missing = pd.merge( pd.merge( score_spread.loc[:, ['TeamID1', 'TeamID2', 'Season', 'Score_Ratio', 'Score_Pct']], \n train_test_outer, on = ['TeamID1', 'TeamID2', 'Season']),\n num_ot.loc[:, ['TeamID1', 'TeamID2', 'Season', 'NumOT']],\n on = ['TeamID1', 'TeamID2', 'Season'])",
"We scale our data for our keras classifier, and make sure our categorical variables are properly processed.",
"X_test = train_test_missing.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]\n\nn = X_train.shape[0]\n\ntrain_test_merge = pd.concat( [X_train, X_test], axis = 0 ).reset_index(drop = True)\n\ntrain_test_merge = pd.concat( [pd.get_dummies( train_test_merge['Season'].astype(object) ), \n train_test_merge.drop('Season', axis = 1) ], axis = 1 )\n\ntrain_test_merge = pd.concat( [pd.get_dummies( train_test_merge['NumOT'].astype(object) ), \n train_test_merge.drop('NumOT', axis = 1) ], axis = 1 )\n\nX_train = train_test_merge.loc[:(n - 1), :].reset_index(drop = True)\nX_test = train_test_merge.loc[n:, :].reset_index(drop = True)\n\n# Min-max scale; the epsilon guards against zero-range columns.\nx_max = X_train.max()\nx_min = X_train.min()\n\nX_train = ( X_train - x_min ) / ( x_max - x_min + 1e-14)\nX_test = ( X_test - x_min ) / ( x_max - x_min + 1e-14)\n\ntrain_labels.value_counts()\n\nX_train.head()\n\n# Candidate regularization strengths for the logistic regression.\nCs = list(np.linspace(9e-15, 10.1e-14, 200))\n\n# Cross-validated logistic regression, scored by log loss.\nfrom sklearn.linear_model import LogisticRegressionCV\nmodel = LogisticRegressionCV(Cs=Cs, cv=80, scoring=\"neg_log_loss\", random_state=1)\nmodel.fit(X_train, train_labels)\n\n# Mean CV score for each candidate C.\nsco = model.scores_[1].mean(axis=0)\n\nimport matplotlib.pyplot as plt\nplt.plot(Cs, sco)\nplt.xlabel('C')\nplt.ylabel('Mean neg. log loss')\nplt.show()\n\nindex_min = np.argmin(sco)\nprint(Cs[index_min], sco.min())\n\n# An equivalent grid search over the same candidates; GridSearchCV refits the\n# best estimator on the full training data (refit=True by default).\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import GridSearchCV\nparameters = dict(C=Cs)\nbase = LogisticRegression(max_iter=1000, tol=1e-11, solver=\"lbfgs\", random_state=1)\nclf = GridSearchCV(base, parameters, scoring=\"neg_log_loss\", cv=80, n_jobs=8)\nclf.fit(X_train, train_labels)\n\nprint(\"C:\", clf.best_estimator_.C, \" loss:\", clf.best_score_)\n\nscores = clf.cv_results_['mean_test_score']\nplt.plot(Cs, scores)\nplt.xlabel('C')\nplt.ylabel('Mean score')\nplt.show()\n\n# Use the refit best estimator for the predictions below.\nmodel = clf.best_estimator_",
"Here we store our probabilities",
"train_test_inner['Pred1'] = model.predict_proba(X_train)[:,1]\ntrain_test_missing['Pred1'] = model.predict_proba(X_test)[:,1]",
"We merge our predictions",
"sub = pd.merge(df_sample_sub, \n pd.concat( [train_test_missing.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']],\n train_test_inner.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']] ],\n axis = 0).reset_index(drop = True),\n on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer')",
"We get the 'average' probability of success for each team",
"team1_probs = sub.groupby('TeamID1')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()\nteam2_probs = sub.groupby('TeamID2')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()",
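The `(x ** -1.0).mean() ** -1.0` pattern above is a harmonic mean, which is pulled toward the smaller probabilities more strongly than an arithmetic mean. A quick sanity check:

```python
import numpy as np

preds = np.array([0.5, 0.25])  # two predicted win probabilities

# Harmonic mean, mirroring the groupby-apply above: n / sum(1/p_i)
harmonic = (preds ** -1.0).mean() ** -1.0
arithmetic = preds.mean()

print(harmonic)    # 2 / (1/0.5 + 1/0.25) = 1/3
print(arithmetic)  # 0.375
```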
"Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events.",
"sub['Pred'] = sub[['TeamID1', 'TeamID2','Pred1']]\\\n.apply(lambda x : team1_probs.get(x[0]) * ( 1 - team2_probs.get(x[1]) ) if np.isnan(x[2]) else x[2], \n axis = 1)\n\nsub = sub.drop_duplicates(subset=[\"ID\"], keep='first')\n\nsub[['ID', 'Pred']].to_csv('sub.csv', index = False)\n\nsub[['ID', 'Pred']].head(20)"
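The independence-based imputation above amounts to multiplying two marginal probabilities when the model's own prediction is missing. A minimal standalone sketch (the function name is hypothetical, not part of the notebook):

```python
import math

def impute_pred(p_team1_wins, p_team2_wins, existing_pred=float('nan')):
    """Fall back to the product rule only when the model prediction is missing."""
    if math.isnan(existing_pred):
        # P(team1 beats team2) ≈ P(team1 wins) * P(team2 loses),
        # assuming the two events are independent.
        return p_team1_wins * (1.0 - p_team2_wins)
    return existing_pred

print(impute_pred(0.8, 0.3))        # 0.8 * 0.7 = 0.56
print(impute_pred(0.8, 0.3, 0.65))  # keeps the model's own prediction: 0.65
```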
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_time_frequency_mixed_norm_inverse.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute MxNE with time-frequency sparse prior\nThe TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)\nthat promotes focal (sparse) sources (such as dipole fitting techniques).\nThe benefit of this approach is that:\n\nit is spatio-temporal without assuming stationarity (sources properties\n can vary over time)\nactivations are localized in space, time and frequency in one step.\nwith a built-in filtering process based on a short time Fourier\n transform (STFT), data does not need to be low passed (just high pass\n to make the signals zero mean).\nthe solver solves a convex optimization problem, hence cannot be\n trapped in local minima.\n\nReferences:\nA. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski\nTime-Frequency Mixed-Norm Estimates: Sparse M/EEG imaging with\nnon-stationary source activations\nNeuroimage, Volume 70, 15 April 2013, Pages 410-422, ISSN 1053-8119,\nDOI: 10.1016/j.neuroimage.2012.12.051.\nA. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski\nFunctional Brain Imaging with M/EEG Using Structured Sparsity in\nTime-Frequency Dictionaries\nProceedings Information Processing in Medical Imaging\nLecture Notes in Computer Science, 2011, Volume 6801/2011,\n600-611, DOI: 10.1007/978-3-642-22092-0_49\nhttps://doi.org/10.1007/978-3-642-22092-0_49",
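Schematically, and loosely following the notation of Gramfort et al. (2013), TF-MxNE estimates STFT coefficients $Z$ of the source time courses by solving a convex problem of roughly the form (a sketch of the objective, not the exact parametrization used by tf_mixed_norm):

```latex
\hat{Z} = \operatorname*{arg\,min}_{Z} \;
  \tfrac{1}{2}\,\lVert M - G Z \Phi^{\mathsf{H}} \rVert_F^2
  \;+\; \lambda_{\mathrm{space}} \,\lVert Z \rVert_{2,1}
  \;+\; \lambda_{\mathrm{time}}  \,\lVert Z \rVert_{1}
```

Here $M$ is the measured data, $G$ the gain (lead-field) matrix, and $\Phi$ the STFT dictionary; the mixed $\ell_{2,1}$ norm promotes spatial sparsity (few active sources), while the $\ell_1$ norm promotes sparsity in the time-frequency plane, corresponding to the alpha_space and alpha_time parameters below.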
"# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\nfrom mne.inverse_sparse import tf_mixed_norm\nfrom mne.viz import plot_sparse_source_estimates\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'\ncov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'\n\n# Read noise covariance matrix\ncov = mne.read_cov(cov_fname)\n\n# Handling average file\ncondition = 'Left visual'\nevoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))\nevoked = mne.pick_channels_evoked(evoked)\n# We make the window slightly larger than what you'll eventually be interested\n# in ([-0.05, 0.3]) to avoid edge effects.\nevoked.crop(tmin=-0.1, tmax=0.4)\n\n# Handling forward solution\nforward = mne.read_forward_solution(fwd_fname, force_fixed=False,\n surf_ori=True)",
"Run solver",
"# alpha_space regularization parameter is between 0 and 100 (100 is high)\nalpha_space = 50. # spatial regularization parameter\n# alpha_time parameter promotes temporal smoothness\n# (0 means no temporal regularization)\nalpha_time = 1. # temporal regularization parameter\n\nloose, depth = 0.2, 0.9 # loose orientation & depth weighting\n\n# Compute dSPM solution to be used as weights in MxNE\ninverse_operator = make_inverse_operator(evoked.info, forward, cov,\n loose=loose, depth=depth)\nstc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,\n method='dSPM')\n\n# Compute TF-MxNE inverse solution\nstc, residual = tf_mixed_norm(evoked, forward, cov, alpha_space, alpha_time,\n loose=loose, depth=depth, maxit=200, tol=1e-4,\n weights=stc_dspm, weights_min=8., debias=True,\n wsize=16, tstep=4, window=0.05,\n return_residual=True)\n\n# Crop to remove edges\nstc.crop(tmin=-0.05, tmax=0.3)\nevoked.crop(tmin=-0.05, tmax=0.3)\nresidual.crop(tmin=-0.05, tmax=0.3)\n\n# Show the evoked response and the residual for gradiometers\nylim = dict(grad=[-120, 120])\nevoked.pick_types(meg='grad', exclude='bads')\nevoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,\n proj=True)\n\nresidual.pick_types(meg='grad', exclude='bads')\nresidual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,\n proj=True)",
"View in 2D and 3D (\"glass\" brain like 3D plot)",
"plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),\n opacity=0.1, fig_name=\"TF-MxNE (cond %s)\"\n % condition, modes=['sphere'], scale_factors=[1.])\n\ntime_label = 'TF-MxNE time=%0.2f ms'\nclim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])\nbrain = stc.plot('sample', 'inflated', 'rh', views='medial',\n clim=clim, time_label=time_label, smoothing_steps=5,\n subjects_dir=subjects_dir, initial_time=150, time_unit='ms')\nbrain.add_label(\"V1\", color=\"yellow\", scalar_thresh=.5, borders=True)\nbrain.add_label(\"V2\", color=\"red\", scalar_thresh=.5, borders=True)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/sentencepiece
|
python/sentencepiece_python_module_example.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/sentencepiece/blob/master/python/sentencepiece_python_module_example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nSentencepiece python module\nThis notebook describes comprehensive examples of the sentencepiece Python module. \nSince the Python module calls the C++ API through SWIG, this document is also useful for developing a C++ client.\nInstall and data preparation\nWe use the small training data (botchan.txt) in this example. \n(Botchan is a novel written by Natsume Sōseki in 1906. The sample is an English-translated one.)",
"!pip install sentencepiece\n!wget https://raw.githubusercontent.com/google/sentencepiece/master/data/botchan.txt",
"Basic end-to-end example",
"import sentencepiece as spm\n\n# train sentencepiece model from `botchan.txt` and makes `m.model` and `m.vocab`\n# `m.vocab` is just a reference. not used in the segmentation.\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000')\n\n# makes segmenter instance and loads the model file (m.model)\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\n# encode: text => id\nprint(sp.encode_as_pieces('This is a test'))\nprint(sp.encode_as_ids('This is a test'))\n\n# decode: id => text\nprint(sp.decode_pieces(['▁This', '▁is', '▁a', '▁t', 'est']))\nprint(sp.decode_ids([209, 31, 9, 375, 586]))\n\n# returns vocab size\nprint(sp.get_piece_size())\n\n# id <=> piece conversion\nprint(sp.id_to_piece(209))\nprint(sp.piece_to_id('▁This'))\n\n# returns 0 for unknown tokens (we can change the id for UNK)\nprint(sp.piece_to_id('__MUST_BE_UNKNOWN__'))\n\n# <unk>, <s>, </s> are defined by default. Their ids are (0, 1, 2)\n# <s> and </s> are defined as 'control' symbol.\nfor id in range(3):\n print(sp.id_to_piece(id), sp.is_control(id))",
"Loads model from byte stream\nSentencepiece's model file is just a serialized protocol buffer. We can instantiate a sentencepiece processor from a bytes object with the load_from_serialized_proto method.",
"import tensorflow as tf\n\n# Assumes that m.model is stored in non-Posix file system.\nserialized_model_proto = tf.gfile.GFile('m.model', 'rb').read()\n\nsp = spm.SentencePieceProcessor()\nsp.load_from_serialized_proto(serialized_model_proto)\n\nprint(sp.encode_as_pieces('this is a test'))",
"User defined and control symbols\nWe can define special tokens (symbols) to tweak the DNN behavior through the tokens. Typical examples are BERT's special symbols, e.g., [SEP] and [CLS].\nThere are two types of special tokens:\n\nuser defined symbols: Always treated as one token in any context. These symbols can appear in the input sentence. \ncontrol symbols: We only reserve ids for these tokens. Even if these tokens appear in the input text, they are not handled as one token. The user needs to insert ids explicitly after encoding.\n\nFor experimental purposes, user defined symbols are easier to use since the user can change the behavior just by modifying the input text. However, we want to use control symbols in a production setting in order to prevent users from tweaking the behavior by feeding these special symbols in their input text.",
"## Example of user defined symbols\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_user --user_defined_symbols=<sep>,<cls> --vocab_size=2000')\n\nsp_user = spm.SentencePieceProcessor()\nsp_user.load('m_user.model')\n\n# ids are reserved in both modes.\n# <unk>=0, <s>=1, </s>=2, <sep>=3, <cls>=4\n# user defined symbols allow these symbols to appear in the text.\nprint(sp_user.encode_as_pieces('this is a test<sep> hello world<cls>'))\nprint(sp_user.piece_to_id('<sep>'))  # 3\nprint(sp_user.piece_to_id('<cls>'))  # 4\nprint('3=', sp_user.decode_ids([3]))  # decoded to <sep>\nprint('4=', sp_user.decode_ids([4]))  # decoded to <cls>\n\n## Example of control symbols\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_ctrl --control_symbols=<sep>,<cls> --vocab_size=2000')\n\nsp_ctrl = spm.SentencePieceProcessor()\nsp_ctrl.load('m_ctrl.model')\n\n# control symbols just reserve ids.\nprint(sp_ctrl.encode_as_pieces('this is a test<sep> hello world<cls>'))\nprint(sp_ctrl.piece_to_id('<sep>'))  # 3\nprint(sp_ctrl.piece_to_id('<cls>'))  # 4\nprint('3=', sp_ctrl.decode_ids([3]))  # decoded to empty\nprint('4=', sp_ctrl.decode_ids([4]))  # decoded to empty",
"BOS/EOS (<s>, </s>) are defined as control symbols, but we can define them as user defined symbols.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_bos_as_user --user_defined_symbols=<s>,</s> --vocab_size=2000')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\nprint(sp.encode_as_pieces('<s> hello</s>')) # <s>,</s> are segmented. (default behavior)\n\nsp = spm.SentencePieceProcessor()\nsp.load('m_bos_as_user.model')\nprint(sp.encode_as_pieces('<s> hello</s>')) # <s>,</s> are handled as one token.",
"Manipulating BOS/EOS/UNK/PAD symbols\nBOS, EOS, UNK, and PAD ids can be obtained with the bos_id(), eos_id(), unk_id(), and pad_id() methods. We can explicitly insert these ids as follows.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\nprint('bos=', sp.bos_id())\nprint('eos=', sp.eos_id())\nprint('unk=', sp.unk_id())\nprint('pad=', sp.pad_id()) # disabled by default\n\n\nprint(sp.encode_as_ids('Hello world'))\n\n# Prepend or append bos/eos ids.\nprint([sp.bos_id()] + sp.encode_as_ids('Hello world') + [sp.eos_id()])",
"Changing the vocab id and surface representation of UNK/BOS/EOS/PAD symbols\nBy default, UNK/BOS/EOS/PAD tokens and their ids are defined as follows:\n|token|UNK|BOS|EOS|PAD|\n|---|---|---|---|---|\n|surface|<unk>|<s>|</s>|<pad>|\n|id|0|1|2|undefined (-1)|\nWe can change these mappings with the --{unk|bos|eos|pad}_id and --{unk|bos|eos|pad}_piece flags.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m --pad_id=0 --unk_id=1 --bos_id=2 --eos_id=3 --pad_piece=[PAD] --unk_piece=[UNK] --bos_piece=[BOS] --eos_piece=[EOS]')\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\n\nfor id in range(4):\n print(sp.id_to_piece(id), sp.is_control(id))",
"When -1 is set, this special symbol is disabled. UNK must not be undefined.",
"# Disable BOS/EOS\nspm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m --bos_id=-1 --eos_id=-1')\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\n# <s>, </s> are UNK.\nprint(sp.unk_id())\nprint(sp.piece_to_id('<s>'))\nprint(sp.piece_to_id('</s>'))",
"UNK id is decoded into U+2047 (⁇) by default. We can change UNK surface with --unk_surface=<STR> flag.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m')\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\nprint(sp.decode_ids([sp.unk_id()])) # default is U+2047\n\nspm.SentencePieceTrainer.train('--input=botchan.txt --vocab_size=2000 --model_prefix=m --unk_surface=__UNKNOWN__')\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\nprint(sp.decode_ids([sp.unk_id()])) ",
"Sampling and nbest segmentation for subword regularization\nWhen --model_type=unigram (default) is used, we can perform sampling and n-best segmentation for data augmentation. See subword regularization paper [kudo18] for more detail.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\n# Can obtain different segmentations per request.\n# There are two hyperparameters for sampling (nbest_size and inverse temperature). See the paper [kudo18] for detail.\nfor n in range(10):\n print(sp.sample_encode_as_pieces('hello world', -1, 0.1))\n \nfor n in range(10):\n print(sp.sample_encode_as_ids('hello world', -1, 0.1))\n\n# get 10 best\nprint(sp.nbest_encode_as_pieces('hello world', 10))\nprint(sp.nbest_encode_as_ids('hello world', 10))",
"BPE (Byte pair encoding) model\nSentencepiece supports BPE (byte-pair-encoding) for subword segmentation with --model_type=bpe flag. We do not find empirical differences in translation quality between BPE and unigram model, but unigram model can perform sampling and n-best segmentation. See subword regularization paper [kudo18] for more detail.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_bpe --vocab_size=2000 --model_type=bpe')\nsp_bpe = spm.SentencePieceProcessor()\nsp_bpe.load('m_bpe.model')\n\nprint('*** BPE ***')\nprint(sp_bpe.encode_as_pieces('thisisatesthelloworld'))\nprint(sp_bpe.nbest_encode_as_pieces('hello world', 5)) # returns an empty list.\n\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_unigram --vocab_size=2000 --model_type=unigram')\nsp_unigram = spm.SentencePieceProcessor()\nsp_unigram.load('m_unigram.model')\n\nprint('*** Unigram ***')\nprint(sp_unigram.encode_as_pieces('thisisatesthelloworld'))\nprint(sp_unigram.nbest_encode_as_pieces('thisisatesthelloworld', 5))",
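BPE's merge loop itself is easy to sketch in plain Python. The toy function below (not sentencepiece's implementation) repeatedly merges the most frequent adjacent symbol pair:

```python
from collections import Counter

def bpe_merges(word_freqs, num_merges):
    """Learn BPE merge rules from a {word: frequency} dict (toy version)."""
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for syms, f in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Re-segment every word with the new merge applied left to right.
        new_vocab = {}
        for syms, f in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i + 1 < len(syms) and (syms[i], syms[i + 1]) == best:
                    out.append(syms[i] + syms[i + 1])
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            new_vocab[tuple(out)] = new_vocab.get(tuple(out), 0) + f
        vocab = new_vocab
    return merges

print(bpe_merges({'low': 5, 'lower': 2}, 2))  # [('l', 'o'), ('lo', 'w')]
```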
"Character and word model\nSentencepiece supports character and word segmentation with the --model_type=char and --model_type=word flags.\nIn word segmentation, sentencepiece just segments tokens with whitespaces, so the input text must be pre-tokenized.\nWe can apply different segmentation algorithms transparently without changing pre/post processors.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_char --model_type=char --vocab_size=400')\n\nsp_char = spm.SentencePieceProcessor()\nsp_char.load('m_char.model')\n\nprint(sp_char.encode_as_pieces('this is a test.'))\nprint(sp_char.encode_as_ids('this is a test.'))\n\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m_word --model_type=word --vocab_size=2000')\n\nsp_word = spm.SentencePieceProcessor()\nsp_word.load('m_word.model')\n\nprint(sp_word.encode_as_pieces('this is a test.')) # '.' will not be one token.\nprint(sp_word.encode_as_ids('this is a test.'))",
"Text normalization\nSentencepiece provides the following general pre-defined normalization rules. We can change the normalizer with the --normalization_rule_name=<NAME> flag.\n\nnmt_nfkc: NFKC normalization with some additional normalization around spaces. (default)\nnfkc: original NFKC normalization.\nnmt_nfkc_cf: nmt_nfkc + Unicode case folding (mostly lower casing)\nnfkc_cf: nfkc + Unicode case folding.\nidentity: no normalization",
"import sentencepiece as spm\n\n# NFKC normalization and lower casing.\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000 --normalization_rule_name=nfkc_cf')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\nprint(sp.encode_as_pieces('HELLO WORLD.')) # lower casing and normalization",
"The normalization is performed with user-defined string-to-string mappings and leftmost longest matching.\nWe can also define custom normalization rules as a TSV file. The TSV files for the pre-defined normalization rules can be found in the data directory (sample). The normalization rule is compiled into an FST and embedded in the model file, so we don't need to specify the normalization configuration in the segmentation phase.\nHere's an example of custom normalization. The TSV file is fed with the --normalization_rule_tsv=<FILE> flag.",
"def tocode(s): \n out = [] \n for c in s: \n out.append(str(hex(ord(c))).replace('0x', 'U+')) \n return ' '.join(out) \n\n# TSV format: source Unicode code points <tab> target code points\n# normalize \"don't => do not, I'm => I am\"\nwith open('normalization_rule.tsv', 'w') as f:\n f.write(tocode(\"I'm\") + '\\t' + tocode(\"I am\") + '\\n')\n f.write(tocode(\"don't\") + '\\t' + tocode(\"do not\") + '\\n')\n\nprint(open('normalization_rule.tsv', 'r').read())\n\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000 --normalization_rule_tsv=normalization_rule.tsv')\n\nsp = spm.SentencePieceProcessor()\n# m.model embeds the normalization rule compiled into an FST.\nsp.load('m.model')\nprint(sp.encode_as_pieces(\"I'm busy\")) # normalized to 'I am busy'\nprint(sp.encode_as_pieces(\"I don't know it.\")) # normalized to 'I do not know it.'\n",
"Randomizing training data\nSentencepiece loads all the lines of training data into memory to train the model. Larger training data increases training time and memory usage, both of which are roughly linear in the size of the training data. When --input_sentence_size=<SIZE> is specified, Sentencepiece randomly samples <SIZE> lines from the whole training data. --shuffle_input_sentence=false disables the random shuffle and takes the first <SIZE> lines instead.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000 --input_sentence_size=1000')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\nsp.encode_as_pieces('this is a test.')",
"Vocabulary restriction\nWe can encode the text using only the tokens specified with the set_vocabulary method. The background of this feature is described in the subword-nmt page.",
"spm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\nprint(sp.encode_as_pieces('this is a test.'))\n\n# Gets all tokens as Python list.\nvocabs = [sp.id_to_piece(id) for id in range(sp.get_piece_size())]\n\n# Aggregates the frequency of each token in the training data.\nfreq = {}\nwith open('botchan.txt', 'r') as f:\n for line in f:\n line = line.rstrip()\n for piece in sp.encode_as_pieces(line):\n freq.setdefault(piece, 0)\n freq[piece] += 1\n \n# only uses the token appearing more than 1000 times in the training data.\nvocabs = list(filter(lambda x : x in freq and freq[x] > 1000, vocabs))\nsp.set_vocabulary(vocabs)\nprint(sp.encode_as_pieces('this is a test.'))\n\n# reset the restriction\nsp.reset_vocabulary()\nprint(sp.encode_as_pieces('this is a test.'))",
"Extracting crossing-words pieces\nSentencepiece does not extract pieces that cross word boundaries (here a word means a whitespace-delimited token). Such a piece never contains the whitespace marker (▁) in the middle.\n--split_by_whitespace=false disables this restriction and allows extracting pieces that cross word boundaries. In CJK (Chinese/Japanese/Korean), this flag has little effect on the final segmentation results, since CJK words are not delimited by whitespace.",
"import re\n\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000 --split_by_whitespace=false')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\n# Gets all tokens as Python list.\nvocabs = [sp.id_to_piece(id) for id in range(sp.get_piece_size())]\n\nfor piece in vocabs[0:500]:\n if re.match('\\w+▁\\w+', piece):\n print(piece)",
"Training sentencepiece model from the word list with frequency\nWe can train the sentencepiece model from pairs of <word, frequency>. First, make a TSV file where the first column is the word and the second column is the frequency. Then, feed this TSV file with the --input_format=tsv flag. Note that when feeding a TSV file as training data, we implicitly assume --split_by_whitespace=true.",
"freq={}\nwith open('botchan.txt', 'r') as f:\n for line in f:\n line = line.rstrip()\n for piece in line.split():\n freq.setdefault(piece, 0)\n freq[piece] += 1\n \nwith open('word_freq_list.tsv', 'w') as f:\n for k, v in freq.items():\n f.write('%s\\t%d\\n' % (k, v))\n \n\nimport sentencepiece as spm\n\nspm.SentencePieceTrainer.train('--input=word_freq_list.tsv --input_format=tsv --model_prefix=m --vocab_size=2000')\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\nprint(sp.encode_as_pieces('this is a test.'))",
"Getting byte offsets of tokens\nSentencepiece keeps track of the byte offset (span) of each token, which is useful for highlighting tokens on top of the unnormalized text.\nWe first need to install the protobuf module and sentencepiece_pb2.py, as the byte offsets and all other segmentation metadata are encoded in a protocol buffer.\nThe encode_as_serialized_proto method returns a serialized SentencePieceText proto. You can get the deserialized object by calling the ParseFromString method.\nThe definition of the SentencePieceText proto is found here.",
"!pip install protobuf\n!wget https://raw.githubusercontent.com/google/sentencepiece/master/python/sentencepiece_pb2.py\n\nimport sentencepiece_pb2\nimport sentencepiece as spm\n\nspm.SentencePieceTrainer.train('--input=botchan.txt --model_prefix=m --vocab_size=2000')\n\nsp = spm.SentencePieceProcessor()\nsp.load('m.model')\n\n# One best result\nspt = sentencepiece_pb2.SentencePieceText()\nspt.ParseFromString(sp.encode_as_serialized_proto('ｈｅｌｌｏ')) # Full width hello\n\n# begin/end (offsets) are pointing to the original input.\nprint(spt)\n\n# Nbest results\nnspt = sentencepiece_pb2.NBestSentencePieceText()\nnspt.ParseFromString(sp.nbest_encode_as_serialized_proto('ｈｅｌｌｏ', 5))\n# print(nspt)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/05c57a644672d33707fd1264df7f5617/plot_time_frequency_global_field_power.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Explore event-related dynamics for specific frequency bands\nThe objective is to show you how to explore spectrally localized\neffects. For this purpose we adapt the method described in [1]_ and use it on\nthe somato dataset. The idea is to track the band-limited temporal evolution\nof spatial patterns by using the :term:`Global Field Power (GFP) <GFP>`.\nWe first bandpass filter the signals and then apply a Hilbert transform. To\nreveal oscillatory activity the evoked response is then subtracted from every\nsingle trial. Finally, we rectify the signals prior to averaging across trials\nby taking the magnitude of the Hilbert transform.\nThen the :term:`GFP` is computed as described in [2]_, using the sum of the\nsquares but without normalization by the rank.\nBaselining is subsequently applied to make the :term:`GFPs <GFP>` comparable\nbetween frequencies.\nThe procedure is then repeated for each frequency band of interest and\nall :term:`GFPs <GFP>` are visualized. To estimate uncertainty, non-parametric\nconfidence intervals are computed as described in [3]_ across channels.\nThe advantage of this method over summarizing the Space x Time x Frequency\noutput of a Morlet Wavelet in frequency bands is relative speed and, more\nimportantly, the clear-cut comparability of the spectral decomposition (the\nsame type of filter is used across all bands).\nWe will use this dataset: somato-dataset\nReferences\n.. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic\n view through the skull (1997). Trends in Neuroscience 20 (1),\n pp. 44-49.\n.. [2] Engemann D. and Gramfort A. (2015) Automated model selection in\n covariance estimation and spatial whitening of MEG and EEG signals,\n vol. 108, 328-342, NeuroImage.\n.. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016).\n Cambridge University Press, Chapter 11.2.",
"# Authors: Denis A. Engemann <denis.engemann@gmail.com>\n# Stefan Appelhoff <stefan.appelhoff@mailbox.org>\n#\n# License: BSD (3-clause)\nimport os.path as op\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import somato\nfrom mne.baseline import rescale\nfrom mne.stats import bootstrap_confidence_interval",
"Set parameters",
"data_path = somato.data_path()\nsubject = '01'\ntask = 'somato'\nraw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',\n 'sub-{}_task-{}_meg.fif'.format(subject, task))\n\n# let's explore some frequency bands\niter_freqs = [\n ('Theta', 4, 7),\n ('Alpha', 8, 12),\n ('Beta', 13, 25),\n ('Gamma', 30, 45)\n]",
"We create average power time courses for each frequency band",
"# set epoching parameters\nevent_id, tmin, tmax = 1, -1., 3.\nbaseline = None\n\n# get the header to extract events\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nfrequency_map = list()\n\nfor band, fmin, fmax in iter_freqs:\n # (re)load the data to save memory\n raw = mne.io.read_raw_fif(raw_fname, preload=True)\n raw.pick_types(meg='grad', eog=True) # we just look at gradiometers\n\n # bandpass filter\n raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up.\n l_trans_bandwidth=1, # make sure filter params are the same\n h_trans_bandwidth=1) # in each band and skip \"auto\" option.\n\n # epoch\n epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,\n reject=dict(grad=4000e-13, eog=350e-6),\n preload=True)\n # remove evoked response\n epochs.subtract_evoked()\n\n # get analytic signal (envelope)\n epochs.apply_hilbert(envelope=True)\n frequency_map.append(((band, fmin, fmax), epochs.average()))\n del epochs\ndel raw",
"Now we can compute the Global Field Power\nWe can track the emergence of spatial patterns compared to baseline\nfor each frequency band, with a bootstrapped confidence interval.\nWe see dominant responses in the Alpha and Beta bands.",
"# Helper function for plotting spread\ndef stat_fun(x):\n \"\"\"Return sum of squares.\"\"\"\n return np.sum(x ** 2, axis=0)\n\n# Plot\nfig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)\ncolors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))\nfor ((freq_name, fmin, fmax), average), color, ax in zip(\n frequency_map, colors, axes.ravel()[::-1]):\n times = average.times * 1e3\n gfp = np.sum(average.data ** 2, axis=0)\n gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))\n ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)\n ax.axhline(0, linestyle='--', color='grey', linewidth=2)\n ci_low, ci_up = bootstrap_confidence_interval(average.data, random_state=0,\n stat_fun=stat_fun)\n ci_low = rescale(ci_low, average.times, baseline=(None, 0))\n ci_up = rescale(ci_up, average.times, baseline=(None, 0))\n ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)\n ax.grid(True)\n ax.set_ylabel('GFP')\n ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),\n xy=(0.95, 0.8),\n horizontalalignment='right',\n xycoords='axes fraction')\n ax.set_xlim(-1000, 3000)\n\naxes.ravel()[-1].set_xlabel('Time [ms]')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
napjon/krisk
|
notebooks/Intro.ipynb
|
bsd-3-clause
|
[
"Krisk is a library for building interactive statistical visualizations on top of ECharts, with pandas and Jupyter integration.",
"import pandas as pd\nimport krisk.plot as kk\n# Use this when you want to nbconvert the notebook (used by nbviewer)\nfrom krisk import init_notebook; init_notebook()",
"We will be using GapMinder data for examples below.",
"df = pd.read_csv('http://www.stat.ubc.ca/~jenny/notOcto/STAT545A/'\n 'examples/gapminder/data/'\n 'gapminderDataFiveYear.txt', sep='\\t')\n\ndf.head()",
"Let's start with a small example: using a bar plot to count the values of a category,",
"kk.bar(df,'continent')",
"Note that by default, the plot already uses a tooltip. You can hover over the plot to see the y-value.\nWe can also plot bars by averaging GDP per capita for each continent,",
"kk.bar(df,'continent',y='gdpPercap',how='mean')",
"We can change x to year, and group by continent,",
"kk.bar(df,'year',y='gdpPercap',c='continent',how='mean')",
"Stack and annotate the chart,",
"(kk.bar(df,'year',y='gdpPercap',c='continent',how='mean',stacked=True,annotate=True)\n .set_size(width=1000))",
"Next we can do the same thing with a line chart, using area, annotation, and a tooltip triggered by the axis,",
"p = kk.line(df,'year',y='gdpPercap',c='continent',how='mean',\n stacked=True,annotate='all',area=True)\np.set_tooltip_style(trigger='axis',axis_pointer='shadow')\np.set_size(width=1000)",
"We can also create a histogram and apply a theme to it,",
"p = (kk.hist(df,x='lifeExp',c='continent',stacked=True,bins=100))\np.set_tooltip_style(trigger='axis',axis_pointer='shadow')\np.set_theme('vintage')",
"Let's get a little bit advanced. We're going to create scatter points of the GapMinder data in 2007. We use Life Expectancy, GDP per Capita, and Population as x, y, and size respectively. We also want to add information to the tooltip, and add and reposition the toolbox, legend, and title.",
"p = kk.scatter(df[df.year == 2007],'lifeExp','gdpPercap',s='pop',c='continent')\np.set_size(width=1000, height=500)\np.set_tooltip_format(['country','lifeExp','gdpPercap','pop','continent'])\np.set_theme('dark')\np.set_toolbox(save_format='png',restore=True,data_zoom=True)\np.set_legend(orient='vertical',x_pos='-1%',y_pos='-3%')\np.set_title('GapMinder of 2007',x_pos='center',y_pos='-5%')",
"In the next few notebooks, we're going to dig deeper into each feature, including what's not discussed here. But this introduction should give a sense of what krisk is capable of."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Almaz-KG/MachineLearning
|
ml-for-finance/python-for-financial-analysis-and-algorithmic-trading/02-NumPy/2-Numpy-Indexing-and-Selection.ipynb
|
apache-2.0
|
[
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\n<center>Copyright Pierian Data 2017</center>\n<center>For more information, visit us at www.pieriandata.com</center>\nNumPy Indexing and Selection\nIn this lecture we will discuss how to select elements or groups of elements from an array.",
"import numpy as np\n\n#Creating sample array\narr = np.arange(0,11)\n\n#Show\narr",
"Bracket Indexing and Selection\nThe simplest way to pick one or more elements of an array looks very similar to Python lists:",
"#Get a value at an index\narr[8]\n\n#Get values in a range\narr[1:5]\n\n#Get values in a range\narr[0:5]",
"Broadcasting\nNumPy arrays differ from normal Python lists because of their ability to broadcast:",
"#Setting a value with index range (Broadcasting)\narr[0:5]=100\n\n#Show\narr\n\n# Reset array, we'll see why I had to reset in a moment\narr = np.arange(0,11)\n\n#Show\narr\n\n#Important notes on Slices\nslice_of_arr = arr[0:6]\n\n#Show slice\nslice_of_arr\n\n#Change Slice\nslice_of_arr[:]=99\n\n#Show Slice again\nslice_of_arr",
"Now note the changes also occur in our original array!",
"arr",
"Data is not copied, it's a view of the original array! This avoids memory problems!",
"#To get a copy, need to be explicit\narr_copy = arr.copy()\n\narr_copy",
"Indexing a 2D array (matrices)\nThe general format is arr_2d[row][col] or arr_2d[row,col]. I recommend usually using the comma notation for clarity.",
"arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))\n\n#Show\narr_2d\n\n#Indexing row\narr_2d[1]\n\n\n# Format is arr_2d[row][col] or arr_2d[row,col]\n\n# Getting individual element value\narr_2d[1][0]\n\n# Getting individual element value\narr_2d[1,0]\n\n# 2D array slicing\n\n#Shape (2,2) from top right corner\narr_2d[:2,1:]\n\n#Shape bottom row\narr_2d[2]\n\n#Shape bottom row\narr_2d[2,:]",
"More Indexing Help\nIndexing a 2D matrix can be a bit confusing at first, especially when you start to add in step size. Try a Google image search for NumPy indexing to find useful images, like this one:\n<img src= 'http://memory.osu.edu/classes/python/_images/numpy_indexing.png' width=500/>\nConditional Selection\nThis is a very fundamental concept that will directly translate to pandas later on, so make sure you understand this part!\nLet's briefly go over how to use brackets for selection based on comparison operators.",
"arr = np.arange(1,11)\narr\n\narr > 4\n\nbool_arr = arr>4\n\nbool_arr\n\narr[bool_arr]\n\narr[arr>2]\n\nx = 2\narr[arr>x]",
"Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
isendel/machine-learning
|
ml-regression/week3-4/week-4-ridge-regression-assignment-1.ipynb
|
apache-2.0
|
[
"Regression Week 4: Ridge Regression (interpretation)\nIn this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:\n* Use a pre-built implementation of regression (scikit-learn) to run polynomial regression\n* Use matplotlib to visualize polynomial regressions\n* Use a pre-built implementation of regression (scikit-learn) to run polynomial regression, this time with an L2 penalty\n* Use matplotlib to visualize polynomial regressions under L2 regularization\n* Choose the best L2 penalty using cross-validation.\n* Assess the final fit using test data.\nWe will continue to use the house data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)\nImport the required libraries",
"import pandas as pd\nimport numpy as np\nfrom sklearn import linear_model\nimport math\n\ndtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}",
"Polynomial regression, revisited\nWe build on the material from Week 3, where we wrote the function to produce a DataFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:",
"def polynomial_sframe(feature, degree):\n poly_dataset = pd.DataFrame()\n poly_dataset['power_1'] = feature\n if degree > 1:\n for power in range(2, degree + 1):\n column = 'power_' + str(power)\n poly_dataset[column] = feature**power\n features = poly_dataset.columns.values.tolist()\n #poly_dataset['constant'] = 1\n #return (poly_dataset, ['constant'] + features)\n return (poly_dataset, features)\n\npolynomial_sframe(np.array([1, 2, 3]), 3)",
"Let's use matplotlib to visualize what a polynomial regression looks like on the house data.",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nsales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)",
"As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.",
"sales = sales.sort_values(['sqft_living','price'])",
"Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:",
"l2_small_penalty = 1.5e-5",
"Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)\nWith the L2 penalty specified above, fit the model and print out the learned weights.\nHint: use the polynomial features as inputs and the 'price' column as the target when fitting the model.",
"poly_data, features = polynomial_sframe(sales['sqft_living'],15)\nprint(poly_data['power_1'].mean())\nmodel = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)\nmodel.fit(poly_data[features], sales['price'])\nprint(model.coef_)\nprint(model.intercept_)\nplt.plot(poly_data['power_1'], sales['price'], '.',\n poly_data['power_1'], model.predict(poly_data[features]), '-')",
"QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?\nObserve overfitting\nRecall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.\nFirst, split the data into split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use .random_split function and make sure you set seed=0.",
"set_1 = pd.read_csv('wk3_kc_house_set_1_data.csv', dtype=dtype_dict)\nset_2 = pd.read_csv('wk3_kc_house_set_2_data.csv', dtype=dtype_dict)\nset_3 = pd.read_csv('wk3_kc_house_set_3_data.csv', dtype=dtype_dict)\nset_4 = pd.read_csv('wk3_kc_house_set_4_data.csv', dtype=dtype_dict)\nl2_small_penalty=1e-9",
"Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.\nHint: use the same L2 penalty as before (i.e. l2_small_penalty) when constructing each model.",
"sales_subset = set_1\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel1 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)\nmodel1.fit(poly_data[features], sales_subset['price'])\nprint(model1.coef_)\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model1.predict(poly_data[features]))\n\nsales_subset = set_2\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel2 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)\nmodel2.fit(poly_data[features], sales_subset['price'])\nprint(model2.coef_)\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model2.predict(poly_data[features]))\n\nsales_subset = set_3\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel3 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)\nmodel3.fit(poly_data[features], sales_subset['price'])\nprint(model3.coef_)\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model3.predict(poly_data[features]))\n\nsales_subset = set_4\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel4 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)\nmodel4.fit(poly_data[features], sales_subset['price'])\nprint(model4.coef_)\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model4.predict(poly_data[features]))",
"The four curves should differ from one another a lot, as should the coefficients you learned.\nQUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered \"smaller\" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)",
"power1_coefs = [model1.coef_[0],model2.coef_[0],model3.coef_[0],model4.coef_[0]]\nprint(power1_coefs)\nprint(power1_coefs.index(min(power1_coefs)))\nprint(power1_coefs.index(max(power1_coefs)))",
"Ridge regression comes to the rescue\nGenerally, whenever we see weights change so much in response to a change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing \"large\" weights. (The weights of model15 looked quite small, but they are not that small because the 'sqft_living' input is on the order of thousands.)\nWith a much larger L2 penalty (l2_large_penalty below), fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above.",
"l2_large_penalty=1.23e2\npower_1_coef = []\n\nsales_subset = set_1\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel1 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)\nmodel1.fit(poly_data[features], sales_subset['price'])\nprint(model1.coef_)\npower_1_coef.append(model1.coef_[0])\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model1.predict(poly_data[features]))\n\nsales_subset = set_2\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel2 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)\nmodel2.fit(poly_data[features], sales_subset['price'])\nprint(model2.coef_)\npower_1_coef.append(model2.coef_[0])\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model2.predict(poly_data[features]))\n\nsales_subset = set_3\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel3 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)\nmodel3.fit(poly_data[features], sales_subset['price'])\nprint(model3.coef_)\npower_1_coef.append(model3.coef_[0])\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model3.predict(poly_data[features]))\n\nsales_subset = set_4\npoly_data, features = polynomial_sframe(sales_subset['sqft_living'],15)\nmodel4 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)\nmodel4.fit(poly_data[features], sales_subset['price'])\nprint(model4.coef_)\npower_1_coef.append(model4.coef_[0])\nplt.plot(poly_data['power_1'], sales_subset['price'], '.',\n poly_data['power_1'], model4.predict(poly_data[features]))",
"These curves should vary a lot less, now that you applied a high degree of regularization.\nQUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered \"smaller\" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)",
"power1_coefs = [model1.coef_[0],model2.coef_[0],model3.coef_[0],model4.coef_[0]]\nprint(power1_coefs)\nprint(power1_coefs.index(min(power1_coefs)))\nprint(power1_coefs.index(max(power1_coefs)))",
"Selecting an L2 penalty via cross-validation\nJust like the polynomial degree, the L2 penalty is a \"magic\" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.\nWe will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:\nSet aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>\nSet aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>\n...<br>\nSet aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set\nAfter this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data. \nTo estimate the generalization error well, it is crucial to shuffle the training data before dividing it into segments. We reserve 10% of the data as the test set and shuffle the remainder; here we load pre-shuffled data from CSV files. (Make sure to use seed=1 to get consistent answers.)",
"train_valid_shuffled = pd.read_csv('wk3_kc_house_train_valid_shuffled.csv', dtype=dtype_dict)\ntest = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)",
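If the pre-shuffled CSV files are not available, the shuffle and 10% test split can be sketched with pandas directly. This is a minimal sketch on a hypothetical toy DataFrame (names are prefixed with demo_ to avoid clobbering the variables loaded above); the course files were shuffled with their own utility, so the exact row order here will not match the quiz values.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the real sales data.
demo_sales = pd.DataFrame({'price': np.arange(100, dtype=float),
                           'sqft_living': np.arange(100, dtype=float) * 10})

# Shuffle all rows with a fixed seed, then reserve the last 10% as test data.
demo_shuffled = demo_sales.sample(frac=1, random_state=1).reset_index(drop=True)
n_test = len(demo_shuffled) // 10
demo_train_valid = demo_shuffled.iloc[:-n_test]
demo_test = demo_shuffled.iloc[-n_test:]

print(len(demo_train_valid), len(demo_test))  # 90 10
```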
"Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.\nWith this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.",
"n = len(train_valid_shuffled)\nk = 10 # 10-fold cross-validation\n\nfor i in range(k):\n start = (n*i)//k\n end = (n*(i+1))//k - 1\n print(i, (start, end))",
"Let us familiarize ourselves with array slicing in pandas. To extract a continuous slice from a DataFrame, use a colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.",
"train_valid_shuffled[0:10] # rows 0 to 9",
"Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.\nExtract the fourth segment (segment 3) and assign it to a variable called validation4.",
"n = len(train_valid_shuffled)\ni = 3\nprint(n)\n# Integer division keeps the fold boundaries as ints (Python 3);\n# end is the inclusive last index of segment i, as derived above\nstart = (n*i)//10\nend = (n*(i+1))//10 - 1\nvalidation4 = train_valid_shuffled[start:end+1]\nprint(start)\nprint(end)",
"To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.",
"print(int(round(validation4['price'].mean(), 0)))",
"After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has an append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, appending the slice before segment 3 to the slice after it yields the training set for that fold.\nExtract the remainder of the data after excluding the fourth segment (segment 3) and assign the subset to train4.",
"train4 = train_valid_shuffled[:start].append(train_valid_shuffled[end+1:])\nprint(len(train4))\nprint(n - len(train4))",
"To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.",
"print(int(round(train4['price'].mean(), 0)))",
"Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.\n\nFor each i in [0, 1, ..., k-1]:\nCompute starting and ending indices of segment i and call 'start' and 'end'\nForm validation set by taking a slice (start:end+1) from the data.\nForm training set by appending slice (end+1:n) to the end of slice (0:start).\nTrain a linear model using training set just formed, with a given l2_penalty\nCompute validation error using validation set just formed",
"def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):\n    validation_errors = []\n    n = len(data)\n    for i in range(k):\n        # Integer division keeps fold boundaries as ints; end is the\n        # inclusive last index of segment i\n        start = (n*i)//k\n        end = (n*(i+1))//k - 1\n        validation_set = data[start:end + 1]\n        training_set = data[0:start].append(data[end + 1:n])\n        model = linear_model.Ridge(alpha=l2_penalty, normalize=True)\n        model.fit(training_set[features_list], training_set[output_name])\n\n        predictions = model.predict(validation_set[features_list])\n        errors = predictions - validation_set[output_name]\n        validation_errors.append(errors.T.dot(errors))\n    return np.array(validation_errors).mean()",
"Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:\n* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input\n* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)\n * Run 10-fold cross-validation with l2_penalty\n* Report which L2 penalty produced the lowest average validation error.\nNote: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!",
"import sys\nvalidation_errors = []\nlowest_error = sys.float_info.max\npenalty = 0\n# Generate the 15th-order polynomial features once, outside the loop,\n# as suggested above\ndata_poly, features = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)\ndata_poly['price'] = train_valid_shuffled['price']\nfor l2_penalty in np.logspace(1, 7, num=13):\n    average_validation_error = k_fold_cross_validation(10, l2_penalty, data_poly, 'price', features)\n    print(l2_penalty)\n    print(average_validation_error)\n    if average_validation_error < lowest_error:\n        lowest_error = average_validation_error\n        penalty = l2_penalty\n    validation_errors.append(average_validation_error)\n\nprint('Lowest error is: %s for penalty: %s' % (lowest_error, penalty))\n",
"QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?\nYou may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.",
"# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.\n# Using plt.xscale('log') will make your plot more intuitive.\nplt.plot(np.logspace(1, 7, num=13), validation_errors, '-')\nplt.xscale('log')\nprint(validation_errors)",
"Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.",
"data_poly, features = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)\nmodel = linear_model.Ridge(normalize=True, alpha=penalty)\nmodel.fit(data_poly[features], train_valid_shuffled['price'])",
"QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?",
"poly_data_test, features = polynomial_sframe(test['sqft_living'], 15)\npredictions = model.predict(poly_data_test[features])\ntest_errors = predictions - test['price']\nRSS_test = test_errors.T.dot(test_errors)\n\nRSS_test"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mathLab/RBniCS
|
tutorials/06_thermal_block_unsteady/tutorial_thermal_block_unsteady_1_pod.ipynb
|
lgpl-3.0
|
[
"TUTORIAL 06 - Unsteady Thermal block problem\nKeywords: POD-Galerkin method, scalar problem\n1. Introduction\nIn this Tutorial, we consider unsteady heat conduction in a two-dimensional domain $\\Omega$.\n<img src=\"data/thermal_block.png\" />\nWe define two subdomains $\\Omega_1$ and $\\Omega_2$, such that\n1. $\\Omega_1$ is a disk centered at the origin of radius $r_0=0.5$, and\n2. $\\Omega_2=\\Omega \\setminus \\overline{\\Omega_1}$. \nThe conductivity $\\kappa$ is assumed to be constant on $\\Omega_1$ and $\\Omega_2$, i.e.\n$$\n\\kappa|_{\\Omega_1}=\\kappa_0 \\quad \\textrm{and} \\quad \\kappa|_{\\Omega_2}=1.\n$$\nFor this problem, we consider $P=2$ parameters:\n1. the first one is related to the conductivity in $\\Omega_1$, i.e. $\\mu_0\\equiv\\kappa_0$ (note that parameter numbering is zero-based);\n2. the second parameter $\\mu_1$ takes into account the constant heat flux over $\\Gamma_{base}$.\nThe parameter vector $\\boldsymbol{\\mu}$ is thus given by \n$$\n\\boldsymbol{\\mu} = (\\mu_0,\\mu_1)\n$$\non the parameter domain\n$$\n\\mathbb{P}=[0.1,10]\\times[-1,1].\n$$\nIn this problem we model the heat transfer process due to the heat flux over the bottom boundary $\\Gamma_{base}$ and the following conditions on the remaining boundaries:\n* the left and right boundaries $\\Gamma_{side}$ are insulated,\n* the top boundary $\\Gamma_{top}$ is kept at a reference temperature (say, zero),\nwith the aim of measuring the average temperature on $\\Gamma_{base}$.\nIn order to obtain a faster approximation of the problem we pursue a model reduction by means of a POD-Galerkin reduced order method.\n2. 
Parametrized formulation\nLet $u(t;\\boldsymbol{\\mu})$ be the temperature in the domain $\\Omega\\times[0,t_f]$.\nThe strong formulation of the parametrized problem is given by:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, for $t\\in[0,t_f]$, find $u(t;\\boldsymbol{\\mu})$ such that</center>\n$$\n\\begin{cases}\n \\partial_t u(t;\\boldsymbol{\\mu})- \\text{div} (\\kappa(\\mu_0)\\nabla u(t;\\boldsymbol{\\mu})) = 0 & \\text{in } \\Omega\\times[0,t_f],\\\\\n u(t=0;\\boldsymbol{\\mu}) = 0 & \\text{in } \\Omega, \\\\\n u(t;\\boldsymbol{\\mu}) = 0 & \\text{on } \\Gamma_{top}\\times[0,t_f],\\\\\n \\kappa(\\mu_0)\\nabla u(t;\\boldsymbol{\\mu})\\cdot \\mathbf{n} = 0 & \\text{on } \\Gamma_{side}\\times[0,t_f],\\\\\n \\kappa(\\mu_0)\\nabla u(t;\\boldsymbol{\\mu})\\cdot \\mathbf{n} = \\mu_1 & \\text{on } \\Gamma_{base}\\times[0,t_f],\n\\end{cases}\n$$\n<br>\nwhere \n* $\\mathbf{n}$ denotes the outer normal to the boundaries $\\Gamma_{side}$ and $\\Gamma_{base}$,\n* the conductivity $\\kappa(\\mu_0)$ is defined as follows:\n$$\n\\kappa(\\mu_0) =\n\\begin{cases}\n \\mu_0 & \\text{in } \\Omega_1,\\\\\n 1 & \\text{in } \\Omega_2.\n\\end{cases}\n$$\nThe corresponding weak formulation reads:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, for $t\\in[0,t_f]$, find $u(t;\\boldsymbol{\\mu})\\in\\mathbb{V}$ such that</center>\n$$m\\left(\\partial_t u(t;\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right) + a\\left(u(t;\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right)=f(v;\\boldsymbol{\\mu})\\quad \\forall v\\in\\mathbb{V},\\quad \\forall t\\in[0,t_f]$$\nwhere\n\nthe function space $\\mathbb{V}$ is defined as\n$$\n\\mathbb{V} = \\{v\\in H^1(\\Omega) : v|_{\\Gamma_{top}}=0\\}\n$$\nthe parametrized bilinear form $m(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$m(u, v;\\boldsymbol{\\mu})=\\int_{\\Omega} u\\,v \\ d\\boldsymbol{x},$$\nthe parametrized bilinear form $a(\\cdot, \\cdot; 
\\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$a(u, v;\\boldsymbol{\\mu})=\\int_{\\Omega} \\kappa(\\mu_0)\\nabla u\\cdot \\nabla v \\ d\\boldsymbol{x},$$\nthe parametrized linear form $f(\\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$f(v; \\boldsymbol{\\mu})= \\mu_1\\int_{\\Gamma_{base}}v \\ ds.$$\n\nThe (compliant) output of interest $s(t;\\boldsymbol{\\mu})$, given by\n$$s(t;\\boldsymbol{\\mu}) = \\mu_1\\int_{\\Gamma_{base}} u(t;\\boldsymbol{\\mu}) \\ ds,$$\nis computed for each $\\boldsymbol{\\mu}$.",
"from dolfin import *\nfrom rbnics import *",
"3. Affine decomposition\nFor this problem the affine decomposition is straightforward:\n$$m(u,v;\\boldsymbol{\\mu})=\\underbrace{1}_{\\Theta^{m}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega}uv \\ d\\boldsymbol{x}}_{m_0(u,v)},$$\n$$a(u,v;\\boldsymbol{\\mu})=\\underbrace{\\mu_0}_{\\Theta^{a}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega_1}\\nabla u \\cdot \\nabla v \\ d\\boldsymbol{x}}_{a_0(u,v)} \\ + \\ \\underbrace{1}_{\\Theta^{a}_1(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega_2}\\nabla u \\cdot \\nabla v \\ d\\boldsymbol{x}}_{a_1(u,v)},$$\n$$f(v; \\boldsymbol{\\mu}) = \\underbrace{\\mu_1}_{\\Theta^{f}_0(\\boldsymbol{\\mu})} \\underbrace{\\int_{\\Gamma_{base}}v \\ ds}_{f_0(v)}.$$\nWe will implement the numerical discretization of the problem in the class\nclass UnsteadyThermalBlock(ParabolicCoerciveProblem):\nby specifying the coefficients $\\Theta^{m}_\\ast(\\boldsymbol{\\mu})$, $\\Theta^{a}_\\ast(\\boldsymbol{\\mu})$ and $\\Theta^{f}_\\ast(\\boldsymbol{\\mu})$ in the method\ndef compute_theta(self, term):\nand the bilinear forms $m_\\ast(u, v)$, $a_\\ast(u, v)$ and linear forms $f_\\ast(v)$ in\ndef assemble_operator(self, term):",
"class UnsteadyThermalBlock(ParabolicCoerciveProblem):\n\n # Default initialization of members\n def __init__(self, V, **kwargs):\n # Call the standard initialization\n ParabolicCoerciveProblem.__init__(self, V, **kwargs)\n # ... and also store FEniCS data structures for assembly\n assert \"subdomains\" in kwargs\n assert \"boundaries\" in kwargs\n self.subdomains, self.boundaries = kwargs[\"subdomains\"], kwargs[\"boundaries\"]\n self.u = TrialFunction(V)\n self.v = TestFunction(V)\n self.dx = Measure(\"dx\")(subdomain_data=self.subdomains)\n self.ds = Measure(\"ds\")(subdomain_data=self.boundaries)\n\n # Return custom problem name\n def name(self):\n return \"UnsteadyThermalBlock1POD\"\n\n # Return theta multiplicative terms of the affine expansion of the problem.\n def compute_theta(self, term):\n mu = self.mu\n if term == \"m\":\n theta_m0 = 1.\n return (theta_m0, )\n elif term == \"a\":\n theta_a0 = mu[0]\n theta_a1 = 1.\n return (theta_a0, theta_a1)\n elif term == \"f\":\n theta_f0 = mu[1]\n return (theta_f0,)\n else:\n raise ValueError(\"Invalid term for compute_theta().\")\n\n # Return forms resulting from the discretization of the affine expansion of the problem operators.\n def assemble_operator(self, term):\n v = self.v\n dx = self.dx\n if term == \"m\":\n u = self.u\n m0 = u * v * dx\n return (m0, )\n elif term == \"a\":\n u = self.u\n a0 = inner(grad(u), grad(v)) * dx(1)\n a1 = inner(grad(u), grad(v)) * dx(2)\n return (a0, a1)\n elif term == \"f\":\n ds = self.ds\n f0 = v * ds(1)\n return (f0,)\n elif term == \"dirichlet_bc\":\n bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 3)]\n return (bc0,)\n elif term == \"inner_product\":\n u = self.u\n x0 = inner(grad(u), grad(v)) * dx\n return (x0,)\n elif term == \"projection_inner_product\":\n u = self.u\n x0 = u * v * dx\n return (x0,)\n else:\n raise ValueError(\"Invalid term for assemble_operator().\")",
"4. Main program\n4.1. Read the mesh for this problem\nThe mesh was generated by the data/generate_mesh.ipynb notebook.",
"mesh = Mesh(\"data/thermal_block.xml\")\nsubdomains = MeshFunction(\"size_t\", mesh, \"data/thermal_block_physical_region.xml\")\nboundaries = MeshFunction(\"size_t\", mesh, \"data/thermal_block_facet_region.xml\")",
"4.2. Create Finite Element space (Lagrange P1, two components)",
"V = FunctionSpace(mesh, \"Lagrange\", 1)",
"4.3. Allocate an object of the UnsteadyThermalBlock class",
"problem = UnsteadyThermalBlock(V, subdomains=subdomains, boundaries=boundaries)\nmu_range = [(0.1, 10.0), (-1.0, 1.0)]\nproblem.set_mu_range(mu_range)\nproblem.set_time_step_size(0.05)\nproblem.set_final_time(3)",
"4.4. Prepare reduction with a POD-Galerkin method",
"reduction_method = PODGalerkin(problem)\nreduction_method.set_Nmax(20, nested_POD=4)\nreduction_method.set_tolerance(1e-8, nested_POD=1e-4)",
"4.5. Perform the offline phase",
"reduction_method.initialize_training_set(100)\nreduced_problem = reduction_method.offline()",
"4.6. Perform an online solve",
"online_mu = (8.0, -1.0)\nreduced_problem.set_mu(online_mu)\nreduced_solution = reduced_problem.solve()\nplot(reduced_solution, reduced_problem=reduced_problem, every=5, interval=500)",
"4.7. Perform an error analysis",
"reduction_method.initialize_testing_set(10)\nreduction_method.error_analysis()",
"4.8. Perform a speedup analysis",
"reduction_method.initialize_testing_set(10)\nreduction_method.speedup_analysis()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.20/_downloads/8763e6c899a8b9971980be1308b5f693/plot_dics.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"DICS for power mapping\nIn this tutorial, we'll simulate two signals originating from two\nlocations on the cortex. These signals will be sinusoids, so we'll be looking\nat oscillatory activity (as opposed to evoked activity).\nWe'll use dynamic imaging of coherent sources (DICS) [1]_ to map out\nspectral power along the cortex. Let's see if we can find our two simulated\nsources.",
"# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>\n#\n# License: BSD (3-clause)",
"Setup\nWe first import the required packages to run this tutorial and define a list\nof filenames for various things we'll be using.",
"import os.path as op\nimport numpy as np\nfrom scipy.signal import welch, coherence, unit_impulse\nfrom matplotlib import pyplot as plt\n\nimport mne\nfrom mne.simulation import simulate_raw, add_noise\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\nfrom mne.time_frequency import csd_morlet\nfrom mne.beamformer import make_dics, apply_dics_csd\n\n# We use the MEG and MRI setup from the MNE-sample dataset\ndata_path = sample.data_path(download=False)\nsubjects_dir = op.join(data_path, 'subjects')\n\n# Filenames for various files we'll be using\nmeg_path = op.join(data_path, 'MEG', 'sample')\nraw_fname = op.join(meg_path, 'sample_audvis_raw.fif')\nfwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')\ncov_fname = op.join(meg_path, 'sample_audvis-cov.fif')\nfwd = mne.read_forward_solution(fwd_fname)\n\n# Seed for the random number generator\nrand = np.random.RandomState(42)",
"Data simulation\nThe following function generates a timeseries that contains an oscillator,\nwhose frequency fluctuates a little over time, but stays close to 10 Hz.\nWe'll use this function to generate our two signals.",
"sfreq = 50. # Sampling frequency of the generated signal\nn_samp = int(round(10. * sfreq))\ntimes = np.arange(n_samp) / sfreq # 10 seconds of signal\nn_times = len(times)\n\n\ndef coh_signal_gen():\n \"\"\"Generate an oscillating signal.\n\n Returns\n -------\n signal : ndarray\n The generated signal.\n \"\"\"\n t_rand = 0.001 # Variation in the instantaneous frequency of the signal\n std = 0.1 # Std-dev of the random fluctuations added to the signal\n base_freq = 10. # Base frequency of the oscillators in Hertz\n n_times = len(times)\n\n # Generate an oscillator with varying frequency and phase lag.\n signal = np.sin(2.0 * np.pi *\n (base_freq * np.arange(n_times) / sfreq +\n np.cumsum(t_rand * rand.randn(n_times))))\n\n # Add some random fluctuations to the signal.\n signal += std * rand.randn(n_times)\n\n # Scale the signal to be in the right order of magnitude (~100 nAm)\n # for MEG data.\n signal *= 100e-9\n\n return signal",
"Let's simulate two timeseries and plot some basic information about them.",
"signal1 = coh_signal_gen()\nsignal2 = coh_signal_gen()\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 4))\n\n# Plot the timeseries (signals are scaled from Am to nAm for display)\nax = axes[0][0]\nax.plot(times, 1e9 * signal1, lw=0.5)\nax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (nAm)',\n title='Signal 1')\nax = axes[0][1]\nax.plot(times, 1e9 * signal2, lw=0.5)\nax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2')\n\n# Power spectrum of the first timeseries\nf, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)\nax = axes[1][0]\n# Only plot the first 100 frequencies\nax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.)\nax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]],\n ylabel='Power (dB)', title='Power spectrum of signal 1')\n\n# Compute the coherence between the two timeseries\nf, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)\nax = axes[1][1]\nax.plot(f[:50], coh[:50], lw=1.)\nax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence',\n title='Coherence between the timeseries')\nfig.tight_layout()",
"Now we put the signals at two locations on the cortex. We construct a\n:class:mne.SourceEstimate object to store them in.\nThe timeseries will have a part where the signal is active and a part where\nit is not. The techniques we'll be using in this tutorial depend on being\nable to contrast data that contains the signal of interest versus data that\ndoes not (i.e. it contains only noise).",
"# The locations on the cortex where the signal will originate from. These\n# locations are indicated as vertex numbers.\nvertices = [[146374], [33830]]\n\n# Construct SourceEstimates that describe the signals at the cortical level.\ndata = np.vstack((signal1, signal2))\nstc_signal = mne.SourceEstimate(\n data, vertices, tmin=0, tstep=1. / sfreq, subject='sample')\nstc_noise = stc_signal * 0.",
"Before we simulate the sensor-level data, let's define a signal-to-noise\nratio. You are encouraged to play with this parameter and see the effect of\nnoise on our results.",
"snr = 1. # Signal-to-noise ratio. Decrease to add more noise.",
"Now we run the signal through the forward model to obtain simulated sensor\ndata. To save computation time, we'll only simulate gradiometer data. You can\ntry simulating other types of sensors as well.\nSome noise is added based on the baseline noise covariance matrix from the\nsample dataset, scaled to implement the desired SNR.",
"# Read the info from the sample dataset. This defines the location of the\n# sensors and such.\ninfo = mne.io.read_info(raw_fname)\ninfo.update(sfreq=sfreq, bads=[])\n\n# Only use gradiometers\npicks = mne.pick_types(info, meg='grad', stim=True, exclude=())\nmne.pick_info(info, picks, copy=False)\n\n# Define a covariance matrix for the simulated noise. In this tutorial, we use\n# a simple diagonal matrix.\ncov = mne.cov.make_ad_hoc_cov(info)\ncov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR\n\n# Simulate the raw data, with a lowpass filter on the noise\nstcs = [(stc_signal, unit_impulse(n_samp, dtype=int) * 1),\n (stc_noise, unit_impulse(n_samp, dtype=int) * 2)] # stacked in time\nduration = (len(stc_signal.times) * 2) / sfreq\nraw = simulate_raw(info, stcs, forward=fwd)\nadd_noise(raw, cov, iir_filter=[4, -4, 0.8], random_state=rand)",
"We create an :class:mne.Epochs object containing two trials: one with\nboth noise and signal and one with just noise",
"events = mne.find_events(raw, initial_event=True)\ntmax = (len(stc_signal.times) - 1) / sfreq\nepochs = mne.Epochs(raw, events, event_id=dict(signal=1, noise=2),\n tmin=0, tmax=tmax, baseline=None, preload=True)\nassert len(epochs) == 2 # ensure that we got the two expected events\n\n# Plot some of the channels of the simulated data that are situated above one\n# of our simulated sources.\npicks = mne.pick_channels(epochs.ch_names, mne.read_selection('Left-frontal'))\nepochs.plot(picks=picks)",
"Power mapping\nWith our simulated dataset ready, we can now pretend to be researchers that\nhave just recorded this from a real subject and are going to study what parts\nof the brain communicate with each other.\nFirst, we'll create a source estimate of the MEG data. We'll use both a\nstraightforward MNE-dSPM inverse solution for this, and the DICS beamformer\nwhich is specifically designed to work with oscillatory data.\nComputing the inverse using MNE-dSPM:",
"# Compute the inverse operator\nfwd = mne.read_forward_solution(fwd_fname)\ninv = make_inverse_operator(epochs.info, fwd, cov)\n\n# Apply the inverse model to the trial that also contains the signal.\ns = apply_inverse(epochs['signal'].average(), inv)\n\n# Take the root-mean square along the time dimension and plot the result.\ns_rms = np.sqrt((s ** 2).mean())\ntitle = 'MNE-dSPM inverse (RMS)'\nbrain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1,\n size=600, time_label=title, title=title)\n\n# Indicate the true locations of the source activity on the plot.\nbrain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh')\nbrain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh')\n\n# Rotate the view and add a title.\nbrain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550,\n 'focalpoint': [0, 0, 0]})",
"We will now compute the cortical power map at 10 Hz using a DICS beamformer.\nA beamformer will construct for each vertex a spatial filter that aims to\npass activity originating from the vertex, while dampening activity from\nother sources as much as possible.\nThe :func:mne.beamformer.make_dics function has many switches that offer\nprecise control\nover the way the filter weights are computed. Currently, there is no clear\nconsensus regarding the best approach. This is why we will demonstrate two\napproaches here:\n\nThe approach as described in [2]_, which first normalizes the forward\n solution and computes a vector beamformer.\nThe scalar beamforming approach based on [3]_, which uses weight\n normalization instead of normalizing the forward solution.",
"# Estimate the cross-spectral density (CSD) matrix on the trial containing the\n# signal.\ncsd_signal = csd_morlet(epochs['signal'], frequencies=[10])\n\n# Compute the spatial filters for each vertex, using two approaches.\nfilters_approach1 = make_dics(\n info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_fwd=True,\n inversion='single', weight_norm=None)\nprint(filters_approach1)\n\nfilters_approach2 = make_dics(\n info, fwd, csd_signal, reg=0.1, pick_ori='max-power', normalize_fwd=False,\n inversion='matrix', weight_norm='unit-noise-gain')\nprint(filters_approach2)\n\n# You can save these to disk with:\n# filters_approach1.save('filters_1-dics.h5')\n\n# Compute the DICS power map by applying the spatial filters to the CSD matrix.\npower_approach1, f = apply_dics_csd(csd_signal, filters_approach1)\npower_approach2, f = apply_dics_csd(csd_signal, filters_approach2)\n\n# Plot the DICS power maps for both approaches.\nfor approach, power in enumerate([power_approach1, power_approach2], 1):\n title = 'DICS power map, approach %d' % approach\n brain = power.plot('sample', subjects_dir=subjects_dir, hemi='both',\n figure=approach + 1, size=600, time_label=title,\n title=title)\n\n # Indicate the true locations of the source activity on the plot.\n brain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh')\n brain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh')\n\n # Rotate the view and add a title.\n brain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550,\n 'focalpoint': [0, 0, 0]})",
"Excellent! All methods found our two simulated sources. Of course, with a\nsignal-to-noise ratio (SNR) of 1, it isn't very hard to find them. You can\ntry playing with the SNR and see how the MNE-dSPM and DICS approaches hold up\nin the presence of increasing noise. In the presence of more noise, you may\nneed to increase the regularization parameter of the DICS beamformer.\nReferences\n.. [1] Gross et al. (2001). Dynamic imaging of coherent sources: Studying\n neural interactions in the human brain. Proceedings of the National\n Academy of Sciences, 98(2), 694-699.\n https://doi.org/10.1073/pnas.98.2.694\n.. [2] van Vliet, et al. (2018) Analysis of functional connectivity and\n oscillatory power using DICS: from raw MEG data to group-level\n statistics in Python. bioRxiv, 245530. https://doi.org/10.1101/245530\n.. [3] Sekihara & Nagarajan. Adaptive spatial filters for electromagnetic\n brain imaging (2008) Springer Science & Business Media"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kimkipyo/dss_git_kkp
|
통계, 머신러닝 복습/160705화수_25,26일차_뉴럴 네트워크 Neural Network/6.CNN.ipynb
|
mit
|
[
"Convolutional Neural Network\nCNN\n\n\nA Deep Neural Network with a special structure designed for image classification\n\n\nlocal receptive fields\n\nshared weights\npooling\n\nLocal Receptive Field\n\nOnly a subset of the inputs in the Input Layer is connected by weights to the next Hidden Layer\nExample: in a 28x28 Input Layer, weights connect only a 5x5 region \n=> the size of the next Hidden Layer is (28-5+1)x(28-5+1) = 24x24\nSparse Connectivity\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz44.png\">\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz45.png\">\n\nhttp://cs231n.github.io/assets/conv-demo/index.html\n\nShared weights and biases\n\nA common set of weight & bias coefficients is shared across all connections\nIn the example above, the number of parameters is 26 (5x5+1)\n\n$$\n\\begin{eqnarray} \n \\sigma\\left(b + \\sum_{l=0}^4 \\sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \\right).\n\\end{eqnarray}\n$$\n\nThis operation is identical to the convolution of a 2-D image filter \n=> Convolutional NN\nThe shared weights are called an image kernel or image filter\n\nImage Filter\n<img src=\"http://i.stack.imgur.com/GvsBA.jpg\">",
"import numpy as np\nimport scipy as sp\nimport scipy.misc\nimport scipy.ndimage\nimport matplotlib.pyplot as plt\n\nimg = 255 - sp.misc.face(gray=True).astype(float)\n# A 2x2 vertical-edge kernel: +1 on the left column, -1 on the right\nk = np.zeros((2,2))\nk[:,0] = 1; k[:,1] = -1\nimg2 = np.maximum(0, sp.ndimage.filters.convolve(img, k))\nplt.figure(figsize=(10,5))\nplt.subplot(121)\nplt.imshow(img)\nplt.grid(False)\nplt.subplot(122)\nplt.imshow(img2)\nplt.grid(False)",
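The shared-weight formula above can also be checked directly with NumPy: one 5x5 kernel plus one bias (26 parameters in total) slides over a 28x28 input and produces a 24x24 feature map. This is only an illustrative sketch with random values, not trained weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
a = rng.rand(28, 28)        # input activations
w = rng.randn(5, 5) * 0.1   # one shared 5x5 weight set
b = 0.0                     # one shared bias -> 5*5 + 1 = 26 parameters

# sigma(b + sum_{l,m} w[l, m] * a[j+l, k+m]) at every output position (j, k)
feature_map = np.empty((24, 24))
for j in range(24):
    for k in range(24):
        feature_map[j, k] = sigmoid(b + np.sum(w * a[j:j+5, k:k+5]))

print(feature_map.shape)
```

Because the same `w` and `b` are reused at every position, the layer detects the same pattern everywhere in the image with only 26 parameters.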
"Feature Map\n\nIf the weights have been trained so that the output is a=1 for a particular image pattern, \nthe hidden layer marks the locations where that feature is present\n=> feature map\nHere, 'feature' does not mean the input data itself, but a particular pattern in the input data that is useful for image classification\n\n<img src=\"http://www.kdnuggets.com/wp-content/uploads/computer-vision-filters.jpg\">\nMultiple Feature Maps\n\nA single shared weight set can detect only one kind of image feature\nMultiple feature maps (weight sets) are needed\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz46.png\"> \n\nExample of 20 feature maps trained on MNIST digit images\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/net_full_layer_0.png\" style=\"width:50%;\"> \n<img src=\"http://i.ytimg.com/vi/n6hpQwq7Inw/maxresdefault.jpg\">\nMax Pooling Layer\n\nOutputs the maximum value within each region\nIndicates whether a feature is present anywhere in the region\nThe overall spatial size is reduced \n\n<img src=\"http://cs231n.github.io/assets/cnn/maxpool.jpeg\" style=\"width:50%;\"> \n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz48.png\">\nL2 pooling\n\nUses the sum of squares of the values in the region instead of the maximum\n\nOutput Layer\n\nsoftmax \n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz49.png\">\nDemo\n\nhttp://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html\n\nPython Implementation\n\nhttps://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network3.py\n\n```python\nclass FullyConnectedLayer(object):\ndef __init__(self, n_in, n_out, activation_fn=sigmoid, p_dropout=0.0):\n self.n_in = n_in\n self.n_out = n_out\n self.activation_fn = activation_fn\n self.p_dropout = p_dropout\n # Initialize weights and biases\n self.w = theano.shared(\n np.asarray(\n np.random.normal(\n loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)),\n dtype=theano.config.floatX),\n name='w', borrow=True)\n self.b = theano.shared(\n np.asarray(np.random.normal(loc=0.0, scale=1.0, size=(n_out,)),\n dtype=theano.config.floatX),\n name='b', borrow=True)\n self.params = [self.w, self.b]\n\ndef set_inpt(self, inpt, inpt_dropout, mini_batch_size):\n self.inpt = 
inpt.reshape((mini_batch_size, self.n_in))\n self.output = self.activation_fn(\n (1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b)\n self.y_out = T.argmax(self.output, axis=1)\n self.inpt_dropout = dropout_layer(\n inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout)\n self.output_dropout = self.activation_fn(\n T.dot(self.inpt_dropout, self.w) + self.b)\n\ndef accuracy(self, y):\n \"Return the accuracy for the mini-batch.\"\n return T.mean(T.eq(y, self.y_out))\n\n``` \n```python\nclass ConvPoolLayer(object):\n \"\"\"Used to create a combination of a convolutional and a max-pooling\n layer. A more sophisticated implementation would separate the\n two, but for our purposes we'll always use them together, and it\n simplifies the code, so it makes sense to combine them.\n\"\"\"\n\ndef __init__(self, filter_shape, image_shape, poolsize=(2, 2),\n activation_fn=sigmoid):\n \"\"\"`filter_shape` is a tuple of length 4, whose entries are the number\n of filters, the number of input feature maps, the filter height, and the\n filter width.\n\n `image_shape` is a tuple of length 4, whose entries are the\n mini-batch size, the number of input feature maps, the image\n height, and the image width.\n\n `poolsize` is a tuple of length 2, whose entries are the y and\n x pooling sizes.\n\n \"\"\"\n self.filter_shape = filter_shape\n self.image_shape = image_shape\n self.poolsize = poolsize\n self.activation_fn=activation_fn\n # initialize weights and biases\n n_out = (filter_shape[0]*np.prod(filter_shape[2:])/np.prod(poolsize))\n self.w = theano.shared(\n np.asarray(\n np.random.normal(loc=0, scale=np.sqrt(1.0/n_out), size=filter_shape),\n dtype=theano.config.floatX),\n borrow=True)\n self.b = theano.shared(\n np.asarray(\n np.random.normal(loc=0, scale=1.0, size=(filter_shape[0],)),\n dtype=theano.config.floatX),\n borrow=True)\n self.params = [self.w, self.b]\n\ndef set_inpt(self, inpt, inpt_dropout, mini_batch_size):\n self.inpt = 
inpt.reshape(self.image_shape)\n conv_out = conv.conv2d(\n input=self.inpt, filters=self.w, filter_shape=self.filter_shape,\n image_shape=self.image_shape)\n pooled_out = downsample.max_pool_2d(\n input=conv_out, ds=self.poolsize, ignore_border=True)\n self.output = self.activation_fn(\n pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))\n self.output_dropout = self.output # no dropout in the convolutional layers\n\n```\n```python\nclass SoftmaxLayer(object):\ndef __init__(self, n_in, n_out, p_dropout=0.0):\n self.n_in = n_in\n self.n_out = n_out\n self.p_dropout = p_dropout\n # Initialize weights and biases\n self.w = theano.shared(\n np.zeros((n_in, n_out), dtype=theano.config.floatX),\n name='w', borrow=True)\n self.b = theano.shared(\n np.zeros((n_out,), dtype=theano.config.floatX),\n name='b', borrow=True)\n self.params = [self.w, self.b]\n\ndef set_inpt(self, inpt, inpt_dropout, mini_batch_size):\n self.inpt = inpt.reshape((mini_batch_size, self.n_in))\n self.output = softmax((1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b)\n self.y_out = T.argmax(self.output, axis=1)\n self.inpt_dropout = dropout_layer(\n inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout)\n self.output_dropout = softmax(T.dot(self.inpt_dropout, self.w) + self.b)\n\ndef cost(self, net):\n \"Return the log-likelihood cost.\"\n return -T.mean(T.log(self.output_dropout)[T.arange(net.y.shape[0]), net.y])\n\ndef accuracy(self, y):\n \"Return the accuracy for the mini-batch.\"\n return T.mean(T.eq(y, self.y_out))\n\n```\n```python\nclass Network(object):\ndef __init__(self, layers, mini_batch_size):\n \"\"\"Takes a list of `layers`, describing the network architecture, and\n a value for the `mini_batch_size` to be used during training\n by stochastic gradient descent.\n\n \"\"\"\n self.layers = layers\n self.mini_batch_size = mini_batch_size\n self.params = [param for layer in self.layers for param in layer.params]\n self.x = T.matrix(\"x\") \n self.y = T.ivector(\"y\")\n 
init_layer = self.layers[0]\n init_layer.set_inpt(self.x, self.x, self.mini_batch_size)\n for j in xrange(1, len(self.layers)):\n prev_layer, layer = self.layers[j-1], self.layers[j]\n layer.set_inpt(\n prev_layer.output, prev_layer.output_dropout, self.mini_batch_size)\n self.output = self.layers[-1].output\n self.output_dropout = self.layers[-1].output_dropout\n\n\ndef SGD(self, training_data, epochs, mini_batch_size, eta,\n validation_data, test_data, lmbda=0.0):\n \"\"\"Train the network using mini-batch stochastic gradient descent.\"\"\"\n training_x, training_y = training_data\n validation_x, validation_y = validation_data\n test_x, test_y = test_data\n\n # compute number of minibatches for training, validation and testing\n num_training_batches = size(training_data)/mini_batch_size\n num_validation_batches = size(validation_data)/mini_batch_size\n num_test_batches = size(test_data)/mini_batch_size\n\n # define the (regularized) cost function, symbolic gradients, and updates\n l2_norm_squared = sum([(layer.w**2).sum() for layer in self.layers])\n cost = self.layers[-1].cost(self)+\\\n 0.5*lmbda*l2_norm_squared/num_training_batches\n grads = T.grad(cost, self.params)\n updates = [(param, param-eta*grad)\n for param, grad in zip(self.params, grads)]\n\n # define functions to train a mini-batch, and to compute the\n # accuracy in validation and test mini-batches.\n i = T.lscalar() # mini-batch index\n train_mb = theano.function(\n [i], cost, updates=updates,\n givens={\n self.x:\n training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],\n self.y:\n training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n validate_mb_accuracy = theano.function(\n [i], self.layers[-1].accuracy(self.y),\n givens={\n self.x:\n validation_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],\n self.y:\n validation_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n test_mb_accuracy = theano.function(\n [i], self.layers[-1].accuracy(self.y),\n givens={\n 
self.x:\n test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],\n self.y:\n test_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n self.test_mb_predictions = theano.function(\n [i], self.layers[-1].y_out,\n givens={\n self.x:\n test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n # Do the actual training\n best_validation_accuracy = 0.0\n for epoch in xrange(epochs):\n for minibatch_index in xrange(num_training_batches):\n iteration = num_training_batches*epoch+minibatch_index\n if iteration % 1000 == 0:\n print(\"Training mini-batch number {0}\".format(iteration))\n cost_ij = train_mb(minibatch_index)\n if (iteration+1) % num_training_batches == 0:\n validation_accuracy = np.mean(\n [validate_mb_accuracy(j) for j in xrange(num_validation_batches)])\n print(\"Epoch {0}: validation accuracy {1:.2%}\".format(\n epoch, validation_accuracy))\n if validation_accuracy >= best_validation_accuracy:\n print(\"This is the best validation accuracy to date.\")\n best_validation_accuracy = validation_accuracy\n best_iteration = iteration\n if test_data:\n test_accuracy = np.mean(\n [test_mb_accuracy(j) for j in xrange(num_test_batches)])\n print('The corresponding test accuracy is {0:.2%}'.format(\n test_accuracy))\n print(\"Finished training network.\")\n print(\"Best validation accuracy of {0:.2%} obtained at iteration {1}\".format(\n best_validation_accuracy, best_iteration))\n print(\"Corresponding test accuracy of {0:.2%}\".format(test_accuracy))\n\n``` \nPerformance Test",
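The max-pooling and L2-pooling operations described above can be sketched in plain NumPy (an illustrative rewrite for intuition; names here are invented and this is not the Theano code from network3.py):

```python
import numpy as np

# Illustrative NumPy sketch of 2x2 max pooling versus L2 pooling
# on a single feature map.
def pool(fmap, size=2, mode="max"):
    h, w = fmap.shape
    # carve the map into non-overlapping size x size blocks
    blocks = fmap.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    # L2 pooling: square root of the sum of squares within each region
    return np.sqrt((blocks ** 2).sum(axis=(1, 3)))

fmap = np.array([[1., 2., 0., 1.],
                 [3., 4., 1., 0.],
                 [0., 0., 2., 2.],
                 [1., 1., 2., 2.]])
pool(fmap, mode="max")  # -> [[4., 1.], [1., 2.]]
```

Either variant condenses each 2x2 region to a single value, halving the feature map's width and height.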
"%cd /home/dockeruser/neural-networks-and-deep-learning/src",
"Normal MLP",
"import network3\nfrom network3 import Network\nfrom network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer\n\ntraining_data, validation_data, test_data = network3.load_data_shared()\nmini_batch_size = 10\n\nnet = Network([\n FullyConnectedLayer(n_in=784, n_out=100),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 10, mini_batch_size, 0.1, validation_data, test_data)",
"Add Convolutional + Pooling Layer",
"net = Network([\n ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28), \n filter_shape=(20, 1, 5, 5), \n poolsize=(2, 2)),\n FullyConnectedLayer(n_in=20*12*12, n_out=100),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 10, mini_batch_size, 0.1, validation_data, test_data) ",
"Add Additional Convolution + Pool Layer\n\nThe role of the second convolutional-pooling layer\nCaptures the patterns in which features appear within the feature maps\nfeatures of feature maps",
"net = Network([\n ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28), \n filter_shape=(20, 1, 5, 5), \n poolsize=(2, 2)),\n ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12), \n filter_shape=(40, 20, 5, 5), \n poolsize=(2, 2)),\n FullyConnectedLayer(n_in=40*4*4, n_out=100),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 10, mini_batch_size, 0.1, validation_data, test_data)",
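As a quick sanity check on the shapes used in this network (an aside with a hypothetical helper, not part of network3.py): each valid 5x5 convolution followed by 2x2 pooling shrinks the spatial size, which is where n_in=40*4*4 comes from.

```python
# Each valid 5x5 convolution followed by 2x2 max pooling maps a
# size-n input to (n - 5 + 1) / 2 along each spatial dimension.
def conv_pool_out(size, filter_size=5, pool=2):
    return (size - filter_size + 1) // pool

s1 = conv_pool_out(28)  # 28x28 MNIST image -> 12x12 feature maps
s2 = conv_pool_out(s1)  # 12x12 -> 4x4, hence n_in = 40 * 4 * 4
print(s1, s2)  # -> 12 4
```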
"Apply ReLU\n\nBetter performance than sigmoid activation functions",
"from network3 import ReLU\n\nnet = Network([\n ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28), \n filter_shape=(20, 1, 5, 5), \n poolsize=(2, 2), \n activation_fn=ReLU),\n ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12), \n filter_shape=(40, 20, 5, 5), \n poolsize=(2, 2), \n activation_fn=ReLU),\n FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 60, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)",
"History of CNN\n1998 LeNet-5 paper\n\n\"Gradient-based learning applied to document recognition\"\nby Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner\nLeNet-5\nMNIST digit image classification\n\n2012 LRMD paper\n\n\"Building high-level features using large scale unsupervised learning\"\nby Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, and Andrew Ng (2012). \nStanford and Google\nclassify images from ImageNet\n\naccuracy 9.3% -> 15.8%\n\n\nImage-Net\n\nhttp://image-net.org/\n16 million full color images in 20 thousand categories\nclassified by Amazon's Mechanical Turk service\n\n2012 KSH paper\n\n\"ImageNet classification with deep convolutional neural networks\"\nby Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton (2012).\nImageNet Large-Scale Visual Recognition Challenge (ILSVRC)\ntraining set: 1.2 million ImageNet images, drawn from 1,000 categories\nvalidation and test sets: 50,000 and 150,000 images from the same 1,000 categories\nsome contain multiple objects\naccuracy 84.7%\nAlexNet\nInput Layer: 3×224×224 neurons (RGB values for a 224×224 image)\n7 hidden layers of neurons\nfirst 5 hidden layers are convolutional layers (some with max-pooling), \nnext 2 layers are fully-connected layers\n\n\nThe output layer is a 1,000-unit softmax layer\nReLU (rectified linear units)\nparameters: 60 million\nL2 regularization and dropout\nmomentum-based mini-batch stochastic gradient descent\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/KSH.jpg\">\n2014 ILSVRC competition\n\ntraining set of 1.2 million images, in 1,000 categories\nGoogLeNet\n22-layer deep CNN\n93.33%"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
misken/hillmaker-examples
|
notebooks/basic_usage_shortstay_unit_multicats.ipynb
|
apache-2.0
|
[
"Using hillmaker (v0.2.0)\nIn this notebook we'll focus on basic use of hillmaker for analyzing occupancy in a typical hospital setting. The data is fictitious data from a hospital short stay unit (SSU). Patients flow through an SSU for a variety of procedures, tests or therapies. Let's assume patients can be classified into one of five categories of patient types: ART (arterialgram), CAT (post cardiac-cath), MYE (myelogram), IVT (IV therapy), and OTH (other). In addition, patients are given a severity score of 1 or 2 which is related to the amount of time required in the SSU and the level of resources required. From one of our hospital information systems we were able to get raw data about the entry and exit times of each patient along with their patient type and severity values. For simplicity, the data is in a csv file. We are interested in occupancy statistics (e.g. mean, standard deviation, percentiles) by time of day and by day of week. While overall occupancy statistics are important, we are also interested in occupancy statistics for different patient types and severity levels. Since we are also interested in required staffing for this unit, we'll also use hillmaker to analyze workload levels.\nThis example assumes you are already familiar with statistical occupancy analysis using the old version of Hillmaker or some similar tool. 
It also assumes some knowledge of using Python for analytical work.\nThe following blog posts are helpful if you are not familiar with occupancy analysis:\n\nNew version of hillmaker (finally) released - and it's Python \nUsing hillmaker from R with reticulate to analyze time of day patterns in bike share data \nComputing occupancy statistics with Python - Part 1 of 3\nComputing occupancy statistics with Python - Part 2 of 3\n\nCurrent status of code\nThe new hillmaker is implemented as a Python module which can be used by importing hillmaker and then calling the main hillmaker function, make_hills() (or any component function included in the module). This new version of hillmaker is in what I'd call an alpha state. The output does match the Access version for the ShortStay database that I included in the original Hillmaker. Use at your own risk.\nIt is licensed under the Apache 2.0 license, a widely used permissive free software license. See https://en.wikipedia.org/wiki/Apache_License for additional information.\nGetting Started\nIn order to use hillmaker, the major steps are:\n\nmake sure you have Python and necessary packages installed,\ndownload and install hillmaker,\nload hillmaker and start using it from either a Jupyter notebook, Python terminal or Python script.\n\nI'll go through each of these in more detail. As a big part of the audience for this post is former users of the MS Access version of Hillmaker using the Windows OS, many of whom have little experience with tools like Python, I'll try to make the transition as easy as possible.\nDependencies\nWhereas the old Hillmaker required MS Access, the new one requires an installation of \nPython 3 (3.7+) along \nwith several Python modules that are widely used for analytics and data science work. 
\nMost importantly, hillmaker 0.2.0 requires pandas 1.0.0 or later.\nGetting Python and many analytical packages via Anaconda\nA very easy way to get Python 3 pre-configured with tons of analytical Python packages is to use the Anaconda distro for Python. From their Downloads page:\n\nAnaconda is a completely free Python distribution (including for commercial use and redistribution). \nIt includes more than 300 of the most popular Python packages for science, math, engineering, and \ndata analysis. See the packages included with Anaconda and the Anaconda changelog.\n\nThere are several really nice reasons to use the Anaconda Python distro for data science work:\n\nit comes preconfigured with hundreds of the most popular data science Python packages installed and they just work\nlarge community of Anaconda data science users and vibrant user community on places like StackOverflow\nit has a companion package manager called Conda which makes it easy to install new packages as well as to create and manage virtual environments\n\nIf you use Anaconda, you already have all of the necessary libraries for using hillmaker other than hillmaker itself.\nGetting Hillmaker\nSince 2016, hillmaker has been freely available from the Python Package Index known as PyPI as well as Anaconda Cloud. They are similar to CRAN for R. Source code is also available from my GitHub site https://github.com/misken/hillmaker and it is an open-source project. If you work with Python, you should know a little bit about Python package installation. There is already a companion project on GitHub called hillmaker-examples which contains, well, examples of hillmaker use cases. \nInstalling Hillmaker\nYou can use either pip or conda to install hillmaker. I suggest learning about Python virtual environments and either using pyenv, virtualenv or conda (preferred) to create a Python virtual environment and then install hillmaker into it. 
This way you avoid mixing developmental third-party packages like hillmaker with your base Anaconda Python environment. \nStep 1 - Open a terminal and install using Conda or Pip\nTo install using conda:\nsh\nconda install -c https://conda.anaconda.org/hselab hillmaker\nOR\nTo install using pip:\nsh\npip install hillmaker\nStep 2 - Confirm that hillmaker was installed\nUse the conda list command to see all the installed packages in your Anaconda3 root.\nsh\nconda list\nYou should see hillmaker in the listing.\nStep 3 - Confirm that hillmaker can be loaded\nNow fire up a Python session (just type python at a Linux/Mac shell or a Windows Anaconda command prompt) and try:\nimport hillmaker as hm\n\nIf the install went well, you shouldn't get any errors when you import hillmaker. To see the main help docstring, do the following at your Python prompt:\nhelp(hm.make_hills)\n\nUsing hillmaker\nThe rest of this Jupyter notebook will illustrate a few ways to use the hillmaker package to analyze occupancy in our SSU.\nModule imports\nTo run Hillmaker we only need to import a few modules. Since the main Hillmaker function uses Pandas DataFrames for both data input and output, we need to import pandas in addition to hillmaker.",
"import pandas as pd\nimport hillmaker as hm",
"Read main data file containing patient visits to short stay unit\nHere's the first few lines from our csv file containing the patient stop data:\nPatID,InRoomTS,OutRoomTS,PatType,Severity,PatTypeSeverity\n1,01/01/96 07:44 AM,01/01/96 08:50 AM,IVT,1,IVT_1\n2,01/01/96 08:28 AM,01/01/96 09:20 AM,IVT,1,IVT_1\n3,01/01/96 11:44 AM,01/01/96 01:30 PM,MYE,1,MYE_1\n4,01/01/96 11:51 AM,01/01/96 12:55 PM,CAT,1,CAT_1\n5,01/01/96 12:10 PM,01/01/96 01:00 PM,IVT,2,IVT_2\n\nRead the short stay data from a csv file into a DataFrame and tell Pandas which fields to treat as dates.",
"file_stopdata = '../data/ShortStay2.csv'\nstops_df = pd.read_csv(file_stopdata, parse_dates=['InRoomTS','OutRoomTS'])\nstops_df.info() ",
"Check out the top and bottom of stops_df.",
"stops_df.head(7)\n\nstops_df.tail(5)",
"Enhancement to handle multiple categorical fields\nNotice that the PatType field contains strings while Severity is integer data. In the previous version of hillmaker (v0.1.1), you could only specify a single category field and it needed to be of type string. So, computing occupancy statistics by Severity required some data wrangling (convert int to string) and analyzing occupancy by PatType and Severity required further wrangling to concatenate the two fields into a single field that we could feed to hillmaker. Note in the output above that I've included an example of such a concatenation just for illustration purposes. \nIn this latest version, you can specify zero or more categorical fields which can either be string or integer data types. There is no need to create a concatenated version such as the PatTypeSeverity field above. We'll see that you also have finer control over category field subtotaling.\nLet's do some counts of patients by the two categorical fields.",
"stops_df.groupby('PatType')['PatID'].count()\n\nstops_df.groupby('Severity')['PatID'].count()",
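A compact alternative to the two separate groupbys is pandas' crosstab, which tabulates both categorical fields at once. The sketch below uses a tiny made-up DataFrame (not the real ShortStay2 data) so it is self-contained:

```python
import pandas as pd

# Tiny invented sample standing in for stops_df, just to show the idea.
stops = pd.DataFrame({
    'PatID': [1, 2, 3, 4, 5],
    'PatType': ['IVT', 'IVT', 'MYE', 'CAT', 'IVT'],
    'Severity': [1, 1, 1, 1, 2],
})
# One table of counts for every PatType x Severity combination.
counts = pd.crosstab(stops['PatType'], stops['Severity'])
counts.loc['IVT', 1]  # -> 2
```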
"No obvious problems. We'll assume the data was all read in correctly.\nCreating occupancy summaries\nThe primary function in Hillmaker is called make_hills and plays the same role as the Hillmaker function in the original Access VBA version of Hillmaker. Let's get a little help on this function.",
"help(hm.make_hills)",
"Most of the parameters are similar to those in the original VBA version, though a few new ones have been added. Since the VBA version used an Access database as the container for its output, new parameters were added to control output to csv files and/or pandas DataFrames instead.\nExample 1: 60 minute bins, PatientType and Severity, export to csv\nSpecify values for all the required inputs:",
"# Required inputs\nscenario = 'example1'\nin_fld_name = 'InRoomTS'\nout_fld_name = 'OutRoomTS'\nstart = '1/1/1996'\nend = '3/30/1996 23:45'\n\n# Optional inputs\ncat_fld_name = ['PatType', 'Severity']\nverbose = 1\noutput = './output'\n",
"Now we'll call the main make_hills function. We won't capture the return values but will simply take the default behavior of having the summaries exported to csv files. You'll see that the filenames will contain the scenario value.",
"hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, \n catfield=cat_fld_name, \n export_path = output, verbose=verbose)",
"Let's list the contents of the output folder containing the csv files created by hillmaker. For Windows users, the following is the Linux ls command. The leading exclamation point tells Jupyter that this is an operating system command. To list the files in Windows, the equivalent would be:\n!dir output\\example1*.csv",
"!ls ./output/example1*.csv",
"There are three groups of statistical summary files related to arrivals, departures and occupancy. In addition, the intermediate \"bydatetime\" files are also included. The filenames indicate whether or not the statistics are by category as well as whether they are by day of week and time of day. \nOccupancy, arrival and departure summaries\nLet's look at the occupancy summaries (the structure is identical for arrivals and departures). Here's a peek into the middle of example1_occupancy_PatType_Severity_dow_binofday.csv.",
"pd.set_option('precision', 2)\npd.read_csv(\"./output/example1_occupancy_PatType_Severity_dow_binofday.csv\").iloc[100:110]",
"Statistics by day and time but aggregated over all the categories are also available.",
"pd.read_csv(\"./output/example1_occupancy_dow_binofday.csv\").iloc[20:40]",
"For those files without \"dow_binofday\" in their name, the statistics are by category only.",
"pd.read_csv(\"./output/example1_occupancy_PatType_Severity.csv\").head(20)",
"There's even a summary that aggregates over categories and time. Obviously, it contains a single row.",
"pd.read_csv(\"./output/example1_occupancy.csv\")",
"Intermediate bydatetime files\nThe intermediate tables used to compute the summaries we just looked at are also available both by category and overall. Each row is a single time bin (e.g. date and hour of day). Note that the occupancy values are not necessarily integers since hillmaker's default behavior is to use fractional occupancy contributions for the bins in which the patient arrives and departs (e.g. if the patient arrived half-way through the time bin, they contribute 0.5 to total occupancy during that time bin). This behavior can be changed by specifying edge_bins=2 when calling make_hills.",
"pd.read_csv(\"./output/example1_bydatetime_datetime.csv\").iloc[100:125]\n\npd.read_csv(\"./output/example1_bydatetime_PatType_Severity_datetime.csv\").iloc[100:125]",
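To make the fractional edge-bin weighting concrete, here is a tiny illustration with a hypothetical helper (not part of hillmaker's API):

```python
from datetime import datetime

# Hypothetical helper illustrating the default edge-bin weighting: an
# arrival part-way through a time bin contributes only the occupied
# fraction of that bin; departure bins are handled symmetrically.
def arrival_fraction(arrival, bin_minutes=60):
    minutes_into_bin = (arrival.hour * 60 + arrival.minute) % bin_minutes
    return (bin_minutes - minutes_into_bin) / bin_minutes

# A patient arriving at 10:40 occupies 20 of the 60 minutes
# in the 10:00 bin, so they contribute 1/3 of a patient to it.
arrival_fraction(datetime(1996, 1, 1, 10, 40))
```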
"If you've used the previous version of Hillmaker, you'll recognize these files. The default behavior has changed to compute fewer percentiles but any percentiles you want can be computed by specifying them in the percentiles argument to make_hills. \nExample 2: Compute totals for individual category fields, select percentiles, output to DataFrames\nWe'll repeat the example above but use totals=2 so that we get totals computed for each of the category fields in addition to overall totals. I'm also specifying a custom list of percentiles to compute. Instead of exporting CSV files, we'll capture the results as a dictionary of DataFrames.",
"# Required inputs\nscenario = 'example2'\nin_fld_name = 'InRoomTS'\nout_fld_name = 'OutRoomTS'\nstart = '1/1/1996'\nend = '3/30/1996 23:45'\n\n# Optional inputs\ncat_fld_name = ['PatType', 'Severity']\ntotals= 2\npercentiles=[0.5, 0.95]\nverbose = 0 # Silent mode\noutput = './output'\nexport_bydatetime_csv = True\nexport_summaries_csv = True\n",
"Now we'll call make_hills and tuck the results (a dictionary of DataFrames) into a local variable. Then we can explore them a bit with Pandas.",
"example2_dfs = hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, cat_fld_name, \n totals=totals, export_path=output, verbose=verbose,\n export_bydatetime_csv=export_bydatetime_csv, \n export_summaries_csv=export_summaries_csv)",
"The example2_dfs return value is several nested dictionaries eventually leading to pandas DataFrames as values. Let's explore the key structure. It's pretty simple.",
"example2_dfs.keys()",
"Let's explore the 'summaries' key first. As you might guess, this will eventually lead to the statistical summary DataFrames.",
"example2_dfs['summaries'].keys()\n\nexample2_dfs['summaries']['nonstationary'].keys()\n\nexample2_dfs['summaries']['nonstationary']['Severity_dow_binofday'].keys()\n\nexample2_dfs['summaries']['nonstationary']['Severity_dow_binofday']['occupancy']",
"The stationary summaries are similar except that there are no day of week and time bin of day related files.\nNow let's look at the 'bydatetime' key at the top level. Yep, gonna lead to bydatetime DataFrames.",
"example2_dfs['bydatetime'].keys()\n\nexample2_dfs['bydatetime']['PatType_Severity_datetime']",
"Example 3 - Workload hills instead of occupancy\nAssume that we are doing a staffing analysis and want to look at the distribution of workload by time of day and day of week. In order to translate patients to workload, we'll use simple staff to patient ratios based on severity. For example, let's assume that for Severity=1 we want to have a 1:4 staff to patient ratio and for Severity=2 we need a 1:2 ratio. Let's create a new field called workload using these ratios.",
"severity_to_workload = {'1':0.25, '2':0.5}\nstops_df['workload'] = stops_df['Severity'].map(lambda x: severity_to_workload[str(x)])\n\nstops_df.head(10)",
"Now we can create workload hills. I'm just going to compute overall workload by not specifying a category field. Notice the use of the occ_weight_field argument.",
"# Required inputs\nscenario = 'example3'\nin_fld_name = 'InRoomTS'\nout_fld_name = 'OutRoomTS'\nstart = '1/1/1996'\nend = '3/30/1996 23:45'\n\n# Optional inputs\nocc_weight_field = 'workload'\nverbose = 0\noutput = './output'\n\nexample3_dfs = hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, \n occ_weight_field=occ_weight_field, \n export_path = output, verbose=verbose)\n\nexample2_dfs['summaries']['stationary']['Severity']['occupancy']\n\nexample3_dfs['summaries']['stationary']['']['occupancy']",
"We can check the overall mean workload in example3 by doing a weighted average of the mean occupancies by Severity from example2 with the workload ratios as weights.",
"import numpy as np\n\nmean_occ = np.asarray(example2_dfs['summaries']['stationary']['Severity']['occupancy'].loc[:,'mean'])\nmean_occ\n\nratios = [severity_to_workload[str(i+1)] for i in range(2)]\nratios\n\noverall_mean_workload = np.dot(mean_occ, ratios)\noverall_mean_workload",
"Example 4 - Running via a Python script\nOf course, you don't have to run Python statements through a Jupyter notebook. You can create a Python script and run that directly in a terminal. An example, test_shortstay2_multicats.py, can be found in the scripts subfolder of the hillmaker-examples project. You can run it from a command prompt like this:\nsh\npython test_shortstay2_multicats.py\nThere is another example in that folder as well, test_obsim_log.py, that is slightly more complex in that the input data has raw simulation times (i.e. minutes past t=0) and we need to do some datetime math to turn them into calendar based inputs.\nMore elaborate versions of scripts like test_shortstay2_multicats.py can be envisioned. For example, an entire folder of input data files could be processed by enclosing the hm.make_hills call inside a loop over the collection of input files:\nfor log_fn in glob.glob('logs/*.csv'):\n\n    # Read the log file and filter by included categories\n    stops_df = pd.read_csv(log_fn, parse_dates=[in_fld_name, out_fld_name])\n\n    hm.make_hills(scenario, stops_df, in_fld_name, out_fld_name, start, end, cat_fld_name)\n    ...\n\nUser interface plans\nOver the years, I (and many others) have used Hillmaker in a variety of ways, including:\n\nMS Access form based GUI\nrun main Hillmaker sub from Access VBA Immediate Window\nrun Hillmaker main sub (and/or component subs) via custom VBA procedures\n\nI'd like users to be able to use the new Python based version in a number of different ways as well. As I've shown in this Jupyter notebook, it can be used by importing the hillmaker module and then calling Hillmaker functions via:\n\na Jupyter notebook (or any Python terminal such as an IPython shell or QT console, or IDLE)\na Python script with the input arguments set and passed via Python statements\n\nWhile these two options provide tons of flexibility for power users, I also want to create other interfaces that don't require users to write Python code. 
At a minimum, I plan to create a command line interface (CLI) as well as a GUI that is similar to the old Access version.\nA CLI for Hillmaker\nPython has several nice tools for creating CLI's. Both docopt and argparse are part of the standard library. Layered on top of these are tools like Click. See http://docs.python-guide.org/en/latest/scenarios/cli/ for more. A well designed CLI will make it easy to use Python from the command line in either Windows or Linux. \nA GUI for Hillmaker\nThis is uncharted territory for me. Python has a number of frameworks/toolkits for creating GUI apps. This is not the highest priority for me but I do plan on creating a GUI for Hillmaker. If anyone wants to help with this, awesome."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
timstaley/voeventdb
|
notebooks/notes_on_scoped_session.ipynb
|
gpl-2.0
|
[
"%load_ext autoreload\n%autoreload 2\n\nimport voeventdb\nimport sqlalchemy\n\nimport logging\nlogging.basicConfig()\n\nfrom voeventdb.database.models import Voevent, Base\n\nfrom voeventdb.database import db_utils\nfrom voeventdb.tests.config import testdb_scratch_url, admin_db_url\nif not db_utils.check_database_exists(testdb_scratch_url):\n db_utils.create_database(admin_db_url, testdb_scratch_url.database)\nengine = sqlalchemy.engine.create_engine(testdb_scratch_url)\nBase.metadata.create_all(engine)\nengine.dispose()\n\nimport sqlalchemy\nfrom sqlalchemy.orm import sessionmaker\nengine = sqlalchemy.engine.create_engine(testdb_scratch_url)\nSession = sessionmaker(bind=engine)\ns = Session()\ns.query(Voevent).first()\n\nimport sqlalchemy.orm as orm\n\nsm = orm.sessionmaker()\nscoped_sm = orm.scoped_session(sm)\nscoped_sm.configure(bind=engine) # configures the underlying `sm` sessionmaker object",
"A sessionmaker does not have a query property - we don't expect it to, after all it's for making sessions, not queries:",
"# sm.query(Voevent).count() #<--Raises",
"So, make a session:",
"regular_session = sm()\nregular_session.query(Voevent).count()",
"Ok. We can do the same sort of thing with a scoped session:",
"scoped_session = scoped_sm()\nscoped_session.query(Voevent).count()",
"However - shenanigans! - a sqlalchemy.orm.scoped_session (i.e. a scoped-session factory) has a .query attribute, created via the query_property method. AFAICT this is syntactic sugar, proxying to the query attribute of the underlying session.\nThis is documented here:\nhttp://docs.sqlalchemy.org/en/rel_1_0/orm/contextual.html?highlight=scoped_session#implicit-method-access\n(Though not very prominently, considering how heavily it's used in flask-related stuff. Breadcrumbs from e.g. flask-sqlalchemy docs might have been nice.)",
"scoped_sm.query(Voevent).count()"
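The mechanism is easy to mimic without SQLAlchemy at all. Here is a toy, self-contained sketch of the scoped-session idea (all names invented for illustration): a per-thread session registry that proxies attribute access to the current session, which is why scoped_sm.query(...) behaves like scoped_sm().query(...):

```python
import threading

class ScopedRegistry:
    """Toy illustration of the scoped-session pattern (not SQLAlchemy
    itself): hand back one session per thread, and proxy attribute
    access straight to that session."""
    def __init__(self, session_factory):
        self._factory = session_factory
        self._local = threading.local()

    def __call__(self):
        # return the thread-local session, creating it on first use
        if not hasattr(self._local, "session"):
            self._local.session = self._factory()
        return self._local.session

    def __getattr__(self, name):
        # proxy: registry.query(...) behaves like registry().query(...)
        return getattr(self(), name)

class FakeSession:
    def query(self, model):
        return "query(%s)" % model

scoped = ScopedRegistry(FakeSession)
scoped() is scoped()                  # same session within one thread
scoped.query("Voevent") == scoped().query("Voevent")  # proxied access
```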
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/tcav
|
Run_TCAV_on_colab.ipynb
|
apache-2.0
|
[
"# Clone the entire repo.\n!git clone https://github.com/tensorflow/tcav.git tcav\n%cd tcav\n!ls\n\n%cd /content/tcav/tcav/tcav_examples/image_models/imagenet\n%run download_and_make_datasets.py --source_dir=YOUR_FOLDER --number_of_images_per_folder=10 --number_of_random_folders=10\n\n%cd /content/tcav",
"Running TCAV\nThis notebook walks you through things you need to run TCAV. \nBefore running this notebook, run the following to download all the data.\n```\ncd tcav/tcav_examples/image_models/imagenet\npython download_and_make_datasets.py --source_dir=YOUR_PATH --number_of_images_per_folder=50 --number_of_random_folders=3\n```\nAt a high level, you need:\n\nexample images in each folder (you have this if you ran the above)\nimages for each concept\nimages for the class/labels of interest\nrandom images that will be negative examples when learning CAVs (images that probably don't belong to any concepts)\nmodel wrapper (below uses example from tcav/model.py)\nan instance of the ModelWrapper abstract class (in model.py). This tells the TCAV class (tcav.py) how to communicate with your model (e.g., getting internal tensors)\nact_generator (below uses example from tcav/activation_generator.py)\nan instance of ActivationGeneratorInterface that tells the TCAV class how to load example data and how to get activations from the model\n\nRequirements\npip install the tcav and tensorflow packages (or tensorflow-gpu if using GPU)",
"%load_ext autoreload\n%autoreload 2\n\nimport tcav.activation_generator as act_gen\nimport tcav.cav as cav\nimport tcav.model as model\nimport tcav.tcav as tcav\nimport tcav.utils as utils\nimport tcav.utils_plot as utils_plot # utils_plot requires matplotlib\nimport os \nimport tensorflow as tf",
"Step 1. Store concept and target class images to local folders\nand tell TCAV where they are.\nsource_dir: where images of concepts, target class and random images (negative samples when learning CAVs) live. Each should be a sub-folder within this directory.\nNote that random image directories can have any name. In this example, we are using random500_0, random500_1,.. for an arbitrary reason. \nYou need roughly 50-200 images per concept and target class (10-20 pictures also tend to work, but 200 is pretty safe).\ncav_dir: directory to store CAVs (None if you don't want to store)\ntarget, concept: names of the target class (that you want to investigate) and concepts (strings) - these are folder names in source_dir\nbottlenecks: list of bottleneck names (intermediate layers in your model) that you want to use for TCAV. These names are defined in the model wrapper below.",
"# This is the name of your model wrapper (InceptionV3 and GoogleNet are provided in model.py)\nmodel_to_run = 'GoogleNet'\n# the name of the parent directory that results are stored (only if you want to cache)\nproject_name = 'tcav_class_test'\nworking_dir = '/content/tcav/tcav'\n# where activations are stored (only if your act_gen_wrapper does so)\nactivation_dir = working_dir+ '/activations/'\n# where CAVs are stored. \n# You can say None if you don't wish to store any.\ncav_dir = working_dir + '/cavs/'\n# where the images live. \nsource_dir = '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER'\nbottlenecks = [ 'mixed4c'] # @param \n \nutils.make_dir_if_not_exists(activation_dir)\nutils.make_dir_if_not_exists(working_dir)\nutils.make_dir_if_not_exists(cav_dir)\n\n# this is a regularizer penalty parameter for linear classifier to get CAVs. \nalphas = [0.1] \n\ntarget = 'zebra' \nconcepts = [\"dotted\",\"striped\",\"zigzagged\"] \n",
"Step 2. Write your model wrapper\nThe next step is to tell TCAV how to communicate with your model. See model.GoogleNetWrapper_public for details.\nYou can define a subclass of the ModelWrapper abstract class to do this. Let me walk you through what each function does (though they are pretty self-explanatory). This wrapper includes a lot of the functions that you already have, for example, get_prediction.\n1. Tensors from the graph: bottleneck tensors and ends\nFirst, store your bottleneck tensors in self.bottlenecks_tensors as a dictionary. You only need bottlenecks that you are interested in running TCAV with. Similarly, fill in self.ends dictionary with input, logit and prediction tensors.\n2. Define loss\nGet your loss tensor, and assign it to self.loss. This is what TCAV uses to take directional derivatives. \nWhile doing so, you would also want to set \npython\nself.y_input\nthis simply is a tensorflow placeholder for the target index in the logit layer (e.g., 0 index for a dog, 1 for a cat).\nFor multi-class classification, typically something like this works:\npython\nself.y_input = tf.placeholder(tf.int64, shape=[None])\nFor example, for a multiclass classifier, something like below would work. \n```python\n # Construct gradient ops.\n with g.as_default():\n self.y_input = tf.placeholder(tf.int64, shape=[None])\n self.pred = tf.expand_dims(self.ends['prediction'][0], 0)\n\n self.loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(\n labels=tf.one_hot(self.y_input, len(self.labels)),\n logits=self.pred))\nself._make_gradient_tensors()\n\n```\n3. Call _make_gradient_tensors in init() of your wrapper\npython\n_make_gradient_tensors()\ndoes what you expect - given the loss and bottleneck tensors defined above, it adds gradient tensors.\n4. Fill in labels, image shapes and a model name.\nGet the mapping from labels (strings) to indices in the logit layer (int) in a dictionary format.\npython\ndef id_to_label(self, idx)\ndef label_to_id(self, label)\nSet your input image shape at self.image_shape\nSet your model name to self.model_name\nYou are done with writing the model wrapper! I wrote two model wrappers, InceptionV3 and GoogleNet.\nsess: a tensorflow session.",
"%cp -av '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER/mobilenet_v2_1.0_224' '/content/tcav/tcav/mobilenet_v2_1.0_224'\n%rm '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER/mobilenet_v2_1.0_224'\n\n%cp -av '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER/inception5h' '/content/tcav/tcav/inception5h'\n%rm '/content/tcav/tcav/tcav_examples/image_models/imagenet/YOUR_FOLDER/inception5h'\n\nsess = utils.create_session()\n\n# GRAPH_PATH is where the trained model is stored.\nGRAPH_PATH = \"/content/tcav/tcav/inception5h/tensorflow_inception_graph.pb\"\n# LABEL_PATH is where the labels are stored. Each line contains one class, and they are ordered with respect to their index in \n# the logit layer. (yes, id_to_label function in the model wrapper reads from this file.)\n# For example, imagenet_comp_graph_label_strings.txt looks like:\n# dummy \n# kit fox\n# English setter\n# Siberian husky ...\n\nLABEL_PATH = \"/content/tcav/tcav/inception5h/imagenet_comp_graph_label_strings.txt\"\n\nmymodel = model.GoogleNetWrapper_public(sess,\n GRAPH_PATH,\n LABEL_PATH)",
"Step 3. Implement a class that returns activations (maybe with caching!)\nLastly, you will implement a class of the ActivationGenerationInterface which TCAV uses to load example data for a given concept or target, call into your model wrapper and return activations. I pulled out this logic outside of mymodel because this step often takes the longest. By making it modular, you can cache your activations and/or parallelize your computations, as I have done in ActivationGeneratorBase.process_and_load_activations in activation_generator.py.\nThe process_and_load_activations method of the activation generator must return a dictionary of activations that has concept or target name as a first key, and the bottleneck name as a second key. So something like:\npython\n{concept1: {bottleneck1: [[0.2, 0.1, ....]]},\nconcept2: {bottleneck1: [[0.1, 0.02, ....]]},\ntarget1: {bottleneck1: [[0.02, 0.99, ....]]}",
"act_generator = act_gen.ImageActivationGenerator(mymodel, source_dir, activation_dir, max_examples=100)",
"You are ready to run TCAV!\nLet's do it.\nnum_random_exp: number of experiments to confirm meaningful concept direction. TCAV will search for this many folders named random500_0, random500_1, etc. You can alternatively set the random_concepts keyword to be a list of folders of random concepts. Run at least 10-20 for meaningful tests. \nrandom_counterpart: as well as the above, you can optionally supply a single folder with random images as the \"positive set\" for statistical testing. Reduces computation time at the cost of less reliable random TCAV scores.",
"import absl\nabsl.logging.set_verbosity(0)\nnum_random_exp=10\n## only running num_random_exp = 10 to save some time. The paper numbers are reported for 500 random runs. \nmytcav = tcav.TCAV(sess,\n target,\n concepts,\n bottlenecks,\n act_generator,\n alphas,\n cav_dir=cav_dir,\n num_random_exp=num_random_exp)#10)\nprint ('This may take a while... Go get coffee!')\nresults = mytcav.run(run_parallel=False)\nprint ('done!')\n\nutils_plot.plot_results(results, num_random_exp=num_random_exp)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
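Behind mytcav.run, the per-bottleneck TCAV score is essentially the fraction of class examples whose directional derivative along the CAV has a consistent (here, positive) sign; a minimal numpy sketch with synthetic stand-in data (the exact sign convention and statistical testing live in tcav.py):

```python
import numpy as np

def tcav_score(gradients, cav):
    """Fraction of examples whose directional derivative along the CAV
    is positive. gradients: (n_examples, n_features) stand-ins for
    d(logit)/d(activation); cav: (n_features,) concept activation vector."""
    directional = gradients @ cav
    return float(np.mean(directional > 0))

rng = np.random.default_rng(0)
grads = rng.normal(size=(200, 16))  # synthetic gradients, for illustration only
cav = rng.normal(size=16)           # synthetic CAV
score = tcav_score(grads, cav)
assert 0.0 <= score <= 1.0
```

The random runs (random500_0, random500_1, ...) exist to check that a concept's score differs significantly from scores obtained with meaningless directions.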
Aditya8795/Python-Scripts
|
Peturn Normally to move Uniformly.ipynb
|
mit
|
[
"How should we perturb the individual components of a $d$ dimensional vector in order to ensure that all directions are treated as equally likely? \nIn Random Search algorithms, we need to sample a few surrounding points representatively. Ideally we would like to sample on the unit ball (pick $v$) around the parameter vector $\\theta$ and take a step of size $\\alpha$ in that direction. This would be like a random walk.\n$$\\theta_{k+1} = \\theta_k + \\alpha v$$\nIn simple random search used in RL, we approximate the gradient by picking $N$ random directions and taking a weighted average of them based on how much reward each of the directions gives us.\n$$\\theta_{k+1} = \\theta_k + \\alpha \\frac{\\sum_{i=1}^N (R(\\theta_k + \\alpha v_i) - R(\\theta_k - \\alpha v_i))v_i}{N}$$\nNow the question is how to sample for $v$. In lower dimensions we can sample from the unit circle using polar coordinates; it should be possible to generalize this to higher dimensions.",
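The update above can be sketched end-to-end on a toy reward (the reward function, step size, and number of directions here are hypothetical, chosen only to show the mechanics):

```python
import numpy as np

def random_search_step(theta, R, alpha, N, rng):
    """One step of the update above: average the symmetric reward
    differences R(theta + a*v) - R(theta - a*v) over N unit directions v."""
    vs = rng.normal(size=(N, len(theta)))
    vs /= np.linalg.norm(vs, axis=1, keepdims=True)  # unit-sphere directions
    diffs = np.array([R(theta + alpha * v) - R(theta - alpha * v) for v in vs])
    return theta + alpha * (diffs[:, None] * vs).mean(axis=0)

R = lambda th: -np.sum(th ** 2)     # toy reward, maximized at the origin
rng = np.random.default_rng(0)
theta = np.ones(5)
for _ in range(300):
    theta = random_search_step(theta, R, alpha=0.1, N=8, rng=rng)
assert np.linalg.norm(theta) < 1.5  # has moved toward the maximizer
```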
"import numpy as np\n\n# returns a random d dimensional vector, a direction to perturb in \ndef direction(d,t):\n # if type == uniform\n if(t == 'u'):\n return np.random.uniform(-2/np.sqrt(d), 2/np.sqrt(d), d)\n elif(t == 'n'):\n return np.random.normal(0, 1/np.sqrt(d), d)\n elif(t == 's'):\n # a point on the N-Sphere r = 1 so it is omitted\n angles = np.random.uniform(0, np.pi, d-2)\n angleLast = np.random.uniform(0, 2*np.pi,1)[0]\n x = np.zeros(d)\n x[0] = np.cos(angles[0])\n for i in range(1,d-1):\n temp = 1\n for j in range(i):\n temp = temp * np.sin(angles[j])\n if(i == d-2):\n x[i] = temp * np.cos(angleLast)\n else:\n x[i] = temp*np.cos(angles[i])\n x[d-1] = x[d-2]*np.tan(angleLast)\n return x\n\n#N = 10000\nN = 100 # number of directions sampled AND the number of dimensions.\nd = N\nhN = []\nnormal = direction(d,'n').reshape(d,1)\n\nfor i in range(N-1):\n hN.append(np.linalg.norm(direction(d,'n')))\n normal = np.concatenate((normal,direction(d,'n').reshape(d,1)), axis = 1)\n \n\n\nimport matplotlib.pyplot as plt\nplt.hist(hN)\nplt.show()\n\nhU = []\n\nuniform = direction(d,'u').reshape(d,1)\n\nfor i in range(N-1):\n hU.append(np.linalg.norm(direction(d,'u')))\n uniform = np.concatenate((uniform,direction(d,'u').reshape(d,1)), axis = 1)\n \n\nimport matplotlib.pyplot as plt\nplt.hist(hU)\nplt.show()\n\nN = 1000\nhS = []\n\nspherical = direction(d,'s').reshape(d,1)\n\nfor i in range(N-1):\n hS.append(np.linalg.norm(direction(d,'s')))\n spherical = np.concatenate((spherical,direction(d,'s').reshape(d,1)), axis = 1)\n \n \nfor i in hS:\n # All vectors are close enough to 1 in length. Even for small d\n if((i-1)>10**-16):\n print(i-1)",
"So from the histograms above we can see that all these methods give us points on the unit sphere (uniform gives us points only approximately on it). But are they all uncorrelated? Let us see.\nSee how, as $N$ increases, the matrix tends to $I$, showing that the directions are indeed drawn i.i.d.",
"np.matmul(normal.T, normal)\n\nnp.matmul(uniform.T, uniform)\n\nnp.matmul(spherical.T, spherical)",
"Now let's see what happens to these vectors if we rotate them: let's generate an arbitrary orthogonal matrix, apply it, and see whether the result still lies on the sphere."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
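A note on the spherical ('s') construction above: sampling the polar angles uniformly does put points exactly on the sphere, but for $d > 2$ uniform angles are not the uniform surface measure (they would need $\sin$-weighting). The standard exact recipe in any dimension is to normalize a Gaussian vector; a sketch:

```python
import numpy as np

def uniform_direction(d, rng):
    """Uniform point on the unit (d-1)-sphere: an i.i.d. standard normal
    vector is rotationally symmetric, so normalizing it gives a direction
    with no preferred orientation -- and no spherical-coordinate bookkeeping."""
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
vs = np.stack([uniform_direction(100, rng) for _ in range(50)])
# every sample is exactly unit length, unlike the scaled-uniform variant
assert np.allclose(np.linalg.norm(vs, axis=1), 1.0)
```

Rotational symmetry of the normal distribution is also why the Gram matrix of such samples tends to $I$ as the number of samples grows.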
StevenPeutz/myDataProjects
|
SQL/SQLquerySizeCalculator.ipynb
|
cc0-1.0
|
[
"Kaggle works great with Google BigQuery, especially when using the 'bq_helper' package (python). <br>\n// Credits and a big thanks to Rachael Tatman et al. <br>\nHowever, there is a caveat. There is a 5TB query limit. This refers to the amount of the dataset scanned, not the size of the 'response'.\nThis kernel uses the openAQ dataset and the bq_helper package (python) to demonstrate how to see the 'scan' size of your SQL query before actually sending it.",
"# import the python helper package for BigQuery (thank you Rachael Tatman et al.)\nimport bq_helper\n\n# create the helper object\nopen_aq = bq_helper.BigQueryHelper(active_project=\"bigquery-public-data\", dataset_name=\"openaq\")\n\n# print the tables in the dataset to check everything went ok so far\nopen_aq.list_tables()\n\n# print the first couple of rows to look at the structure of the dataset\n# note this is somewhat different from the usual way with dataframes; dataframe_name.head(number_of_rows_to_show) ...\n# you can still only show e.g. 3 rows by typing: open_aq.head(\"global_air_quality\", num_rows=3)\nopen_aq.head(\"global_air_quality\")",
"Measuring the BigQuery SQL query 'scan' size before actually executing it, with the bq_helper package:",
"query = \"\"\"SELECT value\n FROM `bigquery-public-data.openaq.global_air_quality`\n WHERE value > 0\"\"\"\n# ! the quotations marks around 'bigquery..._quality' are NOT quotation marks, they are and have to be 'backticks': ` !\n\nopen_aq.estimate_query_size(query)",
"this means the SQL query above would take 0.000124 TB to run.",
"query2 = \"\"\"SELECT value\n FROM `bigquery-public-data.openaq.global_air_quality`\n WHERE country = 'NL'\"\"\"\n# ! the quotations marks around 'bigquery..._quality' are NOT quotation marks, they are and have to be 'backticks': ` !\n\nopen_aq.estimate_query_size(query2)",
"and this one would cost 0.000186 TB",
"# or in gigabytes (the estimate is in TB, so multiplying by 1000 gives GB)\nopen_aq.estimate_query_size(query2) * 1000"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
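Since estimate_query_size reports terabytes, converting to friendlier units is a matter of powers of ten (assuming decimal prefixes; note that multiplying the TB figure by 1000, as in the last cell, yields gigabytes rather than megabytes):

```python
def tb_to_gb(tb):
    """estimate_query_size returns terabytes; decimal prefixes assumed."""
    return tb * 1e3

def tb_to_mb(tb):
    return tb * 1e6

# the 0.000186 TB estimate above is 0.186 GB, i.e. roughly 186 MB scanned
assert abs(tb_to_gb(0.000186) - 0.186) < 1e-12
assert abs(tb_to_mb(0.000124) - 124.0) < 1e-9
```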
InsightSoftwareConsortium/SimpleITK-Notebooks
|
Python/21_Transforms_and_Resampling.ipynb
|
apache-2.0
|
[
"Transforms and Resampling <a href=\"https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F21_Transforms_and_Resampling.ipynb\"><img style=\"float: right;\" src=\"https://mybinder.org/badge_logo.svg\"></a>\nThis notebook explains how to apply transforms to images, and how to perform image resampling.",
"import SimpleITK as sitk\nimport numpy as np\n\n%matplotlib inline\nimport gui\nfrom matplotlib import pyplot as plt\nfrom ipywidgets import interact, fixed\n\n# Utility method that either downloads data from the Girder repository or\n# if already downloaded returns the file name for reading from disk (cached data).\n%run update_path_to_download_script\nfrom downloaddata import fetch_data as fdata",
"Creating and Manipulating Transforms\nA number of different spatial transforms are available in SimpleITK.\nThe simplest is the Identity Transform. This transform simply returns input points unaltered.",
"dimension = 2\n\nprint(\"*Identity Transform*\")\nidentity = sitk.Transform(dimension, sitk.sitkIdentity)\nprint(\"Dimension: \" + str(identity.GetDimension()))\n\n# Points are always defined in physical space\npoint = (1.0, 1.0)\n\n\ndef transform_point(transform, point):\n transformed_point = transform.TransformPoint(point)\n print(\"Point \" + str(point) + \" transformed is \" + str(transformed_point))\n\n\ntransform_point(identity, point)",
"Transforms are defined by two sets of parameters, the Parameters and FixedParameters. FixedParameters are not changed during the optimization process when performing registration. For the TranslationTransform, the Parameters are the values of the translation Offset.",
"print(\"*Translation Transform*\")\ntranslation = sitk.TranslationTransform(dimension)\n\nprint(\"Parameters: \" + str(translation.GetParameters()))\nprint(\"Offset: \" + str(translation.GetOffset()))\nprint(\"FixedParameters: \" + str(translation.GetFixedParameters()))\ntransform_point(translation, point)\n\nprint(\"\")\ntranslation.SetParameters((3.1, 4.4))\nprint(\"Parameters: \" + str(translation.GetParameters()))\ntransform_point(translation, point)",
"The affine transform is capable of representing translations, rotations, shearing, and scaling.",
"print(\"*Affine Transform*\")\naffine = sitk.AffineTransform(dimension)\n\nprint(\"Parameters: \" + str(affine.GetParameters()))\nprint(\"FixedParameters: \" + str(affine.GetFixedParameters()))\ntransform_point(affine, point)\n\nprint(\"\")\naffine.SetTranslation((3.1, 4.4))\nprint(\"Parameters: \" + str(affine.GetParameters()))\ntransform_point(affine, point)",
"A number of other transforms exist to represent non-affine deformations, well-behaved rotation in 3D, etc. See the Transforms tutorial for more information.\nApplying Transforms to Images\nCreate a function to display the images that is aware of image spacing.",
"def myshow(img, title=None, margin=0.05, dpi=80):\n nda = sitk.GetArrayViewFromImage(img)\n spacing = img.GetSpacing()\n\n ysize = nda.shape[0]\n xsize = nda.shape[1]\n\n figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi\n\n fig = plt.figure(title, figsize=figsize, dpi=dpi)\n ax = fig.add_axes([margin, margin, 1 - 2 * margin, 1 - 2 * margin])\n\n extent = (0, xsize * spacing[1], 0, ysize * spacing[0])\n\n t = ax.imshow(\n nda, extent=extent, interpolation=\"hamming\", cmap=\"gray\", origin=\"lower\"\n )\n\n if title:\n plt.title(title)",
"Create a grid image.",
"grid = sitk.GridSource(\n outputPixelType=sitk.sitkUInt16,\n size=(250, 250),\n sigma=(0.5, 0.5),\n gridSpacing=(5.0, 5.0),\n gridOffset=(0.0, 0.0),\n spacing=(0.2, 0.2),\n)\nmyshow(grid, \"Grid Input\")",
"To apply the transform, a resampling operation is required.",
"def resample(image, transform):\n # Output image Origin, Spacing, Size, Direction are taken from the reference\n # image in this call to Resample\n reference_image = image\n interpolator = sitk.sitkCosineWindowedSinc\n default_value = 100.0\n return sitk.Resample(image, reference_image, transform, interpolator, default_value)\n\n\ntranslation.SetOffset((3.1, 4.6))\ntransform_point(translation, point)\nresampled = resample(grid, translation)\nmyshow(resampled, \"Resampled Translation\")",
"What happened? The translation is positive in both directions. Why does the output image move down and to the left? It is important to keep in mind that a transform in a resampling operation defines the transform from the output space to the input space.",
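The output-to-input convention can be demonstrated without SimpleITK: apply the transform to each output index to find where to sample the input, and a positive offset visibly shifts content the "wrong" way. A 1-D sketch:

```python
import numpy as np

def resample_1d(signal, offset):
    """Nearest-neighbor resampling of a 1-D "image": for every OUTPUT
    index i we sample the input at T(i) = i + offset. A positive offset
    therefore moves image content in the negative direction -- the same
    behavior observed with the translated grid above."""
    out = np.zeros_like(signal)
    for i in range(len(signal)):
        src = i + offset              # output -> input mapping
        if 0 <= src < len(signal):
            out[i] = signal[src]
    return out

sig = np.array([0, 0, 5, 0, 0])
# offset +1 moves the peak from index 2 to index 1 ("down and to the left")
assert resample_1d(sig, 1).tolist() == [0, 5, 0, 0, 0]
# negating the offset (cf. translation.SetOffset(-1 * ...)) shifts it forward
assert resample_1d(sig, -1).tolist() == [0, 0, 0, 5, 0]
```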
"translation.SetOffset(-1 * np.array(translation.GetParameters()))\ntransform_point(translation, point)\nresampled = resample(grid, translation)\nmyshow(resampled, \"Inverse Resampled\")",
"An affine (line preserving) transformation, can perform translation:",
"def affine_translate(transform, x_translation=3.1, y_translation=4.6):\n new_transform = sitk.AffineTransform(transform)\n new_transform.SetTranslation((x_translation, y_translation))\n resampled = resample(grid, new_transform)\n myshow(resampled, \"Translated\")\n return new_transform\n\n\naffine = sitk.AffineTransform(dimension)\n\ninteract(\n affine_translate,\n transform=fixed(affine),\n x_translation=(-5.0, 5.0),\n y_translation=(-5.0, 5.0),\n);",
"or scaling:",
"def affine_scale(transform, x_scale=3.0, y_scale=0.7):\n new_transform = sitk.AffineTransform(transform)\n matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))\n matrix[0, 0] = x_scale\n matrix[1, 1] = y_scale\n new_transform.SetMatrix(matrix.ravel())\n resampled = resample(grid, new_transform)\n myshow(resampled, \"Scaled\")\n print(matrix)\n return new_transform\n\n\naffine = sitk.AffineTransform(dimension)\n\ninteract(affine_scale, transform=fixed(affine), x_scale=(0.2, 5.0), y_scale=(0.2, 5.0));",
"or rotation:",
"def affine_rotate(transform, degrees=15.0):\n parameters = np.array(transform.GetParameters())\n new_transform = sitk.AffineTransform(transform)\n matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))\n radians = -np.pi * degrees / 180.0\n rotation = np.array(\n [[np.cos(radians), -np.sin(radians)], [np.sin(radians), np.cos(radians)]]\n )\n new_matrix = np.dot(rotation, matrix)\n new_transform.SetMatrix(new_matrix.ravel())\n resampled = resample(grid, new_transform)\n print(new_matrix)\n myshow(resampled, \"Rotated\")\n return new_transform\n\n\naffine = sitk.AffineTransform(dimension)\n\ninteract(affine_rotate, transform=fixed(affine), degrees=(-90.0, 90.0));",
"or shearing:",
"def affine_shear(transform, x_shear=0.3, y_shear=0.1):\n new_transform = sitk.AffineTransform(transform)\n matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))\n matrix[0, 1] = -x_shear\n matrix[1, 0] = -y_shear\n new_transform.SetMatrix(matrix.ravel())\n resampled = resample(grid, new_transform)\n myshow(resampled, \"Sheared\")\n print(matrix)\n return new_transform\n\n\naffine = sitk.AffineTransform(dimension)\n\ninteract(affine_shear, transform=fixed(affine), x_shear=(0.1, 2.0), y_shear=(0.1, 2.0));",
"Composite Transform\nIt is possible to compose multiple transform together into a single transform object. With a composite transform, multiple resampling operations are prevented, so interpolation errors are not accumulated. For example, an affine transformation that consists of a translation and rotation,",
"translate = (8.0, 16.0)\nrotate = 20.0\n\naffine = sitk.AffineTransform(dimension)\naffine = affine_translate(affine, translate[0], translate[1])\naffine = affine_rotate(affine, rotate)\n\nresampled = resample(grid, affine)\nmyshow(resampled, \"Single Transform\")",
"can also be represented with two Transform objects applied in sequence with a Composite Transform,",
"composite = sitk.CompositeTransform(dimension)\ntranslation = sitk.TranslationTransform(dimension)\ntranslation.SetOffset(-1 * np.array(translate))\ncomposite.AddTransform(translation)\naffine = sitk.AffineTransform(dimension)\naffine = affine_rotate(affine, rotate)\n\ncomposite.AddTransform(affine)\n\nresampled = resample(grid, composite)\nmyshow(resampled, \"Two Transforms\")",
"Beware, transforms are non-commutative -- order matters!",
"composite = sitk.CompositeTransform(dimension)\ncomposite.AddTransform(affine)\ncomposite.AddTransform(translation)\n\nresampled = resample(grid, composite)\nmyshow(resampled, \"Composite transform in reverse order\")",
"Resampling\n<img src=\"resampling.svg\"/><br><br>\nResampling, as the verb implies, is the action of sampling an image, which itself is a sampling of an original continuous signal.\nGenerally speaking, resampling in SimpleITK involves four components:\n1. Image - the image we resample, given in coordinate system $m$.\n2. Resampling grid - a regular grid of points given in coordinate system $f$ which will be mapped to coordinate system $m$.\n3. Transformation $T_f^m$ - maps points from coordinate system $f$ to coordinate system $m$, $^mp = T_f^m(^fp)$.\n4. Interpolator - method for obtaining the intensity values at arbitrary points in coordinate system $m$ from the values of the points defined by the Image.\nWhile SimpleITK provides a large number of interpolation methods, the two most commonly used are sitkLinear and sitkNearestNeighbor. The former is used for most interpolation tasks, as a compromise between accuracy and computational efficiency. The latter is used to interpolate labeled images representing a segmentation; it is the only interpolation approach which will not introduce new labels into the result.\nSimpleITK's procedural API provides three methods for performing resampling, with the difference being the way you specify the resampling grid:\n\nResample(const Image &image1, Transform transform, InterpolatorEnum interpolator, double defaultPixelValue, PixelIDValueEnum outputPixelType)\nResample(const Image &image1, const Image &referenceImage, Transform transform, InterpolatorEnum interpolator, double defaultPixelValue, PixelIDValueEnum outputPixelType)\nResample(const Image &image1, std::vector< uint32_t > size, Transform transform, InterpolatorEnum interpolator, std::vector< double > outputOrigin, std::vector< double > outputSpacing, std::vector< double > outputDirection, double defaultPixelValue, PixelIDValueEnum outputPixelType)",
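The sitkLinear-vs-sitkNearestNeighbor advice above comes down to this: any averaging interpolator can manufacture label values that exist nowhere in a segmentation, while nearest-neighbor cannot. A toy 1-D illustration (plain Python, not the SimpleITK interpolators themselves):

```python
labels = [0, 0, 2, 2]            # a tiny 1-D "segmentation"

def linear(sig, x):
    """Linear interpolation at fractional position x."""
    i, t = int(x), x - int(x)
    return (1 - t) * sig[i] + t * sig[min(i + 1, len(sig) - 1)]

def nearest(sig, x):
    """Nearest-neighbor interpolation at fractional position x."""
    return sig[int(round(x))]

# Sampling halfway between label 0 and label 2:
assert linear(labels, 1.5) == 1.0   # invents a label "1" that exists nowhere
assert nearest(labels, 1.5) == 2    # stays within the original label set
```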
"def resample_display(image, euler2d_transform, tx, ty, theta):\n euler2d_transform.SetTranslation((tx, ty))\n euler2d_transform.SetAngle(theta)\n\n resampled_image = sitk.Resample(image, euler2d_transform)\n plt.imshow(sitk.GetArrayFromImage(resampled_image))\n plt.axis(\"off\")\n plt.show()\n\n\nlogo = sitk.ReadImage(fdata(\"SimpleITK.jpg\"))\n\neuler2d = sitk.Euler2DTransform()\n# Why do we set the center?\neuler2d.SetCenter(\n logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize()) / 2.0)\n)\ninteract(\n resample_display,\n image=fixed(logo),\n euler2d_transform=fixed(euler2d),\n tx=(-128.0, 128.0, 2.5),\n ty=(-64.0, 64.0),\n theta=(-np.pi / 4.0, np.pi / 4.0),\n);",
"Common Errors\nIt is not uncommon to end up with an empty (all black) image after resampling. This is due to:\n1. Using wrong settings for the resampling grid, not too common, but does happen.\n2. Using the inverse of the transformation $T_f^m$. This is a relatively common error, which is readily addressed by invoking the transformations GetInverse method.\nDefining the Resampling Grid\nIn the example above we arbitrarily used the original image grid as the resampling grid. As a result, for many of the transformations the resulting image contained black pixels, pixels which were mapped outside the spatial domain of the original image and a partial view of the original image.\nIf we want the resulting image to contain all of the original image no matter the transformation, we will need to define the resampling grid using our knowledge of the original image's spatial domain and the inverse of the given transformation. \nComputing the bounds of the resampling grid when dealing with an affine transformation is straightforward. An affine transformation preserves convexity with extreme points mapped to extreme points. Thus we only need to apply the inverse transformation to the corners of the original image to obtain the bounds of the resampling grid.\nComputing the bounds of the resampling grid when dealing with a BSplineTransform or DisplacementFieldTransform is more involved as we are not guaranteed that extreme points are mapped to extreme points. This requires that we apply the inverse transformation to all points in the original image to obtain the bounds of the resampling grid.",
"euler2d = sitk.Euler2DTransform()\n# Why do we set the center?\neuler2d.SetCenter(\n logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize()) / 2.0)\n)\n\ntx = 64\nty = 32\neuler2d.SetTranslation((tx, ty))\n\nextreme_points = [\n logo.TransformIndexToPhysicalPoint((0, 0)),\n logo.TransformIndexToPhysicalPoint((logo.GetWidth(), 0)),\n logo.TransformIndexToPhysicalPoint((logo.GetWidth(), logo.GetHeight())),\n logo.TransformIndexToPhysicalPoint((0, logo.GetHeight())),\n]\ninv_euler2d = euler2d.GetInverse()\n\nextreme_points_transformed = [inv_euler2d.TransformPoint(pnt) for pnt in extreme_points]\nmin_x = min(extreme_points_transformed)[0]\nmin_y = min(extreme_points_transformed, key=lambda p: p[1])[1]\nmax_x = max(extreme_points_transformed)[0]\nmax_y = max(extreme_points_transformed, key=lambda p: p[1])[1]\n\n# Use the original spacing (arbitrary decision).\noutput_spacing = logo.GetSpacing()\n# Identity cosine matrix (arbitrary decision).\noutput_direction = [1.0, 0.0, 0.0, 1.0]\n# Minimal x,y coordinates are the new origin.\noutput_origin = [min_x, min_y]\n# Compute grid size based on the physical size and spacing.\noutput_size = [\n int((max_x - min_x) / output_spacing[0]),\n int((max_y - min_y) / output_spacing[1]),\n]\n\nresampled_image = sitk.Resample(\n logo,\n output_size,\n euler2d,\n sitk.sitkLinear,\n output_origin,\n output_spacing,\n output_direction,\n)\nplt.imshow(sitk.GetArrayViewFromImage(resampled_image))\nplt.axis(\"off\")\nplt.show()",
"Are you puzzled by the result? Is the output just a copy of the input? Add a rotation to the code above and see what happens (euler2d.SetAngle(0.79)).\nResampling at a set of locations\nIn some cases you may be interested in obtaining the intensity values at a set of points (e.g. coloring the vertices of a mesh model segmented from an image).\nThe code below generates a random point set in the image and resamples the intensity values at these locations. It is written so that it works for all image-dimensions and types (scalar or vector pixels).",
"img = logo\n\n# Generate random samples inside the image, we will obtain the intensity/color values at these points.\nnum_samples = 10\nphysical_points = []\nfor pnt in zip(*[list(np.random.random(num_samples) * sz) for sz in img.GetSize()]):\n physical_points.append(img.TransformContinuousIndexToPhysicalPoint(pnt))\n\n# Create an image of size [num_samples,1...1], actual size is dependent on the image dimensionality. The pixel\n# type is irrelevant, as the image is just defining the interpolation grid (sitkUInt8 has minimal memory footprint).\ninterp_grid_img = sitk.Image(\n [num_samples] + [1] * (img.GetDimension() - 1), sitk.sitkUInt8\n)\n\n# Define the displacement field transformation, maps the points in the interp_grid_img to the points in the actual\n# image.\ndisplacement_img = sitk.Image(\n [num_samples] + [1] * (img.GetDimension() - 1),\n sitk.sitkVectorFloat64,\n img.GetDimension(),\n)\nfor i, pnt in enumerate(physical_points):\n displacement_img[[i] + [0] * (img.GetDimension() - 1)] = np.array(pnt) - np.array(\n interp_grid_img.TransformIndexToPhysicalPoint(\n [i] + [0] * (img.GetDimension() - 1)\n )\n )\n\n# Actually perform the resampling. The only relevant choice here is the interpolator. 
The default_output_pixel_value\n# is set to 0.0, but the resampling should never use it because we expect all points to be inside the image and this\n# value is only used if the point is outside the image extent.\ninterpolator_enum = sitk.sitkLinear\ndefault_output_pixel_value = 0.0\noutput_pixel_type = (\n sitk.sitkFloat32\n if img.GetNumberOfComponentsPerPixel() == 1\n else sitk.sitkVectorFloat32\n)\nresampled_points = sitk.Resample(\n img,\n interp_grid_img,\n sitk.DisplacementFieldTransform(displacement_img),\n interpolator_enum,\n default_output_pixel_value,\n output_pixel_type,\n)\n\n# Print the interpolated values per point\nfor i in range(resampled_points.GetWidth()):\n print(\n str(physical_points[i])\n + \": \"\n + str(resampled_points[[i] + [0] * (img.GetDimension() - 1)])\n + \"\\n\"\n )",
"<font color=\"red\">Homework:</font> creating a color mesh\nYou will now use the code for resampling at arbitrary locations to create a colored mesh.\nUsing the color image of the visible human head [img = sitk.ReadImage(fdata('vm_head_rgb.mha'))]:\n1. Implement the marching cubes algorithm to obtain the set of triangles corresponding to the iso-surface of structures of interest (skin, white matter,...).\n2. Find the color associated with each of the triangle vertices using the code above.\n3. Save the data using the ASCII version of the PLY, Polygon File Format (a.k.a. Stanford Triangle Format).\n4. Use meshlab to view your creation.\nCreating thumbnails - changing image size, spacing and intensity range\nAs bio-medical images are most often an-isotropic, have a non uniform size (number of pixels), with a high dynamic range of intensities, some caution is required when converting them to an arbitrary desired size with isotropic spacing and the more common low dynamic intensity range.\nThe code in the following cells illustrates how to take an arbitrary set of images with various sizes, spacings and intensities and resize all of them to a common arbitrary size, isotropic spacing, and low dynamic intensity range.",
"file_names = [\"cxr.dcm\", \"photo.dcm\", \"POPI/meta/00-P.mhd\", \"training_001_ct.mha\"]\nimages = []\nimage_file_reader = sitk.ImageFileReader()\nfor fname in file_names:\n image_file_reader.SetFileName(fdata(fname))\n image_file_reader.ReadImageInformation()\n image_size = list(image_file_reader.GetSize())\n # 2D image posing as a 3D one\n if len(image_size) == 3 and image_size[2] == 1:\n image_size[2] = 0\n image_file_reader.SetExtractSize(image_size)\n images.append(image_file_reader.Execute())\n # 2D image\n elif len(image_size) == 2:\n images.append(image_file_reader.Execute())\n # 3D image grab middle x-z slice\n elif len(image_size) == 3:\n start_index = [0, image_size[1] // 2, 0]\n image_size[1] = 0\n image_file_reader.SetExtractSize(image_size)\n image_file_reader.SetExtractIndex(start_index)\n images.append(image_file_reader.Execute())\n # 4/5D image\n else:\n raise ValueError(f\"{len(image_size)}D image not supported.\")\n\n# Notice that in the display the coronal slices are flipped. As we are\n# using matplotlib for display, it is not aware of radiological conventions\n# and treats the image as an isotropic array of pixels.\ngui.multi_image_display2D(images);",
"<font color=\"red\">Homework:</font> Why do some of the images displayed above look different from others?\nWhat are the differences between the various images in the images list? Write code to query them and check their intensity ranges, sizes and spacings. \nThe next cell illustrates how to resize all images to an arbitrary size, using isotropic spacing while maintaining the original aspect ratio.",
"def resize_and_scale_uint8(image, new_size, outside_pixel_value=0):\n \"\"\"\n Resize the given image to the given size, with isotropic pixel spacing\n and scale the intensities to [0,255].\n\n Resizing retains the original aspect ratio, with the original image centered\n in the new image. Padding is added outside the original image extent using the\n provided value.\n\n :param image: A SimpleITK image.\n :param new_size: List of ints specifying the new image size.\n :param outside_pixel_value: Value in [0,255] used for padding.\n :return: a 2D SimpleITK image with desired size and a pixel type of sitkUInt8\n \"\"\"\n # Rescale intensities if scalar image with pixel type that isn't sitkUInt8.\n # We rescale first, so that the zero padding makes sense for all original image\n # ranges. If we resized first, a value of zero in a high dynamic range image may\n # be somewhere in the middle of the intensity range and the outer border has a\n # constant but arbitrary value.\n if (\n image.GetNumberOfComponentsPerPixel() == 1\n and image.GetPixelID() != sitk.sitkUInt8\n ):\n final_image = sitk.Cast(sitk.RescaleIntensity(image), sitk.sitkUInt8)\n else:\n final_image = image\n new_spacing = [\n ((osz - 1) * ospc) / (nsz - 1)\n for ospc, osz, nsz in zip(\n final_image.GetSpacing(), final_image.GetSize(), new_size\n )\n ]\n new_spacing = [max(new_spacing)] * final_image.GetDimension()\n center = final_image.TransformContinuousIndexToPhysicalPoint(\n [sz / 2.0 for sz in final_image.GetSize()]\n )\n new_origin = [\n c - c_index * nspc\n for c, c_index, nspc in zip(center, [sz / 2.0 for sz in new_size], new_spacing)\n ]\n final_image = sitk.Resample(\n final_image,\n size=new_size,\n outputOrigin=new_origin,\n outputSpacing=new_spacing,\n defaultPixelValue=outside_pixel_value,\n )\n return final_image\n\n\n# Select the arbitrary new size\nnew_size = [128, 128]\nresized_images = [resize_and_scale_uint8(image, new_size, 50) for image in 
images]\ngui.multi_image_display2D(resized_images);"
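The spacing computation inside `resize_and_scale_uint8` can be checked in isolation. A minimal pure-Python sketch (no SimpleITK needed) of the aspect-ratio-preserving isotropic spacing:

```python
def isotropic_spacing(old_size, old_spacing, new_size):
    """Spacing that maps the old physical extent onto new_size samples.

    Per axis, the physical extent (size-1)*spacing must fit into (new_size-1)
    samples; taking the max over axes makes the result isotropic and
    guarantees the whole original image fits inside the new grid.
    """
    per_axis = [((osz - 1) * ospc) / (nsz - 1)
                for ospc, osz, nsz in zip(old_spacing, old_size, new_size)]
    return [max(per_axis)] * len(old_size)

# A 512x512 image with anisotropic 0.5mm x 2.0mm pixels, resized to 128x128:
# the larger axis extent (511*2.0 mm) dictates the common spacing.
print(isotropic_spacing([512, 512], [0.5, 2.0], [128, 128]))
```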
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
VlachosGroup/VlachosGroupAdditivity
|
docs/source/WorkshopJupyterNotebooks/OpenMKM_demo/batch/batch.ipynb
|
mit
|
[
"Simulating a Batch Reactor\nHere a batch reactor simulation is demoed with a pure gas-phase mechanism, GRI-Mech v3.0 from UC Berkeley. GRI-Mech 3.0 is an optimized mechanism designed to model natural gas combustion, including NO formation and reburn chemistry. The mechanism consists of 53 species and 325 reactions. For more details, refer to http://combustion.berkeley.edu/gri-mech/version30/text30.html.\nFor this demo, we are interested in hydrogen detonation in a batch reactor under different operating conditions. The conventional overall reaction is written as 2H<sub>2</sub> + O<sub>2</sub> = 2H<sub>2</sub>O.\nNote: The actual simulations are executed from the command line. In this Jupyter notebook, we analyze the resulting data.",
"import matplotlib as mpl\nmpl.rcParams['figure.dpi'] = 500\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline \n%config InlineBackend.figure_format = 'retina'",
"1. Adiabatic batch reactor\nUnder adiabatic conditions, no heat is supplied to or removed from the reactor, so the temperature within the reactor is allowed to change. We try to understand the detonation process via the mole fractions of the reactants, products, and intermediates.\nDiagnostic Data Files\nFirst let's look at the diagnostic files, which have a .out extension. \n\nspecies.out: Lists the species participating in the mechanism, their phase and composition\n\nHform.out, Sform.out: Formation enthalpies and entropies of the species\n\n\nreactions.out: List of reactions\n\nHrxn.out, Srxn.out, Grxn.out: Enthalpies, entropies, and Gibbs energies of the reactions\nkc.out, kf.out, kr.out: Equilibrium constants, forward and reverse rate constants.",
"ls adiab/*.out\n\ndef read_file(fname):\n with open(fname) as fp:\n lines = fp.readlines()\n for line in lines:\n print(line)\n\nread_file(\"adiab/kf.out\")",
"Data Files\nThe files are given as _ss.csv and _tr.csv or _ss.dat and _tr.dat depending on the output format selected. _tr indicates transient output, and _ss indicates steady state.\n\ngas_mass_, gas_mole_: Mass fractions and mole fractions of the gas phase species, respectively\n\ngas_msdot_: Production rates of the gas phase species from the surface.\n\n\nsurf_cov_: Coverages of the surface species\n\nrctr_state: State of the reactor (Temperature, Pressure, Density, Internal Energy)",
"ls adiab/*.csv\n\ndf = pd.read_csv(os.path.join('adiab', 'gas_mole_tr.csv'))\ndf.columns = df.columns.str.strip()\n\ndf[\"t_ms\"] = df[\"t(s)\"]*1e3\n\nplt.clf()\nax1 = plt.subplot(1, 1, 1)\nax1.plot('t_ms', 'H', data=df, marker='^', markersize=0.5, label=\"H mole frac\")\nax1.plot('t_ms', 'OH', data=df, marker='v', markersize=0.5, label=\"OH mole frac\")\nax1.plot('t_ms', 'H2O', data=df, marker='*', markersize=0.5, label=\"H2O mole frac\")\nax1.plot('t_ms', 'H2', data=df, marker='o', markersize=0.5, label=\"H2 mole frac\")\nax1.plot('t_ms', 'O2', data=df, marker='<', markersize=0.5, label=\"O2 mole frac\")\nax1.set_xlabel('Time (ms)')\nax1.ticklabel_format(axis='y', style='sci', scilimits=(0,0))\nax1.set_ylabel('Mole Fraction')\n#ax1.set_xlim([0,1])\nax1.legend(loc=\"upper left\", bbox_to_anchor=(1,1))\nplt.tight_layout()\nplt.savefig('GRI30_Hdetonation_adiab_mole.png', dpi=500)\n\nplt.clf()\nax1 = plt.subplot(1, 1, 1)\nax1.plot('t_ms', 'H', data=df, marker='^', markersize=0.5, label=\"H mole frac\")\nax1.plot('t_ms', 'OH', data=df, marker='v', markersize=0.5, label=\"OH mole frac\")\nax1.plot('t_ms', 'H2O', data=df, marker='*', markersize=0.5, label=\"H2O mole frac\")\nax1.plot('t_ms', 'H2', data=df, marker='o', markersize=0.5, label=\"H2 mole frac\")\nax1.plot('t_ms', 'O2', data=df, marker='<', markersize=0.5, label=\"O2 mole frac\")\nax1.set_xlabel('Time (ms)')\nax1.ticklabel_format(axis='y', style='sci', scilimits=(0,0))\nax1.set_ylabel('Mole Fraction')\n#ax1.set_xlim([0,1])\nax1.legend(loc=\"upper left\", bbox_to_anchor=(1,1))\nax1.set_xlim(0.25,0.4)\nplt.tight_layout()\nplt.savefig('GRI30_Hdetonation_adiab_mole_zoomed.png', dpi=500)\n\ndf_isothermal = pd.read_csv(os.path.join('isother', 'gas_mole_tr.csv'))\ndf_isothermal.columns = df_isothermal.columns.str.strip()\ndf_isothermal[\"t_ms\"] = df_isothermal[\"t(s)\"]*1e3\n\nplt.clf()\nax = plt.subplot(1, 1, 1)\nax.plot('t_ms', 'H', data=df_isothermal, marker='^', markersize=0.5, label=\"H mole frac\")\nax.plot('t_ms', 'OH', data=df_isothermal, marker='v', markersize=0.5, label=\"OH mole frac\")\nax.plot('t_ms', 'H2O', data=df_isothermal, marker='*', markersize=0.5, label=\"H2O mole frac\")\nax.plot('t_ms', 'H2', data=df_isothermal, marker='o', markersize=0.5, label=\"H2 mole frac\")\nax.plot('t_ms', 'O2', data=df_isothermal, marker='<', markersize=0.5, label=\"O2 mole frac\")\nax.set_xlabel('Time (ms)')\nax.ticklabel_format(axis='y', style='sci', scilimits=(0,0))\nax.set_ylabel('Mole Fraction')\nax.set_xlim(0.25,0.4)\n#ax1.set_xlim([0,1])\nax.legend(loc=\"upper left\", bbox_to_anchor=(1,1))\nplt.tight_layout()\nplt.savefig('GRI30_H-detonation_isother_mole.png', dpi=500)",
"Reactor State Comparison\nHow do the reactor temperature and pressure evolve for the two different operating conditions?",
"adiab_state_df = pd.read_csv(os.path.join('adiab','rctr_state_tr.csv'))\nisotherm_state_df = pd.read_csv(os.path.join('isother','rctr_state_tr.csv'))\nadiab_state_df.columns = adiab_state_df.columns.str.strip()\nisotherm_state_df.columns = isotherm_state_df.columns.str.strip()\n\nisotherm_state_df[\"t_ms\"] = isotherm_state_df[\"t(s)\"]*1e3\nadiab_state_df[\"t_ms\"] = adiab_state_df[\"t(s)\"]*1e3\n\nplt.clf()\nax_comp = plt.subplot(1, 1, 1)\nax_comp.plot('t_ms', 'Temperature(K)', data=isotherm_state_df, marker='^', markersize=0.5, label=\"Isothermal T\")\nax_comp.plot('t_ms', 'Temperature(K)', data=adiab_state_df, marker='v', markersize=0.5, label=\"Adiabatic T\")\nax_comp.set_xlabel('Time (ms)')\nax_comp.ticklabel_format(axis='y', style='sci', scilimits=(0,0))\nax_comp.set_ylabel('Temp (K)')\nax_comp.set_ylim([500,3000])\nax_comp.legend(loc=\"upper left\", bbox_to_anchor=(1,1))\nplt.tight_layout()\nplt.savefig('GRI30_H-detonation_T-comp.png', dpi=500)\n\n\nplt.clf()\nax_comp = plt.subplot(1, 1, 1)\nax_comp.plot('t_ms', 'Pressure(Pa)', data=isotherm_state_df, marker='^', markersize=0.5, label=\"Isothermal P\")\nax_comp.plot('t_ms', 'Pressure(Pa)', data=adiab_state_df, marker='v', markersize=0.5, label=\"Adiabatic P\")\nax_comp.set_xlabel('Time (ms)')\nax_comp.ticklabel_format(axis='y', style='sci', scilimits=(0,0))\nax_comp.set_ylabel('Press (Pa)')\n#ax_comp.set_ylim([500,3000])\nax_comp.legend(loc=\"upper left\", bbox_to_anchor=(1,1))\nplt.tight_layout()\nplt.savefig('GRI30_H-detonation_P-comp.png', dpi=500)",
"Can you explain the temperature and pressure differences between isothermal and adiabatic conditions?\nThink of $pV = nRT$. After detonation, the number of moles of gas in the reactor is reduced: two moles of H<sub>2</sub> and one mole of O<sub>2</sub> are converted to two moles of water vapor. Under isothermal conditions T is constant and n is reduced, so with V fixed, p must drop. Under adiabatic conditions, the chemical energy released by the reaction is retained within the reactor, raising T; p then also increases, because the rise in T dominates the reduction in n."
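The $pV = nRT$ argument can be made quantitative with a two-line estimate (pure Python; the 1000 K → 3000 K temperatures are illustrative round numbers, not values read from the data files):

```python
# 2 H2 + O2 -> 2 H2O: 3 moles of gas become 2 at complete conversion.
n_before, n_after = 3.0, 2.0

# Isothermal, constant V: p scales with n (pV = nRT).
p_ratio_isothermal = n_after / n_before
print(f"isothermal pressure ratio: {p_ratio_isothermal:.3f}")  # drops to ~0.667

# Adiabatic, constant V: T rises as well, so p = nRT/V can increase
# even though n drops (illustrative temperatures).
T_before, T_after = 1000.0, 3000.0
p_ratio_adiabatic = (n_after * T_after) / (n_before * T_before)
print(f"adiabatic pressure ratio: {p_ratio_adiabatic:.3f}")  # ~2.0
```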
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jobovy/stream-stream
|
py/Orbits-for-Nbody.ipynb
|
bsd-3-clause
|
[
"import numpy\nfrom galpy.potential import LogarithmicHaloPotential\nfrom galpy.orbit import Orbit\nfrom galpy.util import bovy_plot, bovy_coords, bovy_conversion\n%pylab inline",
"Initial conditions for $N$-body simulations to create the impact we want\nSetup the potential and coordinate system",
"lp= LogarithmicHaloPotential(normalize=1.,q=0.9)\nR0, V0= 8., 220.",
"Functions for converting coordinates between rectangular to cylindrical:",
"def rectangular_to_cylindrical(xv):\n R,phi,Z= bovy_coords.rect_to_cyl(xv[:,0],xv[:,1],xv[:,2])\n vR,vT,vZ= bovy_coords.rect_to_cyl_vec(xv[:,3],xv[:,4],xv[:,5],R,phi,Z,cyl=True)\n out= numpy.empty_like(xv)\n # Preferred galpy arrangement of cylindrical coordinates\n out[:,0]= R\n out[:,1]= vR\n out[:,2]= vT\n out[:,3]= Z\n out[:,4]= vZ\n out[:,5]= phi\n return out\ndef cylindrical_to_rectangular(xv):\n # Using preferred galpy arrangement of cylindrical coordinates\n X,Y,Z= bovy_coords.cyl_to_rect(xv[:,0],xv[:,5],xv[:,3])\n vX,vY,vZ= bovy_coords.cyl_to_rectvec(xv[:,1],xv[:,2],xv[:,4],xv[:,5])\n out= numpy.empty_like(xv)\n out[:,0]= X\n out[:,1]= Y\n out[:,2]= Z\n out[:,3]= vX\n out[:,4]= vY\n out[:,5]= vZ\n return out",
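The two helpers above wrap galpy's bovy_coords routines; the position part of the transformation is just the standard cylindrical formulas. A minimal numpy sketch, independent of galpy (function names here are illustrative):

```python
import numpy as np

def rect_to_cyl_pos(x, y, z):
    # R = sqrt(x^2 + y^2), phi measured from the x-axis, z unchanged
    R = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)
    return R, phi, z

def cyl_to_rect_pos(R, phi, z):
    return R * np.cos(phi), R * np.sin(phi), z

# Round trip on an arbitrary point recovers the input
x, y, z = 30.0, 4.0, -2.0
R, phi, zz = rect_to_cyl_pos(x, y, z)
assert np.allclose(cyl_to_rect_pos(R, phi, zz), (x, y, z))
```

The velocity transformation works the same way once the basis vectors are rotated by phi, which is what `bovy_coords.rect_to_cyl_vec` handles.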
"At the time of impact, the phase-space coordinates of the GC can be computed using orbit integration:",
"xv_prog_init= numpy.array([30.,0.,0.,0.,105.74895,105.74895])\nRvR_prog_init= rectangular_to_cylindrical(xv_prog_init[:,numpy.newaxis].T)[0,:]\nprog_init= Orbit([RvR_prog_init[0]/R0,RvR_prog_init[1]/V0,RvR_prog_init[2]/V0,\n RvR_prog_init[3]/R0,RvR_prog_init[4]/V0,RvR_prog_init[5]],ro=R0,vo=V0)\ntimes= numpy.linspace(0.,10./bovy_conversion.time_in_Gyr(V0,R0),10001)\nprog_init.integrate(times,lp)\nxv_prog_impact= [prog_init.x(times[-1]),prog_init.y(times[-1]),prog_init.z(times[-1]),\n prog_init.vx(times[-1]),prog_init.vy(times[-1]),prog_init.vz(times[-1])]",
"The DM halo at the time of impact is at the following location:",
"xv_dm_impact= numpy.array([-13.500000,2.840000,-1.840000,6.82200571,132.7700529,149.4174464])\nRvR_dm_impact= rectangular_to_cylindrical(xv_dm_impact[:,numpy.newaxis].T)[0,:]\ndm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,\n RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)\ndm_impact= dm_impact.flip()\ntimes= numpy.linspace(0.,10./bovy_conversion.time_in_Gyr(V0,R0),1001)\ndm_impact.integrate(times,lp)",
"The orbits over the past 10 Gyr for both objects are:",
"prog_init.plot()\ndm_impact.plot(overplot=True)\nplot(RvR_dm_impact[0],RvR_dm_impact[3],'ro')\nxlim(0.,35.)\nylim(-20.,20.)",
"Initial condition for the King cluster\nWe start the King cluster at 10.25 WD time units, which corresponds to 10.25x0.9777922212082034 Gyr. The phase-space coordinates of the cluster are then:",
"prog_backward= prog_init.flip()\nts= numpy.linspace(0.,(10.25*0.9777922212082034-10.)/bovy_conversion.time_in_Gyr(V0,R0),1001)\nprog_backward.integrate(ts,lp)\nprint([prog_backward.x(ts[-1]),prog_backward.y(ts[-1]),prog_backward.z(ts[-1]),\n -prog_backward.vx(ts[-1]),-prog_backward.vy(ts[-1]),-prog_backward.vz(ts[-1])])",
"Initial conditions for the Plummer DM subhalo\nStarting 0.125 time units ago",
"dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,\n RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)\ndm_impact= dm_impact.flip()\nts= numpy.linspace(0.,0.125*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)\ndm_impact.integrate(ts,lp)\nprint([dm_impact.x(ts[-1]),dm_impact.y(ts[-1]),dm_impact.z(ts[-1]),\n -dm_impact.vx(ts[-1]),-dm_impact.vy(ts[-1]),-dm_impact.vz(ts[-1])])",
"Starting 0.25 time units ago",
"dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,\n RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)\ndm_impact= dm_impact.flip()\nts= numpy.linspace(0.,0.25*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)\ndm_impact.integrate(ts,lp)\nprint([dm_impact.x(ts[-1]),dm_impact.y(ts[-1]),dm_impact.z(ts[-1]),\n -dm_impact.vx(ts[-1]),-dm_impact.vy(ts[-1]),-dm_impact.vz(ts[-1])])",
"Starting 0.375 time units ago",
"dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,\n RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)\ndm_impact= dm_impact.flip()\nts= numpy.linspace(0.,0.375*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)\ndm_impact.integrate(ts,lp)\nprint([dm_impact.x(ts[-1]),dm_impact.y(ts[-1]),dm_impact.z(ts[-1]),\n -dm_impact.vx(ts[-1]),-dm_impact.vy(ts[-1]),-dm_impact.vz(ts[-1])])",
"Starting 0.50 time units ago",
"dm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,\n RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)\ndm_impact= dm_impact.flip()\nts= numpy.linspace(0.,0.50*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)\ndm_impact.integrate(ts,lp)\nprint([dm_impact.x(ts[-1]),dm_impact.y(ts[-1]),dm_impact.z(ts[-1]),\n -dm_impact.vx(ts[-1]),-dm_impact.vy(ts[-1]),-dm_impact.vz(ts[-1])])",
"Initial conditions for the Plummer DM subhalo with $\\lambda$ scaled interaction velocities\nTo test the impulse approximation, we want to simulate interactions where the relative velocity ${\\bf w}$ is changed by a factor of $\\lambda$: ${\\bf w} \\rightarrow \\lambda {\\bf w}$. We start by computing the relative velocity for the impacts above and define a function that returns a dark-matter velocity after scaling the relative velocity by $\\lambda$:",
"v_gc= numpy.array([xv_prog_impact[3],xv_prog_impact[4],xv_prog_impact[5]])\nv_dm= numpy.array([6.82200571,132.7700529,149.4174464])\nw_base= v_dm-v_gc\ndef v_dm_scaled(lam):\n return w_base*lam+v_gc",
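A quick sanity check of the `v_dm_scaled` construction with stand-in numbers (numpy; the velocity values here are placeholders, not the notebook's):

```python
import numpy as np

v_gc = np.array([10.0, 200.0, -5.0])   # progenitor (GC) velocity, illustrative
v_dm = np.array([6.8, 132.8, 149.4])   # subhalo velocity, illustrative
w_base = v_dm - v_gc

def v_dm_scaled(lam):
    # Scale only the *relative* velocity w -> lam*w; the GC velocity is fixed,
    # so the impact geometry is preserved while the encounter speed changes.
    return w_base * lam + v_gc

assert np.allclose(v_dm_scaled(1.0), v_dm)   # lam=1 recovers the original impact
assert np.allclose(v_dm_scaled(0.0), v_gc)   # lam=0: subhalo co-moves with the GC
```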
"Starting 0.25 time units ago, scaled down by 0.5",
"lam= 0.5\nxv_dm_impact= numpy.array([-13.500000,2.840000,-1.840000,v_dm_scaled(lam)[0],v_dm_scaled(lam)[1],v_dm_scaled(lam)[2]])\nRvR_dm_impact= rectangular_to_cylindrical(xv_dm_impact[:,numpy.newaxis].T)[0,:]\ndm_impact= Orbit([RvR_dm_impact[0]/R0,RvR_dm_impact[1]/V0,RvR_dm_impact[2]/V0,\n RvR_dm_impact[3]/R0,RvR_dm_impact[4]/V0,RvR_dm_impact[5]],ro=R0,vo=V0)\ndm_impact= dm_impact.flip()\nts= numpy.linspace(0.,0.25*0.9777922212082034/bovy_conversion.time_in_Gyr(V0,R0),10001)\ndm_impact.integrate(ts,lp)\nprint([dm_impact.x(ts[-1]),dm_impact.y(ts[-1]),dm_impact.z(ts[-1]),\n -dm_impact.vx(ts[-1]),-dm_impact.vy(ts[-1]),-dm_impact.vz(ts[-1])])"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.18/_downloads/d1b18c3376911723f0257fe5003a8477/plot_linear_model_patterns.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Linear classifier on sensor data with plot patterns and filters\nHere decoding, a.k.a. MVPA or supervised machine learning, is applied to M/EEG\ndata in sensor space. We fit a linear classifier with the LinearModel object,\nwhich provides topographical patterns that are more neurophysiologically\ninterpretable [1]_ than the classifier filters (weight vectors).\nThe patterns explain how the MEG and EEG data were generated from the\ndiscriminant neural sources which are extracted by the filters.\nNote that patterns/filters in MEG data are more similar than in EEG data\nbecause the noise is less spatially correlated in MEG than in EEG.\nReferences\n.. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D.,\n Blankertz, B., & Bießmann, F. (2014). On the interpretation of\n weight vectors of linear models in multivariate neuroimaging.\n NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067",
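For a single discriminant source, the relationship in [1]_ reduces to: the pattern is the data covariance applied to the filter, a ∝ Σ_x w. A numpy sketch on toy data (illustrative only, not MNE's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 1000, 5
# Spatially correlated "sensor" data via a random mixing matrix
X = rng.standard_normal((n_samples, n_channels)) @ rng.standard_normal((n_channels, n_channels))
w = rng.standard_normal(n_channels)          # a spatial filter (weight vector)

s = X @ w                                    # the extracted (discriminant) source
# Haufe et al. (2014), single-source case: pattern a = Cov(X) w / Var(s)
pattern = np.cov(X, rowvar=False) @ w / np.var(s, ddof=1)

# Equivalently, the pattern is the slope of regressing each channel on the source,
# which is why patterns (unlike filters) describe how sensors reflect the source.
slopes = np.array([np.polyfit(s, X[:, i], 1)[0] for i in range(n_channels)])
assert np.allclose(pattern, slopes)
```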
"# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Romain Trachel <trachelr@gmail.com>\n# Jean-Remi King <jeanremi.king@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io, EvokedArray\nfrom mne.datasets import sample\nfrom mne.decoding import Vectorizer, get_coef\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\n# import a linear classifier from mne.decoding\nfrom mne.decoding import LinearModel\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.1, 0.4\nevent_id = dict(aud_l=1, vis_l=3)\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(.5, 25, fir_design='firwin')\nevents = mne.read_events(event_fname)\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n decim=2, baseline=None, preload=True)\n\nlabels = epochs.events[:, -1]\n\n# get MEG and EEG data\nmeg_epochs = epochs.copy().pick_types(meg=True, eeg=False)\nmeg_data = meg_epochs.get_data().reshape(len(labels), -1)",
"Decoding in sensor space using a LogisticRegression classifier",
"clf = LogisticRegression(solver='lbfgs')\nscaler = StandardScaler()\n\n# create a linear model with LogisticRegression\nmodel = LinearModel(clf)\n\n# fit the classifier on MEG data\nX = scaler.fit_transform(meg_data)\nmodel.fit(X, labels)\n\n# Extract and plot spatial filters and spatial patterns\nfor name, coef in (('patterns', model.patterns_), ('filters', model.filters_)):\n # We fitted the linear model onto Z-scored data. To make the filters\n # interpretable, we must reverse this normalization step\n coef = scaler.inverse_transform([coef])[0]\n\n # The data was vectorized to fit a single model across all time points and\n # all channels. We thus reshape it:\n coef = coef.reshape(len(meg_epochs.ch_names), -1)\n\n # Plot\n evoked = EvokedArray(coef, meg_epochs.info, tmin=epochs.tmin)\n evoked.plot_topomap(title='MEG %s' % name, time_unit='s')",
"Let's do the same on EEG data using a scikit-learn pipeline",
"X = epochs.pick_types(meg=False, eeg=True)\ny = epochs.events[:, 2]\n\n# Define a unique pipeline to sequentially:\nclf = make_pipeline(\n Vectorizer(), # 1) vectorize across time and channels\n StandardScaler(), # 2) normalize features across trials\n LinearModel(\n LogisticRegression(solver='lbfgs'))) # 3) fits a logistic regression\nclf.fit(X, y)\n\n# Extract and plot patterns and filters\nfor name in ('patterns_', 'filters_'):\n # The `inverse_transform` parameter will call this method on any estimator\n # contained in the pipeline, in reverse order.\n coef = get_coef(clf, name, inverse_transform=True)\n evoked = EvokedArray(coef, epochs.info, tmin=epochs.tmin)\n evoked.plot_topomap(title='EEG %s' % name[:-1], time_unit='s')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gaufung/Data_Analytics_Learning_Note
|
DesignPattern/ProtoTypePattern.ipynb
|
mit
|
[
"Prototype Pattern\nIllustrated with a drawing canvas (image layers) as the example.",
"class simpleLayer(object):\n background=[0,0,0,0]\n content=\"blank\"\n def getContent(self):\n return self.content\n def getBackgroud(self):\n return self.background\n def paint(self,painting):\n self.content=painting\n def setParent(self,p):\n self.background[3]=p\n def fillBackground(self,back):\n self.background=back",
"In a real implementation, layers would be much more complex; here we abstract heavily to focus on the design pattern itself: background holds the background's RGBA values, content simply represents what is drawn on the layer, and besides painting directly you can also set the transparency.",
"dog_layer=simpleLayer()\ndog_layer.paint('Dog')\ndog_layer.fillBackground([0,0,255,0])\nprint('background:',dog_layer.getBackgroud())\nprint('Painting:', dog_layer.getContent())",
"Next, suppose we need another identical layer: filled with the same color, with the same dog painted on it. How should we do that? Repeat the whole sequence of creating a layer, filling the background, and painting? As you may have noticed, we can instead simply copy the existing layer, and this clone action is the essence of the Prototype pattern.\nFollowing this idea, we add two new methods to the layer class: clone and deep_clone.",
"from copy import copy, deepcopy\nclass simpleLayer(object):\n background=[0,0,0,0]\n content=\"blank\"\n def getContent(self):\n return self.content\n def getBackgroud(self):\n return self.background\n def paint(self,painting):\n self.content=painting\n def setParent(self,p):\n self.background[3]=p\n def fillBackground(self,back):\n self.background=back\n def clone(self):\n return copy(self)\n def deep_clone(self):\n return deepcopy(self)\n\ndog_layer=simpleLayer()\ndog_layer.paint('Dog')\ndog_layer.fillBackground([0,0,255,0])\nprint('background:',dog_layer.getBackgroud())\nprint('Painting:', dog_layer.getContent())\nanother_dog_layer=dog_layer.clone()\nprint('background:',another_dog_layer.getBackgroud())\nprint('Painting:', another_dog_layer.getContent())",
"Most programming languages involve the distinction between deep and shallow copies. In general, a shallow copy copies the object together with references to its contents or sub-objects, but not the referenced contents or sub-objects themselves; a deep copy copies not only the object and its references but also what those references point to. A deep copy is therefore more complete than a shallow one, but also more expensive in both time and space. The following scenario illustrates the difference between the two.\nAdvantages\n\nExcellent performance\nSimplifies object creation and avoids the constraints of constructors\n\nUsages\n\nScenarios where an object must be duplicated many times after being modified, as in this example or anything involving copy and paste;\nResource optimization: when very many instances must be created in memory, the Prototype pattern reduces resource consumption. Combined with the Factory pattern, it works well both logically and structurally;\nAvoiding repeated expensive setup work. For example, rather than every object separately requesting access rights to a device, one object can obtain the rights and then hand them to trusted objects by cloning, improving both efficiency and resource usage.\n\nDisadvantages\n\nThe choice between deep and shallow copy must be considered carefully in advance\nIn some languages, copying can affect the use of static variables and static functions"
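The shallow/deep difference described above is easy to demonstrate with a nested mutable value like the layer's background list (a minimal stand-alone sketch):

```python
from copy import copy, deepcopy

layer = {"content": "Dog", "background": [0, 0, 255, 0]}

shallow = copy(layer)     # copies the dict, but shares the inner list
deep = deepcopy(layer)    # duplicates the inner list as well

# Mutate the nested list through the original (e.g. change the alpha channel)
layer["background"][3] = 128

print(shallow["background"][3])  # 128 -- shallow copy sees the change
print(deep["background"][3])     # 0   -- deep copy is unaffected
```

This is exactly why `clone` (via `copy`) is cheap but shares the background list between layers, while `deep_clone` (via `deepcopy`) gives each layer its own.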
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ssanderson/notebooks
|
quanto/Quantopian_Meetup_Talk_IPython_Notebook.ipynb
|
apache-2.0
|
[
"IPython Notebook",
"# This is a python execution cell.\n# Anything you could do in a python shell or script, you can do here.\n\n# To execute a cell, type CTRL-Enter.\n# You can also type SHIFT-Enter to execute and move to the next cell,\n# and you can type ALT-Enter (OPTION-Enter on macOS) to execute and insert a new cell below.\n\ndef foo():\n print(\"IPython Notebook is Awesome!\")\n\nfoo()\n\n# The last expression in a cell is always displayed as the cell's output when\n# it's executed.\n[1,2,3,4]",
"This is a level 1 header cell.\nThis is a level 2 header cell.\nThis is a level 3 header cell.\nThis is a level 4 header cell.\nThis is a level 5 header cell.\nThis is a level 6 header cell.\nThis is a Markdown cell.\n\nMarkdown is a text-to-HTML conversion tool for web writers. Markdown allows\n you to write using an easy-to-read, easy-to-write plain text format, then\n convert it to structurally valid XHTML (or HTML). \n\n\nMarkdown supports bulleted lists.\nYou can nest lists as deep as you want.\nList elements don't have to be text either.\n\n\n\n\nReasons to like Markdown:\n\nIt also supports numbered lists.\nIt has support for code() formatting.\n\nYou can embed arbitrary HTML\n <table>\n <tr><td>Such as,</td><td>for example,</td></tr>\n <tr><td>a table.</td><td></td></tr>\n </table>\n\n\nIt even has support for code blocks:\nclass Monad m where\n (>>=) :: m a -> (a -> m b) -> m b\n (>>) :: m a -> m b -> m b\n return :: a -> m a\n fail :: String -> m a\n m >> k = m >>= \\_ -> k\n\n\n\n\n\nNotebook Magics:\nMost of the magics that work in the IPython shell also work in the notebook. Additionally, there are some magics that only work in the notebook.\nAutocomplete and Inline Documentation\n\nPressing the TAB key part of the way through a variable name autocompletes that name.\nAutocomplete also works on attributes (e.g. pandas.DataF<TAB> -> pandas.DataFrame).\nYou can bring up documentation for a function or class inline with SHIFT+TAB\n\nExecute this cell before trying the autocomplete examples in the next cell.",
"import pandas\nclass Point(object):\n \"\"\"\n A class-level docstring.\n \"\"\"\n def __init__(self, x, y=3):\n \"\"\"\n Constructor docstring. SHIFT+TAB will show you this first line.\n \n SHIFT + two TABs will show you the entire docstring.\n \"\"\"\n self.x, self.y = x, y\n \nlong_variable_name = Point(3,4)",
"Try it out!",
"pan # Pressing TAB here will autocomplete pandas.\npandas. # Pressing TAB here will show you the top-level attributes of pandas.\npandas.D # Pressing TAB here will show you DataFrame, DatetimeIndex, and DateOffset.\npandas.DataFrame # Pressing TAB here will show you methods and attributes of DataFrame.\n\nlong_variable_name # Pressing TAB here will autocomplete long_variable_name\n\n # Hold SHIFT and press TAB with your cursor in \nx = Point(1,2) # the parentheses to see info on how to make a Point object.",
"Documentation:\n\nTyping <expression>? shows the function signature and documentation for that expression.\nTyping <expression>?? takes you to the source code for the expression.\nYou can also use the pinfo and pinfo2 magics to get the same info.\nNotebook Only: Press SHIFT+TAB while hovering over an object to open in-line documentation for that callable.",
"pandas.DataFrame?\n\npandas.DataFrame.plot??\n\n%pinfo pandas.DataFrame.plot",
"Cell Magics\nIn addition to all the magics we saw above, there are additional magics that operate at the cell level. Many of these are focused around interoperation with other languages.\nJavascript",
"%%javascript\nalert('foo')",
"R\nSome cell magics are provided by extensions. Here we load the rpy2's cell magic for interacting with R.",
"%load_ext rpy2.ipython\n\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.randn(10,5), columns=['A','B','C','D','E'])\ndf\n\n# Push our DataFrame into R.\n%Rpush df\n\n%%R\n# Despite also being valid python, this is actually R code!\ncol_A = df['A']\nplot(col_A)\n\n# We can also pull values back out of R!\n%Rpull col_A\ncol_A",
"Builtin Rich Display Formats\nIPython supports a wide array of rich display formats, including:\n* LaTeX\n* Markdown\n* HTML\n* SVG\n* PNG\n...and more",
"import IPython.display\ndir(IPython.display)[:18]",
"LaTeX",
"from IPython.display import Math\nMath(r'w_A = \\frac{\\sigma_B - Cov(r_A, r_B)}{\\sigma_B^2 + \\sigma_A^2 - 2 Cov(r_A, r_B)}')",
"HTML",
"from IPython.display import HTML\nHTML('''\\\nTo learn more about IPython's rich display capabilities, click\n<a href=\"http://ipython.org/ipython-doc/dev/config/integrating.html\">here</a>.\n''')",
"YouTube Video",
"from IPython.display import YouTubeVideo\nYouTubeVideo(\"B_XiSozs-SE\")  # takes the video id, not the full URL",
"Customizing Object Display\nIf a class implements one of many _repr_ methods, IPython will use that method to display the object.",
"class Table(object):\n \"\"\"\n A simple table represented as a list of lists.\n \"\"\"\n\n def __init__(self, lists):\n self.lists = lists\n \n def make_row(self, l):\n columns = ''.join('<td>{value}</td>'.format(value=value) for value in l)\n return '<tr>{columns}</tr>'.format(columns=columns)\n \n def _repr_html_(self):\n rows = ''.join(self.make_row(l) for l in self.lists)\n return \"<table>{rows}</table>\".format(rows=rows)\n\nTable(\n [\n [1,2,3], \n [4,5,6]\n ]\n)",
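Besides `_repr_html_`, IPython recognizes the whole `_repr_*_` family (`_repr_markdown_`, `_repr_svg_`, `_repr_png_`, ...). A minimal sketch with `_repr_markdown_` (the method name is part of IPython's display protocol; the `Checklist` class is illustrative):

```python
class Checklist(object):
    def __init__(self, items):
        self.items = items

    def _repr_markdown_(self):
        # IPython calls this hook and renders the returned Markdown string.
        return "\n".join("- [ ] " + item for item in self.items)

# Outside IPython you can still call the hook directly:
print(Checklist(["write talk", "give talk"])._repr_markdown_())
```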
"Further Reading:\n\nUI Widgets\nIPython Parallel\nIPython Extensions\nCustom Language Kernels"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jpwhite3/python-analytics-demo
|
Part_3.ipynb
|
cc0-1.0
|
[
"Statistically meaningful charts\nSeaborn\nThe next module we will explore is Seaborn, a Python visualization library built on top of matplotlib. It is tightly integrated with the PyData stack, including support for numpy and pandas data structures and statistical routines from scipy and statsmodels. It provides a high-level interface for drawing attractive statistical graphics... emphasis on STATISTICS. You don't want to use Seaborn as a general-purpose charting library.\nhttp://web.stanford.edu/~mwaskom/software/seaborn/index.html",
"%matplotlib inline\nimport matplotlib\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport warnings\n\nsns.set(color_codes=True)\nwarnings.filterwarnings(\"ignore\")",
"Load up some test data to play with",
"tips = pd.read_csv('input/tips.csv')\n\ntips['tip_percent'] = (tips['tip'] / tips['total_bill'] * 100)\n\ntips.head()\n\ntips.describe()",
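The derived `tip_percent` column is just a vectorized ratio; a tiny self-contained sketch with inline data standing in for `input/tips.csv`:

```python
import pandas as pd

tips = pd.DataFrame({"total_bill": [16.99, 10.34, 21.01],
                     "tip": [1.01, 1.66, 3.50]})

# Column arithmetic is elementwise, so one expression covers every row
tips["tip_percent"] = tips["tip"] / tips["total_bill"] * 100

print(tips["tip_percent"].round(2).tolist())
```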
"Plotting linear regression\nhttp://web.stanford.edu/~mwaskom/software/seaborn/tutorial/regression.html",
"sns.jointplot(\"total_bill\", \"tip_percent\", tips, kind='reg');\n\nsns.lmplot(x=\"total_bill\", y=\"tip_percent\", hue=\"ordered_alc_bev\", data=tips)\n\nsns.lmplot(x=\"total_bill\", y=\"tip_percent\", col=\"day\", data=tips, aspect=.5)\n\nsns.lmplot(x=\"total_bill\", y=\"tip_percent\", hue='ordered_alc_bev', col=\"time\", row='gender', size=6, data=tips);",
"Plotting logistic regression\nhttp://web.stanford.edu/~mwaskom/software/seaborn/tutorial/regression.html",
"# Let's add some calculated columns\ntips['tip_above_avg'] = np.where(tips['tip_percent'] >= tips['tip_percent'].mean(), 1, 0)\ntips.replace({'Yes': 1, 'No': 0}, inplace=True)\n\ntips.head()\n\nsns.lmplot(x=\"tip_percent\", y=\"ordered_alc_bev\", col='gender', data=tips, logistic=True)\n\nsns.lmplot(x=\"ordered_alc_bev\", y=\"tip_above_avg\", col='gender', data=tips, logistic=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session02/Day5/PracticalMachLearnWorkflowSolutions.ipynb
|
mit
|
[
"from __future__ import division, print_function, absolute_import",
"A Practical Guide to the Machine Learning Workflow:\nSeparating Stars and Galaxies from SDSS\nVersion 0.1\n\nBy AA Miller 2017 Jan 22\nWe will now follow the steps from the machine learning workflow lecture to develop an end-to-end machine learning model using actual astronomical data. As a reminder, the workflow is as follows:\n\nData Preparation\nModel Building\nModel Evaluation\nModel Optimization\nModel Predictions\n\nSome of these steps will be streamlined to allow us to fully build a model within the allotted time.\nScience background: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring galaxy clustering without knowing which sources are galaxies and which are stars. \nDuring this exercise, we will utilize supervised machine-learning methods to separate extended sources (galaxies) and point sources (stars, QSOs) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space."
"import numpy as np\nfrom astropy.table import Table\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Problem 1) Obtain and Examine Training Data\nAs a reminder, for supervised-learning problems we use a training set, sources with known labels, i.e. they have been confirmed as normal stars, QSOs, or galaxies, to build a model to classify new observations where we do not know the source label.\nThe training set for this exercise uses Sloan Digital Sky Survey (SDSS) data. For features, we will start with each $r$-band magnitude measurement made by SDSS. This yields 8 features (twice that of the Iris data set, but significantly fewer than the 454 properties measured for each source in SDSS).\nStep 1 in the ML workflow is data preparation - we must curate the training set. As a reminder: \nA machine-learning model is only as good as its training set. \nThis point cannot be emphasized enough. Machine-learning models are data-driven, they do not capture any physical theory, and thus it is essential that the training set satisfy several criteria. \nTwo of the most important criteria for a good training set are: \n\nthe training set should be unbiased [this is actually really hard to achieve in astronomy since most surveys are magnitude limited]\nthe training set should be representative of the (unobserved or field) population of sources [a training set with no stars will yield a model incapable of discovering point sources]\n\nSo, step 1 (this is a must), we are going to examine the training set to see if anything suspicious is going on. We will use astroquery to directly access the SDSS database, and store the results in an astropy Table. \nNote The SDSS API for astroquery is not standard for the package, which leads to a warning. This is not, however, a problem for our purposes.",
"from astroquery.sdss import SDSS # enables direct queries to the SDSS database",
"While it is possible to look up each of the names of the $r$-band magnitudes in the SDSS PhotoObjAll schema, the schema list is long, and thus difficult to parse by eye. Fortunately, we can identify the desired columns using the database itself:\nselect COLUMN_NAME\nfrom INFORMATION_SCHEMA.Columns\nwhere table_name = 'PhotoObjAll' AND \nCOLUMN_NAME like '%Mag/_r' escape '/'\n\nwhich returns the following list of columns: psfMag_r, fiberMag_r, fiber2Mag_r, petroMag_r, deVMag_r, expMag_r, modelMag_r, cModelMag_r. \nWe now select these magnitude measurements for 10000 stars and galaxies from SDSS. Additionally, we join these photometric measurements with the SpecObjAll table to obtain their spectroscopic classifications, which will serve as labels for the machine-learning model.\nNote - the SDSS database contains duplicate observations, flagged observations, and non-detections, which we condition the query to exclude (as explained further below). We also exclude quasars, as the precise photometric classification of these objects is ambiguous: low-$z$ AGN have resolvable host galaxies, while high-$z$ QSOs are point-sources. Query conditions:\n\np.mode = 1 select only the primary photometric detection of a source\ns.sciencePrimary = 1 select only the primary spectroscopic detection of a source (together with above, prevents duplicates)\np.clean = 1 the SDSS clean flag excludes flagged observations and sources with non-detections\ns.class != 'QSO' removes potentially ambiguous QSOs from the training set",
"sdss_query = \"\"\"SELECT TOP 10000\n p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r, \n p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r, \n s.class\n FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid\n WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'\n ORDER BY p.objid ASC\n \"\"\"\nsdss_set = SDSS.query_sql(sdss_query)\nsdss_set",
"To reiterate a point from above: data-driven models are only as good as the training set. Now that we have a potential training set, it is essential to inspect the data for any peculiarities.\nProblem 1a\nCan you easily identify any important properties of the data from the above table?\nIf not - is there a better way to examine the data?\nHint - emphasis on easy.\nSolution 1a\nThis is the first instance where domain knowledge really helps us to tackle this problem. In this case the domain knowledge is the following: PSF measurements of galaxy brightness are terrible. Thus, psfMag_r is very different from the other mag measurements for galaxies, but similar for stars. Of course - this is readily identifiable, even to those without domain knowledge, if we visualize the data.\nProblem 1b\nVisualize the 8 dimensional feature set [this is intentionally open-ended...] \nDoes this visualization reveal anything that is not obvious from the table?\nCan you identify any biases in the training set? \nRemember - always worry about the data\nHint astropy Tables can be converted to pandas DataFrames with the .to_pandas() operator.",
"# complete\n\nimport seaborn as sns\nsns.pairplot(sdss_set.to_pandas(), hue = 'class', diag_kind = 'kde')",
"Solution 1b\nThe visualization confirms our domain knowledge assertion: galaxy PSF measurements differ significantly from the other magnitude measurements. \nThe visualization also reveals the magnitude distribution of the training set, as well as a potential bias: the dip in the distribution at $r' \\approx 19$ mag. There is no reason nature should produce fewer $r' \\approx 19$ mag stars than $r' \\approx 18$ mag stars, and, indeed, this is a bias due to the SDSS spectroscopic targeting algorithm. We will proceed, but we should be wary of this moving forward.\nFinally, to finish off our preparation of the data - we need to create an independent test set that will be used to evaluate the accuracy/generalization properties of the model after everything has been tuned. Often, independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\\sim{0.2}-0.5$. For this problem we will adopt 0.3.\nsklearn.model_selection has a handy function train_test_split, which will simplify this process.\nProblem 1c Split the 10k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: train_X, train_y, test_X, test_y, respectively. Use rs for the random_state in train_test_split.\nHint - recall that sklearn utilizes X, a 2D np.array(), and y as the features and labels arrays, respectively.",
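The hold-out idea behind the 70-30 split can be sketched with plain NumPy. This is a hypothetical `simple_split` helper for illustration only, not the sklearn `train_test_split` API:

```python
import numpy as np

def simple_split(X, y, test_frac=0.3, seed=2):
    """Shuffle the indices once, then slice: the first test_frac of the
    shuffled indices become the test set, the remainder the training set."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(y))
    n_test = int(round(test_frac * len(y)))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# Toy data: 10 sources with 2 features each
X = np.arange(20).reshape(10, 2)
y = np.arange(10)
train_X, test_X, train_y, test_y = simple_split(X, y)
print(train_X.shape, test_X.shape)  # (7, 2) (3, 2)
```

The key property is that every source lands in exactly one of the two sets, which is what makes the test set an honest estimate of generalization.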
"from sklearn.model_selection import train_test_split\nrs = 2 # we are in second biggest metropolitan area in the US\n\n# complete\n\nX = np.array( # complete\ny = np.array( # complete\n\ntrain_X, test_X, train_y, test_y = train_test_split( X, y, # complete\n\nfrom sklearn.model_selection import train_test_split\nrs = 2 # we are in second biggest metropolitan area in the US\n\nfeats = list(sdss_set.columns)\nfeats.remove('class')\n\nX = np.array(sdss_set[feats].to_pandas())\ny = np.array(sdss_set['class'])\n\ntrain_X, test_X, train_y, test_y = train_test_split( X, y, test_size = 0.3, random_state = rs)",
"Problem 2) An Aside on the Importance of Feature Engineering\nIt has been said that all machine learning is an exercise in feature engineering. \nFeature engineering - the process of creating new features, combining features, removing features, collecting new data to supplement existing features, etc. is essential in the machine learning workflow. As part of the data preparation stage, it is useful to apply domain knowledge to engineer features prior to model construction. [Though it is important to know that feature engineering may be needed at any point in the ML workflow if the model does not provide desired results.]\nDue to a peculiarity of our SDSS training set, we need to briefly craft a separate problem to demonstrate the importance of feature engineering. \nFor this aside, we will train the model on bright ($r' < 18.5$ mag) sources and test the model on faint ($r' > 19.5$ mag) sources. As you might guess the model will not perform well. Following some clever feature engineering, we will be able to improve this. \naside-to-the-aside\nThis exact situation happens in astronomy all the time, and it is known as sample selection bias. In brief, any time a larger aperture telescope is built, or instrumentation is greatly improved, a large swath of sources that were previously undetectable can now be observed. These fainter sources, however, may contain entirely different populations than their brighter counterparts, and thus any models trained on the bright sources will be biased when making predictions on the faint sources.\nWe train and test the model with 10000 sources using an identical query to the one employed above, with the added condition restricting the training set to bright sources and the test set to faint sources.",
"bright_query = \"\"\"SELECT TOP 10000\n p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r, \n p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r, \n s.class\n FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid\n WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'\n AND p.cModelMag_r < 18.5\n ORDER BY p.objid ASC\n \"\"\"\nbright_set = SDSS.query_sql(bright_query)\nbright_set\n\nfaint_query = \"\"\"SELECT TOP 10000\n p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r, \n p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r, \n s.class\n FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid\n WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'\n AND p.cModelMag_r > 19.5\n ORDER BY p.objid ASC\n \"\"\"\nfaint_set = SDSS.query_sql(faint_query)\nfaint_set",
"Problem 2a \nTrain a $k$ Nearest Neighbors model with $k = 11$ neighbors on the 10k source training set. Note - for this particular problem, the number of neighbors does not matter much.",
"from sklearn.neighbors import KNeighborsClassifier\n\nfeats = # complete\n\n\nbright_X = # complete\nbright_y = # complete\n\nKNNclf = # complete\n\nfrom sklearn.neighbors import KNeighborsClassifier\n\nfeats = list(bright_set.columns)\nfeats.remove('class')\n\nbright_X = np.array(bright_set[feats].to_pandas())\nbright_y = np.array(bright_set['class'])\n\nKNNclf = KNeighborsClassifier(n_neighbors = 11)\nKNNclf.fit(bright_X, bright_y)",
"Problem 2b \nEvaluate the accuracy of the model when applied to the sources in the faint test set. \nDoes the model perform well?\nHint - you may find sklearn.metrics.accuracy_score useful for this exercise.",
"from sklearn.metrics import accuracy_score\n\nfaint_X = # complete\nfaint_y = # complete\n\nfaint_preds = # complete\n\nprint(\"The raw features produce a KNN model with accuracy ~{:.4f}\".format( # complete\n\nfrom sklearn.metrics import accuracy_score\n\nfaint_X = np.array(faint_set[feats].to_pandas())\nfaint_y = np.array(faint_set['class'])\n\nfaint_preds = KNNclf.predict(faint_X)\n\nprint(\"The raw features produce a KNN model with accuracy ~{:.4f}\".format(accuracy_score(faint_y, faint_preds)))",
"Solution 2b \nBased on the pair plots generated above - stars and galaxies appear highly distinct based on their SDSS $r'$-band measurements, thus, this model likely exhibits poor performance. [we will see if we can confirm this]\nLeveraging the same domain knowledge discussed above, namely that galaxies cannot be modeled with a PSF, we can \"normalize\" the magnitude measurements by taking their difference relative to psfMag_r. This normalization has the added advantage of removing any knowledge of the apparent brightness of the sources, which should help when comparing independent bright and faint sets.\nProblem 2c \nNormalize the feature vector relative to psfMag_r, and refit the $k$NN model to the 7 newly engineered features.\nDoes the accuracy improve when predicting the class of sources in the faint test set? \nHint - be sure you apply the exact same normalization to both the training and test set",
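The normalization described above is a single broadcasting step. A toy sketch with made-up magnitudes (assumed values, not SDSS data), where column 0 plays the role of psfMag_r:

```python
import numpy as np

# Toy feature matrix: column 0 is psfMag_r, remaining columns are other mags
X = np.array([[18.0, 17.5, 17.2],
              [21.0, 19.0, 18.8]])

# Difference every other magnitude against psfMag_r; this removes the
# apparent-brightness scale while keeping the PSF-vs-extended contrast
Xnorm = X[:, 0][:, np.newaxis] - X[:, 1:]
print(Xnorm)
```

Note that a bright and a faint source with the same morphology now land at similar feature values, which is exactly why the bright-trained model transfers to the faint set.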
"bright_Xnorm = # complete\n\nKNNclf = # complete\n\nfaint_predsNorm = # complete\n\nprint(\"The normalized features produce an accuracy ~{:.4f}\".format( # complete\n\nbright_Xnorm = bright_X[:,0][:,np.newaxis] - bright_X[:,1:]\nfaint_Xnorm = faint_X[:,0][:,np.newaxis] - faint_X[:,1:]\n\n\nKNNclf = KNeighborsClassifier(n_neighbors = 11)\nKNNclf.fit(bright_Xnorm, bright_y)\n\nfaint_predsNorm = KNNclf.predict(faint_Xnorm)\n\nprint(\"The normalized features produce an accuracy ~{:.4f}\".format(accuracy_score(faint_y, faint_predsNorm)))",
"Solution 2c \nWow! Normalizing the features produces a huge ($\\sim{35}\\%$) increase in accuracy. Clearly, we should be using normalized magnitude features moving forward.\nIn addition to demonstrating the importance of feature engineering, this exercise teaches another important lesson: contextual features can be dangerous. \nContextual astronomical features can provide very strong priors: stars are more likely to be found close to the galactic plane, supernovae occur next to/on top of galaxies, bluer stars have lower metallicity, etc. Thus, including contextual information may improve overall model performance.\nHowever, all astronomical training sets are heavily biased. Thus, the strong priors associated with contextual features can lead to severely biased model predictions.\nGenerally, I (AAM) remove all contextual features from my ML models for this reason. If you are building ML models, consider contextual information as it may help overall performance, but... be wary.\nWorry about the data\nProblem 3) Model Building\nAfter the data have been properly curated, the next important choice in the ML workflow is the selection of ML algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.\nShort of that? Try three (or four, or five) different models and choose whichever works the best.\nFor the star-galaxy problem, we will use the Random Forest (RF) algorithm (Breiman 2001) as implemented by scikit-learn.\nRandomForestClassifier is part of the sklearn.ensemble module.\nRF has a number of nice properties for working with astronomical data:\n\nrelative insensitivity to noisy or useless features\ninvariant response to highly non-gaussian feature distributions\nfast, flexible and scales well to large data sets\n\nwhich is why we will adopt it here.\nProblem 3a \nBuild a RF model using the normalized features from the training set.\nInclude 25 trees in the forest using the n_estimators parameter in RandomForestClassifier.",
"import # complete\nrs = 626 # area code for Pasadena\n\ntrain_Xnorm = # complete\n\nRFclf = # complete\n\nfrom sklearn.ensemble import RandomForestClassifier\nrs = 626 # area code for Pasadena\n\ntrain_Xnorm = train_X[:,0][:,np.newaxis] - train_X[:,1:]\n\nRFclf = RandomForestClassifier(n_estimators = 25, random_state = rs)\nRFclf.fit(train_Xnorm, train_y)",
"scikit-learn really makes it easy to build ML models.\nAnother nice property of RF is that it naturally provides an estimate of the most important features in the model. \n[Once again - feature engineering comes into play, as it may be necessary to remove correlated features or unimportant features during the model construction in order to reduce run time or allow the model to fit in the available memory.]\nIn this case we don't need to remove any features [RF is relatively immune to correlated or unimportant features], but for completeness we measure the importance of each feature in the model. \nRF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() class. The higher the value, the more important the feature. \nProblem 3b \nCalculate the relative importance of each feature. \nWhich feature is most important? Can you make sense of the feature ordering? \nHint - do not dwell too long on the final ordering of the features.",
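The shuffle-and-rescore idea described above can be sketched in a few lines of NumPy. This is a hedged illustration of permutation importance; note that sklearn's `.feature_importances_` attribute is actually computed from impurity decreases during training, not from this procedure:

```python
import numpy as np

def permutation_importance(predict, X, y, seed=0):
    """Importance of feature j = drop in accuracy after shuffling column j.
    `predict` is any callable mapping a feature matrix to predicted labels."""
    rng = np.random.RandomState(seed)
    base = np.mean(predict(X) == y)
    imps = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # destroy the information in column j only
        imps.append(base - np.mean(predict(Xp) == y))
    return np.array(imps)

# Toy model: the class depends only on feature 0; feature 1 is constant
X = np.array([[0., 5.], [1., 5.], [0., 5.], [1., 5.]])
y = np.array([0, 1, 0, 1])
predict = lambda X: (X[:, 0] > 0.5).astype(int)
print(permutation_importance(predict, X, y))
```

Shuffling the uninformative constant column leaves the accuracy untouched, so its importance is exactly zero, while shuffling the informative column can only hurt.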
" # complete\n \n\nprint(\"The relative importance of the features is: \\n{:s}\".format( # complete\n\nprint(RFclf.feature_importances_) # print the importances\n\nindicies = np.argsort(RFclf.feature_importances_)[::-1] # sort the features most imp. --> least imp.\n\n# recall that all features are normalized relative to psfMag_r\nfeatStr = \", \\n\".join(['psfMag_r - {:s}'.format(x) for x in list(np.array(feats)[1:][indicies])])\n\nprint(\"The relative importance of the features is: \\n{:s}\".format(featStr))",
"Solution 3b \npsfMag_r - deVMag_r is the most important feature. This makes sense based on the separation of stars and galaxies in the psfMag_r-deVMag_r plane (see the visualization results above). \nNote - the precise ordering of the features can change due to their strong correlation with each other, though the fiberMag features are always the least important.\nProblem 4) Model Evaluation\nTo evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. This, in essence, is the \"engineering\" step of machine learning [and why I (AAM) often caution against ML for scientific measurements and advocate for engineering-like problems instead]. \nIf the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.\nThe SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.\nThe SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:\n$$\\mathtt{psfMag} - \\mathtt{cModelMag} > 0.145.$$\nSources that satisfy this criterion are considered galaxies. \nProblem 4a \nDetermine the baseline for the ML model by measuring the accuracy of the SDSS photometric classifier on the training set. \nHint - you may need to play around with array values to get accuracy_score to work.",
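The hard cut can be checked on a handful of toy sources first (hypothetical magnitude differences and labels, not the actual SDSS training set):

```python
import numpy as np

# Toy psfMag - cModelMag differences and their "true" spectroscopic labels
diff = np.array([0.01, 0.30, 0.50, 0.05, 0.20])
true = np.array(['STAR', 'GALAXY', 'GALAXY', 'STAR', 'STAR'])

# SDSS-style hard cut: sources with diff > 0.145 are called galaxies
pred = np.where(diff > 0.145, 'GALAXY', 'STAR')
print(np.mean(pred == true))  # 0.8 -- the last source is misclassified
```

A single threshold like this is the simplest possible classifier, which is exactly why it makes a fair baseline for anything more complex.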
"# complete\n\nprint(\"The SDSS phot model produces an accuracy ~{:.4f}\".format( # complete\n\nphot_y = train_Xnorm[:,6] > 0.145\nphot_class = np.empty(len(phot_y), dtype = '|S6')\nphot_class[phot_y] = 'GALAXY'\nphot_class[phot_y == False] = 'STAR'\n\nprint(\"The SDSS phot model produces an accuracy ~{:.4f}\".format(accuracy_score(train_y, phot_class)))",
"The simple SDSS model sets a high standard! A $\\sim{96}\\%$ accuracy following a single hard cut is a phenomenal performance.\nProblem 4b Using 10-fold cross validation, estimate the accuracy of the RF model.",
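The fold bookkeeping behind k-fold CV can be sketched as follows. This is a minimal, unshuffled illustration; sklearn's `cross_val_predict` handles the partitioning (and refitting) internally:

```python
import numpy as np

def kfold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous folds; each fold serves
    as the validation set exactly once, with the rest used for training."""
    folds = np.array_split(np.arange(n), k)
    splits = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        splits.append((train, val))
    return splits

splits = kfold_indices(10, 5)
print([len(val) for _, val in splits])  # [2, 2, 2, 2, 2]
```

Because every source is held out exactly once, the concatenated out-of-fold predictions cover the full training set, which is what `accuracy_score` is applied to above.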
"from sklearn.model_selection import # complete\n\nRFpreds = # complete\n\nprint(\"The CV accuracy for the training set is {:.4f}\".format( # complete\n\nfrom sklearn.model_selection import cross_val_predict\n\nRFpreds = cross_val_predict(RFclf, train_Xnorm, train_y, cv = 10)\n\nprint(\"The CV accuracy for the training set is {:.4f}\".format(accuracy_score(train_y, RFpreds)))",
"Phew! Our hard work to build a machine learning model has been rewarded, by creating an improved model: $\\sim{96.9}\\%$ accuracy vs. $\\sim{96.4}\\%$.\n[But - was our effort worth only a $0.5\\%$ improvement in the model?]\nProblem 5) Model Optimization\nWhile the \"off-the-shelf\" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters, a process known as model optimization.\nAll machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional feature space. Whether the model is smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:\n\n$N_\\mathrm{tree}$ - the number of trees in the forest n_estimators (default: 10) in sklearn\n$m_\\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node max_features (default: sqrt(n_features)) in sklearn\nPruning criteria - defined stopping criteria for ending continued growth of the tree, there are many choices for this in sklearn (My preference is min_samples_leaf (default: 1) which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree)\n\nJust as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we \"finalize\" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be \"snooping.\"\nOn Tuesday we were introduced to GridSearchCV, which is an excellent tool for optimizing model parameters. \nBefore we get to that, let's try to develop some intuition for how the tuning parameters affect the final model predictions.\nProblem 5a \nDetermine the 5-fold CV accuracy for models with $N_\\mathrm{tree}$ = 1, 10, 100.\nHow do you expect changing the number of trees to affect the results?",
"rs = 1936 # year JPL was founded\n\nCVpreds1 = # complete\n\n# complete\n\n# complete\n\nprint(\"The CV accuracy for 1, 10, 100 trees is {:.4f}, {:.4f}, {:.4f}\".format( # complete\n\nrs = 1936 # year JPL was founded\n\nCVpreds1 = cross_val_predict(RandomForestClassifier(n_estimators = 1, random_state=rs), \n train_Xnorm, train_y, cv = 5)\n\nCVpreds10 = cross_val_predict(RandomForestClassifier(n_estimators = 10, random_state=rs), \n train_Xnorm, train_y, cv = 5)\n\nCVpreds100 = cross_val_predict(RandomForestClassifier(n_estimators = 100, random_state=rs), \n train_Xnorm, train_y, cv = 5)\n\nprint(\"The CV accuracy for 1, 10, 100 trees is {:.4f}, {:.4f}, {:.4f}\".format(accuracy_score(train_y, CVpreds1), \n accuracy_score(train_y, CVpreds10), \n accuracy_score(train_y, CVpreds100)))",
"Solution 5a \nUsing a single tree will produce high variance results, as the features selected at the top of the tree greatly influence the final classifications. Thus, we expect it to have the lowest accuracy. \nWhile (in this case) the effect is small, it is clear that $N_\\mathrm{tree}$ affects the model output. \nNow we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? \nBrute force.\nThis data set and the number of tuning parameters is small, so brute force is appropriate (alternatives exist when this isn't the case). We can optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.\nIt is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point, and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case the model is likely overfit. \nProblem 5b \nUse GridSearchCV to perform a 3-fold CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb. \nWhat are the optimal tuning parameters for the model?\nHint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.\nHint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid",
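The runtime of the brute-force search is easy to reason about before launching it: the number of fits is (grid points) x (CV folds). A small sketch enumerating an illustrative grid (the parameter values here are assumptions for the illustration, not a recommended grid):

```python
import itertools

# Candidate values for the three RF tuning parameters (illustrative)
grid = {'n_estimators': [30, 100, 300],
        'max_features': [1, 3, 7],
        'min_samples_leaf': [1, 10]}

# Every combination gets its own CV evaluation
combos = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
print(len(combos))  # 18 models; with 3-fold CV that is 54 forest fits
```

Doubling the resolution of each axis multiplies the cost eightfold, which is why the hints above warn against starting with a dense grid.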
"rs = 64 # average temperature in Los Angeles\n\nfrom sklearn.model_selection import GridSearchCV\n\ngrid_results = # complete\n\n\nprint(\"The optimal parameters are:\")\nfor key, item in grid_results.best_params_.items(): # warning - slightly different meanings in Py2 & Py3\n print(\"{}: {}\".format(key, item))\n\nrs = 64 # average temperature in Los Angeles\n\nfrom sklearn.model_selection import GridSearchCV\n\ngrid_results = GridSearchCV(RandomForestClassifier(random_state = rs), \n {'n_estimators': [30, 100, 300], 'max_features': [1, 3, 7], 'min_samples_leaf': [1,10]},\n cv = 3)\ngrid_results.fit(train_Xnorm, train_y)\n\nprint(\"The optimal parameters are:\")\nfor key, item in grid_results.best_params_.items(): # warning - slightly different meanings in Py2 & Py3\n print(\"{}: {}\".format(key, item))",
"Now that the model is fully optimized - we are ready for the moment of truth!\nProblem 5c\nUsing the optimized model parameters, train a RF model and estimate the model's generalization error using the test set.\nHow does this compare to the baseline model?",
"RFopt_clf = # complete\n\ntest_preds = # complete\n\nprint('The optimized model produces a generalization error of {:.4f}'.format( # complete\n\nRFopt_clf = RandomForestClassifier(n_estimators=30, max_features=3, min_samples_leaf=10)\nRFopt_clf.fit(train_Xnorm, train_y)\n\ntest_Xnorm = test_X[:,0][:,np.newaxis] - test_X[:,1:]\ntest_preds = RFopt_clf.predict(test_Xnorm)\n\nprint('The optimized model produces a generalization error of {:.4f}'.format(1 - accuracy_score(test_y, test_preds)))",
"Solution 5c\nThe optimized model provides a $\\sim{0.6}\\%$ improvement over the baseline model.\nWe will now examine the performance of the model using some alternative metrics. \nNote - if these metrics are essential for judging the model performance, then they should be incorporated into the workflow in the evaluation stage, prior to examination of the test set. \nProblem 5d\nCalculate the confusion matrix for the model, as determined by the test set.\nIs there symmetry to the misclassifications?",
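A confusion matrix is just a table of counts, indexed by (true class, predicted class). A toy sketch built by hand with hypothetical labels, before reaching for `sklearn.metrics.confusion_matrix`:

```python
import numpy as np

# Toy truth and predictions for a 2-class problem (hypothetical values)
true = np.array(['GALAXY', 'GALAXY', 'STAR', 'STAR', 'STAR'])
pred = np.array(['GALAXY', 'STAR',   'STAR', 'STAR', 'GALAXY'])

classes = ['GALAXY', 'STAR']
# Entry [i, j] counts sources of true class i predicted as class j
cm = np.array([[np.sum((true == t) & (pred == p)) for p in classes]
               for t in classes])
print(cm)
```

The diagonal holds the correct classifications; comparing the two off-diagonal entries is what answers the symmetry question posed above.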
"from sklearn.metrics import # complete\n\n# complete\n\nfrom sklearn.metrics import confusion_matrix\n\ncm = confusion_matrix(test_y, test_preds)\nprint(cm)",
"Solution 5d\nAdopting galaxies as the positive class, the TPR = 96.7%, while the TNR = 97.1%. Thus, yes, there is approximate symmetry to the misclassifications.\nProblem 5e\nCalculate and plot the ROC curves for both stars and galaxies.\nHint - you'll need probabilities in order to calculate the ROC curve.",
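An ROC curve is built from class probabilities by sweeping a decision threshold and recording the (FPR, TPR) pair at each step. A minimal sketch with toy scores, not the sklearn `roc_curve` implementation:

```python
import numpy as np

def roc_points(scores, labels):
    """Sweep thresholds over the unique scores (labels: 1 = positive class)
    and record the false- and true-positive rates at each threshold."""
    fpr, tpr = [], []
    for t in sorted(set(scores), reverse=True):
        pred = scores >= t
        tpr.append(np.mean(pred[labels == 1]))
        fpr.append(np.mean(pred[labels == 0]))
    return np.array(fpr), np.array(tpr)

# Toy classifier scores and true labels (hypothetical values)
scores = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 1, 0, 1])
fpr, tpr = roc_points(scores, labels)
print(fpr, tpr)
```

Lowering the threshold only ever moves the operating point up and to the right, which is why the curve traced out is monotonic.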
"from sklearn.metrics import roc_curve\n\ntest_preds_proba = # complete\n# complete\n\nfpr, tpr, thresholds = roc_curve( # complete\nplt.plot( # complete\n\nplt.legend()\n\nfrom sklearn.metrics import roc_curve, roc_auc_score\n\ntest_preds_proba = RFopt_clf.predict_proba(test_Xnorm)\ntest_y_stars = np.zeros(len(test_y), dtype = int)\ntest_y_stars[np.where(test_y == \"STAR\")] = 1\ntest_y_galaxies = test_y_stars*-1. + 1\n\nfpr, tpr, thresholds = roc_curve(test_y_stars, test_preds_proba[:,1])\nplt.plot(fpr, tpr, label = r'$\\mathrm{STAR}$', color = \"MediumAquaMarine\")\n\nfpr, tpr, thresholds = roc_curve(test_y_galaxies, test_preds_proba[:,0])\nplt.plot(fpr, tpr, label = r'$\\mathrm{GALAXY}$', color = \"Tomato\")\n\nplt.legend()",
"Problem 5f\nSuppose you want a model that only misclassifies 1% of stars as galaxies. \nWhat classification threshold should be adopted for this model?\nWhat fraction of galaxies does this model miss?\nCan you think of a reason to adopt such a threshold?",
"# complete\n\nfpr01_idx = (np.abs(fpr-0.01)).argmin()\n\ntpr01 = tpr[fpr01_idx]\nthreshold01 = thresholds[fpr01_idx]\n\nprint(\"To achieve FPR = 0.01, a decision threshold = {:.4f} must be adopted\".format(threshold01))\nprint(\"This threshold will miss {:.4f} of galaxies\".format(1 - tpr01))",
"Solution 5f\nWhen building galaxy 2-point correlation functions it is very important to avoid including stars in the statistics as they will bias the final measurement. \nFinally - always remember: \nworry about the data\nChallenge Problem) Taking the Plunge\nApplying the model to field data\nQSOs are unresolved sources that look like stars in optical imaging data. We will now download photometric measurements for 10k QSOs from SDSS and see how accurate the RF model performs for these sources.",
"QSO_query = \"\"\"SELECT TOP 10000 \n p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r, \n p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r, \n s.class\n FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid\n WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class = 'QSO'\n ORDER BY s.specobjid ASC\n \"\"\"\nQSO_set = SDSS.query_sql(QSO_query)",
"Challenge 1 \nCalculate the accuracy with which the model classifies QSOs based on the 10k QSOs selected with the above command. How does that accuracy compare to that estimated by the test set?",
"qso_X = np.array(QSO_set[feats].to_pandas())\nqso_y = np.empty(len(QSO_set),dtype='|S4') # we are defining QSOs as stars for this exercise\nqso_y[:] = 'STAR' \nqso_Xnorm = qso_X[:,0][:,np.newaxis] - qso_X[:,1:]\n\nqso_preds = RFclf.predict(qso_Xnorm)\n\nprint(\"The RF model correctly classifies ~{:.4f} of the QSOs\".format(accuracy_score(qso_y, qso_preds)))",
"Challenge 2 \nCan you think of any reasons why the performance would be so much worse for the QSOs than it is for the stars? \nCan you obtain a ~.97 accuracy when classifying QSOs?",
"# As discussed above, low-z AGN have resolved host galaxies which will confuse the classifier, \n# this can be resolved by only selecting high-z QSOs (z > 1.5)\n\nQSO_query = \"\"\"SELECT TOP 10000 \n p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r, \n p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r, \n s.class\n FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid\n WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class = 'QSO'\n AND s.z > 1.5\n ORDER BY s.specobjid ASC\n \"\"\"\nQSO_set = SDSS.query_sql(QSO_query)\n\nqso_X = np.array(QSO_set[feats].to_pandas())\nqso_y = np.empty(len(QSO_set),dtype='|S4') # we are defining QSOs as stars for this exercise\nqso_y[:] = 'STAR' \nqso_Xnorm = qso_X[:,0][:,np.newaxis] - qso_X[:,1:]\n\nqso_preds = RFclf.predict(qso_Xnorm)\n\nprint(\"The RF model correctly classifies ~{:.4f} of the QSOs\".format(accuracy_score(qso_y, qso_preds)))",
"Challenge 3 \nPerform an actual test of the model using \"field\" sources. The SDSS photometric classifier is nearly perfect for sources brighter than $r = 21$ mag. Download a random sample of $r < 21$ mag photometric sources, and classify them using the optimized RF model. Adopting the photometric classifications as ground truth, what is the accuracy of the RF model?\nHint - you'll need to look up the parameter describing photometric classification in SDSS",
"# complete"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DJCordhose/ai
|
notebooks/booster/3-base-line.ipynb
|
mit
|
[
"Base Line for ML",
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n%pylab inline\n\nimport pandas as pd\nprint(pd.__version__)",
"First Step: Load Data and disassemble for our purposes",
"df = pd.read_csv('./insurance-customers-300.csv', sep=';')\n\ny=df['group']\n\ndf.drop('group', axis='columns', inplace=True)\n\nX = df.values\n\ndf.describe()",
"Second Step: Visualizing Prediction",
"# ignore this, it is just technical code\n# should come from a lib, consider it to appear magically \n# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\n\ncmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])\ncmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])\ncmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])\nfont_size=25\n\ndef meshGrid(x_data, y_data):\n h = 1 # step size in the mesh\n x_min, x_max = x_data.min() - 1, x_data.max() + 1\n y_min, y_max = y_data.min() - 1, y_data.max() + 1\n xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n return (xx,yy)\n \ndef plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title=\"\", mesh=True, fname=None):\n xx,yy = meshGrid(x_data, y_data)\n plt.figure(figsize=(20,10))\n\n if clf and mesh:\n Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])\n # Put the result into a color plot\n Z = Z.reshape(xx.shape)\n plt.pcolormesh(xx, yy, Z, cmap=cmap_light)\n \n plt.xlim(xx.min(), xx.max())\n plt.ylim(yy.min(), yy.max())\n if fname:\n plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')\n else:\n plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')\n plt.xlabel(x_label, fontsize=font_size)\n plt.ylabel(y_label, fontsize=font_size)\n plt.title(title, fontsize=font_size)\n if fname:\n plt.savefig(fname)\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)\n\nX_train.shape, y_train.shape, X_test.shape, y_test.shape\n\nX_train_kmh_age = X_train[:, :2]\nX_test_kmh_age = X_test[:, :2]\nX_train_2_dim = X_train_kmh_age\nX_test_2_dim = X_test_kmh_age\n\n# 0: red\n# 1: green\n# 2: yellow\n\nclass ClassifierBase:\n def predict(self, X):\n return np.array([ self.predict_single(x) for x in X])\n def score(self, X, y):\n n = len(y)\n correct = 0\n predictions = self.predict(X)\n for prediction, ground_truth in zip(predictions, y):\n if prediction == ground_truth:\n correct = correct + 1\n return correct / n\n\nfrom random import randrange\n\nclass RandomClassifier(ClassifierBase):\n def predict_single(self, x):\n return randrange(3)\n\nrandom_clf = RandomClassifier()\n\nplotPrediction(random_clf, X_train_2_dim[:, 1], X_train_2_dim[:, 0], \n 'Age', 'Max Speed', y_train,\n title=\"Train Data Max Speed vs Age (Random)\")",
"By just randomly guessing, we get approx. 1/3 right, which is what we expect",
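The 1/3 figure for random guessing can be checked with a small simulation; this is a self-contained sketch (the sample size and seed are made up for illustration and are independent of the notebook's data):

```python
import numpy as np

# With 3 balanced classes, a uniformly random guess matches the true label
# about one time in three, regardless of what the classes represent.
rng = np.random.default_rng(0)
n = 100_000
y_true = rng.integers(0, 3, size=n)
y_guess = rng.integers(0, 3, size=n)
accuracy = float(np.mean(y_true == y_guess))
```

With a sample this large, `accuracy` lands within about a percentage point of 1/3.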
"random_clf.score(X_test_2_dim, y_test)",
"Third Step: Creating a Baseline\nCreating a naive classifier by hand: how much better is it?",
"class BaseLineClassifier(ClassifierBase):\n def predict_single(self, x):\n try:\n speed, age, km_per_year = x\n except:\n speed, age = x\n km_per_year = 0\n if age < 25:\n if speed > 180:\n return 0\n else:\n return 2\n if age > 75:\n return 0\n if km_per_year > 50:\n return 0\n if km_per_year > 35:\n return 2\n return 1\n\nbase_clf = BaseLineClassifier()\n\nplotPrediction(base_clf, X_train_2_dim[:, 1], X_train_2_dim[:, 0], \n 'Age', 'Max Speed', y_train,\n title=\"Train Data Max Speed vs Age with Classification\")",
"This is the baseline we have to beat",
"base_clf.score(X_test_2_dim, y_test)",
"No overfitting, which is to be expected, as we use general rules rather than inferring from single data points",
"base_clf.score(X_train_2_dim, y_train)",
"Exercise in Code\nForm a group where at least one of you knows a little bit of coding\nChange the rules and try to beat our score\nBe careful: tune on train data only, use test data only for single validation (otherwise you are fooling yourself)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
deepmind/acme
|
examples/quickstart.ipynb
|
apache-2.0
|
[
"Acme: Quickstart\nGuide to installing Acme and training your first D4PG agent.\n<a href=\"https://colab.research.google.com/github/deepmind/acme/blob/master/examples/quickstart.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nSelect your environment library",
"environment_library = 'gym' # @param ['dm_control', 'gym']",
"Installation\nInstall Acme",
"!pip install dm-acme\n!pip install dm-acme[reverb]\n!pip install dm-acme[tf]",
"Install the environment library",
"if environment_library == 'dm_control':\n import distutils.util\n import subprocess\n if subprocess.run('nvidia-smi').returncode:\n raise RuntimeError(\n 'Cannot communicate with GPU. '\n 'Make sure you are using a GPU Colab runtime. '\n 'Go to the Runtime menu and select Choose runtime type.')\n\n mujoco_dir = \"$HOME/.mujoco\"\n\n print('Installing OpenGL dependencies...')\n !apt-get update -qq\n !apt-get install -qq -y --no-install-recommends libglew2.0 > /dev/null\n\n print('Downloading MuJoCo...')\n BASE_URL = 'https://github.com/deepmind/mujoco/releases/download'\n MUJOCO_VERSION = '2.1.1'\n MUJOCO_ARCHIVE = (\n f'mujoco-{MUJOCO_VERSION}-{distutils.util.get_platform()}.tar.gz')\n !wget -q \"{BASE_URL}/{MUJOCO_VERSION}/{MUJOCO_ARCHIVE}\"\n !wget -q \"{BASE_URL}/{MUJOCO_VERSION}/{MUJOCO_ARCHIVE}.sha256\"\n check_result = !shasum -c \"{MUJOCO_ARCHIVE}.sha256\"\n if _exit_code:\n raise RuntimeError(\n 'Downloaded MuJoCo archive is corrupted (checksum mismatch)')\n\n print('Unpacking MuJoCo...')\n MUJOCO_DIR = '$HOME/.mujoco'\n !mkdir -p \"{MUJOCO_DIR}\"\n !tar -zxf {MUJOCO_ARCHIVE} -C \"{MUJOCO_DIR}\"\n\n # Configure dm_control to use the EGL rendering backend (requires GPU)\n %env MUJOCO_GL=egl\n\n print('Installing dm_control...')\n # Version 0.0.416848645 is the first one to support MuJoCo 2.1.1.\n !pip install -q dm_control>=0.0.416848645\n\n print('Checking that the dm_control installation succeeded...')\n try:\n from dm_control import suite\n env = suite.load('cartpole', 'swingup')\n pixels = env.physics.render()\n except Exception as e:\n raise e from RuntimeError(\n 'Something went wrong during installation. 
Check the shell output above '\n 'for more information.\\n'\n 'If using a hosted Colab runtime, make sure you enable GPU acceleration '\n 'by going to the Runtime menu and selecting \"Choose runtime type\".')\n else:\n del suite, env, pixels\n\n !echo Installed dm_control $(pip show dm_control | grep -Po \"(?<=Version: ).+\")\n\nelif environment_library == 'gym':\n !pip install gym",
"Install visualization packages",
"!sudo apt-get install -y xvfb ffmpeg\n!pip install imageio\n!pip install PILLOW\n!pip install pyvirtualdisplay",
"Import Modules",
"import IPython\n\nfrom acme import environment_loop\nfrom acme import specs\nfrom acme import wrappers\nfrom acme.agents.tf import d4pg\nfrom acme.tf import networks\nfrom acme.tf import utils as tf2_utils\nfrom acme.utils import loggers\nimport numpy as np\nimport sonnet as snt\n\n# Import the selected environment lib\nif environment_library == 'dm_control':\n from dm_control import suite\nelif environment_library == 'gym':\n import gym\n\n# Imports required for visualization\nimport pyvirtualdisplay\nimport imageio\nimport base64\n\n# Set up a virtual display for rendering.\ndisplay = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()",
"Load an environment\nWe can now load an environment. In what follows we'll create an environment and grab the environment's specifications.",
"if environment_library == 'dm_control':\n environment = suite.load('cartpole', 'balance')\n \nelif environment_library == 'gym':\n environment = gym.make('MountainCarContinuous-v0')\n environment = wrappers.GymWrapper(environment) # To dm_env interface.\n\nelse:\n raise ValueError(\n \"Unknown environment library: {};\".format(environment_library) +\n \"choose among ['dm_control', 'gym'].\")\n\n# Make sure the environment outputs single-precision floats.\nenvironment = wrappers.SinglePrecisionWrapper(environment)\n\n# Grab the spec of the environment.\nenvironment_spec = specs.make_environment_spec(environment)\n",
"## Create a D4PG agent",
"#@title Build agent networks\n\n# Get total number of action dimensions from action spec.\nnum_dimensions = np.prod(environment_spec.actions.shape, dtype=int)\n\n# Create the shared observation network; here simply a state-less operation.\nobservation_network = tf2_utils.batch_concat\n\n# Create the deterministic policy network.\npolicy_network = snt.Sequential([\n networks.LayerNormMLP((256, 256, 256), activate_final=True),\n networks.NearZeroInitializedLinear(num_dimensions),\n networks.TanhToSpec(environment_spec.actions),\n])\n\n# Create the distributional critic network.\ncritic_network = snt.Sequential([\n # The multiplexer concatenates the observations/actions.\n networks.CriticMultiplexer(),\n networks.LayerNormMLP((512, 512, 256), activate_final=True),\n networks.DiscreteValuedHead(vmin=-150., vmax=150., num_atoms=51),\n])\n\n\n# Create a logger for the agent and environment loop.\nagent_logger = loggers.TerminalLogger(label='agent', time_delta=10.)\nenv_loop_logger = loggers.TerminalLogger(label='env_loop', time_delta=10.)\n\n# Create the D4PG agent.\nagent = d4pg.D4PG(\n environment_spec=environment_spec,\n policy_network=policy_network,\n critic_network=critic_network,\n observation_network=observation_network,\n sigma=1.0,\n logger=agent_logger,\n checkpoint=False\n)\n\n# Create an loop connecting this agent to the environment created above.\nenv_loop = environment_loop.EnvironmentLoop(\n environment, agent, logger=env_loop_logger)",
"Run a training loop",
"# Run a `num_episodes` training episodes.\n# Rerun this cell until the agent has learned the given task.\nenv_loop.run(num_episodes=100)",
"Visualize an evaluation loop\nHelper functions for rendering and visualization",
"# Create a simple helper function to render a frame from the current state of\n# the environment.\nif environment_library == 'dm_control':\n def render(env):\n return env.physics.render(camera_id=0)\nelif environment_library == 'gym':\n def render(env):\n return env.environment.render(mode='rgb_array')\nelse:\n raise ValueError(\n \"Unknown environment library: {};\".format(environment_library) +\n \"choose among ['dm_control', 'gym'].\")\n\ndef display_video(frames, filename='temp.mp4'):\n \"\"\"Save and display video.\"\"\"\n\n # Write video\n with imageio.get_writer(filename, fps=60) as video:\n for frame in frames:\n video.append_data(frame)\n\n # Read video and display the video\n video = open(filename, 'rb').read()\n b64_video = base64.b64encode(video)\n video_tag = ('<video width=\"320\" height=\"240\" controls alt=\"test\" '\n 'src=\"data:video/mp4;base64,{0}\">').format(b64_video.decode())\n\n return IPython.display.HTML(video_tag)",
"Run and visualize the agent in the environment for an episode",
"timestep = environment.reset()\nframes = [render(environment)]\n\nwhile not timestep.last():\n # Simple environment loop.\n action = agent.select_action(timestep.observation)\n timestep = environment.step(action)\n\n # Render the scene and add it to the frame stack.\n frames.append(render(environment))\n\n# Save and display a video of the behaviour.\ndisplay_video(np.array(frames))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sahilm89/lhsmdu
|
lhsmdu/benchmark/Comparing LHSMDU and MC sampling.ipynb
|
mit
|
[
"Comparing MC and LHS methods for sampling from a uniform distribution\nThis note compares the moments of the empirical uniform distribution sampled using Latin Hypercube Sampling with Multi-Dimensional Uniformity (LHSMDU) and using the NumPy random number generator against the theoretical moments of a uniform distribution.",
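The theoretical moments in question are mean $1/2$ and standard deviation $\sqrt{1/12} \approx 0.2887$. As a quick framework-free sanity check (a sketch using only NumPy, independent of lhsmdu; sample size and seed are arbitrary), a large Monte Carlo sample reproduces them closely:

```python
import numpy as np

theoretical_mean = 0.5
theoretical_std = np.sqrt(1.0 / 12)  # ~0.2887 for U(0, 1)

# A large uniform sample should land very close to the theoretical values.
rng = np.random.default_rng(42)
sample = rng.random(1_000_000)
empirical_mean = float(sample.mean())
empirical_std = float(sample.std())
```

The smaller samples used in the benchmark below scatter around these values, which is exactly the variability the plots visualize.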
"import numpy as np\nimport lhsmdu\nimport matplotlib.pyplot as plt\n\ndef simpleaxis(axes, every=False):\n if not isinstance(axes, (list, np.ndarray)):\n axes = [axes]\n for ax in axes:\n ax.spines['top'].set_visible(False)\n ax.spines['right'].set_visible(False)\n if every:\n ax.spines['bottom'].set_visible(False)\n ax.spines['left'].set_visible(False)\n ax.get_xaxis().tick_bottom()\n ax.get_yaxis().tick_left()\n ax.set_title('')",
"Params",
"seed = 1\nnp.random.seed(seed)\nlhsmdu.setRandomSeed(seed)\n\nnumDimensions = 2\nnumSamples = 100\nnumIterations = 100",
"Theoretical values",
"theoretical_mean = 0.5\ntheoretical_std = np.sqrt(1./12)",
"Empirical mean ($\\mu$) and standard deviation ($\\sigma$) estimates for 100 samples",
"mc_Mean, lhs_Mean = [], []\nmc_Std, lhs_Std = [], []\n\nfor iterate in range(numIterations):\n a = np.random.random((numDimensions,numSamples))\n b = lhsmdu.sample(numDimensions,numSamples)\n mc_Mean.append(np.mean(a))\n lhs_Mean.append(np.mean(b))\n mc_Std.append(np.std(a))\n lhs_Std.append(np.std(b))",
"Plotting mean estimates",
"fig, ax = plt.subplots()\nax.plot(range(numIterations), mc_Mean, 'ko', label='numpy')\nax.plot(range(numIterations), lhs_Mean, 'o', c='orange', label='lhsmdu')\nax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)\nax.set_xlabel(\"Iteration #\")\nax.set_ylabel(\"$\\mu$\")\nax.legend(frameon=False)\nsimpleaxis(ax)\nplt.show()",
"Plotting standard deviation estimates",
"fig, ax = plt.subplots()\nax.plot(range(numIterations), mc_Std, 'ko', label='numpy')\nax.plot(range(numIterations), lhs_Std, 'o', c='orange', label='lhsmdu')\nax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)\nax.set_xlabel(\"Iteration #\")\nax.set_ylabel(\"$\\sigma$\")\nax.legend(frameon=False)\nsimpleaxis(ax)\nplt.show()",
"Across different number of samples",
"mc_Std, lhs_Std = [], []\nmc_Mean, lhs_Mean = [], []\nnumSamples = range(1,numIterations)\nfor iterate in numSamples:\n a = np.random.random((numDimensions,iterate))\n b = lhsmdu.sample(numDimensions,iterate)\n mc_Mean.append(np.mean(a))\n lhs_Mean.append(np.mean(b))\n mc_Std.append(np.std(a))\n lhs_Std.append(np.std(b))",
"Plotting mean estimates",
"fig, ax = plt.subplots()\nax.plot(numSamples, mc_Mean, 'ko', label='numpy')\nax.plot(numSamples, lhs_Mean, 'o', c='orange', label='lhsmdu')\nax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)\nax.set_xlabel(\"Number of Samples\")\nax.set_ylabel(\"$\\mu$\")\nax.legend(frameon=False)\nsimpleaxis(ax)\nplt.show()",
"Plotting standard deviation estimates",
"fig, ax = plt.subplots()\nax.plot(numSamples, mc_Std, 'ko', label='numpy')\nax.plot(numSamples, lhs_Std, 'o', c='orange', label='lhsmdu')\nax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)\nax.set_xlabel(\"Number of Samples\")\nax.set_ylabel(\"$\\sigma$\")\nax.legend(frameon=False)\nsimpleaxis(ax)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thalesians/tsa
|
src/jupyter/python/pypeincoming.ipynb
|
apache-2.0
|
[
"This Jupyter notebook should be used in conjunction with pypeoutgoing.ipynb.\nRun through the following cells...",
"import os, sys\nsys.path.append(os.path.abspath('../../main/python'))\n\nimport thalesians.tsa.pypes as pypes\n\npype = pypes.Pype(pypes.Direction.INCOMING, name='EXAMPLE', port=5758); pype",
"Then run the following cell and send some values from pypeoutgoing.ipynb running in another window. They will be sent over the \"pype\". Watch them appear below as they are received:",
"for x in pype: print(x)",
"Once you have finished experimenting, you can close the pype:",
"pype.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.24/_downloads/2dd868e4ea307404d807080fb341eb26/evoked_topomap.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Plotting topographic maps of evoked data\nLoad evoked data and plot topomaps for selected time points using multiple\nadditional options.",
"# Authors: Christian Brodbeck <christianbrodbeck@nyu.edu>\n# Tal Linzen <linzen@nyu.edu>\n# Denis A. Engeman <denis.engemann@gmail.com>\n# Mikołaj Magnuski <mmagnuski@swps.edu.pl>\n# Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD-3-Clause",
"sphinx_gallery_thumbnail_number = 5",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom mne.datasets import sample\nfrom mne import read_evokeds\n\nprint(__doc__)\n\npath = sample.data_path()\nfname = path + '/MEG/sample/sample_audvis-ave.fif'\n\n# load evoked corresponding to a specific condition\n# from the fif file and subtract baseline\ncondition = 'Left Auditory'\nevoked = read_evokeds(fname, condition=condition, baseline=(None, 0))",
"Basic :func:~mne.viz.plot_topomap options\nWe plot evoked topographies using :func:mne.Evoked.plot_topomap. The first\nargument, times, lets us specify time instants (in seconds!) for which\ntopographies will be shown. We select timepoints from 50 to 150 ms with a\nstep of 20 ms and plot magnetometer data:",
"times = np.arange(0.05, 0.151, 0.02)\nevoked.plot_topomap(times, ch_type='mag', time_unit='s')",
"If times is set to None, at most 10 regularly spaced topographies will be\nshown:",
"evoked.plot_topomap(ch_type='mag', time_unit='s')",
"We can use the nrows and ncols parameters to create multiline plots\nwith more timepoints.",
"all_times = np.arange(-0.2, 0.5, 0.03)\nevoked.plot_topomap(all_times, ch_type='mag', time_unit='s',\n ncols=8, nrows='auto')",
"Instead of showing topographies at specific time points we can compute\naverages of 50 ms bins centered on these time points to reduce the noise in\nthe topographies:",
"evoked.plot_topomap(times, ch_type='mag', average=0.05, time_unit='s')",
"We can plot gradiometer data (plots the RMS for each pair of gradiometers)",
"evoked.plot_topomap(times, ch_type='grad', time_unit='s')",
"Additional :func:~mne.viz.plot_topomap options\nWe can also use a variety of :func:mne.viz.plot_topomap arguments\nthat control how the topography is drawn. For example:\n\ncmap - to specify the color map\nres - to control the resolution of the topographies (lower resolution\n means faster plotting)\noutlines='skirt' to see the topography stretched beyond the head circle\ncontours to define how many contour lines should be plotted",
"evoked.plot_topomap(times, ch_type='mag', cmap='Spectral_r', res=32,\n outlines='skirt', contours=4, time_unit='s')",
"If you look at the edges of the head circle of a single topomap you'll see\nthe effect of extrapolation. There are three extrapolation modes:\n\nextrapolate='local' extrapolates only to points close to the sensors.\nextrapolate='head' extrapolates out to the head circle.\nextrapolate='box' extrapolates to a large box stretching beyond the\n head circle.\n\nThe default value extrapolate='auto' will use 'local' for MEG sensors\nand 'head' otherwise. Here we show each option:",
"extrapolations = ['local', 'head', 'box']\nfig, axes = plt.subplots(figsize=(7.5, 4.5), nrows=2, ncols=3)\n\n# Here we look at EEG channels, and use a custom head sphere to get all the\n# sensors to be well within the drawn head surface\nfor axes_row, ch_type in zip(axes, ('mag', 'eeg')):\n for ax, extr in zip(axes_row, extrapolations):\n evoked.plot_topomap(0.1, ch_type=ch_type, size=2, extrapolate=extr,\n axes=ax, show=False, colorbar=False,\n sphere=(0., 0., 0., 0.09))\n ax.set_title('%s %s' % (ch_type.upper(), extr), fontsize=14)\nfig.tight_layout()",
"More advanced usage\nNow we plot magnetometer data as topomap at a single time point: 100 ms\npost-stimulus, add channel labels, title and adjust plot margins:",
"evoked.plot_topomap(0.1, ch_type='mag', show_names=True, colorbar=False,\n size=6, res=128, title='Auditory response',\n time_unit='s')\nplt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.88)",
"We can also highlight specific channels by adding a mask, to e.g. mark\nchannels exceeding a threshold at a given time:",
"# Define a threshold and create the mask\nmask = evoked.data > 1e-13\n\n# Select times and plot\ntimes = (0.09, 0.1, 0.11)\nevoked.plot_topomap(times, ch_type='mag', time_unit='s', mask=mask,\n mask_params=dict(markersize=10, markerfacecolor='y'))",
"Or by manually picking the channels to highlight at different times:",
"times = (0.09, 0.1, 0.11)\n_times = ((np.abs(evoked.times - t)).argmin() for t in times)\nsignificant_channels = [\n ('MEG 0231', 'MEG 1611', 'MEG 1621', 'MEG 1631', 'MEG 1811'),\n ('MEG 2411', 'MEG 2421'),\n ('MEG 1621')]\n_channels = [np.in1d(evoked.ch_names, ch) for ch in significant_channels]\n\nmask = np.zeros(evoked.data.shape, dtype='bool')\nfor _chs, _time in zip(_channels, _times):\n mask[_chs, _time] = True\n\nevoked.plot_topomap(times, ch_type='mag', time_unit='s', mask=mask,\n mask_params=dict(markersize=10, markerfacecolor='y'))",
"Animating the topomap\nInstead of using a still image we can plot magnetometer data as an animation,\nwhich animates properly only in matplotlib interactive mode.",
"times = np.arange(0.05, 0.151, 0.01)\nfig, anim = evoked.animate_topomap(\n times=times, ch_type='mag', frame_rate=2, time_unit='s', blit=False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
blua/deep-learning
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
mit
|
[
"Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first proposed in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.",
"def model_inputs(real_dim, z_dim):\n inputs_real = \n inputs_z = \n \n return inputs_real, inputs_z",
"Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = \\max(\\alpha x, x)\n$$\nTanh Output\nThe generator has been found to perform best with a $\\tanh$ output. 
This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.\n\nExercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.",
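The leaky ReLU formula above is easy to sketch framework-free; in the TF1 exercise that follows, the same idea is a single call along the lines of tf.maximum(alpha * h1, h1). A minimal NumPy version for intuition (the sample inputs are made up):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): identity for x >= 0, small slope alpha for x < 0
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
assert np.allclose(leaky_relu(x), [-0.02, -0.005, 0.0, 1.0, 3.0])
```

Unlike a plain ReLU, negative inputs keep a small non-zero gradient (alpha), which is what lets gradients flow backwards through the layer unimpeded.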
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out : tanh output of the generator\n '''\n with tf.variable_scope # finish this\n # Hidden layer\n h1 = \n # Leaky ReLU\n h1 = \n \n # Logits and tanh output\n logits = \n out = \n \n return out",
"Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.",
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope # finish this\n # Hidden layer\n h1 =\n # Leaky ReLU\n h1 =\n \n logits =\n out =\n \n return out, logits",
"Hyperparameters",
"# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1",
"Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.",
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = \n\n# Generator network here\ng_model = \n# g_model is the generator output\n\n# Discriminator network here\nd_model_real, d_logits_real = \nd_model_fake, d_logits_fake = ",
"Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.\n\nExercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. 
Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.",
"# Calculate losses\nd_loss_real = \n\nd_loss_fake = \n\nd_loss = \n\ng_loss = ",
"Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates that network's variables separately.",
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = \ng_vars = \nd_vars = \n\nd_train_opt = \ng_train_opt = ",
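The name-prefix split described above can be sketched with plain strings standing in for TF variables; these names are hypothetical (real name strings would come from tf.trainable_variables()):

```python
# Hypothetical variable names as the two variable scopes would produce them.
t_var_names = ['generator/hidden/w:0', 'generator/hidden/b:0',
               'discriminator/hidden/w:0', 'discriminator/hidden/b:0']

# Keep variables whose names start with the scope prefix.
g_var_names = [name for name in t_var_names if name.startswith('generator')]
d_var_names = [name for name in t_var_names if name.startswith('discriminator')]
```

With real variables, the same comprehension filters on var.name, and each resulting list is passed as var_list to the corresponding optimizer's minimize call.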
"Training",
"batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Training loss\nHere we'll check out the training losses for the generator and discriminator.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_ = view_samples(-1, samples)",
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jphall663/GWU_data_mining
|
02_analytical_data_prep/src/py_part_2_discretization.ipynb
|
apache-2.0
|
[
"License\n\nCopyright (C) 2017 J. Patrick Hall, jphall@gwu.edu\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nSimple discretization - Pandas and numpy\nImports",
"import pandas as pd # pandas for handling mixed data sets \nimport numpy as np # numpy for basic math and matrix operations",
"Create sample data set",
"scratch_df = pd.DataFrame({'x1': pd.Series(np.random.randn(20))}) \n\nscratch_df",
"Discretize",
"scratch_df['x1_discrete'] = pd.DataFrame(pd.cut(scratch_df['x1'], 5))\nscratch_df"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jbusecke/xarrayutils
|
docs/vertical_coords.ipynb
|
mit
|
[
"Navigating between various vertical coordinates in the ocean\nIn oceanography it is often required to transform the vertical coordinate system of data. The large scale circulation for example, follows surfaces of constant density rather than constant depth levels, thus certain analyses require a coordinate transformation.\nThe general process of vertical coordinates transformation as discussed here, consists of two steps: Regridding and remapping.\nRegridding: The process of definining the target depth coordinates (e.g. the depth defined by certain density levels or e.g. just a different depth spacing)\nRemapping: Converting a data profile from the source depth profile to the target depth profile.\nThere are several methods to perform both regridding and remapping implemented in xarrayutils.vertical_coordinates. In the following we will see how to implement different combinations of method for both processes. For general applications the conservative conservative_remap function is recommended for remapping, due to the property of conserving total tracer mass (see examples below).\n\nAt the moment these functions require very explicit input (dimension names etc), which will be simplified with a wrapper function in the future.",
"%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport xarray as xr\nimport matplotlib.pyplot as plt",
"Dont forget to install\nconda install gcsfs zarr fsspec\nfor the docs\nWe are going to investigate the coordinate transformations using data from an indealized high resolution model run produced by Dhruv Balwada(to read more about the data see his paper).",
"import fsspec\nds = xr.open_zarr(fsspec.get_mapper('gcs://pangeo-data/balwada/channel_ridge_resolutions/20km/tracer_10day_snap'), consolidated=True)\nds = ds.isel(time=slice(0, 20))\n# ds.to_zarr('offline_backup.zarr')\n# ds = xr.open_zarr('offline_backup.zarr')",
"Adjusting the vertical coordinate orientation\nAll the following steps crucially assume that the depth values are increasing with the depth dimension. The values can be negative but then have progress towards less negative values as you follow the logical depth index. \nThis dataset has negative depth values that decrease. The simplest fix is to flip the sign of all depth values.",
"for dim in ['Z', 'Zp1', 'Zl', 'Zu']:\n ds.coords[dim] = -ds[dim]\n\nds.Z",
"Another issue with this dataset is that in the surface, the profile is sometimes unstable (density does not strictly increase with depth). This will cause issues when interpolating, and thus for now we use a time mean of the dataset between ~200-1500m depth.",
"# check monotonicity for all profiles\nds = ds.mean('time')\nds = ds.isel(YC=slice(10,-10), Z=slice(28,60), Zp1=slice(28,61))\n# since we operate on the depth dimension we convert it to a single chunk to avoid doing this all the time later\nds = ds.chunk({'Z':-1})\n\nassert (ds.T.diff('Z') < 0).all(['Z']).all()",
"Transforming to a different (spatially uniform) depth coordinate system\nIn this case we will transform the dataset to different depth coordinates (we leave out the regridding step and manually provide a new depth grid). This example might be useful when you want to convert different depth grids (e.g. from several observational products) into a uniform grid.\nUsing linear interpolation as remapping\nFor the simplest case (using linear interpolation for both the regridding and remapping), xarray offers all the necessary tools built in.",
"# define a new depth array\nz_new = np.arange(10,2000, 20)\n\nds_z_new = ds.interp(Z=z_new)\nds_z_new.Z",
"The depth coordinate Z is now regularly spaced instead of the surface refined resolution of the original dataset, we 'remapped' all data onto a new vertical grid using linear interpolation. Lets compare the actual data.",
"ds.PTRACER01.isel(XC=40, YC=40).plot(y='Z', yincrease=False)\nds_z_new.PTRACER01.isel(XC=40, YC=40).plot(ls='--', y='Z', yincrease=False)",
"Visually that looks pretty good and these results might be sufficient for certain applications. The biggest downside of this approach is that the total amount of tracer is not conserved.",
"dz_original = ds.drF #vertical cell thickness of the model grid\ntracer_intz_original = (ds.PTRACER01 * dz_original).sum('Z')\ndz_new = 20 #This is easy to infer since the grid is uniformly spaced\ntracer_intz_new = (ds_z_new.PTRACER01 * dz_new).sum('Z')\n\nprint(tracer_intz_original.isel(XC=40, YC=40).load())\nprint(tracer_intz_new.isel(XC=40, YC=40).load())",
"The difference might seem small but for certain applications (e.g. budget reconstruction), this is not acceptable.\nHowever, we can do better by using the conservative_remap function:\nUsing conservative remapping\nFor this method we need not only the cell centers, but instead need to provide the depths of the vertical bounding surfaces.",
"from xarrayutils.vertical_coordinates import conservative_remap\n\n# the conservative remapping needs information about the upper and lower bounds of the source and target cells.\nbounds_original = ds.Zp1 # depth position of vertical bounding surface\nbounds_new = xr.DataArray(np.arange(0,2020, 20), dims=['new_bounds'])\n\n# now we can remap the tracer data again (at the moment the depth dimensions have to be explicitly defined).\nds_z_cons_new = conservative_remap(ds.PTRACER01,bounds_original, bounds_new,\n z_dim='Z', z_bnd_dim='Zp1', z_bnd_dim_target='new_bounds', mask=True) # the associated depth dimensions for each array\n# replace the new depth dimension values with the appropriate depth \nds_z_cons_new.coords['remapped'] = xr.DataArray(z_new, coords=[('remapped', z_new)])\n\nds_z_cons_new\n\nds.PTRACER01.isel(XC=40, YC=40).plot(y='Z', yincrease=False)\nds_z_cons_new.isel(XC=40, YC=40).plot(ls='--', y='remapped', yincrease=False)",
"conservative_remap takes into account every overlap between source and target cells. See for instance the uppermost value of ds_z_cons_new, the low value is due to the fact that there is a traceramount in the upper half of the first source cell, which is then distributed over the much larger targer cell. This ensures that the full tracer amount is conserved to floating point precision.",
"tracer_intz_cons_new = (ds_z_cons_new * dz_new).sum('remapped')\nnp.isclose(tracer_intz_original.isel(XC=40, YC=40).load(), tracer_intz_cons_new.isel(XC=40, YC=40).load())",
"This is in fact true for every grid position:",
"np.isclose(tracer_intz_original, tracer_intz_cons_new).all()",
"Ok this is nice, but it really gets interesting when we define our new depth coordinates \nSwitching to potential temperature coordinates using only linear interpolation\nRegridding using linear interpolation\nThe xarray internals can only help us if we want to interpolate on values of a dimension (1D), e.g. if we want to know the temperature as a function of a new depth. For the case of temperature coordinates, we aim to do the opposite: Find the depth for a given temperature value. This is not possible with xarray at the moment, but can be achieved using linear_interpolation_regrid.\n\nThe values used to find the new cell depths have to be monotonic! Currently there is no check implemented for this but non-monotonic fields can lead to undesired behaviour. See above for how to check for monotonicity.",
"from xarrayutils.vertical_coordinates import linear_interpolation_regrid\n\nt_vals = np.arange(0.6,3, 0.01)\ntemperature_values = xr.DataArray(t_vals, coords=[('t', t_vals)]) # define the new temperature grid\n\nz_temp_coord = linear_interpolation_regrid(ds.Z, ds.T, temperature_values, target_value_dim='t') \n\nplt.subplot(2,1,1)\nds.T.isel(XC=40).plot(x='YC', yincrease=False)\nds.T.isel(XC=40).plot.contour(levels=[2], colors='w',x='YC', yincrease=False)\nplt.subplot(2,1,2)\nz_temp_coord.isel(XC=40).plot(x='YC')\nplt.axhline(2, color='w')",
"As you can see in this example, the line of constant temperature is moving deeper with increasing y, and that is reflected in the depth values along a constant depth coordinate in the regridded values. Now we can remap other data values on the corresponding depths. The simplest method is again linear interpolation:\nUsing linear interpolation as remapping",
"from xarrayutils.vertical_coordinates import linear_interpolation_remap\n\n# we cant have nans, so just fill them with an out of bounds value\n# z_temp_coord = z_temp_coord.fillna(1e8) # this should be fixed now?\n\nds_temp_linear = linear_interpolation_remap(ds.Z, ds.T, z_temp_coord)\n# this requires me to downgrade dask, should be solved soon (see here https://github.com/pydata/xarray/pull/3660)\nds_temp_linear\n\nds_temp_linear.isel(XC=40).plot(x='YC', robust=True)",
"As expected, when we remap the temperature field itself, we get a matching horizontal stratification. Now lets do something more interesting and look at the tracer field on a constant temperature surface of 2 deg:",
"ds_temp_linear = linear_interpolation_remap(ds.Z, ds.PTRACER01, z_temp_coord)\nds_temp_linear.sel(remapped=2, method='nearest').plot(robust=True)",
"Pretty neat, but again there is no guarantee that the total tracer content is preserved with linear interpolation, but with a few modifications we can implement the conservative remapping here as well.\nUsing conservative remapping",
"# we need to capture all the tracer cells, so if we dont specify our temperature range covering the Tracer min and max, the total tracer amount will not be conserved.\nt_vals = np.hstack([ds.T.min().load().data[np.newaxis], np.arange(0.6,3, 0.01), ds.T.max().load().data[np.newaxis]])\nt_vals\n# for now the results are ordered by the bin values. Since the temperature is decreasing, we need to flip the logical axis.\n# We could also do that after the regridding, but this seems more elegant.\nt_vals = np.flip(t_vals)\n\ntemperature_values = xr.DataArray(t_vals, coords=[('t', t_vals)]) # define the new temperature grid\ntemperature_values\n\n# # additionally to covering the full range of `ds.T`, we need to provide the bounds of `z` to the function.\nz_temp_bounds = linear_interpolation_regrid(ds.Z, ds.T, temperature_values, z_bounds=ds.Zp1 ,target_value_dim='t', z_bounds_dim='Zp1')\nz_temp_bounds.isel(XC=40, YC=40).load()",
"And now we can use these cell bounding values exactly like we did before.",
"# now we can remap the tracer data again (at the moment the depth dimensions have to be explicitly defined).\nds_temp_cons = conservative_remap(ds.PTRACER01,bounds_original, z_temp_bounds,\n z_dim='Z', z_bnd_dim='Zp1', z_bnd_dim_target='regridded', mask=True) # the associated depth dimensions for each array\n# # replace the new depth dimension values with the appropriate depth (here the middle of the temperature cell bounds)\nt_vals = z_temp_bounds.coords['regridded'].data\nt_vals = 0.5 * (t_vals[1:] + t_vals[0:-1])\n\nds_temp_cons.coords['remapped'] = xr.DataArray(t_vals, coords=[('remapped', t_vals)])\n\nplt.figure(figsize=[20,5])\nplt.subplot(1,3,1)\nds_temp_linear.sel(remapped=2, method='nearest').plot(robust=True)\nplt.subplot(1,3,2)\nds_temp_cons.sel(remapped=2, method='nearest').plot(robust=True)\nplt.subplot(1,3,3)\n(ds_temp_cons.sel(remapped=2, method='nearest')-ds_temp_linear.sel(remapped=2, method='nearest')).plot(robust=True)",
"And most importantly, the vertical tracer content is again conserved:",
"dz_remapped = z_temp_bounds.diff('regridded').rename({'regridded':'remapped'})\ndz_remapped.coords['remapped'] = ds_temp_cons.coords['remapped']\n\ntracer_intz_remapped_temp = (ds_temp_cons*dz_remapped).sum('remapped')\n\nxr.testing.assert_allclose(tracer_intz_original, tracer_intz_remapped_temp)",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
PySEE/PyRankine
|
notebook/RankineCycle81-82.ipynb
|
mit
|
[
"The Examples Rankine Cycle 8.1,8.2\nMichael J. Moran, Howard N. Shapiro, Daisie D. Boettner, Margaret B. Bailey. Fundamentals of Engineering Thermodynamics(7th Edition). John Wiley & Sons, Inc. 2011\nChapter 8 : Vapor Power Systems:\n1 EXAMPLE 8.1 Analyzing an Ideal Rankine Cycle P438\n2 EXAMPLE 8.2 Analyzing a Rankine Cycle with Irreversibilities P444\n1 Example 8.1: Analyzing an Ideal Rankine Cycle\nSteam is the working fluid in an ideal Rankine cycle. \nSaturated vapor enters the turbine at 8.0 MPa and saturated liquid exits the condenser at a pressure of 0.008 MPa. \nThe net power output of the cycle is 100 MW.\n\n\nProcess 1–2: Isentropic expansion of the working fluid through the turbine from saturated vapor at state 1 to the condenser pressure.\nProcess 2–3: Heat transfer from the working fluid as it flows at constant pressure\nthrough the condenser with saturated liquid at state 3.\nProcess 3–4: Isentropic compression in the pump to state 4 in the compressed liquid region.\nProcess 4–1: Heat transfer to the working fluid as it flows at constant pressure through the boiler to complete the cycle.\n\nDetermine for the cycle\n(a) the thermal efficiency,\n(b) the back work ratio, \n(c) the mass flow rate of the steam,in kg/h,\n(d) the rate of heat transfer, Qin, into the working fluid as it passes through the boiler, in MW,\n(e) the rate of heat transfer, Qout, from the condensing steam as it passes through the condenser, in MW,\n(f) the mass flow rate of the condenser cooling water, in kg/h, if cooling water enters the condenser at 15°C and exits at 35°C.\nEngineering Model:\n\n\n1 Each component of the cycle is analyzed as a control volume at steady state. The control volumes are shown on the accompanying sketch by dashed lines.\n\n\n2 All processes of the working fluid are internally reversible.\n\n\n3 The turbine and pump operate adiabatically.\n\n\n4 Kinetic and potential energy effects are negligible.\n\n\n5 Saturated vapor enters the turbine. 
Condensate exits the condenser as saturated liquid.\n\n\nTo begin the analysis, we fix each of the principal states(1,2,3,4) located on the accompanying schematic and T–s diagrams.\n1.1 States",
"from seuif97 import *\n\n# State 1\np1 = 8.0 # in MPa\nt1 = px2t(p1, 1)\nh1 = px2h(p1, 1) # h1 = 2758.0 From table A-3 kj/kg\ns1 = px2s(p1, 1) # s1 = 5.7432 From table A-3 kj/kg.k\n\n# State 2 ,p2=0.008\np2 = 0.008\ns2 = s1\nt2 = ps2t(p2, s2)\nh2 = ps2h(p2, s2)\n\n# State 3 is saturated liquid at 0.008 MPa\np3 = 0.008\nt3 = px2t(p3, 0)\nh3 = px2h(p3, 0) # kj/kg\ns3 = px2s(p3, 0)\n\n# State 4\np4 = p1\ns4 = s3\nh4 = ps2h(p4, s4)\nt4 = ps2h(p4, s4)",
"1..2 Analysis the Cycle\n(a) The thermal efficiency\nThe net power developed by the cycle is\n$\\dot{W}_{cycle}=\\dot{W}_t-\\dot{W}_p$\nMass and energy rate balances for control volumes around the turbine and pump give,respectively\n$\\frac{\\dot{W}_t}{\\dot{m}}=h_1-h_2$\n$\\frac{\\dot{W}_p}{\\dot{m}}=h_4-h_3$\nwhere $\\dot{m}$ is the mass flow rate of the steam. The rate of heat transfer to the working fluid as it passes through the boiler is determined using mass and energy rate balances as\n$\\frac{\\dot{Q}_{in}}{\\dot{m}}=h_1-h_4$\nThe thermal efficiency is then\n$\\eta=\\frac{\\dot{W}t-\\dot{W}_p}{\\dot{Q}{in}}=\\frac{(h_1-h_2)-(h_4-h_3)}{h_1-h_4}$",
"# Part(a)\n# Mass and energy rate balances for control volumes\n# around the turbine and pump give, respectively\n\n# turbine\nwtdot = h1 - h2\n# pump\nwpdot = h4-h3\n\n# The rate of heat transfer to the working fluid as it passes\n# through the boiler is determined using mass and energy rate balances as\nqindot = h1-h4\n\n# thermal efficiency\neta = (wtdot-wpdot)/qindot\n\n# Result for part a\nprint('(a) The thermal efficiency for the cycle is {:>.2f}%'.format(eta*100))",
"(b) The back work ratio is\n$bwr=\\frac{\\dot{W}_p}{\\dot{W}_t}=\\frac{h_4-h_3}{h_1-h_2}$\n(c) The mass flow rate of the steam can be obtained from the expression for the net power given in part (a)\n$\\dot{m}=\\frac{\\dot{W}_{cycle}}{(h_1-h_2)-(h_4-h_3)}$\n(d) With the expression for $\\dot{Q}_{in}$ in from part (a) and previously determined specific enthalpy values\n$\\dot{Q}_{in}=\\dot{m}(h_1-h_4)$\n(e) Mass and energy rate balances applied to a control volume enclosing the steam side of the condenser give\n$\\dot{Q}_{out}=\\dot{m}(h_2-h_3)$\n(f) Taking a control volume around the condenser, the mass and energy rate balances give at steady state\n$\\require{cancel} 0=\\dot{\\cancel{Q}}^{0}{cv}-\\dot{\\cancel{w}}^{0}{cv}+\\dot{m}{cw}(h{cw,in}-h_{cw,out})+\\dot{m}(h_2-h_3)$\nwhere $\\dot{m}{cw}$ is the mass flow rate of the cooling water. Solving for $\\dot{m}{cw}$\n$\\dot{m}{cw}=\\frac{\\dot{m}(h_2-h_3)}{h{cw,in}-h_{cw,out}}$",
"# Part(b)\n# back work ratio:bwr, defined as the ratio of the pump work input to the work\n# developed by the turbine.\nbwr = wpdot/wtdot #\n\n# Result\nprint('(b) The back work ratio is {:>.2f}%'.format(bwr*100))\n\n# Part(c)\nWcycledot = 100.00 # the net power output of the cycle in MW\nmdot = (Wcycledot*10**3*3600)/((h1-h2)-(h4-h3)) # mass flow rate in kg/h\n\n# Result\nprint('(c) The mass flow rate of the steam is {:>.2f}kg/h'.format(mdot))\n\n# Part(d)\nQindot = mdot*qindot/(3600*10**3) # in MW\n\n# Results\nprint('(d) The rate of heat transfer Qindot into the working fluid as' +\n ' it passes through the boiler is {:>.2f}MW'.format(Qindot))\n\n# Part(e)\nQoutdot = mdot*(h2-h3)/(3600*10**3) # in MW\n\n# Results\nprint('(e) The rate of heat transfer Qoutdot from the condensing steam ' +\n 'as it passes through the condenser is {:>.2f}MW.'.format(Qoutdot))\n\n# Part(f)\n\n# Given:\ntcwin = 15\ntcwout = 35\n\nhcwout = tx2h(tcwout, 0) # From table A-2,hcwout= 146.68 kj/kg\n\nhcwin = tx2h(tcwin, 0) # hcwin 62.99\nmcwdot = (Qoutdot*10**3*3600)/(hcwout-hcwin) # in kg/h\n\n# Results\nprint('(f) The mass flow rate of the condenser cooling water is {:>.2f}kg/h.'.format(mcwdot))",
"2 Example8.2 :Analyzing a Rankine Cycle with Irreversibilities\nReconsider the vapor power cycle of Example 8.1, but include in the analysis that the turbine and the pump each have an isentropic efficiency of 85%. \nDetermine for the modified cycle \n\n\n(a) the thermal efficiency, \n\n\n(b) the mass flow rate of steam, in kg/h, for a net power output of 100MW, \n\n\n(c) the rate of heat transfer $\\dot{Q}_{in}$ in into the working fluid as it passes through the boiler, in MW, \n\n\n(d) the rate of heat transfer $\\dot{Q}_{out}$ out from the condensing steam as it passes through the condenser, in MW, \n\n\n(e) the mass flow rate of the condenser cooling water, in kg/h, if cooling water enters the condenser at 15°C and exits as 35°C.\n\n\nSOLUTION\nKnown: A vapor power cycle operates with steam as the working fluid. The turbine and pump both have efficiencies of 85%.\nFind: Determine the thermal efficiency, the mass flow rate, in kg/h, the rate of heat transfer to the working fluid as it passes through the boiler, in MW, the heat transfer rate from the condensing steam as it passes through thecondenser, in MW, and the mass flow rate of the condenser cooling water, in kg/h.\nEngineering Model:\n\n\nEach component of the cycle is analyzed as a control volume at steady state.\n\n\nThe working fluid passes through the boiler and condenser at constant pressure. Saturated vapor enters the turbine. The condensate is saturated at the condenser exit.\n\n\nThe turbine and pump each operate adiabatically with an efficiency of 85%.\n\n\nKinetic and potential energy effects are negligible\n\n\n\nAnalysis:\nOwing to the presence of irreversibilities during the expansion of the steam through the turbine, there is an increase in specific entropy from turbine inlet to exit, as shown on the accompanying T–s diagram. Similarly,there is an increase in specific entropy from pump inlet to exit.\nLet us begin the analysis by fixing each of the principal states.\n1.2 States",
"from seuif97 import *\n\n# State 1\np1 = 8.0 # in MPa\nt1 =px2t(p1,1) \nh1=px2h(p1,1) # h1 = 2758.0 From table A-3 kj/kg\ns1=px2s(p1,1) # s1 = 5.7432 From table A-3 kj/kg.k\n\n# State 2 ,p2=0.008\np2=0.008\ns2s = s1\nh2s=ps2h(p2,s2s)\nt2s=ps2t(p2,s2s)\netat_t=0.85\nh2=h1-etat_t*(h1-h2s)\nt2 =ph2t(p2,h2) \ns2 =ph2s(p2,h2) \n\n# State 3 is saturated liquid at 0.008 MPa\np3 = 0.008 \nt3=px2t(p3,0) \nh3 =px2h(p3,0) # kj/kg\ns3 =px2s(p3,0) \n\n#State 4 \np4 = p1\ns4s=s3\nh4s =ps2h(p4,s4s)\nt4s =ps2t(p4,s4s) \netat_p=0.85\nh4=h3+(h4s-h3)/etat_p\nt4 =ph2t(p4,h4) \ns4 =ph2s(p4,h4)",
"2.2 Analysis the Cycle",
"# Part(a)\neta = ((h1-h2)-(h4-h3))/(h1-h4) # thermal efficiency\n\n# Result for part (a)\nprint('Thermal efficiency is: {:>.2f}%'.format(100*eta))\n\n# Part(b)\nWcycledot = 100 # given,a net power output of 100 MW\n# Calculations\nmdot = (Wcycledot*(10**3)*3600)/((h1-h2)-(h4-h3))\n# Result for part (b)\nprint('The mass flow rate of steam for a net power output of 100 MW is {:>.2f}kg/h'.format(mdot))\n\n# Part(c)\nQindot = mdot*(h1-h4)/(3600 * 10**3)\n# Result\nprint('The rate of heat transfer Qindot into the working fluid as it passes through the boiler, is {:>.2f}MW.'.format(Qindot))\n\n# Part(d)\nQoutdot = mdot*(h2-h3)/(3600*10**3)\n# Result\nprint('The rate of heat transfer Qoutdot from the condensing steam as it passes through the condenser, is {:>.2f}MW.'.format(Qoutdot))\n\n# Part(e)\ntcwin = 15\ntcwout = 35\nhcwout = tx2h(tcwout, 0) # From table A-2,hcwout= 146.68 kj/kg\nhcwin = tx2h(tcwin, 0) # hcwin 62.99\nmcwdot = (Qoutdot*10**3*3600)/(hcwout-hcwin)\n# Result\nprint('The mass flow rate of the condenser cooling water, is {:>.2f}kg/h'.format(mcwdot))",
"1.2.3 T-S Diagram",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nplt.figure(figsize=(10.0,5.0))\n\n# saturated vapor and liquid entropy lines \nnpt = np.linspace(10,647.096-273.15,200) # range of temperatures\nsvap = [s for s in [tx2s(t, 1) for t in npt]]\nsliq = [s for s in [tx2s(t, 0) for t in npt]]\nplt.plot(svap, npt, 'r-')\nplt.plot(sliq, npt, 'b-')\n\nt=[t1,t2s,t3,t4s+15]\ns=[s1,s2s,s3,s4s]\n\n# point 5\nt.append(px2t(p1,0))\ns.append(px2s(p1,0))\n\nt.append(t1)\ns.append(s1)\n\nplt.plot(s, t, 'ko-')\n\ntb=[t1,t2]\nsb=[s1,s2]\nplt.plot(sb, tb, 'k--')\ntist=[t2,t2s]\nsist=[s2,s2s]\nplt.plot(sist, tist, 'ko-')\n\nsp=[s3,s3+0.3]\ntp=[t3,ps2t(p4,s3+0.3)+15]\nplt.plot(sp, tp, 'ko--')\n\ntist=[t2,t2s]\nsist=[s2,px2s(p2,1)]\nplt.plot(sist, tist, 'g-')\n\nplt.xlabel('Entropy (kJ/(kg K)')\nplt.ylabel('Temperature (°C)')\nplt.grid()",
"1.3 Discussion of Examples 8.1 and 8.2\nThe effect of irreversibilities within the turbine and pump can be gauged by comparing values from Example 8.2 with their counterparts in Example 8.1. In Example 8.2,\nthe turbine work per unit of mass is less and the pump work per unit of mass is greater than in Example 8.1, as can be confirmed using data from these examples.\nThe thermal efficiency in Example 8.2 is less than in the ideal case of Example 8.1 \nFor a fixed net power output (100 MW), the smaller net work output per unit mass in Example 8.2 dictates a greater mass flow rate of steam than in Example 8.1. The magnitude of the heat transfer to cooling water is also greater in Example 8.2 than in Example 8.1; consequently, a greater mass flow rate of cooling water is required."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ocefpaf/intro_python_notebooks
|
02-NumPy.ipynb
|
mit
|
[
"Aula 02 - NumPy\nObjetivos\n\nApresentar o objeto array de N-dimensões\nGuia de funções sofisticadas (broadcasting)\nTour nos sub-módulos para: Álgebra Linear, transformada de Fourier, números aleatórios, etc\nUso para integrar código C/C++ e Fortran",
"a = [0.1, 0.25, 0.03]\nb = [400, 5000, 6e4]\nc = a + b\nc\n\n[e1+e2 for e1, e2 in zip(a, b)]\n\nimport math\n\nmath.tanh(c)\n\n[math.tanh(e) for e in c]",
"Python é uma linguagem excelente para \"propósitos gerais\", com um sintaxe\nclara elegível, tipos de dados (data types) funcionais (strings, lists, sets,\ndictionaries, etc) e uma biblioteca padrão vasta.\nEntretanto não é um linguagem desenhada especificamente para matemática e\ncomputação científica. Não há forma fácil de representar conjunto de dados\nmultidimensionais nem ferramentas para álgebra linear e manipulação de\nmatrizes.\n(Os blocos essenciais para quase todos os problemas de computação\ncientífica.)\nPor essas razões que o NumPy existe. Em geral importamos o NumPy como np:",
"import numpy as np",
"NumPy, em seu núcleo, fornece apenas um objeto array.\n<img height=\"300\" src=\"files/anatomyarray.png\" >",
"lst = [10, 20, 30, 40]\n\narr = np.array([10, 20, 30, 40])\n\nprint(lst)\n\nprint(arr)\n\nprint(lst[0], arr[0])\nprint(lst[-1], arr[-1])\nprint(lst[2:], arr[2:])",
"A diferença entre list e array é que a arrays são homógenas!",
"lst[-1] = 'Um string'\nlst\n\narr[-1] = 'Um string'\narr\n\narr.dtype\n\narr[-1] = 1.234\narr",
"Voltando às nossas lista a e b",
"a = [0.1, 0.25, 0.03]\nb = [400, 5000, 6e4]\n\na = np.array(a)\nb = np.array(b)\nc = a + b\nc\n\nnp.tanh([a, b])\n\na * b\n\nnp.dot(a, b)\n\nnp.matrix(a) * np.matrix(b).T",
"Data types\n\nbool\nuint8\nint (Em Python2 é machine dependent)\nint8\nint32\nint64\nfloat (Sempre é machine dependent Matlab double)\nfloat32\nfloat64\n\n(http://docs.scipy.org/doc/numpy/user/basics.types.html.)\nCuriosidades...",
"np.array(255, dtype=np.uint8)\n\nfloat_info = '{finfo.dtype}: max={finfo.max:<18}, approx decimal precision={finfo.precision};'\nprint(float_info.format(finfo=np.finfo(np.float32)))\nprint(float_info.format(finfo=np.finfo(np.float64)))",
"https://en.wikipedia.org/wiki/Floating_point\nCriando arrays:",
"np.zeros(3, dtype=int)\n\nnp.zeros(5, dtype=float)\n\nnp.ones(5, dtype=complex)\n\na = np.empty([3, 3])\na\n\na.fill(np.NaN)\na",
"Métodos das arrays",
"a = np.array([[1, 2, 3], [1, 2, 3]])\na\n\nprint('Tipo de dados : {}'.format(a.dtype))\nprint('Número total de elementos : {}'.format(a.size))\nprint('Número de dimensões : {}'.format(a.ndim))\nprint('Forma : {}'.format(a.shape))\nprint('Memória em bytes : {}'.format(a.nbytes))",
"Outros métodos matemáticos/estatísticos úteis:",
"print('Máximo e mínimo : {}'.format(a.min(), a.max()))\nprint('Some é produto de todos os elementos : {}'.format(a.sum(), a.prod()))\nprint('Média e desvio padrão : {}'.format(a.mean(), a.std()))\n\na.mean(axis=0)\n\na.mean(axis=1)",
"Métodos que auxiliam na criação de arrays.",
"np.zeros(a.shape) == np.zeros_like(a)\n\nnp.arange(1, 2, 0.2)\n\na = np.linspace(1, 10, 5) # Olhe também `np.logspace`\na",
"5 amostras aleatórias tiradas da distribuição normal de média 0 e variância 1.",
"np.random.randn(5)",
"5 amostras aleatórias tiradas da distribuição normal de média 10 e variância 3.",
"np.random.normal(10, 3, 5)",
"Máscara condicional",
"mask = np.where(a <= 5) # Para quem ainda vive em MatlabTM world.\nmask\n\nmask = a <= 5 # Melhor não?\nmask\n\na[mask]",
"Temos também as masked_arrays",
"import numpy.ma as ma\n\nma.masked_array(a, mask)",
"Salvando e carregando novamente os dados:\n\nnp.save\nnp.savez\nnp.load",
"a = np.random.rand(10)\n\nb = np.linspace(0, 10, 10)\n\nnp.save('arquivo_a', a)\n\nnp.save('arquivo_b', b)\n\nnp.savez('arquivo_ab', a=a, b=b)\n\n\n%%bash\n\nls *.np*\n\nc = np.load('arquivo_ab.npz')\n\nc.files",
"Operações: +, -, , /, //, *, %",
"c['b'] // c['a']\n\na = np.array([1, 2, 3])\na **= 2\na",
"Manipulando dados reais\nVamos utilizar os dados do programa de observação do oceano Pirata.\nhttp://www.goosbrasil.org/pirata/dados/",
"np.loadtxt(\"./data/dados_pirata.csv\", delimiter=',')\n\n!head -3 ./data/dados_pirata.csv\n\ndata = np.loadtxt(\"./data/dados_pirata.csv\", skiprows=1, usecols=range(2, 16), delimiter=',')\n\ndata.shape, data.dtype\n\ndata[data == -99999.] = np.NaN\ndata\n\ndata.max(), data.min()\n\nnp.nanmax(data), np.nanmin(data)\n\nnp.nanargmax(data), np.nanargmin(data)\n\nnp.unravel_index(np.nanargmax(data), data.shape), np.unravel_index(np.nanargmin(data), data.shape)\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\n\nax.plot(data[:, 0])\nax.plot(data[:, -1])",
"Dados com máscara (Masked arrays)",
"plt.pcolormesh(data)\n\nimport numpy.ma as ma\n\ndata = ma.masked_invalid(data)\n\nplt.pcolormesh(np.flipud(data.T))\nplt.colorbar()\n\ndata.max(), data.min(), data.mean()\n\nz = [1, 10, 100, 120, 13, 140, 180, 20, 300, 40,5, 500, 60, 80]\n\nfig, ax = plt.subplots()\nax.plot(data[42, :], z, 'ko')\nax.invert_yaxis()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jupyter/docker-demo-images
|
notebooks/Welcome to Spark with Python.ipynb
|
bsd-3-clause
|
[
"Welcome to Apache Spark with Python\n\nApache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. \n- http://spark.apache.org/\n\nIn this notebook, we'll train two classifiers to predict survivors in the Titanic dataset. We'll use this classic machine learning problem as a brief introduction to using Apache Spark local mode in a notebook.",
"import pyspark \nfrom pyspark.mllib.regression import LabeledPoint\nfrom pyspark.mllib.classification import LogisticRegressionWithSGD\nfrom pyspark.mllib.tree import DecisionTree",
"First we create a SparkContext, the main object in the Spark API. This call may take a few seconds to return as it fires up a JVM under the covers.",
"sc = pyspark.SparkContext()",
"Sample the data\nWe point the context at a CSV file on disk. The result is a RDD, not the content of the file. This is a Spark transformation.",
"raw_rdd = sc.textFile(\"datasets/COUNT/titanic.csv\")",
"We query RDD for the number of lines in the file. The call here causes the file to be read and the result computed. This is a Spark action.",
"raw_rdd.count()",
"We query for the first five rows of the RDD. Even though the data is small, we shouldn't get into the habit of pulling the entire dataset into the notebook. Many datasets that we might want to work with using Spark will be much too large to fit in memory of a single machine.",
"raw_rdd.take(5)",
"We see a header row followed by a set of data rows. We filter out the header to define a new RDD containing only the data rows.",
"header = raw_rdd.first()\ndata_rdd = raw_rdd.filter(lambda line: line != header)",
"We take a random sample of the data rows to better understand the possible values.",
"data_rdd.takeSample(False, 5, 0)",
"We see that the first value in every row is a passenger number. The next three values are the passenger attributes we might use to predict passenger survival: ticket class, age group, and gender. The final value is the survival ground truth.\nCreate labeled points (i.e., feature vectors and ground truth)\nNow we define a function to turn the passenger attributions into structured LabeledPoint objects.",
"def row_to_labeled_point(line):\n '''\n Builds a LabelPoint consisting of:\n \n survival (truth): 0=no, 1=yes\n ticket class: 0=1st class, 1=2nd class, 2=3rd class\n age group: 0=child, 1=adults\n gender: 0=man, 1=woman\n '''\n passenger_id, klass, age, sex, survived = [segs.strip('\"') for segs in line.split(',')]\n klass = int(klass[0]) - 1\n \n if (age not in ['adults', 'child'] or \n sex not in ['man', 'women'] or\n survived not in ['yes', 'no']):\n raise RuntimeError('unknown value')\n \n features = [\n klass,\n (1 if age == 'adults' else 0),\n (1 if sex == 'women' else 0)\n ]\n return LabeledPoint(1 if survived == 'yes' else 0, features)",
"We apply the function to all rows.",
"labeled_points_rdd = data_rdd.map(row_to_labeled_point)",
"We take a random sample of the resulting points to inspect them.",
"labeled_points_rdd.takeSample(False, 5, 0)",
"Split for training and test\nWe split the transformed data into a training (70%) and test set (30%), and print the total number of items in each segment.",
"training_rdd, test_rdd = labeled_points_rdd.randomSplit([0.7, 0.3], seed = 0)\n\ntraining_count = training_rdd.count()\ntest_count = test_rdd.count()\n\ntraining_count, test_count",
"Train and test a decision tree classifier\nNow we train a DecisionTree model. We specify that we're training a boolean classifier (i.e., there are two outcomes). We also specify that all of our features are categorical and the number of possible categories for each.",
"model = DecisionTree.trainClassifier(training_rdd, \n numClasses=2, \n categoricalFeaturesInfo={\n 0: 3,\n 1: 2,\n 2: 2\n })",
"We now apply the trained model to the feature values in the test set to get the list of predicted outcomines.",
"predictions_rdd = model.predict(test_rdd.map(lambda x: x.features))",
"We bundle our predictions with the ground truth outcome for each passenger in the test set.",
"truth_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)",
"Now we compute the test error (% predicted survival outcomes == actual outcomes) and display the decision tree for good measure.",
"accuracy = truth_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)\nprint('Accuracy =', accuracy)\nprint(model.toDebugString())",
"Train and test a logistic regression classifier\nFor a simple comparison, we also train and test a LogisticRegressionWithSGD model.",
"model = LogisticRegressionWithSGD.train(training_rdd)\n\npredictions_rdd = model.predict(test_rdd.map(lambda x: x.features))\n\nlabels_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)\n\naccuracy = labels_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)\nprint('Accuracy =', accuracy)",
"The two classifiers show similar accuracy. More information about the passengers could definitely help improve this metric."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tuanvu216/udacity-course
|
intro_to_statistics/.ipynb_checkpoints/Lesson 7 - Programming Bayes-checkpoint.ipynb
|
mit
|
[
"Table of Contents\n<p><div class=\"lev1\"><a href=\"#Complement\"><span class=\"toc-item-num\">1 </span>Complement</a></div><div class=\"lev1\"><a href=\"#Two-flips\"><span class=\"toc-item-num\">2 </span>Two flips</a></div><div class=\"lev1\"><a href=\"#Three-Flips\"><span class=\"toc-item-num\">3 </span>Three Flips</a></div><div class=\"lev1\"><a href=\"#Flip-Two-Coins\"><span class=\"toc-item-num\">4 </span>Flip Two Coins</a></div><div class=\"lev1\"><a href=\"#Flip-One-Of-Two\"><span class=\"toc-item-num\">5 </span>Flip One Of Two</a></div><div class=\"lev1\"><a href=\"#Cancer-Example-1\"><span class=\"toc-item-num\">6 </span>Cancer Example 1</a></div><div class=\"lev1\"><a href=\"#Calculate-Total\"><span class=\"toc-item-num\">7 </span>Calculate Total</a></div><div class=\"lev1\"><a href=\"#Cancer-Example-2\"><span class=\"toc-item-num\">8 </span>Cancer Example 2</a></div><div class=\"lev1\"><a href=\"#Program-Bayes-Rule\"><span class=\"toc-item-num\">9 </span>Program Bayes Rule</a></div><div class=\"lev1\"><a href=\"#Program-Bayes-Rule-2\"><span class=\"toc-item-num\">10 </span>Program Bayes Rule 2</a></div><div class=\"lev1\"><a href=\"#Conclusion\"><span class=\"toc-item-num\">11 </span>Conclusion</a></div>\n\n# Complement\n\nSo as the first exercise, say this is the probability, let's print the probability of the inverse event. Let's make the function over here that takes p but returns 1 - p.",
"def f(p):\n return 1-p\n\nprint f(0.3)",
"Two flips\n\nSuppose we have a coin with probability p. For example, p might be 0.5.\nYou flip the coin twice and I want to compute the probability that this coin comes up head and heads in these 2 flips--obviously that's 0.5 times 0.5.",
"def f(p):\n return p*p\n\nprint f(0.3)",
"Three Flips\nJust like before it will be an input to the function f and now I'm going to flip the coin 3 times and I want you to calculate the probability that the heads comes up exactly once. Three is not a variable so you could only works for 3 flips not for 2 or 4 but the only input variable is going to be the coin probability 0.5.",
"def f(p):\n return 3 * p * (1-p) * (1-p)\n\nprint f(0.5)\nprint f(0.8)",
"Flip Two Coins\n\nSo coin 1 has a probability of heads equals P₁ and coin 2 has a probability of heads equals P₂ and this might not be different probabilities.\nIn my programming environment, I can account this by making 2 arguments separated by a comma, for example, 0.5 and 0.8, and then the function takes as an input, 2 arguments, P₁ and P₂, and then I can use both of these variables in the return assignment.\nLet’s now flip both coins and write the code that computes the probability that coin 1 equals heads and coin 2 equals heads for example of 0.5 and 0.8, this would be?",
"def f(p1,p2): \n return p1 * p2\n\nprint f(0.5,0.8)",
"Flip One Of Two\n\nSo two coins again, C1, C2. And let's say each coin has its own probability of coming up heads. \nFor the first coin, we're going to call it P1, and for the second, P2. And for reasons that should be clear later, we write it as a conditional. \nSo that means, if the coin you're flipping is C1, then the probability of heads equals P1. If the coin we're flipping is C2, then the probability of heads will be P2. \nNow, this alludes to the fact that I really want you to pick a coin here. You're going to pick one coin, and the probability of you pick coin one, C1, is P0. And logically, it follows the probability of picking coin two, the other coin, is 1 minus P0. \nAnd I'm interested in the probability that heads come up under the scenario where you first pick a coin at random and then flip the coin. And in this exercise, I give you some very concrete numbers. P0 is 0.3, P1 is 0.5, and P2 is 0.9.",
"def f(p0,p1,p2): \n return p0 * p1 +(1-p0) * p2\n\nprint f(0.3,0.5,0.9)",
"Answer\nAnd the answer is 0.78. And the way I got this, you might have picked point C1. That happens with 0.3 probability, and then we have a 0.5 chance to find heads. Or we might have picked coin two, which has a probability of 1 minus 0.3, 0.7, and then chance of seeing head is 0.9. We work this all out, we get 0.78.\n<img src=\"images/Screen Shot 2016-05-07 at 11.23.58 AM.png\"/>\nScreenshot taken from Udacity\n<!--TEASER_END-->\n\nCancer Example 1\n\nLet's go the cancer example. These are prior probability of cancer we should call P₀. \nThis is a probability of a positive test given cancer. I call this P₁ and careful, there's a probability of a negative test result for don't have cancer and I call this P₂.\nJust to check suppose probability of cancer is 0.1, the sensitivity is 0.9, specificity is 0.8.\nGiven the probability that a test will come out positive. It's not Bayes rule yet, it's a simpler calculation and you should know exactly how to do this.\n\n```\n- P(C) = p0 = 0.1\n- P(Pos|C) = p1 = 0.9\n- P(Neg|not C) = p2 = 0.8\n\nP(C|Pos) = P(C) x P(Pos|C) = 0.1 * 0.9 = 0.09\nP(not C|Pos) = P(not C) x P(Pos|not C) = 0.9 * 0.2 = 0.18\nP(Pos) = P(C|Pos) + P(not C|Pos) = 0.27\n```\n\n<img src=\"images/Screen Shot 2016-05-07 at 11.41.03 AM.png\"/>\nScreenshot taken from Udacity\n<!--TEASER_END-->\n\nCalculate Total\nSo now I want you to write the computer code that accepts arbitrary P₀, P₁, P₂ and calculates the resulting probability of a positive test result.",
"#Calculate the probability of a positive result given that\n#p0=P(C)\n#p1=P(Positive|C)\n#p2=P(Negative|Not C)\n\ndef f(p0,p1,p2):\n return p0 * p1 + (1-p0) * (1-p2)\n\nprint f(0.1, 0.9, 0.8)",
"Cancer Example 2\nLet's look at the posterior probability of cancer given that we received the positive test result, and let's first do this manually for the example given up here.\n- P(C|Pos) = P(C) x P(Pos|C) = 0.1 * 0.9 = 0.09\n- P(not C|Pos) = P(not C) x P(Pos|not C) = 0.9 * 0.2 = 0.18\n- P(Pos) = P(C|Pos) + P(not C|Pos) = 0.27\n- P(C|Pos) = P(C|Pos)/P(Pos) = 0.09/0.27 = 0.8\nAnswer\n\nAnd the answer is 0.0333 or a 1/3 and now we're going to apply the entire arsenal of inference we just learned about.\nThe joint probability of cancer and positive is 0.1 * 0.9. That's the joint that's not normalized.\nSo let's normalize it and we normalize it by the sum of the joint for cancer and the joint for non-cancer. Joint for cancer we just computed but the joint for non-cancer assumes the opposite prior 1-0.1 and it applies the positive result of a non-cancer case.\nNow because the specificity first is negative, we have to do the same trick as before and multiply it with 1-0.8. When you worked this out, you find this to be 0 to 0.9 divided 0 to 0.9 + 0.9 0.2 that is 0.18\nSo if you put these all of this together, you get exactly a third\n\n<img src=\"images/Screen Shot 2016-05-07 at 12.10.04 PM.png\"/>\nScreenshot taken from Udacity\n<!--TEASER_END-->\n\nProgram Bayes Rule\n\nSo I want you to program this in the IDE where there are three input parameters P⁰, P¹ and P².\nFor those values, you should get a 1/3 and for those values over here, 0.01 as a prior 0.7 as sensitivity and 0.9 as specificity, you'll get 0.066 approximately. So write this code and check whether these examples work for you.\n\n<img src=\"images/Screen Shot 2016-05-07 at 12.12.13 PM.png\"/>\nScreenshot taken from Udacity\n<!--TEASER_END-->",
"#Return the probability of A conditioned on B given that \n#P(A)=p0, P(B|A)=p1, and P(Not B|Not A)=p2 \n\ndef f(p0,p1,p2):\n return p0 * p1 / (p0 * p1 + (1-p0) * (1-p2))\n\nprint f(0.1, 0.9, 0.8)\nprint f(0.01, 0.7, 0.9)",
"Program Bayes Rule 2\n\nNow, let's do one last modification and let's write this procedure assuming you observed a negative test result. \nThis means the posterior of having cancer under a negative result is 0.0137 for those numbers over here and about 0.00336 for those numbers over here.\nIn both cases, the posterior is significantly smaller than the prior 6because we received negative test results.\n\n<img src=\"images/Screen Shot 2016-05-07 at 12.20.38 PM.png\"/>\nScreenshot taken from Udacity\n<!--TEASER_END-->\n\nAnswer\n- And here's my implementation for the cancerous case.\n- You don't have to plug in the measurement probability to see a negative test result, which is one minus the sensitivity and in the normalizer, we copy the first term over in the second term of the noncancer hypothesize. we just put in the specificity and when you put this all together and run the procedure, we indeed get 0.013698 and so on.",
"#Return the probability of A conditioned on Not B given that \n#P(A)=p0, P(B|A)=p1, and P(Not B|Not A)=p2 \n\ndef f(p0,p1,p2):\n return p0 * (1-p1) / (p0 * (1-p1) + (1-p0) * p2)\n\nprint f(0.1, 0.9, 0.8)\nprint f(0.01, 0.7, 0.9)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
drvinceknight/gt
|
nbs/chapters/04-Nash-equilibria.ipynb
|
mit
|
[
"Best responses\n\nDefinition of a best response\nVideo\nIn a two player game $(A,B)\\in{\\mathbb{R}^{m\\times n}}^2$ a mixed strategy $\\sigma_r^*$ of the row player is a best response to a column players' strategy $\\sigma_c$ iff:\n$$\n\\sigma_r^*=\\text{argmax}_{\\sigma_r\\in S_r}\\sigma_rA\\sigma_c^T.\n$$\nSimilarly a mixed strategy $\\sigma_c^*$ of the column player is a best response to a row players' strategy $\\sigma_r$ iff:\n$$\n\\sigma_c^*=\\text{argmax}_{\\sigma_c\\in S_c}\\sigma_rB\\sigma_c^T.\n$$\n\nIn other words: a best response strategy maximise the utility of a player given a known strategy of the other player.\nBest responses in the Prisoners Dilemma\nConsider the Prisoners Dilemma:\n$$\nA = \\begin{pmatrix}\n3 & 0\\\n5 & 1\n\\end{pmatrix}\\qquad\nB = \\begin{pmatrix}\n3 & 5\\\n0 & 1\n\\end{pmatrix}\n$$\nWe can easily identify the pure strategy best responses by underlying the corresponding utilities. For the row player, we will underline the best utility in each column:\n$$\nA = \\begin{pmatrix}\n3 & 0\\\n\\underline{5} & \\underline{1}\n\\end{pmatrix}\n$$\nFor the column player we underling the best utility in each row:\n$$\nB = \\begin{pmatrix}\n3 & \\underline{5}\\\n0 & \\underline{1}\n\\end{pmatrix}\n$$\nWe see that both players' best responses are their second strategy.\nBest responses in matching pennies\nVideo\nConsider matching pennies with the best responses underlined:\n$$\nA = \\begin{pmatrix}\n\\underline{1} & -1\\\n-1 & \\underline{1}\n\\end{pmatrix}\\qquad\nB = \\begin{pmatrix}\n-1 & \\underline{1}\\\n\\underline{1} & -1\n\\end{pmatrix}\n$$\nWe see that the best response now depend on what the opponent does.\nLet us consider the best responses against a mixed strategy (and apply the previous definition):\n\nAssume $\\sigma_r=(x,1-x)$\nAssume $\\sigma_c=(y,1-y)$\n\nWe have:\n$$\nA\\sigma_c^T = \\begin{pmatrix}\n2y-1\\\n1-2y\n\\end{pmatrix}\\qquad\n\\sigma_rB = \\begin{pmatrix}\n1-2x & 2x-1\n\\end{pmatrix}\n$$",
"import sympy as sym\nimport numpy as np\nsym.init_printing()\n\nx, y = sym.symbols('x, y')\nA = sym.Matrix([[1, -1], [-1, 1]])\nB = - A\nsigma_r = sym.Matrix([[x, 1-x]])\nsigma_c = sym.Matrix([y, 1-y])\nA * sigma_c, sigma_r * B",
"Those two vectors gives us the utilities to the row/column player when they play either of their pure strategies:\n\n$(A\\sigma_c^T)_i$ is the utility of the row player when playing strategy $i$ against $\\sigma_c=(y, 1-y)$\n$(\\sigma_rB)_j$ is the utility of the column player when playing strategy $j$ against $\\sigma_r=(x, 1-x)$\n\nLet us plot these (using matplotlib):",
"import matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\nmatplotlib.rc(\"savefig\", dpi=100) # Increase the quality of the images (not needed)\n\nys = [0, 1]\nrow_us = [[(A * sigma_c)[i].subs({y: val}) for val in ys] for i in range(2)]\nplt.plot(ys, row_us[0], label=\"$(A\\sigma_c^T)_1$\")\nplt.plot(ys, row_us[1], label=\"$(A\\sigma_c^T)_2$\")\nplt.xlabel(\"$\\sigma_c=(y, 1-y)$\")\nplt.title(\"Utility to player 1\")\nplt.legend();\n\nxs = [0, 1]\nrow_us = [[(sigma_r * B)[j].subs({x: val}) for val in xs] for j in range(2)]\nplt.plot(ys, row_us[0], label=\"$(\\sigma_rB)_1$\")\nplt.plot(ys, row_us[1], label=\"$(\\sigma_rB)_2$\")\nplt.xlabel(\"$\\sigma_r=(x, 1-x)$\")\nplt.title(\"Utility to column player\")\nplt.legend();",
"We see that the best responses to the mixed strategies are given as:\n$$\n\\sigma_r^ = \n\\begin{cases}\n(1, 0),&\\text{ if } y > 1/2\\\n(0, 1),&\\text{ if } y < 1/2\\\n\\text{indifferent},&\\text{ if } y = 1/2\n\\end{cases}\n\\qquad\n\\sigma_c^ = \n\\begin{cases}\n(0, 1),&\\text{ if } x > 1/2\\\n(1, 0),&\\text{ if } x < 1/2\\\n\\text{indifferent},&\\text{ if } x = 1/2\n\\end{cases}\n$$\nIn this particular case we see that for any given strategy, the opponents' best response is either a pure strategy or a mixed strategy in which case they are indifferent between the pure strategies.\nFor example:\n\nIf $\\sigma_c=(1/4, 3/4)$ ($y=1/4$) then the best response is $\\sigma_r^*=(0,1)$\nIf $\\sigma_c=(1/2, 1/2)$ ($y=1/2$) then any mixed strategy is a best response but in fact both pure strategies would give the same utility (the lines intersect).\n\nThis observation generalises to our first theorem:\n\nBest response condition\nVideo\nIn a two player game $(A,B)\\in{\\mathbb{R}^{m\\times n}}^2$ a mixed strategy $\\sigma_r^*$ of the row player is a best response to a column players' strategy $\\sigma_c$ iff:\n$${\\sigma_r^*}i > 0 \\Rightarrow (A\\sigma_c^T)_i = \\max{k}(A\\sigma_c^T)_k\\text{ for all }1\\leq i\\leq m$$\nProof of best response condition\n$(A\\sigma_c^T)_i$ is the utility of the row player when they play their $i$th strategy. Thus:\n$$\\sigma_rA\\sigma_c^T=\\sum_{i=1}^{m}{\\sigma_r}_i(A\\sigma_c^T)_i$$\nLet $u=\\max_{k}(A\\sigma_c^T)_k$. Thus:\n$$\n\\begin{align}\n\\sigma_rA\\sigma_c^T&=\\sum_{i=1}^{m}{\\sigma_r}i(u - u + (A\\sigma_c^T)_i)\\\n &=\\sum{i=1}^{m}{\\sigma_r}iu - \\sum{i=1}^{m}{\\sigma_r}i(u - (A\\sigma_c^T)_i)\\\n &=u - \\sum{i=1}^{m}{\\sigma_r}_i(u - (A\\sigma_c^T)_i)\n\\end{align}$$\nWe know that $u - (A\\sigma_c^T)_i\\geq 0$, thus the largest $\\sigma_rA\\sigma_c^T$ can be is $u$ which occurs iff ${\\sigma_r}_i > 0 \\Rightarrow (A\\sigma_c^T)_i = u$ as required.\n\nReturning to our previous example. 
If $\\sigma_c=(1/2, 1/2)$, $(A\\sigma_c^T)=(0, 0)$, thus $(A\\sigma_c^T)_i = 0$ for all $i$.\nNote that while any strategy is a best response to $(1/2, 1/2)$ the pair of strategies $(\\sigma_r, \\sigma_c) = ((1/2, 1/2), (1/2, 1/2))$ are the only two strategies that are best responses to each other. This coordinate is called a Nash equilibrium.\nDefinition of Nash equilibrium\nVideo\nIn a two player game $(A,B)\\in{\\mathbb{R}^{m\\times n}}^2$, $(\\sigma_r, \\sigma_c)$ is a Nash equilibrium if $\\sigma_r$ is a best response to $\\sigma_c$ and vice versa."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/graphics
|
tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb
|
apache-2.0
|
[
"Copyright 2020 Google LLC.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Neural Voxel Renderer\n\nThis notebook illustrates how to train Neural Voxel Renderer (CVPR2020) in Tensorflow 2.\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nSetup and imports\nIf Tensorflow Graphics is not installed on your system, the following cell can install the Tensorflow Graphics package for you.",
"!pip install tensorflow_graphics\n\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow_graphics.projects.neural_voxel_renderer import helpers\nfrom tensorflow_graphics.projects.neural_voxel_renderer import models\n\nimport datetime\nimport matplotlib.pyplot as plt\nimport os\nimport re\nimport time\n\nVOXEL_SIZE = (128, 128, 128, 4)",
"Dataset loading\nWe store our data in TFRecords with custom protobuf messages. Each training element contains the input voxels, the voxel rendering, the light position and the target image. The data is preprocessed (eg the colored voxels have been rendered and placed accordingly). See this colab on how to generate the training/testing TFRecords.",
"# Functions for dataset generation from a set of TFRecords.\ndecode_proto = tf.compat.v1.io.decode_proto\n\n\ndef tf_image_normalize(image):\n \"\"\"Normalizes the image [-1, 1].\"\"\"\n return (2 * tf.cast(image, tf.float32) / 255.) - 1\n\n\ndef neural_voxel_plus_proto_get(element):\n \"\"\"Extracts the contents from a VoxelSample proto to tensors.\"\"\"\n _, values = decode_proto(element,\n \"giotto_blender.NeuralVoxelPlusSample\",\n [\"name\",\n \"voxel_data\",\n \"rerendering_data\",\n \"image_data\",\n \"light_position\"],\n [tf.string,\n tf.string,\n tf.string,\n tf.string,\n tf.float32])\n filename = tf.squeeze(values[0])\n voxel_data = tf.squeeze(values[1])\n rerendering_data = tf.squeeze(values[2])\n image_data = tf.squeeze(values[3])\n light_position = values[4]\n voxels = tf.io.decode_raw(voxel_data, out_type=tf.uint8)\n voxels = tf.cast(tf.reshape(voxels, VOXEL_SIZE), tf.float32) / 255.0\n rerendering = tf.cast(tf.image.decode_image(rerendering_data, channels=3),\n tf.float32)\n rerendering = tf_image_normalize(rerendering)\n image = tf.cast(tf.image.decode_image(image_data, channels=3), tf.float32)\n image = tf_image_normalize(image)\n return filename, voxels, rerendering, image, light_position\n\n\ndef _expand_tfrecords_pattern(tfr_pattern):\n \"\"\"Helper function to expand a tfrecord patter\"\"\"\n def format_shards(m):\n return '{}-?????-of-{:0>5}{}'.format(*m.groups())\n tfr_pattern = re.sub(r'^([^@]+)@(\\d+)([^@]+)$', format_shards, tfr_pattern)\n return tfr_pattern\n\n\ndef tfrecords_to_dataset(tfrecords_pattern,\n mapping_func,\n batch_size,\n buffer_size=5000):\n \"\"\"Generates a TF Dataset from a rio pattern.\"\"\"\n with tf.name_scope('Input/'):\n tfrecords_pattern = _expand_tfrecords_pattern(tfrecords_pattern)\n dataset = tf.data.Dataset.list_files(tfrecords_pattern, shuffle=True)\n dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=16)\n dataset = dataset.shuffle(buffer_size=buffer_size)\n dataset = 
dataset.map(mapping_func)\n dataset = dataset.batch(batch_size)\n return dataset\n\n# Download the example data, licensed under the Apache License, Version 2.0\n!rm -rf /tmp/tfrecords_dir/\n!mkdir /tmp/tfrecords_dir/\n!wget -P /tmp/tfrecords_dir/ https://storage.googleapis.com/tensorflow-graphics/notebooks/neural_voxel_renderer/train-00012-of-00100.tfrecord\n\ntfrecords_dir = '/tmp/tfrecords_dir/'\ntfrecords_pattern = os.path.join(tfrecords_dir, 'train@100.tfrecord')\n\nbatch_size = 5\nmapping_function = neural_voxel_plus_proto_get\ndataset = tfrecords_to_dataset(tfrecords_pattern, mapping_function, batch_size)\n\n# Visualize some examples\n_, ax = plt.subplots(1, 4, figsize=(10, 10))\ni = 0\nfor a in dataset.take(4):\n (filename,\n voxels,\n vox_render,\n target,\n light_position) = a\n ax[i].imshow(target[0]*0.5+0.5)\n ax[i].axis('off')\n i += 1\nplt.show()\n",
"Train the model\nNVR+ is trained with Adam optimizer and L1 and perceptual VGG loss.",
"# ==============================================================================\n# Defining model and optimizer\nLEARNING_RATE = 0.002\n\nnvr_plus_model = models.neural_voxel_renderer_plus_tf2()\noptimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)\n\n# Saving and logging directories\ncheckpoint_dir = '/tmp/checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\ncheckpoint = tf.train.Checkpoint(optimizer=optimizer, model=nvr_plus_model)\nlog_dir=\"/tmp/logs/\"\nsummary_writer = tf.summary.create_file_writer(\n log_dir + \"fit/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\n\n# ==============================================================================\n# VGG loss\nVGG_LOSS_LAYER_NAMES = ['block1_conv1', 'block2_conv1']\nVGG_LOSS_LAYER_WEIGHTS = [1.0, 0.1]\nVGG_LOSS_WEIGHT = 0.001\n\ndef vgg_layers(layer_names):\n \"\"\" Creates a vgg model that returns a list of intermediate output values.\"\"\"\n # Load our model. Load pretrained VGG, trained on imagenet data\n vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')\n vgg.trainable = False\n outputs = [vgg.get_layer(name).output for name in layer_names]\n model = tf.keras.Model([vgg.input], outputs)\n return model\n\n\nvgg_extractor = vgg_layers(VGG_LOSS_LAYER_NAMES)\n\n# ==============================================================================\n# Total loss\ndef network_loss(output, target):\n # L1 loss\n l1_loss = tf.reduce_mean(tf.abs(target - output))\n # VGG loss\n vgg_output = vgg_extractor((output*0.5+0.5)*255)\n vgg_target = vgg_extractor((target*0.5+0.5)*255)\n vgg_loss = 0\n for l in range(len(VGG_LOSS_LAYER_WEIGHTS)):\n layer_loss = tf.reduce_mean(tf.square(vgg_target[l] - vgg_output[l]))\n vgg_loss += VGG_LOSS_LAYER_WEIGHTS[l]*layer_loss\n # Final loss\n total_loss = l1_loss + VGG_LOSS_WEIGHT*vgg_loss\n return l1_loss, vgg_loss, total_loss\n\n@tf.function\ndef train_step(input_voxels, input_rendering, input_light, target, epoch):\n 
with tf.GradientTape() as tape:\n network_output = nvr_plus_model([input_voxels, \n input_rendering, \n input_light],\n training=True)\n l1_loss, vgg_loss, total_loss = network_loss(network_output, target)\n network_gradients = tape.gradient(total_loss,\n nvr_plus_model.trainable_variables)\n optimizer.apply_gradients(zip(network_gradients,\n nvr_plus_model.trainable_variables))\n\n with summary_writer.as_default():\n tf.summary.scalar('total_loss', total_loss, step=epoch)\n tf.summary.scalar('l1_loss', l1_loss, step=epoch)\n tf.summary.scalar('vgg_loss', vgg_loss, step=epoch)\n tf.summary.image('Vox_rendering', \n input_rendering*0.5+0.5, \n step=epoch, \n max_outputs=4)\n tf.summary.image('Prediction', \n network_output*0.5+0.5, \n step=epoch, \n max_outputs=4)\n\ndef training_loop(train_ds, epochs):\n for epoch in range(epochs):\n start = time.time()\n\n # Train\n for n, (_, voxels, vox_rendering, target, light) in train_ds.enumerate():\n print('.', end='')\n if (n+1) % 100 == 0:\n print()\n train_step(voxels, vox_rendering, light, target, epoch)\n print()\n\n # saving (checkpoint) the model every 20 epochs\n if (epoch + 1) % 20 == 0:\n checkpoint.save(file_prefix = checkpoint_prefix)\n\n print ('Time taken for epoch {} is {} sec\\n'.format(epoch + 1,\n time.time()-start))\n checkpoint.save(file_prefix = checkpoint_prefix)\n\nNUMBER_OF_EPOCHS = 100\ntraining_loop(dataset, NUMBER_OF_EPOCHS)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chausler/talks
|
melb_data_science/y_py_2015_04_23.ipynb
|
apache-2.0
|
[
"%matplotlib inline",
"<p style=\"text-align: center;\"> Y Py?</p>\n\n<img src=\"https://dl.dropboxusercontent.com/u/5880397/zendesk.jpg\" width=500>\nChris Hausler\nData Engineer @ Zendesk\n<br><br>\n <center><img src=\"https://dl.dropboxusercontent.com/u/5880397/anaconda_logo_web.png\"/><br>\n <b><a href=\"http://continuum.io/downloads\">http://continuum.io/downloads</a></b><br/><br/><br>\nCompletely free enterprise-ready Python distribution for large-scale<br><br> data processing, predictive analytics, and scientific computing\n</center>\nWhat I want from my data stack\n\nMunging\nPlotting\nLearning\nProductionising\nCollaborating\n\nMunging\nAnd introduction to Pandas\nPandas",
"import pandas as pd",
"Data Frames",
"ages = pd.DataFrame([['John', 25],\n ['Mary', 9],\n ['Radek', 16],\n ['Mia', 64],\n ['Geroge', 4],\n ['Katrin', 21]])\nages",
"Column Names and Index",
"ages.columns = ['Name', 'Age']\nages.set_index('Name', inplace=True)\nages",
"Using the Index",
"ages.ix[['John', 'Mary']]\n\nages.ix[['John', 'Mary', 'Thomas']]",
"Basic Arithmetic",
"ages * 2",
"Another DataFrame",
"genders = pd.DataFrame(['male', 'female', 'male', 'female'],\n index=['John', 'Mary', 'Alberto', 'Karyn'],\n columns=['Gender'])\ngenders",
"Joins",
"people = genders.join(ages, how='left')\npeople",
"Dealing with Missing Values",
"people",
"1. Get rid of them",
"people.dropna()",
"2. Fill them with something",
"people.fillna(people.Age.mean())",
"3. Use a function like forward fill",
"people.fillna(method='ffill')",
"Loading Data\n\nread_csv\nread_excel\nread_hdf\nread_sql\nread_json\nread_clipboard\n....\n\nGet some real data",
"# get the data from here\n# https://data.melbourne.vic.gov.au/api/views/b2ak-trbp/rows.csv?accessType=DOWNLOAD\ndata = pd.read_csv('pedestrian_count.csv', index_col=0, parse_dates=[0])\ndata.head()",
"Basic Info",
"data.info()",
"Descriptive stats",
"data.describe()",
"Plotting\nAnd advanced Pandas operations",
"import pylab as plt\nimport seaborn as sns\nsns.set_context('poster')",
"Bridge pedestrian counts",
"bridges = data[['Webb Bridge', 'Princes Bridge', 'Sandridge Bridge']]\nax = bridges.plot(figsize=(10, 6))\n_ = ax.set_ylabel('# Pedestrians')",
"Daily totals for pedestrians",
"ax = bridges.resample('D', how='sum').plot(figsize=(10, 6))\n_ = ax.set_ylabel('# Pedestrians')",
"Monthly Pedestrian Volume",
"axs = bridges.resample('M', how='sum').plot(subplots=True, figsize=(12, 7.5))\n_ = axs[1].set_ylabel('# Pedestrians over the Month')",
"Median Pedestrians per hour, Flagstaff July 2014",
"dt = data.ix['2014-07']['Flagstaff Station']\nax = dt.groupby(dt.index.hour).median().plot(kind='bar', figsize=(10, 6))\nax.set_ylabel('# Pedestrians')\n_ = ax.set_xlabel('Hour of Day')",
"Daily Pedestrian Variation",
"fig, ax = plt.subplots(1, figsize=(8, 4))\ndata.resample('D', how='sum').boxplot(ax=ax)\nax.set_xticklabels(data.columns, rotation=90, size=16)\nax.set_ylim(0, 40000)\n_ = ax.set_ylabel('# Pedestrians')",
"Scatter Plots",
"with sns.axes_style(\"white\"):\n sns.jointplot('Princes Bridge', 'Flinders St Underpass', data, size=7);",
"(Machine) Learning\nAnd Introduction to Scikit-Learn\n<img src=\"https://dl.dropboxusercontent.com/u/5880397/ml_map.png\" width=1000/>\nLoad some Titanic Data",
"# get data here https://www.kaggle.com/c/titanic/download/train.csv\ntitanic = pd.read_csv('train.csv').drop(['Name', 'Ticket', 'PassengerId'],\n axis=1)\ntitanic.head()",
"Make some new features / cleanup",
"titanic[\"Alone\"] = ((titanic.Parch + titanic.SibSp) == 0) * 1\ntitanic['Cabin'] = titanic.Cabin.fillna('NA')\ntitanic['Embarked'] = titanic.Embarked.fillna('NA')\ntitanic['Age'] = titanic.Age.fillna(titanic.Age.median())\ntitanic.head()",
"Survival Probability",
"g = sns.factorplot(\"Pclass\", \"Survived\", \"Sex\",\n data=titanic, kind=\"bar\",\n size=6, palette=\"muted\")\ng.despine(left=True)\n_ = g.set_ylabels(\"Survival Probability\")",
"Building Categorical Features",
"from sklearn.preprocessing import LabelBinarizer\nlbl = LabelBinarizer().fit(titanic.Embarked)\nprint lbl.classes_\nprint lbl.transform(titanic.Embarked)",
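The same one-hot idea can be sketched without scikit-learn, which makes the mechanics explicit (the labels below are illustrative, not the actual Titanic column):

```python
def label_binarize(values):
    """Return (classes, rows): each row is a one-hot encoding of one value."""
    classes = sorted(set(values))
    index = {c: i for i, c in enumerate(classes)}
    rows = []
    for v in values:
        row = [0] * len(classes)
        row[index[v]] = 1  # flip on the column for this value's class
        rows.append(row)
    return classes, rows

classes, rows = label_binarize(['S', 'C', 'Q', 'S'])
print(classes)   # ['C', 'Q', 'S']
print(rows[0])   # [0, 0, 1]
```

(Note that scikit-learn's `LabelBinarizer` additionally collapses the two-class case to a single column, which this sketch does not.)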
"We can do them all at once",
"import numpy as np\n\nlbl = LabelBinarizer()\nX_categorical = np.hstack([lbl.fit_transform(titanic[c])\n for c in ['Cabin', 'Embarked', 'Sex']])\n\nprint 'Array shape:', X_categorical.shape\nX_categorical",
"The Dataset",
"y = titanic.pop('Survived').values\n\nX_numeric = titanic._get_numeric_data().values\nX = np.hstack([X_numeric, X_categorical])",
"First Pass Cross-Validation",
"from sklearn.cross_validation import cross_val_score\nfrom sklearn.linear_model import SGDClassifier\n\nclf = SGDClassifier(loss='log')\nscores = cross_val_score(clf, X, y, cv=3, scoring='accuracy')\nprint \"Accuracy: {:.2f}\".format(scores.mean())",
"Can we do better?",
"for a in [0.0001, 0.001, 0.01, 0.1, 1., 10, 100]:\n clf = SGDClassifier(loss='log', alpha=a)\n scores = cross_val_score(clf, X, y, cv=3, scoring='accuracy') \n print \"Alpha: {:.4f}\\tAccuracy: {:.2f}\".format(a, scores.mean())",
"Can we do even better??",
"from sklearn.grid_search import RandomizedSearchCV\n\nparams = {'alpha': np.logspace(-4, 4, 50),\n 'loss': ['log', 'modified_huber', 'perceptron'],\n 'penalty': ['l1', 'l2'],\n 'n_iter': [50, 100, 200]}\nclf = SGDClassifier()\nrandom_search = RandomizedSearchCV(clf, params, n_iter=100,\n scoring='accuracy', n_jobs=4,\n verbose=1)\nrandom_search.fit(X, y)\nprint \"Best Accuracy: {:.2f}\".format(random_search.best_score_)\nprint random_search.best_params_",
"What about different classifiers?",
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.naive_bayes import GaussianNB\n\nclassifiers = [\n SGDClassifier(),\n KNeighborsClassifier(3),\n SVC(kernel=\"linear\"),\n SVC(gamma=2),\n DecisionTreeClassifier(),\n RandomForestClassifier(),\n AdaBoostClassifier(),\n GaussianNB(),\n LogisticRegression(class_weight='auto')]\n",
"Train them...",
"res = []\nnames = []\nfor clf in classifiers:\n scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')\n names.append(clf.__class__.__name__)\n res.append(scores.mean())",
"Compare them",
"fig, ax = plt.subplots(1, figsize=(14, 6))\nsns.barplot(np.array(names), np.array(res), ci=None, palette=\"muted\", ax=ax)\nax.set_ylabel(\"Accuracy\")\n_ = plt.xticks(rotation=50, ha='right', size=12)\n",
"Productionising\nAnd full stack development\n<b style=\"color:dimgray\">Munging & Adhoc Analysis:</b><br>\n<span style=\"font-size:80%\">Pandas, Scipy, Beautifulsoup</span>\n<b style=\"color:dimgray\">Visualisation:</b><br>\n<span style=\"font-size:80%\"> Matplotlib, seaborn, bokeh, ggplot2 </span>\n<b style=\"color:dimgray\">Model Building:</b><br>\n<span style=\"font-size:80%\"> Scikit-Learn, StatsModels</span> \n<b style=\"color:dimgray\">Serving Results or Models:</b><br>\n<span style=\"font-size:80%\"> Flask, Django, Tornado</span> \n<b style=\"color:dimgray\">Testing:</b><br>\n<span style=\"font-size:80%\"> Pytest, pyunit, nose</span>\nTesting\n<img src=\"https://dl.dropboxusercontent.com/u/5880397/jenga.jpg\" width=300/>\nScaling\n<img src=\"https://dl.dropboxusercontent.com/u/5880397/spark-logo.png\" width=400>\nCollaborating\nAnd using IPython Notebooks\n<img src=\"https://dl.dropboxusercontent.com/u/5880397/ipython_notebook.png\" width=650>\nPS> We're hiring\n\nFront End Engineer\nData Engineer\nTest Engineer\n<br/><br/><br/>\n\n<div align='right'><a>chausler@zendesk.com</a></div>"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
diging/methods
|
1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb
|
gpl-3.0
|
[
"%pylab inline\n\nimport nltk\nimport pandas as pd\nfrom helpers import normalize_token, filter_token\nimport pymc\nimport matplotlib.pyplot as plt\nfrom itertools import chain\nfrom scipy import stats",
"1.2.4. Comparing word use between corpora\nIn previous notebooks we examined changes in word use over time using several different statistical approaches. In this notebook, we will examine differences in word use between two different corpora. \nWeb of Science dataset\nIn this notebook we will use data retrieved from the ISI Web of Science database. One corpus is from the journal Plant Journal over the period 1991-2013. The other corpus is from the journal Plant Physiology, 1991-2013. Each corpus comprises several WoS field-tagged metadata files contained in a folder.\nTethne's WoS parser can load all of the data files in a single directory at once. This may take a few minutes, since Tethne goes to a lot of trouble in indexing all of the records for easy access later on.",
"from tethne.readers import wos\n\npj_corpus = wos.read('../data/Baldwin/PlantJournal/')\npp_corpus = wos.read('../data/Baldwin/PlantPhysiology/')",
"Conditional frequency distribution\nThis next step should look familiar. We will create a conditional frequency distribution for words in these two corpora. We have two conditions: the journal is Plant Physiology and the journal is Plant Journal.",
"word_counts = nltk.ConditionalFreqDist([\n (paper.journal, normalize_token(token))\n for paper in chain(pj_corpus, pp_corpus) # chain() strings the two corpora together.\n for token in nltk.word_tokenize(getattr(paper, 'abstract', '')) \n if filter_token(token)\n])",
"Now we can use tabulate to generate a contingency table showing the number of times each word is used within each journal.",
"# Don't run this without setting ``samples``!\nword_counts.tabulate(samples=['photosynthesis', 'growth', 'stomatal']) ",
"Is there a difference?\nAs a first step, we may wish to establish whether or not there is a difference between the two corpora. In this simplistic example, we will compare the rate at which a specific word is used in the two journals. In practice, your comparisons will probably be more sophisticated -- but this is a starting point.\nSo: Is the term photosynthesis used disproportionately in Plant Physiology compared to Plant Journal?\n$H_0: P(\"photosynthesis\" \\Bigm|J = \"Plant Journal\") = P(\"photosynthesis\" \\Bigm| J=\"Plant Physiology\")$\nTo test this hypothesis, we will use Dunning's log-likelihood ratio, which is a popular metric in text analysis. In a nutshell, we want to assess whether or not the relative use of the term \"photosynthesis\" is sufficiently skewed to reject the null hypothesis.\nThe log likelihood ratio is calculated from a contingency table, similar to the one above. For a single word, our table will show the number of tokens that are the word \"photosynthesis\", and the number of tokens that are not, for each journal:\n| | \"photosynthesis\" | all other tokens |\n|---|---|---|\n| *Plant Journal* | $O_1$ | $O_2$ |\n| *Plant Physiology* | $O_3$ | $O_4$ |\n$\nG = 2 \\sum_i O_i \\ln \\frac{O_i}{E_i}\n$\nwhere $O_i$ is the observed value in cell $i$, and $E_i$ is the expected value in cell $i$.\nFirst we will calculate the observed contingency table.",
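As a toy numeric check of the statistic (with made-up counts, not the corpus data): the expected counts come from the row and column marginals, and Dunning's G statistic is conventionally 2·Σ O ln(O/E) over the four cells.

```python
import numpy as np

# Hypothetical 2x2 contingency table: rows are corpora,
# columns are ("photosynthesis", all other tokens).
observed = np.array([[30.0, 99970.0],
                     [70.0, 99930.0]])

# Expected counts under the null hypothesis, from the marginals.
col_totals = observed.sum(axis=0)
row_totals = observed.sum(axis=1)
expected = np.outer(row_totals, col_totals) / observed.sum()

# Dunning's log-likelihood ratio (G statistic).
G = 2.0 * np.sum(observed * np.log(observed / expected))
print(G)
```

With these invented counts the expected "photosynthesis" count is 50 per corpus, so a 30/70 split yields a large G, which is the kind of skew the test is designed to detect.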
"plant_jour_photosynthesis = word_counts['PLANT JOURNAL']['photosynthesis']\nplant_jour_notphotosynthesis = word_counts['PLANT JOURNAL'].N() - plant_jour_photosynthesis\n\nplant_phys_photosynthesis = word_counts['PLANT PHYSIOLOGY']['photosynthesis']\nplant_phys_notphotosynthesis = word_counts['PLANT PHYSIOLOGY'].N() - plant_phys_photosynthesis\n\n# Create a 2x2 array.\ncontingency_table = np.array([[plant_jour_photosynthesis, plant_jour_notphotosynthesis],\n [plant_phys_photosynthesis, plant_phys_notphotosynthesis]], \n dtype=int)\n\ncontingency_table",
"To calculate the expected values, we first calculate the expected probabilities of each word under the null hypothesis. The probability of \"photosynthesis\" occurring is the total number of occurrences of \"photosynthesis\" (sum of the first column) divided by the total number of tokens (sum of the whole table). The probability of \"photosynthesis\" not occurring is calculated similarly, using the second column.",
"# We multiply the values in the contingency table by 1. to coerce the\n# integers to floating-point numbers, so that we can divide without\n# losing precision.\nexpected_probabilities = 1.*contingency_table.sum(axis=0)/contingency_table.sum()\n\nexpected_probabilities",
"Now we calculate the expected counts from those probabilities. The expected counts can be found by multiplying the probabilities of the word occurring and not occurring by the total number of tokens in each corpus.",
"# We multiply each 2-element array by a square matrix containing ones, and then\n# transpose one of the resulting matrices so that the product gives the expected\n# counts.\nexpected_counts = np.floor((np.ones((2, 2))*expected_probabilities)*\\\n (np.ones((2, 2))*contingency_table.sum(axis=1)).T).astype(int)\n\nexpected_counts",
"Now we obtain the log likelihood using the equation above:",
"# Dunning's G statistic includes a factor of 2 in front of the sum.\nloglikelihood = 2.*np.sum(1.*contingency_table*np.log(1.*contingency_table/expected_counts))\n\nloglikelihood",
"So, do the two corpora differ in terms of their use of the word \"photosynthesis\"? In other words, can we reject the null hypothesis (that they do not)? Per Dunning (1993), under the null hypothesis the distribution of the test statistic (log likelihood) should follow a $\\chi^2$ distribution. So we can obtain the probability of a log-likelihood at least this large under the null hypothesis from the upper tail of $\\chi^2$ with one degree of freedom.\nThe Scientific Python (SciPy) package has a whole bunch of useful distributions, including $\\chi^2$.",
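A quick sketch of turning a statistic into a p-value with SciPy's χ² distribution (16.46 here is just an assumed example statistic, not the value computed from the corpora):

```python
from scipy import stats

dist = stats.chi2(df=1)       # chi-squared with one degree of freedom
statistic = 16.46             # hypothetical G value
p_value = dist.sf(statistic)  # survival function = 1 - CDF = upper-tail probability
print(p_value < 0.05)
```

The survival function (`sf`) gives the probability of seeing a statistic at least this large by chance, which is the quantity compared against the 0.05 threshold.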
"distribution = stats.chi2(df=1) # df: degrees of freedom.",
"Here's the PDF of $\\chi^2$ with one degree of freedom.",
"X = np.arange(1, 100, 0.1)\nplt.plot(X, distribution.pdf(X), lw=2)\nplt.ylabel('Probability')\nplt.xlabel('Value of $\\chi^2$')\nplt.show()",
"We can calculate the p-value -- the probability, under the null hypothesis, of a log-likelihood at least as large as the one observed -- using the distribution's survival function (one minus the CDF). If it is less than 0.05, then we can reject the null hypothesis.",
"distribution.sf(loglikelihood), distribution.sf(loglikelihood) < 0.05",
"Money.\nA Bayesian approach\nWe have shown that these two corpora differ significantly in their usage of the term \"photosynthesis\". In many cases, we may want to go one step further, and actually quantify that difference. We can use a similar approach to the one that we used when comparing word use between years: use an MCMC simulation to infer mean rates of use (and credibility intervals) for each corpus. \nRather than starting with a null hypothesis that there is no difference between corpora, we will begin with the belief that there is an independent rate of use for each corpus. We will then infer those rates, and sample from their posterior distributions to generate credible intervals.\nOnce again, we will model the rate of use with the Poisson distribution. So we must generate count data for evenly-sized chunks of each corpus. We'll put all of our count observations into a single dataframe.",
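Because the Gamma prior is conjugate to the Poisson likelihood, the posterior for a single corpus can also be written in closed form; here is a minimal numpy-only sketch with made-up per-chunk counts (the notebook's MCMC approach generalizes this to the real data):

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical per-chunk counts of "photosynthesis" for one corpus.
counts = np.array([0, 1, 0, 2, 1, 0, 0, 1])

# With a Gamma(alpha, beta) prior and Poisson likelihood, the posterior is
# Gamma(alpha + sum(counts), beta + n), where beta is a rate parameter.
alpha, beta = 1.0, 1.0
post_alpha = alpha + counts.sum()
post_beta = beta + len(counts)

# Sample the posterior directly (numpy's gamma takes a *scale* = 1/rate).
samples = rng.gamma(shape=post_alpha, scale=1.0 / post_beta, size=20000)
lo, hi = np.percentile(samples, [2.5, 97.5])
print(lo, hi)  # the 95% credible interval for the rate
```

The interval brackets the observed mean count per chunk, as it should for this much (simulated) data.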
"count_data = pd.DataFrame(columns=['Journal', 'Year', 'Count'])\nchunk_size = 400 # This shouldn't be too large.\ni = 0\n\n# The slice() function automagically divides each corpus up into\n# sequential years. We can use chain() to combine the two iterators\n# so that we only have to write this code once.\nfor year, papers in chain(pj_corpus.slice(), pp_corpus.slice()):\n tokens = [normalize_token(token) \n for paper in papers # getattr() lets us set a default.\n for token in nltk.word_tokenize(getattr(paper, 'abstract', '')) \n if filter_token(token)]\n\n N = len(tokens) # Number of tokens in this year.\n for x in xrange(0, N, chunk_size):\n current_chunk = tokens[x:x+chunk_size] \n count = nltk.FreqDist(current_chunk)['photosynthesis']\n\n # Store the count for this chunk as an observation.\n count_data.loc[i] = [paper.journal, year, count]\n i += 1 # Increment the index variable.\n\nPJ_mean = pymc.Gamma('PJ_mean', alpha=1., beta=1.)\nPP_mean = pymc.Gamma('PP_mean', alpha=1., beta=1.)\n\nPJ_counts = pymc.Poisson('PJ_counts', \n mu=PJ_mean, \n value=count_data[count_data.Journal == 'PLANT JOURNAL'].Count, \n observed=True)\n\nPP_counts = pymc.Poisson('PP_counts', \n mu=PP_mean, \n value=count_data[count_data.Journal == 'PLANT PHYSIOLOGY'].Count, \n observed=True)\n\nmodel = pymc.Model({\n 'PJ_mean': PJ_mean,\n 'PP_mean': PP_mean,\n 'PJ_counts': PJ_counts,\n 'PP_counts': PP_counts\n})\n\n\nM1 = pymc.MCMC(model)\nM2 = pymc.MCMC(model)\nM3 = pymc.MCMC(model)\n\nM1.sample(iter=20000, burn=2000, thin=20)\nM2.sample(iter=20000, burn=2000, thin=20)\nM3.sample(iter=20000, burn=2000, thin=20)\n\npymc.Matplot.plot(M1)\n\nPJ_mean_samples = M1.PJ_mean.trace()[:]\nPJ_mean_samples = np.append(PJ_mean_samples, M2.PJ_mean.trace()[:])\nPJ_mean_samples = np.append(PJ_mean_samples, M3.PJ_mean.trace()[:])\nPP_mean_samples = M1.PP_mean.trace()[:]\nPP_mean_samples = np.append(PP_mean_samples, M2.PP_mean.trace()[:])\nPP_mean_samples = np.append(PP_mean_samples, M3.PP_mean.trace()[:])\n\n# Plot the 95% credible interval as box/whiskers.\nplt.boxplot([PJ_mean_samples, PP_mean_samples],\n whis=[2.5, 97.5],\n labels=['Plant Journal', 'Plant Physiology'],\n showfliers=False)\nplt.ylim(0, 0.3)\nplt.ylabel('Rate for term \"photosynthesis\"')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fionapigott/Data-Science-45min-Intros
|
python-interfaces/python-interfaces.ipynb
|
unlicense
|
[
"Interfaces in Python\nReferences\nThis tutorial is primarily inspired by chapters 9,11,12 in Luciano Ramalho's book \"Fluent Python\".",
"# imports\nimport math,decimal,random",
"Introduction\n...or, a very brief overview of object-oriented programming and type systems.\nDefinition: objects are “a location in memory having a value and possibly referenced by an identifier.”\nObjects have a type, which is a classification scheme to reduce the probability of errors.\nIn the programming language context:\n* variables refer to objects\n* a class is a template for creating objects\n* instances of classes are often used to represent objects\nThere are important differences between types and classes, but we'll use them interchangeably here.\nIn Python, objects are almost always instances of classes. \n* Python class definitions are class instances themselves\nAn object's class enumerates the object’s properties:\n* Identity (inheritance) \n* Attributes\n* Methods of interaction\nDefinition: an interface (or protocol) is an agreed-upon set of rules by which unrelated objects interact\nThe key question, in which Python offers an interesting choice: do we interact with objects according to their identity, or (some subset of) their attributes?\nFor efficient Python programming, it's important to understand the costs and benefits of your answer to this question.\n\"Traditional\" Inheritance\nParadigm: an object’s capabilities are defined by its identity...its unique attributes and its parents’ attributes. Sets of related attributes are assigned to an object via inheritance.",
"class Animal:\n \"\"\" \n the Animal class can be used to describe something that has a well-defined number of legs\n \"\"\"\n n_legs = -1\n\na = Animal()\na.n_legs",
"At this point, I'm not too worried about the mechanism by which object attributes are set. However, if the thing represented by an Animal instance truly always has a well-defined number of legs, and that number doesn't change (no starfish, no amputations), then we should set this at object creation time.",
"class Animal:\n def __init__(self,n_legs=-1):\n \"\"\"use the constructor's kw args 'n_legs' to set the number of legs\"\"\"\n self.n_legs = n_legs\n\ncow = Animal(4)\ncow.n_legs\n\nsnake = Animal(0)\nsnake.n_legs",
"To represent a more nuanced set of identities, we need to provide more classes. For example, pets have names, but are also animals.",
"class Pet(Animal):\n def __init__(self,name=None,n_legs=-1):\n self.name = name\n super().__init__(n_legs)\n\nfido = Pet(name=\"Fido\",n_legs=4)\nprint(\"The pet's name is \" + fido.name + '.')\nprint(\"It has \" + str(fido.n_legs) + \" legs.\")",
"We're interested in more than simple, variable attributes. What about interaction? Remember that encapsulation of data provides a more robust framework for abstracting operations.",
"class Cat(Pet):\n def make_a_sound(self):\n return \"Meow\"\nclass Dog(Pet):\n def make_a_sound(self):\n \"\"\"return a random sound\"\"\"\n sounds = ['Arf','Grrrrrr']\n return sounds[round(random.random())]\n\npets = []\npets.append(Cat(name=\"Kitty\"))\npets.append(Dog(name=\"Buddy\"))\nfor pet in pets:\n print(pet.name + ' says \"' + pet.make_a_sound() + '\"')",
"A more realistic example\nAdd functionality via subclassing.",
"class ListOfThings:\n def __init__(self,x):\n self.things = x\n def get_the_things(self):\n return self.things\n \nclass OrderedListOfThings(ListOfThings):\n def get_the_things(self):\n return sorted(self.things)\n\na_list = ListOfThings([1,3,4,2])\na_list.get_the_things()\n\nan_ordered_list = OrderedListOfThings([1,3,4,2])\nan_ordered_list.get_the_things()",
"Problems\nProblems with defining use solely by inheritance: \n* Multiple inheritance is hard. What is the method resolution order?\n* Rigid/brittle structure...what if a base class definition changes?\n* There are Python-specific issues with inheriting from builtin classes\nDuck Typing\n\"Don’t check whether it is-a duck: check whether it quacks-like-a duck, walks-like-a duck, etc, etc, depending on exactly what subset of duck-like behavior you need to play your language-games with. (comp.lang.python, Jul. 26, 2000)\n— Alex Martelli\"\nThe paradigm: classify and interact with objects according to their attributes, not according to their identity. \nWhile Python is very much an object-oriented programming language (i.e. objects have identity, sometimes more than one), it broadly uses protocols rather than object identity to implement functionality.\nAnother way of making the contrast: in the traditional inheritance model, we enable an object to do a useful thing by specifying its identity. In Python, we start with the useful thing, and define how objects must behave to do that thing.",
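A tiny sketch of the contrast: classify by behavior (try it and catch the failure) rather than by identity (`isinstance`). The class and method names here are purely illustrative:

```python
class Duck:
    def quack(self):
        return "Quack"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(obj):
    """Interact by attribute, not identity: anything with .quack() qualifies."""
    try:
        return obj.quack()
    except AttributeError:
        return "not duck-like"

print(make_it_quack(Duck()))    # works for an actual duck...
print(make_it_quack(Person()))  # ...and for anything else that quacks
print(make_it_quack(42))        # integers don't quack
```

Note that `make_it_quack` never checks types: a `Person` passes simply because it implements the behavior the function needs.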
"# simple example: make two objects\n\n# this object has a clear sense of length\nx = [4,3,2,1]\n\n# what would the length of an integer be?\ny = 3\n\nlen(x)\n\nlen(y)",
"In the previous example, we see that some objects follow the length protocol, and some don't. Specifically, the length protocol defines a global function len, and the method by which it interfaces with objects...namely, their __len__ method. \nWith reference to the duck metaphor, the length protocol says (two different versions): \n* \"If you are a thing that has size or length, then you should implement a __len__ method, so that unrelated objects know how to interact with you\". \n* \"If it acts like a thing with size or length, i.e. implements a __len__ method, then I know how to get its length.\"",
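A minimal class that opts in to the length protocol: implement `__len__` and the global `len` just works, no inheritance from a builtin required. (`Flock` is an invented example, not from the text above.)

```python
class Flock:
    """Follows the length protocol without subclassing any builtin type."""
    def __init__(self, birds):
        self._birds = list(birds)

    def __len__(self):
        # len(flock) delegates here via the protocol.
        return len(self._birds)

flock = Flock(["mallard", "teal", "wigeon"])
print(len(flock))  # 3
```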
"# can we _force_ something to follow a protocol?\n\ndef my_identity_function(x):\n return x\n\nmy_identity_function('three')\n\n# now explicitly set the value of the __len__ attribute\n\nsetattr(my_identity_function,'__len__','my length!')\ndir(my_identity_function)\n\nlen(my_identity_function)",
"[sad trombone]...function types are defined in C and can't be truly modified, despite our modification of the object's namespace dictionary. \nNOTE:\nBe careful with modifying or subclassing Python's builtin types: str, int, float, list, dict, and the like, as well as functions, class definition objects, and other such objects. Let's spend a minute seeing how this fails, then we'll hop back to our attempt to make a modifiable integer.",
"# make a dict that replaces the value with a pair of the value\n\nclass DoppelDict(dict):\n def __setitem__(self, key, value):\n \"\"\"__setitem__ is called by the [] operator\"\"\"\n super().__setitem__(key, [value] * 2)\n\n# set one k,v pair via the constructor\ndd = DoppelDict(one=1)\n# set another k,v pair with the square bracket operators\ndd['two'] = 2\ndd",
"Because dict is a builtin class implemented in C, its own methods (here, the constructor) bypass methods overridden in subclasses: only the explicit dd['two'] = 2 call went through our __setitem__.\nLet's now try to define a length-y decimal object. To define a modifiable class, let's use Python's decimal package, which is designed to represent a decimal interface.",
"y = decimal.Decimal(5)\ny\n\nlen(y)",
"Yay! It didn't work!",
"# now for our decimal with length\n\n# arbitrarily define length the number of digits to the left of the decimal point\nclass LengthyDecimal(decimal.Decimal):\n def __len__(self):\n return math.floor(math.log10(self)) + 1\n\ny = LengthyDecimal(5)\ny\n\n# length is the integer representation of log10\nlen(y)\n\ny = LengthyDecimal(555.44)\nlen(y)",
"Yay!\nNow let's try an integer that follows the length protocol.",
"# let the Decimal class manage construction and all the other attributes\n# enforce integer qualities only when __len__ is called\n\n# arbitrarily define length as the log10 of the integer representation of the Decimal\nclass LengthyInteger(decimal.Decimal):\n def __len__(self):\n return int(math.log(int(self),10))\ny = LengthyInteger(6)\nprint(len(y))\ny = LengthyInteger(16)\nprint(len(y))",
"Nice! We made our object quack like a duck without defining it to be a duck. LengthyInteger/LengthyDecimal does not inherit from a class that provides the needed functionality. \nYou've seen an example of taking an object that does a thing, and modifying it to conform to an interface. But before you go off and start thinking about defining new interfaces, let's step back...\nInterfaces and the python data model\nGeneral idea: cooperate with essential protocols as much as possible. \nEssential protocols\nOften defined in terms of global functions (len, print) acting on correspondingly named object attributes (__len__, __repr__).\nOther examples:\n* callability: implement __call__\n* iterability, iterables, and sequences are related protocols\n* a \"file-like\" object: implements the functions needed to read / write bytes-like data.\nA Pythonic object\nThis example implements many common Python interfaces.",
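One of the essential protocols mentioned above, callability, in miniature: implementing `__call__` makes instances usable like functions. (`Adder` is an invented example for illustration.)

```python
class Adder:
    """Implements the call protocol: instances behave like functions."""
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        # adder(x) delegates here via the protocol.
        return x + self.n

add5 = Adder(5)
print(callable(add5))  # True: the object follows the call protocol
print(add5(10))        # 15
```

This is the same mechanism that makes functions, methods, and classes themselves all "callables" from the interpreter's point of view.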
"# copied directly from \"Fluent Python\", pp. 298-300\n\nfrom array import array \nimport reprlib\nimport math\nimport numbers\nimport functools\nimport operator\nimport itertools\n\nclass Vector: \n typecode = 'd'\n def __init__(self, components):\n self._components = array(self.typecode, components)\n def __iter__(self):\n return iter(self._components)\n def __repr__(self):\n components = reprlib.repr(self._components) \n components = components[components.find('['):-1] \n return 'Vector({})'.format(components)\n def __str__(self):\n return str(tuple(self))\n def __bytes__(self):\n return (bytes([ord(self.typecode)]) +\n bytes(self._components))\n def __eq__(self, other):\n return (len(self) == len(other) and\n all(a == b for a, b in zip(self, other)))\n def __hash__(self):\n hashes = (hash(x) for x in self)\n return functools.reduce(operator.xor, hashes, 0)\n def __abs__(self):\n return math.sqrt(sum(x * x for x in self))\n def __bool__(self):\n return bool(abs(self))\n def __len__(self):\n return len(self._components)\n def __getitem__(self, index): \n cls = type(self)\n if isinstance(index, slice):\n return cls(self._components[index])\n elif isinstance(index, numbers.Integral): return self._components[index]\n else:\n msg = '{.__name__} indices must be integers' \n raise TypeError(msg.format(cls))\n \n shortcut_names = 'xyzt'\n\n def __getattr__(self, name): \n cls = type(self)\n if len(name) == 1:\n pos = cls.shortcut_names.find(name) \n if 0 <= pos < len(self._components):\n return self._components[pos]\n msg = '{.__name__!r} object has no attribute {!r}' \n raise AttributeError(msg.format(cls, name))\n \n def angle(self, n):\n r = math.sqrt(sum(x * x for x in self[n:])) \n a = math.atan2(r, self[n-1])\n if (n == len(self) - 1) and (self[-1] < 0):\n return math.pi * 2 - a \n else:\n return a\n \n def angles(self):\n return (self.angle(n) for n in range(1, len(self)))\n \n def __format__(self, fmt_spec=''):\n if fmt_spec.endswith('h'): # hyperspherical coordinates\n fmt_spec = fmt_spec[:-1]\n coords = itertools.chain([abs(self)],\n self.angles())\n outer_fmt = '<{}>' \n else:\n coords = self\n outer_fmt = '({})'\n components = (format(c, fmt_spec) for c in coords) \n return outer_fmt.format(', '.join(components))\n \n @classmethod\n def frombytes(cls, octets):\n typecode = chr(octets[0])\n memv = memoryview(octets[1:]).cast(typecode) \n return cls(memv)\n",
"So what should I do?\nDon't...\n\nDon't implement every possible interface for every object you build. Just do enough that it works.\nDon't define new interfaces. The Python data model includes a very robust set of interface definitions.\nDon't define new abstract base classes (unless you're building a brand new framework).\n\nType Tests\nDon't test an object's type. Test its conformity to the protocol that matters. Use try blocks.",
"def object_to_str(obj):\n try:\n return str(obj)\n except TypeError:\n return \"Don't know how to represent argument as a string\"\n\nobject_to_str({'a':1,'b':[5]})\n\nobject_to_str(open('tmp.txt','w'))",
"Type Tests II\nDo test an object's interface with an Abstract Base Class",
"from collections import abc\nmy_dict = {}\nisinstance(my_dict, abc.Mapping)",
"Object building\nConstruct an object so that it follows the protocols that define the desired functionality."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
geography-munich/sciprog
|
material/sub/koldunov/05 - Graphs and maps - Matplotlib and Basemap.ipynb
|
apache-2.0
|
[
"Graphs and maps (Matplotlib and Basemap)\nNikolay Koldunov\nkoldunovn@gmail.com\nThis is part of Python for Geosciences notes.\n=============\nMatplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.\nUsually we import matplotlib as follows:",
"%matplotlib inline\nimport matplotlib.pylab as plt\nimport numpy as np",
"This allows inline graphics in IPython (Jupyter) notebooks and imports the functions necessary for plotting as plt. In addition we import numpy as np.\nLet's prepare some data:",
"x = np.linspace(0,10,20)\ny = x ** 2",
"Plot is as easy as this:",
"plt.plot(x,y);",
"Line style and labels are controlled in a way similar to Matlab:",
"plt.plot(x, y, 'r--o')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('title');",
"You can plot several individual lines at once:",
"plt.plot(x, y, 'r--o', x, y ** 1.1, 'bs', x, y ** 1.2, 'g^-' );",
"One more example:",
"mu, sigma = 100, 15\nx = mu + sigma * np.random.randn(10000)\n\n# the histogram of the data\nn, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)\n\nplt.xlabel('Smarts')\nplt.ylabel('Probability')\nplt.title('Histogram of IQ')\nplt.text(60, .025, r'$\\mu=100,\\ \\sigma=15$')\nplt.axis([40, 160, 0, 0.03])\nplt.grid(True)",
"If you feel a bit playful (only in matplotlib > 1.3):",
"with plt.xkcd():\n x = np.linspace(0, 1)\n y = np.sin(4 * np.pi * x) * np.exp(-5 * x)\n\n plt.fill(x, y, 'r')\n plt.grid(False)",
"Following example is from matplotlib - 2D and 3D plotting in Python - great place to start for people interested in matplotlib.",
"n = np.array([0,1,2,3,4,5])\nxx = np.linspace(-0.75, 1., 100)\nx = np.linspace(0, 5, 10)\n\nfig, axes = plt.subplots(1, 4, figsize=(12,3))\n\naxes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))\n\naxes[1].step(n, n**2, lw=2)\n\naxes[2].bar(n, n**2, align=\"center\", width=0.5, alpha=0.5)\n\naxes[3].fill_between(x, x**2, x**3, color=\"green\", alpha=0.5);",
"When you are going to plot something more or less complicated in Matplotlib, the first thing to do is open the Matplotlib example gallery and choose the example closest to your case.\nYou can directly load python code (or basically any text file) into the notebook. This time we download code from the Matplotlib example gallery:",
"# %load http://matplotlib.org/mpl_examples/pylab_examples/griddata_demo.py\nfrom numpy.random import uniform, seed\nfrom matplotlib.mlab import griddata\nimport matplotlib.pyplot as plt\nimport numpy as np\n# make up data.\n#npts = int(raw_input('enter # of random points to plot:'))\nseed(0)\nnpts = 200\nx = uniform(-2, 2, npts)\ny = uniform(-2, 2, npts)\nz = x*np.exp(-x**2 - y**2)\n# define grid.\nxi = np.linspace(-2.1, 2.1, 100)\nyi = np.linspace(-2.1, 2.1, 200)\n# grid the data.\nzi = griddata(x, y, z, xi, yi, interp='linear')\n# contour the gridded data, plotting dots at the nonuniform data points.\nCS = plt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')\nCS = plt.contourf(xi, yi, zi, 15, cmap=plt.cm.rainbow,\n vmax=abs(zi).max(), vmin=-abs(zi).max())\nplt.colorbar() # draw colorbar\n# plot data points.\nplt.scatter(x, y, marker='o', c='b', s=5, zorder=10)\nplt.xlim(-2, 2)\nplt.ylim(-2, 2)\nplt.title('griddata test (%d points)' % npts)\nplt.show()\n",
"Maps ... using Basemap\nIn order to create a map we have to first import some data. We are going to use NCEP reanalysis file from previous section:",
"from netCDF4 import Dataset\n\nf =Dataset('air.sig995.2012.nc')",
"Here we create netCDF variable objec for air (we would like to have acces to some of the attributes), but from lat and lon we import only data valies:",
"air = f.variables['air']\nlat = f.variables['lat'][:]\nlon = f.variables['lon'][:]",
"Easiest way to look at the array is imshow:",
"plt.imshow(air[0,:,:])\nplt.colorbar();",
"But we want some real map :) First convert data from air:",
"air_c = air[:] - 273.15",
"Our coordinate variables are vectors:",
"lat.shape",
"For the map we need 2d coordinate arrays. Convert lot lan to 2d:",
"lon2, lat2 = np.meshgrid(lon,lat)",
"Import Basemap - library for plotting 2D data on maps:",
"from mpl_toolkits.basemap import Basemap",
"Create Basemap instance (with certain characteristics) and convert lon lat to map coordinates",
"m = Basemap(projection='npstere',boundinglat=60,lon_0=0,resolution='l')\nx, y = m(lon2, lat2)",
"Creating the map now is only two lines:",
"m.drawcoastlines()\nm.contourf(x,y,air_c[0,:,:])",
"We can make the map look prettier by adding couple of lines:",
"fig = plt.figure(figsize=(15,7))\nm.fillcontinents(color='gray',lake_color='gray')\nm.drawcoastlines()\nm.drawparallels(np.arange(-80.,81.,20.))\nm.drawmeridians(np.arange(-180.,181.,20.))\nm.drawmapboundary(fill_color='white')\nm.contourf(x,y,air_c[0,:,:],40)\nplt.title('Monthly mean SAT')\nplt.colorbar()",
"You can change map characteristics by changin the Basemap instance:",
"m = Basemap(projection='ortho',lat_0=45,lon_0=-100,resolution='l')\nx, y = m(lon2, lat2)",
"While the rest of the code might be the same:",
"fig = plt.figure(figsize=(15,7))\n#m.fillcontinents(color='gray',lake_color='gray')\nm.drawcoastlines()\nm.drawparallels(np.arange(-80.,81.,20.))\nm.drawmeridians(np.arange(-180.,181.,20.))\nm.drawmapboundary(fill_color='white')\ncs = m.contourf(x,y,air_c[0,:,:],20)\nplt.title('Monthly mean SAT')",
"One more map exampe:",
"m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\\\n llcrnrlon=0,urcrnrlon=360,resolution='c')\nx, y = m(lon2, lat2)\n\nfig = plt.figure(figsize=(15,7))\n#m.fillcontinents(color='gray',lake_color='gray')\nm.drawcoastlines()\nm.drawparallels(np.arange(-80.,81.,20.))\nm.drawmeridians(np.arange(0.,360.,20.))\nm.drawmapboundary(fill_color='white')\ncs = m.contourf(x,y,air[0,:,:],20)\nplt.title('Monthly mean SAT')",
"Links:\n\nBasemap Example Gallery\nPyNGL Gallery\nggplot for python\nBokeh\nd3py"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jarrodmcc/OpenFermion
|
examples/openfermion_tutorial.ipynb
|
apache-2.0
|
[
"Introduction to OpenFermion\nNote that all the examples below must be run sequentially within a section.\nInitializing the FermionOperator data structure\nFermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators, $a^\\dagger_k$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $\\sigma^+k$ and $\\sigma^-_k$ but are distinguished by the canonical fermionic anticommutation relations, ${a^\\dagger_i, a^\\dagger_j} = {a_i, a_j} = 0$ and ${a_i, a_j^\\dagger} = \\delta{ij}$. Any weighted sums of products of these operators are represented with the FermionOperator data structure in OpenFermion. The following are examples of valid FermionOperators:\n$$\n\\begin{align}\n& a_1 \\nonumber \\\n& 1.7 a^\\dagger_3 \\nonumber \\\n&-1.7 \\, a^\\dagger_3 a_1 \\nonumber \\\n&(1 + 2i) \\, a^\\dagger_4 a^\\dagger_3 a_9 a_1 \\nonumber \\\n&(1 + 2i) \\, a^\\dagger_4 a^\\dagger_3 a_9 a_1 - 1.7 \\, a^\\dagger_3 a_1 \\nonumber\n\\end{align}\n$$\nThe FermionOperator class is contained in $\\textrm{ops/_fermion_operators.py}$. In order to support fast addition of FermionOperator instances, the class is implemented as hash table (python dictionary). The keys of the dictionary encode the strings of ladder operators and values of the dictionary store the coefficients. The strings of ladder operators are encoded as a tuple of 2-tuples which we refer to as the \"terms tuple\". Each ladder operator is represented by a 2-tuple. The first element of the 2-tuple is an int indicating the tensor factor on which the ladder operator acts. The second element of the 2-tuple is Boole: 1 represents raising and 0 represents lowering. For instance, $a^\\dagger_8$ is represented in a 2-tuple as $(8, 1)$. Note that indices start at 0 and the identity operator is an empty list. 
Below we give some examples of operators and their terms tuple:\n$$\n\\begin{align}\nI & \\mapsto () \\nonumber \\\na_1 & \\mapsto ((1, 0),) \\nonumber \\\na^\\dagger_3 & \\mapsto ((3, 1),) \\nonumber \\\na^\\dagger_3 a_1 & \\mapsto ((3, 1), (1, 0)) \\nonumber \\\na^\\dagger_4 a^\\dagger_3 a_9 a_1 & \\mapsto ((4, 1), (3, 1), (9, 0), (1, 0)) \\nonumber\n\\end{align}\n$$\nNote that when initializing a single ladder operator one should be careful to add the comma after the inner pair. This is because in python ((1, 2)) = (1, 2) whereas ((1, 2),) = ((1, 2),). The \"terms tuple\" is usually convenient when one wishes to initialize a term as part of a coded routine. However, the terms tuple is not particularly intuitive. Accordingly, OpenFermion also supports another user-friendly, string notation below. This representation is rendered when calling \"print\" on a FermionOperator.\n$$\n\\begin{align}\nI & \\mapsto \\textrm{\"\"} \\nonumber \\\na_1 & \\mapsto \\textrm{\"1\"} \\nonumber \\\na^\\dagger_3 & \\mapsto \\textrm{\"3^\"} \\nonumber \\\na^\\dagger_3 a_1 & \\mapsto \\textrm{\"3^}\\;\\textrm{1\"} \\nonumber \\\na^\\dagger_4 a^\\dagger_3 a_9 a_1 & \\mapsto \\textrm{\"4^}\\;\\textrm{3^}\\;\\textrm{9}\\;\\textrm{1\"} \\nonumber\n\\end{align}\n$$\nLet's initialize our first term! We do it two different ways below.",
"from openfermion.ops import FermionOperator\n\nmy_term = FermionOperator(((3, 1), (1, 0)))\nprint(my_term)\n\nmy_term = FermionOperator('3^ 1')\nprint(my_term)",
"The preferred way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All inplace operands (such as +=) modify classes whereas binary operands such as + create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initializes identity. The empty initializer FermionOperator() initializes the zero operator.",
"good_way_to_initialize = FermionOperator('3^ 1', -1.7)\nprint(good_way_to_initialize)\n\nbad_way_to_initialize = -1.7 * FermionOperator('3^ 1')\nprint(bad_way_to_initialize)\n\nidentity = FermionOperator('')\nprint(identity)\n\nzero_operator = FermionOperator()\nprint(zero_operator)",
"Note that FermionOperator has only one attribute: .terms. This attribute is the dictionary which stores the term tuples.",
"my_operator = FermionOperator('4^ 1^ 3 9', 1. + 2.j)\nprint(my_operator)\nprint(my_operator.terms)",
"Manipulating the FermionOperator data structure\nSo far we have explained how to initialize a single FermionOperator such as $-1.7 \\, a^\\dagger_3 a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \\, a^\\dagger_4 a^\\dagger_3 a_9 a_1 - 1.7 \\, a^\\dagger_3 a_1$. To do this, just add together two FermionOperators! We demonstrate below.",
"from openfermion.ops import FermionOperator\n\nterm_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)\nterm_2 = FermionOperator('3^ 1', -1.7)\nmy_operator = term_1 + term_2\nprint(my_operator)\n\nmy_operator = FermionOperator('4^ 3^ 9 1', 1. + 2.j)\nterm_2 = FermionOperator('3^ 1', -1.7)\nmy_operator += term_2\nprint('')\nprint(my_operator)",
"The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the inplace method +=, which is more efficient. This is especially important when trying to construct a very large FermionOperator. FermionOperators also support a wide range of builtins including, str(), repr(), ==, !=, =, , /, /=, +, +=, -, -=, - and **. Note that since FermionOperators involve floats, == and != check for (in)equality up to numerical precision. We demonstrate some of these methods below.",
"term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j)\nterm_2 = FermionOperator('3^ 1', -1.7)\n\nmy_operator = term_1 - 33. * term_2\nprint(my_operator)\n\nmy_operator *= 3.17 * (term_2 + term_1) ** 2\nprint('')\nprint(my_operator)\n\nprint('')\nprint(term_2 ** 3)\n\nprint('')\nprint(term_1 == 2.*term_1 - term_1)\nprint(term_1 == my_operator)",
"Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here.",
"from openfermion.utils import commutator, count_qubits, hermitian_conjugated, normal_ordered\n\n# Get the Hermitian conjugate of a FermionOperator, count its qubit, check if it is normal-ordered.\nterm_1 = FermionOperator('4^ 3 3^', 1. + 2.j)\nprint(hermitian_conjugated(term_1))\nprint(term_1.is_normal_ordered())\nprint(count_qubits(term_1))\n\n# Normal order the term.\nterm_2 = normal_ordered(term_1)\nprint('')\nprint(term_2)\nprint(term_2.is_normal_ordered())\n\n# Compute a commutator of the terms.\nprint('')\nprint(commutator(term_1, term_2))",
"The QubitOperator data structure\nThe QubitOperator data structure is another essential part of openfermion. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance $X_0 Z_3 Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0, \\textrm{\"X\"}), (3, \\textrm{\"Z\"}), (4, \\textrm{\"Y\"}))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below.",
"from openfermion.ops import QubitOperator\n\nmy_first_qubit_operator = QubitOperator('X1 Y2 Z3')\nprint(my_first_qubit_operator)\nprint(my_first_qubit_operator.terms)\n\noperator_2 = QubitOperator('X3 Z4', 3.17)\noperator_2 -= 77. * my_first_qubit_operator\nprint('')\nprint(operator_2)",
"Jordan-Wigner and Bravyi-Kitaev\nopenfermion provides functions for mapping FermionOperators to QubitOperators.",
"from openfermion.ops import FermionOperator\nfrom openfermion.transforms import jordan_wigner, bravyi_kitaev\nfrom openfermion.utils import eigenspectrum, hermitian_conjugated\n\n# Initialize an operator.\nfermion_operator = FermionOperator('2^ 0', 3.17)\nfermion_operator += hermitian_conjugated(fermion_operator)\nprint(fermion_operator)\n\n# Transform to qubits under the Jordan-Wigner transformation and print its spectrum.\njw_operator = jordan_wigner(fermion_operator)\nprint('')\nprint(jw_operator)\njw_spectrum = eigenspectrum(jw_operator)\nprint(jw_spectrum)\n\n# Transform to qubits under the Bravyi-Kitaev transformation and print its spectrum.\nbk_operator = bravyi_kitaev(fermion_operator)\nprint('')\nprint(bk_operator)\nbk_spectrum = eigenspectrum(bk_operator)\nprint(bk_spectrum)",
"We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries.",
"from openfermion.transforms import reverse_jordan_wigner\n\n# Initialize QubitOperator.\nmy_operator = QubitOperator('X0 Y1 Z2', 88.)\nmy_operator += QubitOperator('Z1 Z4', 3.17)\nprint(my_operator)\n\n# Map QubitOperator to a FermionOperator.\nmapped_operator = reverse_jordan_wigner(my_operator)\nprint('')\nprint(mapped_operator)\n\n# Map the operator back to qubits and make sure it is the same.\nback_to_normal = jordan_wigner(mapped_operator)\nback_to_normal.compress()\nprint('')\nprint(back_to_normal)",
"Sparse matrices and the Hubbard model\nOften, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There is code in both openfermion.transforms and openfermion.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There are numerous functions in openfermion.utils which one can call on the sparse operators such as \"get_gap\", \"get_hartree_fock_state\", \"get_ground_state\", etc. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the openfermion.hamiltonians module which constructs lattice models of fermions such as Hubbard models.",
"from openfermion.hamiltonians import fermi_hubbard\nfrom openfermion.transforms import get_sparse_operator, jordan_wigner\nfrom openfermion.utils import get_ground_state\n\n# Set model.\nx_dimension = 2\ny_dimension = 2\ntunneling = 2.\ncoulomb = 1.\nmagnetic_field = 0.5\nchemical_potential = 0.25\nperiodic = 1\nspinless = 1\n\n# Get fermion operator.\nhubbard_model = fermi_hubbard(\n x_dimension, y_dimension, tunneling, coulomb, chemical_potential,\n magnetic_field, periodic, spinless)\nprint(hubbard_model)\n\n# Get qubit operator under Jordan-Wigner.\njw_hamiltonian = jordan_wigner(hubbard_model)\njw_hamiltonian.compress()\nprint('')\nprint(jw_hamiltonian)\n\n# Get scipy.sparse.csc representation.\nsparse_operator = get_sparse_operator(hubbard_model)\nprint('')\nprint(sparse_operator)\nprint('\\nEnergy of the model is {} in units of T and J.'.format(\n get_ground_state(sparse_operator)[0]))",
"Hamiltonians in the plane wave basis\nA user can write plugins to openfermion which allow for the use of, e.g., third-party electronic structure package to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc using Gaussian basis sets. We may provide scripts which interface between such packages and openfermion in future but do not discuss them in this tutorial.\nWhen using simpler basis sets such as plane waves, these packages are not needed. openfermion comes with code which computes Hamiltonians in the plane wave basis. Note that when using plane waves, one is working with the periodized Coulomb operator, best suited for condensed phase calculations such as studying the electronic structure of a solid. To obtain these Hamiltonians one must choose to study the system without a spin degree of freedom (spinless), one must the specify dimension in which the calculation is performed (n_dimensions, usually 3), one must specify how many plane waves are in each dimension (grid_length) and one must specify the length scale of the plane wave harmonics in each dimension (length_scale) and also the locations and charges of the nuclei. One can generate these models with plane_wave_hamiltonian() found in openfermion.hamiltonians. For simplicity, below we compute the Hamiltonian in the case of zero external charge (corresponding to the uniform electron gas, aka jellium). We also demonstrate that one can transform the plane wave Hamiltonian using a Fourier transform without effecting the spectrum of the operator.",
"from openfermion.hamiltonians import jellium_model\nfrom openfermion.utils import eigenspectrum, fourier_transform, Grid\nfrom openfermion.transforms import jordan_wigner\n\n# Let's look at a very small model of jellium in 1D.\ngrid = Grid(dimensions=1, length=3, scale=1.0)\nspinless = True\n\n# Get the momentum Hamiltonian.\nmomentum_hamiltonian = jellium_model(grid, spinless)\nmomentum_qubit_operator = jordan_wigner(momentum_hamiltonian)\nmomentum_qubit_operator.compress()\nprint(momentum_qubit_operator)\n\n# Fourier transform the Hamiltonian to the position basis.\nposition_hamiltonian = fourier_transform(momentum_hamiltonian, grid, spinless)\nposition_qubit_operator = jordan_wigner(position_hamiltonian)\nposition_qubit_operator.compress()\nprint('')\nprint (position_qubit_operator)\n\n# Check the spectra to make sure these representations are iso-spectral.\nspectral_difference = eigenspectrum(momentum_qubit_operator) - eigenspectrum(position_qubit_operator)\nprint('')\nprint(spectral_difference)",
"Basics of MolecularData class\nData from electronic structure calculations can be saved in an OpenFermion data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (e.g. one needs to mess around with different types of SCF routines to make things converge). Accordingly, we anticipate that users will want some way to automatically database the results of their electronic structure calculations so that important data (such as the SCF integrals) can be looked up on-the-fly if the user has computed them in the past. OpenFermion supports a data provenance strategy which saves key results of the electronic structure calculation (including pointers to files containing large amounts of data, such as the molecular integrals) in an HDF5 container.\nThe MolecularData class stores information about molecules. One initializes a MolecularData object by specifying parameters of a molecule such as its geometry, basis, multiplicity, charge and an optional string describing it. One can also initialize MolecularData simply by providing a string giving a filename where a previous MolecularData object was saved in an HDF5 container. One can save a MolecularData instance by calling the class's .save() method. This automatically saves the instance in a data folder specified during OpenFermion installation. The name of the file is generated automatically from the instance attributes and optionally provided description. Alternatively, a filename can also be provided as an optional input if one wishes to manually name the file.\nWhen electronic structure calculations are run, the data files for the molecule can be automatically updated. 
If one wishes to later use that data they either initialize MolecularData with the instance filename or initialize the instance and then later call the .load() method.\nBasis functions are provided to initialization using a string such as \"6-31g\". Geometries can be specified using a simple txt input file (see geometry_from_file function in molecular_data.py) or can be passed using a simple python list format demonstrated below. Atoms are specified using a string for their atomic symbol. Distances should be provided in angstrom. Below we initialize a simple instance of MolecularData without performing any electronic structure calculations.",
"from openfermion.hamiltonians import MolecularData\n\n# Set parameters to make a simple molecule.\ndiatomic_bond_length = .7414\ngeometry = [('H', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]\nbasis = 'sto-3g'\nmultiplicity = 1\ncharge = 0\ndescription = str(diatomic_bond_length)\n\n# Make molecule and print out a few interesting facts about it.\nmolecule = MolecularData(geometry, basis, multiplicity,\n charge, description)\nprint('Molecule has automatically generated name {}'.format(\n molecule.name))\nprint('Information about this molecule would be saved at:\\n{}\\n'.format(\n molecule.filename))\nprint('This molecule has {} atoms and {} electrons.'.format(\n molecule.n_atoms, molecule.n_electrons))\nfor atom, atomic_number in zip(molecule.atoms, molecule.protons):\n print('Contains {} atom, which has {} protons.'.format(\n atom, atomic_number))",
"If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in OpenFermion's MolecularData objects. Currently plugins are available for Psi4 (OpenFermion-Psi4) and PySCF (OpenFermion-PySCF), and there may be more in the future. For the purposes of this example, we will load data that ships with OpenFermion to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in openfermion.utils.",
"# Set molecule parameters.\nbasis = 'sto-3g'\nmultiplicity = 1\nbond_length_interval = 0.1\nn_points = 25\n\n# Generate molecule at different bond lengths.\nhf_energies = []\nfci_energies = []\nbond_lengths = []\nfor point in range(3, n_points + 1):\n bond_length = bond_length_interval * point\n bond_lengths += [bond_length]\n description = str(round(bond_length,2))\n print(description)\n geometry = [('H', (0., 0., 0.)), ('H', (0., 0., bond_length))]\n molecule = MolecularData(\n geometry, basis, multiplicity, description=description)\n \n # Load data.\n molecule.load()\n\n # Print out some results of calculation.\n print('\\nAt bond length of {} angstrom, molecular hydrogen has:'.format(\n bond_length))\n print('Hartree-Fock energy of {} Hartree.'.format(molecule.hf_energy))\n print('MP2 energy of {} Hartree.'.format(molecule.mp2_energy))\n print('FCI energy of {} Hartree.'.format(molecule.fci_energy))\n print('Nuclear repulsion energy between protons is {} Hartree.'.format(\n molecule.nuclear_repulsion))\n for orbital in range(molecule.n_orbitals):\n print('Spatial orbital {} has energy of {} Hartree.'.format(\n orbital, molecule.orbital_energies[orbital]))\n hf_energies += [molecule.hf_energy]\n fci_energies += [molecule.fci_energy]\n\n# Plot.\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.figure(0)\nplt.plot(bond_lengths, fci_energies, 'x-')\nplt.plot(bond_lengths, hf_energies, 'o-')\nplt.ylabel('Energy in Hartree')\nplt.xlabel('Bond length in angstrom')\nplt.show()",
"The geometry data needed to generate MolecularData can also be retreived from the PubChem online database by inputting the molecule's name.",
"from openfermion.utils import geometry_from_pubchem\n\nmethane_geometry = geometry_from_pubchem('methane')\nprint(methane_geometry)",
"InteractionOperator and InteractionRDM for efficient numerical representations\nFermion Hamiltonians can be expressed as $H = h_0 + \\sum_{pq} h_{pq}\\, a^\\dagger_p a_q + \\frac{1}{2} \\sum_{pqrs} h_{pqrs} \\, a^\\dagger_p a^\\dagger_q a_r a_s$ where $h_0$ is a constant shift due to the nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is thus a unique function of the one-particle and two-particle reduced density matrices which are expressed in second quantization as $\\rho_{pq} = \\left \\langle p \\mid a^\\dagger_p a_q \\mid q \\right \\rangle$ and $\\rho_{pqrs} = \\left \\langle pq \\mid a^\\dagger_p a^\\dagger_q a_r a_s \\mid rs \\right \\rangle$, respectively.\nBecause the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $\\rho_{pq}$) and $h_{pqrs}$ (or $\\rho_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() (found in openfermion.transforms) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space.\nThese classes inherit from the same base class, PolynomialTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator: $\\textrm{.constant}$, $\\textrm{.one_body_coefficients}$ and $\\textrm{.two_body_coefficients}$ . 
For instance, InteractionOperator[(p, 1), (q, 1), (r, 0), (s, 0)] would return $h_{pqrs}$ and InteractionRDM would return $\\rho_{pqrs}$. Importantly, the class supports fast basis transformations using the method PolynomialTensor.rotate_basis(rotation_matrix).\nBut perhaps most importantly, one can map the InteractionOperator to any of the other data structures we've described here.\nBelow, we load MolecularData from a saved calculation of LiH. We then obtain an InteractionOperator representation of this system in an active space. We then map that operator to qubits. We then demonstrate that one can rotate the orbital basis of the InteractionOperator using random angles to obtain a totally different operator that is still iso-spectral.",
"from openfermion.hamiltonians import MolecularData\nfrom openfermion.transforms import get_fermion_operator, get_sparse_operator, jordan_wigner\nfrom openfermion.utils import get_ground_state\nimport numpy\nimport scipy\nimport scipy.linalg\n\n# Load saved file for LiH.\ndiatomic_bond_length = 1.45\ngeometry = [('Li', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]\nbasis = 'sto-3g'\nmultiplicity = 1\n\n# Set Hamiltonian parameters.\nactive_space_start = 1\nactive_space_stop = 3\n\n# Generate and populate instance of MolecularData.\nmolecule = MolecularData(geometry, basis, multiplicity, description=\"1.45\")\nmolecule.load()\n\n# Get the Hamiltonian in an active space.\nmolecular_hamiltonian = molecule.get_molecular_hamiltonian(\n occupied_indices=range(active_space_start),\n active_indices=range(active_space_start, active_space_stop))\n\n# Map operator to fermions and qubits.\nfermion_hamiltonian = get_fermion_operator(molecular_hamiltonian)\nqubit_hamiltonian = jordan_wigner(fermion_hamiltonian)\nqubit_hamiltonian.compress()\nprint('The Jordan-Wigner Hamiltonian in canonical basis follows:\\n{}'.format(qubit_hamiltonian))\n\n# Get sparse operator and ground state energy.\nsparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)\nenergy, state = get_ground_state(sparse_hamiltonian)\nprint('Ground state energy before rotation is {} Hartree.\\n'.format(energy))\n\n# Randomly rotate.\nn_orbitals = molecular_hamiltonian.n_qubits // 2\nn_variables = int(n_orbitals * (n_orbitals - 1) / 2)\nnumpy.random.seed(1)\nrandom_angles = numpy.pi * (1. - 2. 
* numpy.random.rand(n_variables))\nkappa = numpy.zeros((n_orbitals, n_orbitals))\nindex = 0\nfor p in range(n_orbitals):\n for q in range(p + 1, n_orbitals):\n kappa[p, q] = random_angles[index]\n kappa[q, p] = -numpy.conjugate(random_angles[index])\n index += 1\n\n # Build the unitary rotation matrix.\n difference_matrix = kappa + kappa.transpose()\n rotation_matrix = scipy.linalg.expm(kappa)\n\n # Apply the unitary.\n molecular_hamiltonian.rotate_basis(rotation_matrix)\n\n# Get qubit Hamiltonian in rotated basis.\nqubit_hamiltonian = jordan_wigner(molecular_hamiltonian)\nqubit_hamiltonian.compress()\nprint('The Jordan-Wigner Hamiltonian in rotated basis follows:\\n{}'.format(qubit_hamiltonian))\n\n# Get sparse Hamiltonian and energy in rotated basis.\nsparse_hamiltonian = get_sparse_operator(qubit_hamiltonian)\nenergy, state = get_ground_state(sparse_hamiltonian)\nprint('Ground state energy after rotation is {} Hartree.'.format(energy))",
"Quadratic Hamiltonians and Slater determinants\nThe general electronic structure Hamiltonian\n$H = h_0 + \\sum_{pq} h_{pq}\\, a^\\dagger_p a_q + \\frac{1}{2} \\sum_{pqrs} h_{pqrs} \\, a^\\dagger_p a^\\dagger_q a_r a_s$ contains terms that act on up to 4 sites, or\nis quartic in the fermionic creation and annihilation operators. However, in many situations\nwe may fruitfully approximate these Hamiltonians by replacing these quartic terms with\nterms that act on at most 2 fermionic sites, or quadratic terms, as in mean-field approximation theory.\nThese Hamiltonians have a number of\nspecial properties one can exploit for efficient simulation and manipulation of the Hamiltonian, thus\nwarranting a special data structure. We refer to Hamiltonians which\nonly contain terms that are quadratic in the fermionic creation and annihilation operators\nas quadratic Hamiltonians, and include the general case of non-particle conserving terms as in\na general Bogoliubov transformation. Eigenstates of quadratic Hamiltonians can be prepared\nefficiently on both a quantum and classical computer, making them amenable to initial guesses for\nmany more challenging problems.\nA general quadratic Hamiltonian takes the form\n$$H = \\sum_{p, q} (M_{pq} - \\mu \\delta_{pq}) a^\\dagger_p a_q + \\frac{1}{2} \\sum_{p, q} (\\Delta_{pq} a^\\dagger_p a^\\dagger_q + \\Delta_{pq}^* a_q a_p) + \\text{constant},$$\nwhere $M$ is a Hermitian matrix, $\\Delta$ is an antisymmetric matrix,\n$\\delta_{pq}$ is the Kronecker delta symbol, and $\\mu$ is a chemical\npotential term which we keep separate from $M$ so that we can use it\nto adjust the expectation of the total number of particles.\nIn OpenFermion, quadratic Hamiltonians are conveniently represented and manipulated\nusing the QuadraticHamiltonian class, which stores $M$, $\\Delta$, $\\mu$ and the constant. It is specialized to exploit the properties unique to quadratic Hamiltonians. 
Like InteractionOperator and InteractionRDM, it inherits from the PolynomialTensor class.\nThe BCS mean-field model of superconductivity is a quadratic Hamiltonian. The following code constructs an instance of this model as a FermionOperator, converts it to a QuadraticHamiltonian, and then computes its ground energy:",
"from openfermion.hamiltonians import mean_field_dwave\nfrom openfermion.transforms import get_quadratic_hamiltonian\n\n# Set model.\nx_dimension = 2\ny_dimension = 2\ntunneling = 2.\nsc_gap = 1.\nperiodic = True\n\n# Get FermionOperator.\nmean_field_model = mean_field_dwave(\n x_dimension, y_dimension, tunneling, sc_gap, periodic=periodic)\n\n# Convert to QuadraticHamiltonian\nquadratic_hamiltonian = get_quadratic_hamiltonian(mean_field_model)\n\n# Compute the ground energy\nground_energy = quadratic_hamiltonian.ground_energy()\nprint(ground_energy)",
"Any quadratic Hamiltonian may be rewritten in the form\n$$H = \\sum_p \\varepsilon_p b^\\dagger_p b_p + \\text{constant},$$\nwhere the $b_p$ are new annihilation operators that satisfy the fermionic anticommutation relations, and which are linear combinations of the old creation and annihilation operators. This form of $H$ makes it easy to deduce its eigenvalues; they are sums of subsets of the $\\varepsilon_p$, which we call the orbital energies of $H$. The following code computes the orbital energies and the constant:",
"orbital_energies, constant = quadratic_hamiltonian.orbital_energies()\nprint(orbital_energies)\nprint()\nprint(constant)",
"Eigenstates of quadratic hamiltonians are also known as fermionic Gaussian states, and they can be prepared efficiently on a quantum computer. One can use OpenFermion to obtain circuits for preparing these states. The following code obtains the description of a circuit which prepares the ground state (operations that can be performed in parallel are grouped together), along with a description of the starting state to which the circuit should be applied:",
"from openfermion.utils import gaussian_state_preparation_circuit\n\ncircuit_description, start_orbitals = gaussian_state_preparation_circuit(quadratic_hamiltonian)\nfor parallel_ops in circuit_description:\n print(parallel_ops)\nprint('')\nprint(start_orbitals)",
"In the circuit description, each elementary operation is either a tuple of the form $(i, j, \\theta, \\varphi)$, indicating the operation $\\exp[i \\varphi a_j^\\dagger a_j]\\exp[\\theta (a_i^\\dagger a_j - a_j^\\dagger a_i)]$, which is a Givens rotation of modes $i$ and $j$, or the string 'pht', indicating the particle-hole transformation on the last fermionic mode, which is the operator $\\mathcal{B}$ such that $\\mathcal{B} a_N \\mathcal{B}^\\dagger = a_N^\\dagger$ and leaves the rest of the ladder operators unchanged. Operations that can be performed in parallel are grouped together.\nIn the special case that a quadratic Hamiltonian conserves particle number ($\\Delta = 0$), its eigenstates take the form\n$$\\lvert \\Psi_S \\rangle = b^\\dagger_{1}\\cdots b^\\dagger_{N_f}\\lvert \\text{vac} \\rangle,\\qquad\nb^\\dagger_{p} = \\sum_{k=1}^N Q_{pq}a^\\dagger_q,$$\nwhere $Q$ is an $N_f \\times N$ matrix with orthonormal rows. These states are also known as Slater determinants. OpenFermion also provides functionality to obtain circuits for preparing Slater determinants starting with the matrix $Q$ as the input."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |