| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (string, 15 classes) | cells (list) | types (list) |
|---|---|---|---|---|
vierth/chinese_stylometry | Stanford DH Asia Python Basics.ipynb | gpl-3.0 |
[
"Digital Humanities Asia Workshop\nStylometerics and Genre Research in Imperial Chinese Studies\nPython Basics\nPaul Vierthaler, Boston College\n@pvierth, vierthal@bc.edu\nPython Basics\nWe will need to know a little bit about Python in order for the following tutorial to make sense. This will not be a comprehensive introduction to Python. It will be just enough to get us started.\nWe are using Python 3.\nMany people are still using Python 2.7. We are using 3 because it is easier to use to study Chinese language. Python 2.7 is no longer being updated (aside from security updates).\nPython is case sensitive.\nA is not the same thing as a.\nPython doesn't like Chinese punctuation.\nYou will need to use quotation marks throughout your code. Be careful, if you type commas or quotation marks while typing in Chinese, Python will not know how to handle them.\nA # (hashtag) starts a comment.\nThis allows you to tell someone who is reading your code what it does.\nVariables\nA variable can be thought of as a way to contain information. You have to create them in order to use them. They can store any type of information in Python. We can call them ALMOST anything we want. We should avoid reserved words in Python (if, or, file, etc.). It should start with a lowercase letter and not an uppercase one or a number.\nVariables allow us to save information to reuse it later.",
"# You can store integers\nx = 10\n\n# You can store strings\ny = \"Hi, my name is Paul\"\n\n# A variable can be as long as you like. It is best to use variable names\n# that express what the variable is.\nlong_variable_names_work_too = 1.3\n\nhi = 'hello'",
"Printing\nThis statement allows you to print (display) something to the console. This is not sending anything to your printer.",
"print(\"It will change\")",
"A Few Data Types\nIntegers\nWhole numbers. This behave a bit differently than expected when you divide them. Integer division drops the remainder. By default, Python 3 performs float division. Two slashes (//) allow for Integer division (we will see it later).",
"# Here are some integers:\n2\n5\n5000\n\n# Here is some regular division:\n5/2\n\n# Here is some integer division:\n5//2",
"Floats\nFloating Point Numbers (decimal numbers). These need to be treated with care as well, as they are estimations of precise numbers. Remember, under the hood, the computer uses binary.",
"# Here are some floating point numbers:\n1.4\n200.12\n.008\n\n# Here is some floating point number division:\n1000.15/13",
"Note the trailing numbers. They are not extremely precise. Be careful\nStrings\nWords. These are denoted using either single or double quotation marks.",
"\"This is a string.\"\n'This is also a string.'",
"Getting a substring\nYou can get a single character, or a substring by refering the index of the desired characters by number. Python is 0 index (meaning 0 is the first place, not 1).",
"my_string = \"This is my string.\"\nprint(my_string[0])\nprint(my_string[11:15])\nprint(my_string[-4:])",
"Boolean Values\nTrue and False are important values. They allow us to create logic in our programs.\nChecking Boolean Values\n<\nis less than\n>\nis greater than\n<=\nis less than or equal to\n>=\nis greater than or equal to\n==\nis equal to\n!=\nis not equal to",
"print(1<5)\nprint(2>5)\nprint(4==4)",
"Lists\nThis is an object that can store information. It is ordered and very useful. Denoted with square brackets.",
"# This is an empty list:\n[]\n\n# This is a list with some information.\n[1, 2, 3, 4, 5, 6]",
"Retriving Information From Lists\nYou can get information out of a list by calling an item's index. Python is 0 indexed (meaning the first element is a 0 not a 1).",
"numbers = [1,2,3,4,5,6]\nprint(numbers[0])",
"Dictionaries\nDictionaries also store information but are not ordered. They use keys to refer to values. They are denoted with curly brackets.",
"# This is an empty list:\n{}\n\n# This is a list with some information:\n{\"Independence Day\":\"July 4th\", \"Halloween\":\"October 31st\", \"Labor Day 2016\":\"September 6th\"}",
"Retreiving information from a dictionary",
"holiday_dates = {\"Independence Day\":\"July 4th\", \"Halloween\":\"October 31st\", \"Labor Day 2016\":\"September 6th\"}\n\nprint(holiday_dates[\"Halloween\"])",
"Python Structure\nPython uses indentation to denote code blocks. Some languages use keywords, like \"end\"\nLoops\nloops allow you to run the same piece of code over and over.\nWhile loops\nWhile a statment is true, execute the code inside the block:",
"i = 0\nwhile i < 4:\n print(i)\n \n # Increase i by one. This can also be written i += 1\n i = i + 1",
"For loops\nIterate through each item in a list (or other enumerable object).",
"animals = [\"tiger\", \"lion\", \"monkey\", \"pig\"]\nfor animal in animals:\n print(animal)",
"Libraries\nWe can use code other people have written by importing libraries. These extend the basic functionality of Python. They are not automatically imported into the namespace for efficiency reasons. Anaconda comes with many libraries that will make our life much easier. We will import them as needed",
"import math, os, re"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
waltervh/BornAgain-tutorial | talks/day_3/advanced_geometry_M/Offspecular.ipynb | gpl-3.0 |
[
"Off-Specular simulation\nOff-specular simulation is a technique developed to study roughness or micromagnetism at micrometric scale [1]. For the moment BornAgain has only the limited support for off-specular simulation. User feedback is required to continue development.\n\nOff-specular Geometry [1]\nThe term off-specular scattering is typically ised for experiment geometries where $\\mathbf{q}$ is not strictly perpendicular to the sample surface. Following features can be encountered in off-specular scattering experiment: Yoneda peaks, Bragg sheets, diffuse scattering, magnetic spin-flip scattering, and correlated and uncorrelated roughness [2].\n\nCreate an off-specular simulation in BornAgain GUI\n\nStart a new project Welcome view->New project\nGo to the Instrument view and add an Offspec instrument.\nSet the instrument parameters as follows.\n \nSwitch to the Sample view. Create a sample as shown below:\n \n Create 4 layers (from bottom to top): \nSi substrate, $\\delta=7.6\\cdot 10^{-6}$, $\\beta=1.7\\cdot 10^{-7}$. Assign roughness with Sigma 0.46 nm, Hurst parameter 0.5 and CorrelationLength 100 nm.\nNb layer of thickness 5.8 nm, $\\delta=2.4\\cdot 10^{-5}$, $\\beta=1.5\\cdot 10^{-6}$. No roughness.\nSi layer of thickness 3 nm. No roughness.\nAir layer.\n\n\nSwitch to Simulation view. Set option Include specular peak to Yes.\n \n\nRun simulation. Vary the intensity scale. You should be able to see the specular line and Yonedas. To see the Bragg sheets we need to increase a number of [Si/Nb] double-layers to at least 10. Let's do it in Python.\nOff-specular simulation with BornAgain Python API\nGo to Simulation View and click button Export to Python script. Save the script somewhere. The script should look as shown below.\nExercise:\nChange the script to add layer_2 and layer_3 10 times. Hint: use for loop, take care of indentations.\nExercise (Advanced)\nAdd exponentially decreasing roughness to all Si layers (except of substrate). The RMS roughness of the layer $n$ should be calculated as\n$$\\sigma_n = \\sigma_0\\cdot e^{-0.01n}$$\nwhere $\\sigma_0=0.46$nm\nSet the roughness of all the layers to be fully correlated.",
"%matplotlib inline\n\n# %load offspec_ex.py\nimport numpy as np\nimport bornagain as ba\nfrom bornagain import deg, angstrom, nm, kvector_t\n\ndef get_sample():\n # Defining Materials\n material_1 = ba.HomogeneousMaterial(\"Air\", 0.0, 0.0)\n material_2 = ba.HomogeneousMaterial(\"Si\", 7.6e-06, 1.7e-07)\n material_3 = ba.HomogeneousMaterial(\"Nb\", 2.4e-05, 1.5e-06)\n\n # Defining Layers\n layer_1 = ba.Layer(material_1)\n layer_2 = ba.Layer(material_2, 3)\n layer_3 = ba.Layer(material_3, 5.8)\n layer_4 = ba.Layer(material_2)\n\n # Defining Roughness Parameters\n layerRoughness_1 = ba.LayerRoughness(0.46, 0.5, 10.0*nm)\n\n # Defining Multilayers\n multiLayer_1 = ba.MultiLayer()\n # uncomment the line below to add vertical cross correlation length\n # multiLayer_1.setCrossCorrLength(200)\n multiLayer_1.addLayer(layer_1)\n #=================================\n # put your code here \n multiLayer_1.addLayer(layer_2)\n multiLayer_1.addLayer(layer_3)\n #==================================\n multiLayer_1.addLayerWithTopRoughness(layer_4, layerRoughness_1)\n return multiLayer_1\n\n\ndef get_simulation():\n simulation = ba.OffSpecSimulation()\n simulation.setDetectorParameters(10, -1.0*deg, 1.0*deg, 100, 0.0*deg, 5*deg)\n \n simulation.setDetectorResolutionFunction(ba.ResolutionFunction2DGaussian(0.005*deg, 0.005*deg))\n alpha_i_axis = ba.FixedBinAxis(\"alpha_i\", 100, 0.0*deg, 5*deg)\n simulation.setBeamParameters(0.154*nm, alpha_i_axis, 0.0*deg)\n simulation.setBeamIntensity(1.0e+08)\n simulation.getOptions().setIncludeSpecular(True)\n return simulation\n\n\ndef run_simulation():\n sample = get_sample()\n simulation = get_simulation()\n simulation.setSample(sample)\n simulation.runSimulation()\n return simulation.result()\n\n\nif __name__ == '__main__': \n result = run_simulation()\n ba.plot_simulation_result(result, intensity_max=10.0)\n",
"Solution\nRun the line below to see the solution.",
"%load offspec.py",
"References\n[1] Daillant, J., Gibaud, A. (Eds.), X-ray and Neutron Reflectivity: Principles and Applications, Lect. Notes Phys. 770 (Springer, Berlin Heidelberg 2009), DOI 10.1007978-3-540-88588-7\n[2] Ott, F. & Kozhevnikov, S. (2011). J. Appl. Cryst. 44, 359-369."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
BrownDwarf/ApJdataFrames | notebooks/Muzic2014.ipynb | mit |
[
"ApJdataFrames Muzic2014\nTitle: SUBSTELLAR OBJECTS IN NEARBY YOUNG CLUSTERS (SONYC). VIII. SUBSTELLAR POPULATION IN LUPUS 3\nAuthors: Muzic et al.\nData is from this paper:\nhttp://iopscience.iop.org/0004-637X/785/2/159/article#apj492858t1",
"%pylab inline\n\nimport seaborn as sns\nsns.set_context(\"notebook\", font_scale=1.5)\n\n#import warnings\n#warnings.filterwarnings(\"ignore\")\n\nimport pandas as pd",
"Table 2 - Photometry and Spectral Types for the Objects of Spectral Type M, or Slightly Earlier, Identified Toward Lupus 3\nInternet is still not working",
"tbl2 = pd.read_clipboard(#\"http://iopscience.iop.org/0004-637X/785/2/159/suppdata/apj492858t2_ascii.txt\",\n sep='\\t', skiprows=[0,1,2,4], skipfooter=7, engine='python', na_values=\" sdotsdotsdot\", usecols=range(15))\ntbl2.rename(columns={\"Unnamed: 6\": \"Av_phot\", \"Unnamed: 8\":\"Av_spec\"}, inplace=True)\ntbl2\n\n! mkdir ../data/Muzic2014\n\ntbl2.to_csv(\"../data/Muzic2014/tbl2.csv\", index=False)",
"Table 4-\nhttp://iopscience.iop.org/0004-637X/785/2/159/suppdata/apj492858t4_ascii.txt",
"tbl4 = pd.read_clipboard(#\"http://iopscience.iop.org/0004-637X/785/2/159/suppdata/apj492858t4_ascii.txt\",\n sep='\\t', skiprows=[0,1,2,4], skipfooter=2, engine='python', na_values=\" sdotsdotsdot\", usecols=range(8))\ntbl4\n\ntbl4.to_csv(\"../data/Muzic2014/tbl4.csv\", index=False)",
"Script finished."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | 0.22/_downloads/59a29cf7eb53c7ab95857dfb2e3b31ba/plot_40_sensor_locations.ipynb | bsd-3-clause |
[
"%matplotlib inline",
"Working with sensor locations\nThis tutorial describes how to read and plot sensor locations, and how\nthe physical location of sensors is handled in MNE-Python.\n :depth: 2\nAs usual we'll start by importing the modules we need and loading some\nexample data <sample-dataset>:",
"import os\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D # noqa\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, preload=True, verbose=False)",
"About montages and layouts\n:class:Montages <mne.channels.DigMontage> contain sensor\npositions in 3D (x, y, z, in meters), and can be used to set\nthe physical positions of sensors. By specifying the location of sensors\nrelative to the brain, :class:Montages <mne.channels.DigMontage> play an\nimportant role in computing the forward solution and computing inverse\nestimates.\nIn contrast, :class:Layouts <mne.channels.Layout> are idealized 2-D\nrepresentations of sensor positions, and are primarily used for arranging\nindividual sensor subplots in a topoplot, or for showing the approximate\nrelative arrangement of sensors as seen from above.\nWorking with built-in montages\nThe 3D coordinates of MEG sensors are included in the raw recordings from MEG\nsystems, and are automatically stored in the info attribute of the\n:class:~mne.io.Raw file upon loading. EEG electrode locations are much more\nvariable because of differences in head shape. Idealized montages for many\nEEG systems are included during MNE-Python installation; these files are\nstored in your mne-python directory, in the\n:file:mne/channels/data/montages folder:",
"montage_dir = os.path.join(os.path.dirname(mne.__file__),\n 'channels', 'data', 'montages')\nprint('\\nBUILT-IN MONTAGE FILES')\nprint('======================')\nprint(sorted(os.listdir(montage_dir)))",
".. sidebar:: Computing sensor locations\nIf you are interested in how standard (\"idealized\") EEG sensor positions\nare computed on a spherical head model, the `eeg_positions`_ repository\nprovides code and documentation to this end.\n\nThese built-in EEG montages can be loaded via\n:func:mne.channels.make_standard_montage. Note that when loading via\n:func:~mne.channels.make_standard_montage, provide the filename without\nits file extension:",
"ten_twenty_montage = mne.channels.make_standard_montage('standard_1020')\nprint(ten_twenty_montage)",
"Once loaded, a montage can be applied to data via one of the instance methods\nsuch as :meth:raw.set_montage <mne.io.Raw.set_montage>. It is also possible\nto skip the loading step by passing the filename string directly to the\n:meth:~mne.io.Raw.set_montage method. This won't work with our sample\ndata, because it's channel names don't match the channel names in the\nstandard 10-20 montage, so these commands are not run here:",
"# these will be equivalent:\n# raw_1020 = raw.copy().set_montage(ten_twenty_montage)\n# raw_1020 = raw.copy().set_montage('standard_1020')",
":class:Montage <mne.channels.DigMontage> objects have a\n:meth:~mne.channels.DigMontage.plot method for visualization of the sensor\nlocations in 3D; 2D projections are also possible by passing\nkind='topomap':",
"fig = ten_twenty_montage.plot(kind='3d')\nfig.gca().view_init(azim=70, elev=15)\nten_twenty_montage.plot(kind='topomap', show_names=False)",
"Controlling channel projection (MNE vs EEGLAB)\nChannel positions in 2d space are obtained by projecting their actual 3d\npositions using a sphere as a reference. Because 'standard_1020' montage\ncontains realistic, not spherical, channel positions, we will use a different\nmontage to demonstrate controlling how channels are projected to 2d space.",
"biosemi_montage = mne.channels.make_standard_montage('biosemi64')\nbiosemi_montage.plot(show_names=False)",
"By default a sphere with an origin in (0, 0, 0) x, y, z coordinates and\nradius of 0.095 meters (9.5 cm) is used. You can use a different sphere\nradius by passing a single value to sphere argument in any function that\nplots channels in 2d (like :meth:~mne.channels.DigMontage.plot that we use\nhere, but also for example :func:mne.viz.plot_topomap):",
"biosemi_montage.plot(show_names=False, sphere=0.07)",
"To control not only radius, but also the sphere origin, pass a\n(x, y, z, radius) tuple to sphere argument:",
"biosemi_montage.plot(show_names=False, sphere=(0.03, 0.02, 0.01, 0.075))",
"In mne-python the head center and therefore the sphere center are calculated\nusing fiducial points. Because of this the head circle represents head\ncircumference at the nasion and ear level, and not where it is commonly\nmeasured in 10-20 EEG system: above nasion at T4/T8, T3/T7, Oz, Fz level.\nNotice below that by default T7 and Oz channels are placed within the head\ncircle, not on the head outline:",
"biosemi_montage.plot()",
"If you have previous EEGLAB experience you may prefer its convention to\nrepresent 10-20 head circumference with the head circle. To get EEGLAB-like\nchannel layout you would have to move the sphere origin a few centimeters\nup on the z dimension:",
"biosemi_montage.plot(sphere=(0, 0, 0.035, 0.094))",
"Instead of approximating the EEGLAB-esque sphere location as above, you can\ncalculate the sphere origin from position of Oz, Fpz, T3/T7 or T4/T8\nchannels. This is easier once the montage has been applied to the data and\nchannel positions are in the head space - see\nthis example <ex-topomap-eeglab-style>.\nReading sensor digitization files\nIn the sample data, setting the digitized EEG montage was done prior to\nsaving the :class:~mne.io.Raw object to disk, so the sensor positions are\nalready incorporated into the info attribute of the :class:~mne.io.Raw\nobject (see the documentation of the reading functions and\n:meth:~mne.io.Raw.set_montage for details on how that works). Because of\nthat, we can plot sensor locations directly from the :class:~mne.io.Raw\nobject using the :meth:~mne.io.Raw.plot_sensors method, which provides\nsimilar functionality to\n:meth:montage.plot() <mne.channels.DigMontage.plot>.\n:meth:~mne.io.Raw.plot_sensors also allows channel selection by type, can\ncolor-code channels in various ways (by default, channels listed in\nraw.info['bads'] will be plotted in red), and allows drawing into an\nexisting matplotlib axes object (so the channel positions can easily be\nmade as a subplot in a multi-panel figure):",
"fig = plt.figure()\nax2d = fig.add_subplot(121)\nax3d = fig.add_subplot(122, projection='3d')\nraw.plot_sensors(ch_type='eeg', axes=ax2d)\nraw.plot_sensors(ch_type='eeg', axes=ax3d, kind='3d')\nax3d.view_init(azim=70, elev=15)",
"It's probably evident from the 2D topomap above that there is some\nirregularity in the EEG sensor positions in the sample dataset\n<sample-dataset> — this is because the sensor positions in that dataset are\ndigitizations of the sensor positions on an actual subject's head, rather\nthan idealized sensor positions based on a spherical head model. Depending on\nwhat system was used to digitize the electrode positions (e.g., a Polhemus\nFastrak digitizer), you must use different montage reading functions (see\ndig-formats). The resulting :class:montage <mne.channels.DigMontage>\ncan then be added to :class:~mne.io.Raw objects by passing it to the\n:meth:~mne.io.Raw.set_montage method (just as we did above with the name of\nthe idealized montage 'standard_1020'). Once loaded, locations can be\nplotted with :meth:~mne.channels.DigMontage.plot and saved with\n:meth:~mne.channels.DigMontage.save, like when working with a standard\nmontage.\n<div class=\"alert alert-info\"><h4>Note</h4><p>When setting a montage with :meth:`~mne.io.Raw.set_montage`\n the measurement info is updated in two places (the ``chs``\n and ``dig`` entries are updated). See `tut-info-class`.\n ``dig`` may contain HPI, fiducial, or head shape points in\n addition to electrode locations.</p></div>\n\nRendering sensor position with mayavi\nIt is also possible to render an image of a MEG sensor helmet in 3D, using\nmayavi instead of matplotlib, by calling :func:mne.viz.plot_alignment",
"fig = mne.viz.plot_alignment(raw.info, trans=None, dig=False, eeg=False,\n surfaces=[], meg=['helmet', 'sensors'],\n coord_frame='meg')\nmne.viz.set_3d_view(fig, azimuth=50, elevation=90, distance=0.5)",
":func:~mne.viz.plot_alignment requires an :class:~mne.Info object, and\ncan also render MRI surfaces of the scalp, skull, and brain (by passing\nkeywords like 'head', 'outer_skull', or 'brain' to the\nsurfaces parameter) making it useful for assessing coordinate frame\ntransformations <plot_source_alignment>. For examples of various uses of\n:func:~mne.viz.plot_alignment, see plot_montage,\n:doc:../../auto_examples/visualization/plot_eeg_on_scalp, and\n:doc:../../auto_examples/visualization/plot_meg_sensors.\nWorking with layout files\nAs with montages, many layout files are included during MNE-Python\ninstallation, and are stored in the :file:mne/channels/data/layouts folder:",
"layout_dir = os.path.join(os.path.dirname(mne.__file__),\n 'channels', 'data', 'layouts')\nprint('\\nBUILT-IN LAYOUT FILES')\nprint('=====================')\nprint(sorted(os.listdir(layout_dir)))",
"You may have noticed that the file formats and filename extensions of the\nbuilt-in layout and montage files vary considerably. This reflects different\nmanufacturers' conventions; to make loading easier the montage and layout\nloading functions in MNE-Python take the filename without its extension so\nyou don't have to keep track of which file format is used by which\nmanufacturer.\nTo load a layout file, use the :func:mne.channels.read_layout function, and\nprovide the filename without its file extension. You can then visualize the\nlayout using its :meth:~mne.channels.Layout.plot method, or (equivalently)\nby passing it to :func:mne.viz.plot_layout:",
"biosemi_layout = mne.channels.read_layout('biosemi')\nbiosemi_layout.plot() # same result as: mne.viz.plot_layout(biosemi_layout)",
"Similar to the picks argument for selecting channels from\n:class:~mne.io.Raw objects, the :meth:~mne.channels.Layout.plot method of\n:class:~mne.channels.Layout objects also has a picks argument. However,\nbecause layouts only contain information about sensor name and location (not\nsensor type), the :meth:~mne.channels.Layout.plot method only allows\npicking channels by index (not by name or by type). Here we find the indices\nwe want using :func:numpy.where; selection by name or type is possible via\n:func:mne.pick_channels or :func:mne.pick_types.",
"midline = np.where([name.endswith('z') for name in biosemi_layout.names])[0]\nbiosemi_layout.plot(picks=midline)",
"If you're working with a :class:~mne.io.Raw object that already has sensor\npositions incorporated, you can create a :class:~mne.channels.Layout object\nwith either the :func:mne.channels.make_eeg_layout function or\n(equivalently) the :func:mne.channels.find_layout function.",
"layout_from_raw = mne.channels.make_eeg_layout(raw.info)\n# same result as: mne.channels.find_layout(raw.info, ch_type='eeg')\nlayout_from_raw.plot()",
"<div class=\"alert alert-info\"><h4>Note</h4><p>There is no corresponding ``make_meg_layout`` function because sensor\n locations are fixed in a MEG system (unlike in EEG, where the sensor caps\n deform to fit each subject's head). Thus MEG layouts are consistent for a\n given system and you can simply load them with\n :func:`mne.channels.read_layout`, or use :func:`mne.channels.find_layout`\n with the ``ch_type`` parameter, as shown above for EEG.</p></div>\n\nAll :class:~mne.channels.Layout objects have a\n:meth:~mne.channels.Layout.save method that allows writing layouts to disk,\nin either :file:.lout or :file:.lay format (which format gets written is\ninferred from the file extension you pass to the method's fname\nparameter). The choice between :file:.lout and :file:.lay format only\nmatters if you need to load the layout file in some other software\n(MNE-Python can read either format equally well).\n.. LINKS"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
masterfish2015/my_project | python/.ipynb_checkpoints/python3.5_learning_chapter1_introduction-checkpoint.ipynb | mit |
[
"The Python Tutorial\nhttps://docs.python.org/3.5/tutorial/index.html\n1.An Informal Introduction to Python\n1.1 Using Python as a Calculator\n1.1.1. Numbers\n[1]直接数值运算",
"print('2+2=',2+2)\nprint('50-5*6=',50-5*6)\nprint('(50-5*6)/4=',(50-5*6)/4)\nprint('17/3=',17/3) #代数除\nprint('17//3=',17//3) #整除\nprint('17%3=',17%3) #取余数\nprint('5*3+2=', 5*3+2)\nprint('5**2=',5**2)\nprint('4**(0.5)=',4**(0.5))",
"[2]变量",
"width=20\nheight=5*9\nprint('width=',width)\nprint('height=',height)\nprint('width*height=',width*height)\n# print('n is not initialize, n=', n)\nn=100\nprint('n=',n)",
"1.1.2 String\n【1】string literals",
"print('spam eggs') #single quates\nprint('doesn\\'t') #escape char \\'=='\nprint(\"doesn't\") #mix using \" and '\nprint('\"yes,\" he said.')\n#escape char \\n==change line\nprint('first line.\\nsecond line') \nprint(\"c:\\some\\name\") #note \\n \nprint(r\"c:\\some\\name\")#r表示字符串没有escape\n#'''或\"\"\"都可以显示多行的原始string\nprint('''\\\nUsage: thingy [OPTION]\n -h Display this usage message\n -H hostname Hostname to connect to\n''')",
"[2] string operator",
"print('hello ''world!')\nprint(\"hello \"+\"world!\") #+将两个字符串连接\nprint(3*'ha') #*生成重复的字符串"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maxkleiner/maXbox4 | BASTA_2020_matplotlib_presentation2.ipynb | gpl-3.0 |
[
"<a href=\"https://colab.research.google.com/github/maxkleiner/maXbox4/blob/master/BASTA_2020_matplotlib_presentation2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nSession BASTA 2020 & EKON 24: Matplotlib\nFor matplotlib, we will directly learn through examples. This is much easier as you can directly look at the results on the plots\n\nmatplotlib ist das Python Modul zur grafischen Darstellung von Daten.\nEs ist ähnlich zu den Grafischen Funktionen von Matlab\nmatplotlib.pyplot enthält die Funktionen um Graphen zu erstellen.\nEs wird oft als plt importiert:",
"import matplotlib.pyplot as plt",
"Diese nächste Linie ist nur um plots in jupyter notebooks zu darstellen",
"%matplotlib inline",
"Zuerst generieren wir daten für die plots",
"import numpy as np\nx = np.arange(-5,5.01,0.5)\ny1 = 1*x + 1.5 +np.random.normal(0, 1, len(x))\ny2 = 2*x +np.random.normal(0, 1, len(x))",
"Daten plotten\n\nDie plt.figure Funktion generiert eine neue Figur\nMit der plt.plot Funktion kann man Daten plotten.\nplt.show zeigt den plot auf dem Bildschirm\nplt.savefig speichert die Figur\nplt.close schliesst die Figur",
"plt.figure()\nplt.plot(x, y1)\nplt.show()\n#plt.savefig(\"path/to/plot.png\")\nplt.close()",
"Linie und Symbol formattierung\n\nplt.plot nimmt als Optionales Positional Argument eine format string, um Farbe, Linie/Symbol zu kontrollieren",
"plt.figure()\nplt.plot(x, y1, \"b\")\nplt.plot(x, y2, \"g--\")\nplt.show()\nplt.close()",
"Anstatt der Format string kann man auch alles einzel kontrollieren:\n* color: Farbe\n* linestyle: Linien Typ\n* marker: Symbol",
"plt.figure()\nplt.plot(x, y1, marker=\"x\", color=\"g\")\nplt.plot(x, y2, linestyle=\"--\", color=\"m\")\nplt.show()\nplt.close()",
"Man kann noch viel mehr mit den keyword argumenten kontrollieren. Zum Beispiel:\n* markersize: grösse der Symbole\n* linewidth: breite der Striche",
"plt.figure()\nplt.plot(x, y1, marker=\"x\", color=\"g\", markersize=10)\nplt.plot(x, y2, linestyle=\"--\", color=\"m\", linewidth=4)\nplt.show()\nplt.close()",
"Daten Labels\n\nlabelkeyword in plt.plot, um einer Linie einen Namen zu geben\nLabels werden mit plt.legend() angezeigt",
"plt.figure()\nplt.plot(x, y1, \"ro\", label=\"y1\")\nplt.plot(x, y2, \"g--\", label=\"y2\")\nplt.legend(loc=\"best\")\nplt.show()\nplt.close()",
"Title und Achsen Labels\nplt hat Funktionen um alle Aspekten von der Figur zu Kontrollieren:\n* plt.title um einen Title zu setzen\n* plt.xlabel und plt.ylabel für die Achsen Titeln zu setzen\n* plt.xlim und plt.ylim um die Achsen Limiten zu setzen",
"plt.figure()\nplt.plot(x, y1)\nplt.xlabel(\"x-label\")\nplt.ylabel(\"y-label\")\nplt.title(\"My plot title\")\nplt.show()\nplt.close()",
"Grösse von den Text Elementen werden mit fontsize kontrolliert:",
"plt.figure()\nplt.plot(x, y1)\nplt.xlabel(\"x-label\", fontsize =24)\nplt.ylabel(\"y-label\", fontsize =18)\nplt.title(\"My BASTA plot title\", fontsize =40)\nplt.show()\nplt.close()",
"In allen Text Felder kann man auch die Latex syntax brauchen um Mathematische symbolen zu representieren. Einfach das Latex zwischen $ zeichen:",
"plt.figure()\nplt.plot(x, y1)\nplt.xlabel(\"distance in [$\\AA$]\")\nplt.ylabel(\"$a*x+b$\")\nplt.title(\"My function $\\sqrt{f_a(x)}$\")\nplt.show()\nplt.close()",
"Figur grösse\nDie grösse der Figur wird in plt.figure() mit figsize angegeben:",
"plt.figure(figsize = (8, 4))\nplt.plot(x, y1, color = 'r', label=\"line\")\nplt.plot(x, y2, 'x', markersize = 10,label=\"markers\")\nplt.legend(loc=\"best\")\nplt.show()\nplt.close()",
"Achsen kontrollieren\n\nplt.xlim und plt.ylim um der range von den Achsen zu kontrollieren\nplt.xscale(\"log\") für ein logaritmischer Achsen",
"plt.figure()\nplt.plot(x, y1)\nplt.xlim((1, 5))\nplt.ylim((-10, 10))\nplt.xscale(\"log\")\nplt.plot(x, y2)\nplt.xscale(\"linear\")\nplt.show()\nplt.close()",
"Errorbar plotten\nErrorbars werden mit plt.errorbar erstellt",
"y1_err = np.random.normal(0, 1, len(x))\nx_err = np.random.normal(0, 1, len(x))\nplt.figure()\nplt.errorbar(x, y1, yerr=y1_err, marker = \"o\", capsize = 5)\nplt.errorbar(x, y2, xerr=x_err, marker = \"x\", capsize = 6)\nplt.show()\nplt.close()",
"Histograms\nplt.hist(data) rechnet das hitogram von den daten in data, und generiert direkt einen plot",
"norm_dist = np.random.normal(0, 1, 500)\nplt.hist(norm_dist)\nplt.show()\nplt.close()",
"nbins kontrolliert wieviele bins man brauchen will.\nnormed kontrolliert ob man die Number von Daten in jedem bin will oder die Probabilität Verteilung",
"plt.hist(norm_dist, density=True, bins=20)\nplt.show()\nplt.close()",
"Mann kann auch das histogram für mehrere datensätze gleichzeitig rechnen",
"norm_dist1 = np.random.normal(0, 1, 1000)\nnorm_dist2 = np.random.normal(1, 0.5, 1000)\nplt.hist([norm_dist1,norm_dist2], bins=20, density=True)\nplt.show()\nplt.close()",
"Die plt.hist funktion gibt die bin Anfang und Ende, die Werte, und dir Patches zurück. Mann kann das brauchen um einen Normaler Linien Plot zu generieren, der einfacher Lesbar ist",
"hist_y, hist_x, p = plt.hist(norm_dist1, bins=20, density=True)\nhist_x = 0.5 * (hist_x[1:] + hist_x[:-1])\nplt.close()\nplt.figure()\nplt.plot(hist_x, hist_y, \"rx-\")\nplt.show()\nplt.close()",
"Linien und text\n\nplt.vlines um vertikale Linien hinzufügen\nplt.hlines um horizontale Linien hinzufügen\n\nDie nehmen als erstes argument die Position (oder eine Liste von Positionen) wo die Liene sein soll, und dann wo si anfängt und aufhört",
"plt.hist([norm_dist1,norm_dist2], bins=20, density=True)\nplt.hlines(0.5, plt.xlim()[0], plt.xlim()[1], linestyle=\":\", color=\"r\")\nplt.vlines([0,1.0],*plt.ylim(),linestyle = \"--\")\nplt.show()\nplt.close()",
"plt.text(xpos, ypos, text) um text hinzufügen an der Position (xpos, ypos)\nplt.annotate(text, xy=(xpos, ypos), xytext=(text_x, text_y)) um text hinzufügen an der Position (text_x, text_y) mit einem Pfeil der auf (xpos, ypos) zeigt.",
"plt.figure()\nplt.plot(x, y1,\"x\")\nplt.text(0,0, \"text at position 0,0\", fontsize=16)\nplt.annotate(\"point to 1,1\",xy=(1, 1), xytext=(3, 2),arrowprops=dict(facecolor='black'))\nplt.show()\nplt.close()",
"Subplots\n\nEine Figure mit mehreren Plots wird mit plt.subplots generiert. Gibt die Figur und eine liste von axes zurück\nDan einfach die elementen von axes brauchen um zu plotten\nmit plt.subplots_adjust kann man die position von den subplots kontrollieren\n\nFür axes Objekten werden mehrere Methoden von plt umgenennt: set_ wird vor dem Namen eingefügt. Zum Beispiel:\n* plt.title -> ax.set_title\n* plt.xlabel -> ax.set_xlabel\n* plt.xlim -> ax.set_xlim",
"f, axes = plt.subplots(2,2,sharey = \"row\", figsize = (10,8))\nf.suptitle(\"overall title\")\naxes[0,0].plot(x,y1)\naxes[0,1].errorbar(x, y2, y1_err, capsize = 5)\naxes[1,0].hist(norm_dist1, density=True, bins=20)\naxes[1,1].hist(norm_dist1, density=True, bins=20, cumulative=True)\nfor i,axl in enumerate(axes):\n for j,ax in enumerate(axl):\n ax.set_title(\"subtitle ({},{})\".format(i,j))\n ax.set_xlabel(\"x-label ({},{})\".format(i,j))\n axl[0].set_ylabel(\"y-label\")\nplt.subplots_adjust(right = 0.98, wspace = 0.02, hspace = 0.25)\nplt.show()",
"Alle defaults kontrollieren\nAlle defaults befinden sich in matplotlib.rcParams. Die kann mann verändern und es kontrolliert alle plots. rcParams ist ein dictionnary.",
"import matplotlib as mpl\nmpl.rcParams['font.size'] = 18\nmpl.rcParams['legend.fontsize'] = 16\nmpl.rcParams['xtick.labelsize'] = 20\nmpl.rcParams['xtick.top'] = True\nmpl.rcParams['xtick.major.size'] = 12\nplt.hist([norm_dist1,norm_dist2], bins=20, density=True)\nplt.show()\nplt.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/awi/cmip6/models/sandbox-3/ocnbgchem.ipynb | gpl-3.0 |
[
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: AWI\nSource ID: SANDBOX-3\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:38\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'awi', 'sandbox-3', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
inhwane/kookmin
|
w04_exercies_indexing&slicing_function.ipynb
|
mit
|
[
"국민대, 파이썬, 데이터\nW04 Exercise, Indexing&Slicing, Function\n\nTable of Contents\n\nPractice Makes Perfect\nChinese Zodiac\nHashtags\nMorse\nCard Deck\n문장 안의 단어 개수\n단어의 순서를 바꿔 출력\n단어의 순서와 그 단어의 글자 순서 바꿔 출력\n\n\nIndexing and Slicing\nFunction\nObject-Oriented Programming",
"from IPython.display import Image",
"References\n\n파이썬 완벽 가이드(원제: Python Essential Reference), 한빛미디어, David Beazley\nLearning Python 5th Edition, Oreilly, Mark Lutz\n\n\n1. Practice Makes Perfect\n이번 시간에는 함께 아래 문제를 풀어보도록 하겠습니다. 이 URL(https://floobits.com/initialkommit/kookmin)로 접근하셔서 같이 보도록 하겠습니다.\n1.1 Chinese Zodiac\n아래에 표에 나와있는 값을 이용해서 사용자가 입력한 년도에 해당되는 띠를 알려주세요. 사용자가 어떤 값을 입력해도 말이지요.\nYear(ex)|Animal|Remain\n--|--\n2000|용|8\n2001|뱀|9\n2002|말|10\n2003|양|11\n2004|원숭이|0\n2005|닭|1\n2006|개|2\n2007|돼지|3\n2008|쥐|4\n2009|소|5\n2010|호랑이|6\n2011|토끼|7\n1.2 Hashtags",
"# 게시글 제목\ntitle = \"On top of the world! Life is so fantastic if you just let it. \\\nI have never been happier. #nyc #newyork #vacation #traveling\"\n\n# Write your code below.",
"1.3 Morse",
"# 모스부호\nmorse = {\n '.-':'A','-...':'B','-.-.':'C','-..':'D','.':'E','..-.':'F',\n '--.':'G','....':'H','..':'I','.---':'J','-.-':'K','.-..':'L',\n '--':'M','-.':'N','---':'O','.--.':'P','--.-':'Q','.-.':'R',\n '...':'S','-':'T','..-':'U','...-':'V','.--':'W','-..-':'X',\n '-.--':'Y','--..':'Z'\n}\n \n# 풀어야할 암호\ncode = '.... . ... .-.. . . .--. ... . .- .-. .-.. -.--'\n\n# Write your code below.",
"1.4 Card Deck",
"front = ['s', 'c', 'd', 'h', ] # Spade, Club, Diamond, Heart\nback = [\n\t'2',\n\t'3',\n\t'4',\n\t'5',\n\t'6',\n\t'7',\n\t'8',\n\t'9',\n\t'T', # Ten\n\t'J', # Jack\n\t'Q', # Queen\n\t'K', # King\n\t'A', # Ace\n]\n\n# Write your code below.",
"1.5 문장 안의 단어 개수",
"# 주어진 문장을 모두 소문자로 만들고 ',', '.'을 제거하라.\n# 그리고 각 단어가 몇 개 사용했는지 Counting하라.\n\ns = 'We propose to start by making it possible to teach programming in Python, \\\nan existing scripting language, and to focus on creating \\\na new development environment and teaching materials for it.'\n\n# Write your code below.",
"1.6 단어의 순서를 바꿔 출력",
"# 단어의 순서를 바꿔서 출력하라.\n\ns = \"Sometimes I feel like a data scientist\"\n\n# Write your code below.",
"1.7 단어의 순서와 그 단어의 글자 순서 바꿔 출력",
"# 단어의 순서를 바꾸고\n# 단어의 글자 순서도 바꿔 출력하라.\n# tsitneics atada a ekil leef I semitemoS\"\n\ns = \"Sometimes I feel like a data scientist\"\n\n# Write your code below.",
"2. Indexing and Slicing\n항목|설명\n--|--\ns[i]|순서열의 원소 i를 반환\ns[i:j]|조각을 반환\ns[i:k:stride]|확장 분할에 의한 조각을 반환\n\nList\nTuple\nStr\n\n등 모든 순서열 객체에 Index와 Slice를 적용할 수 있습니다. 그러나 순서가 없는 객체인 사전(Dictionary)에는 순서가 없기 때문에 Slice는 적 적용되지 않고 오직 Index만 적용이 됩니다.\n2.1 Index\n순서열(List, Tuple, Str)",
"s = 'bicycle'\n\ns[0]",
"사전(Dictionary)",
"morse\n\nmorse['....']\n\nmorse.get('....')",
"2.2 Slice",
"Image(filename='images/slicing.png')\n\ns = 'bicycle'\n\ns[1:7]\n\ns[1:7:2] # Skipping\n\ns[1::2]\n\ns[:7:2]\n\nl = list(range(10))\nl\n\nl[2:5] = [20, 30]\nl\n\nl[2:5] = 100\nl\n\nl[2:5] = [100]\nl\n\ndel l[5:7]\nl",
"위 연습 문제 중 Hashtags로 이동해서 '#'만 빼봅시다.\n2.3 Stride",
"s[::3]\n\ns[::-1]\n\ns[::-2]",
"3. Function\n\n함수, 객체 지향 프로그래밍에서는 메소드(Method)라고 부르기도 합니다.\n쉽게 생각해보면 함수는 '특정 행동'을 말합니다. 즉 함수를 호출하면 함수안의 내용이 실행이 되는 것입니다.\n함수는 호출해야 실행되는 객체이므로 Callable Object 라고도 합니다.\n\n3.1 Input and Output",
"Image(filename='images/function2.png')",
"Input: Argument, Parameter, 매개변수, 인자\nOutput: Return, 반환값\n\n위의 그림을 봐서 알 수 있듯이 함수에는 Input/Output이라는 개념이 있습니다. 만약 Input이 있다면 Input을 인자(Argument, Parameter)라고 부릅니다. 또한, 만약 Output이 있다면 Output을 Return Value 혹은 반환값이라고 말합니다. 그러나 위에서 '만약 ...가 있다면'이라는 표현을 썼듯이 항상 있는 것은 아닙니다.\n\n어떻게 생겼을까!?\n1) 인자값이 없을 경우: function ( )\n2) 인자값이 있을 경우: function ( 1, \"1\", name=\"python )\n\n3.2 Why Use Function?\nMaximizing Code Reuse and Minimizing Redundancy",
"Image(filename='images/function.png')",
"위 그림 안의 있는 코드가 뭘 뜻하는지는 몰라도 뭔가 계속해서 중복된다는 것은 보일 것입니다. 이렇듯 함수를 통해 코드가 반복될 때 중복을 피하고 코드 재사용을 최대화할 수 있습니다.\n3.3 How to Use\n함수를 선언할 때는 def라는 예약어를 사용합니다.\n1) 함수를 선언하는 방법\n- Argument가 없을 경우 선언",
"def addition0():\n print(\"더하기 함수입니다.\")\n # Argument의 개수와 상관없이 Return Value가 있을 수도 있고 없을 수도 있습니다.\n\naddition0()",
"- Return Value가 없을 경우 선언",
"def addition1(x, y):\n print(x + y)\n\naddition1(1, 2)",
"- Return Value가 있을 경우 선언",
"def addition2(x, y):\n return x + y\n\nprint(\"%d과 %d의 합은 %s입니다.\" % (10, 30, addition2(10, 30)))",
"2) 함수를 실행하는 방법",
"addition0()\n\naddition1(10, 10)\n\na = addition1(100, 200)\nprint(a)\n\nprint(addition2(10, 20))\n\na = addition2(10, 20)\nprint(a)",
"3.4 Arguments\n\nArgument\n인자\nParameter\n함수 안으로 전달되는 값\n\n\n*args\n인자 이름 앞에 별표(*)를 삽입하면 함수는 여러 개의 인자를 받을 수 있습니다.\n인자 이름은 임의로 만들 수 있으나 관행 상 args라고 많이 사용합니다.",
"def printa(x, y, *args):\n print(type(args)) # *args는 Tuple 형식\n for item in args: # *args는 x, y를 뺀 나머지\n print(item)\n\nprinta(\"a\", \"b\", \"c\", \"d\", \"c\", \"1\")",
"3.5 Keyword Arguments\n\n키워드 인자(keyword argument)\n인자값을 전달할 때 각 인자의 이름과 값을 직접 지정할 수 있는데 이를 Keyword Argument라고 합니다.\n위의 *args와 마찬가지로 인자 이름 앞에 별표(*) 2개를 붙이면 여러 개의 인자를 받을 수 있습니다.\n**kwagrs 인자는 사전 객체이며 관행 상 kwagrs라는 이름을 사용합니다.\n기본값 설정\nkeyword 이름과 함께 값을 처음부터 할당할 수 있는데 이 때 그 값은 기본값이 됩니다.",
"def foo(x=10, greeting=\"hello\", **kwargs):\n print(kwargs)\n print(type(kwargs))\n print(kwargs.get('you'))\n print(kwargs['you'])\n print(greeting, x, kwargs.get('you'))\n\nfoo(you=\"mbc\")",
"3.6 Doc String\n복잡한 함수를 만들었다고 가정해보겠습니다. 이 함수의 이름만으론 이 함수의 목적을 알기가 어려울 것입니다. 이럴 때 함수의 선언부 바로 아래에 Multiple Comments(\"\"\" ... \"\"\")로 함수에 대한 설명을 적어 놓으면 후에 문서화하거나 다른 사람들이 소스를 볼 때 매우 유익할 것입니다.",
"def foo():\n '''This is a foo.'''\n return \"foo\"\n\nfoo()",
"Doc String을 보는 방법",
"foo.__doc__\n\nhelp(foo)",
"3.7 Annotations\n위에 Doc String과 마찬가지로 함수에 주석을 달 수 있습니다. 다른 개발자가 복잡하고 알기 어려운 함수를 볼 때 매우 유용하게 사용할 수 있겠죠!?",
"def add(x:int, y:int) -> int:\n \"\"\"더하기 함수입니다.\"\"\"\n return x + y",
"Annotations를 보는 방법",
"help(add)\n\nadd.__annotations__",
"특히 위의 Doc String과 Annotation은 오픈 소스가 많아지면서 유용하게 쓰일 수 있습니다. 이런 것을 Informational Metadata 혹은 Metadata라고 합니다. 이런 부분을 잘 기입하면 나중에 문서화할 때도 많은 도움이 됩니다. Python을 문서화해주는 오픈 소스는 대표적으로 Sphinx가 있습니다.\n\n4. Practice Makes Perfect\n4.1 함수를 이용해 평균을 구해보세요.",
"# 사용자가 몇 개의 인수든 상관없이 모든 인수의 평균을 구하는 함수를 만들어봅시다.\n\n# Write your code below.\ndef avg(*args):\n return sum(args) / len(args)\n\nprint(avg(1, 2))",
"4.2 미국에 있는 주(State)의 수도는 무엇인지 알아낼 수 있는 함수를 만들어보세요.",
"# 미국의 주의 수도를 찾는 함수를 만드세요.\n\nSTATES_CAPITALS = {\n 'Alabama': 'Montgomery',\n 'Alaska': 'Juneau',\n 'Arizona': 'Phoenix',\n 'Wyoming': 'Cheyenne',\n}\n\n# Write your code below.\ndef find_capital(name=''):\n return STATES_CAPITALS.get(name, 'a')\n\nprint(find_capital(name='Al'))",
"4.3 Algorithm - Bubble Sort\n이번엔 지금까지 배운 것을 총동원해서 Bubble Sort 알고리즘을 풀어보도록 하겠습니다.",
"from IPython.display import YouTubeVideo",
"Bubble Sort",
"YouTubeVideo('Cq7SMsQBEUw')",
"Sorting Algorithms",
"YouTubeVideo('ZZuD6iUe3Pc')",
"사실 지금까지 많은 것을 배웠습니다. 그리고 지금까지 배운 것만으로도 많은 문제를 해결할 수 있습니다. 문제를 해결하기 위해 지식보다는 문제를 어떻게 해결해 나가야할지 사고력이 중요해지고 있습니다. 프로그래밍에서는 이를 알고리즘이라는 것으로 부릅니다. 이번에는 알고리즘 중에 정렬(Sorting)이라는 것을 보며 프로그래밍 실력을 키워보도록 하겠습니다.\nSorting 알고리즘은 일반적으로 나와있는 방법이 여러가지가 있습니다. 어디까지나 사고하는 방법이기 때문에 방법은 더 많을 수 있으나 대표적인 방법들이 9가지 정도가 있습니다. 그 중에서도 가장 쉽게(?) 생각해볼 수 있는 Bubble Sort를 보도록 하겠습니다.",
"target_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]\n\nlen(target_list)\n\nfor item in range(len(target_list)):\n print(item)",
"위와 같이 무작위로 정렬되어 있는 숫자 9개가 있습니다. 이를 Bubble Sort 알고리즘을 이용해서 오름차순으로 정렬해보도록 하겠습니다. 잘 기억해두셔야 할 것은 이 것은 사고하는 방법이므로 정답은 없다는 것입니다. 그리고 머릿속에서 어떻게 풀어나가면 좋을지 먼저 생각해보시기 바랍니다.\n\nhint\npython\n[54, 26, 93, 17, 77, 31, 44, 55, 20] # 1\n[26, 54, 93, 17, 77, 31, 44, 55, 20] # 2\n[26, 54, 93, 17, 77, 31, 44, 55, 20] # 3\n[26, 54, 17, 93, 77, 31, 44, 55, 20] # 4\n[26, 54, 17, 77, 93, 31, 44, 55, 20] # 5\n[26, 54, 17, 77, 31, 93, 44, 55, 20] # 6\n[26, 54, 17, 77, 31, 44, 93, 55, 20] # 7\n[26, 54, 17, 77, 31, 44, 55, 93, 20] # 8\n[26, 54, 17, 77, 31, 44, 55, 20, 93] # 9"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ledrui/week4_Ridge_Regression
|
Overfitting_Demo_Ridge_Lasso.ipynb
|
mit
|
[
"Overfitting demo\nCreate a dataset based on a true sinusoidal relationship\nLet's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \\sin(4x)$:",
"import graphlab\nimport math\nimport random\nimport numpy\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"Create random values for x in interval [0,1)",
"random.seed(98103)\nn = 30\nx = graphlab.SArray([random.random() for i in range(n)]).sort()",
"Compute y",
"y = x.apply(lambda x: math.sin(4*x))",
"Add random Gaussian noise to y",
"random.seed(1)\ne = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])\ny = y + e",
"Put data into an SFrame to manipulate later",
"data = graphlab.SFrame({'X1':x,'Y':y})\ndata",
"Create a function to plot the data, since we'll do it many times",
"def plot_data(data): \n plt.plot(data['X1'],data['Y'],'k.')\n plt.xlabel('x')\n plt.ylabel('y')\n\nplot_data(data)",
"Define some useful polynomial regression functions\nDefine a function to create our features for a polynomial regression model of any degree:",
"def polynomial_features(data, deg):\n data_copy=data.copy()\n for i in range(1,deg):\n data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']\n return data_copy",
"Define a function to fit a polynomial linear regression model of degree \"deg\" to the data in \"data\":",
"def polynomial_regression(data, deg):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,l1_penalty=0.,\n validation_set=None,verbose=False)\n return model",
"Define function to plot data and predictions made, since we are going to use it many times.",
"def plot_poly_predictions(data, model):\n plot_data(data)\n\n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n \n # Create 200 points in the x axis and compute the predicted value for each point\n x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})\n y_pred = model.predict(polynomial_features(x_pred,deg))\n \n # plot predictions\n plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')\n plt.legend(loc='upper left')\n plt.axis([0,1,-1.5,2])",
"Create a function that prints the polynomial coefficients in a pretty way :)",
"def print_coefficients(model): \n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n\n # Get learned parameters as a list\n w = list(model.coefficients['value'])\n\n # Numpy has a nifty function to print out polynomials in a pretty way\n # (We'll use it, but it needs the parameters in the reverse order)\n print 'Learned polynomial for degree ' + str(deg) + ':'\n w.reverse()\n print numpy.poly1d(w)",
"Fit a degree-2 polynomial\nFit our degree-2 polynomial to the data generated above:",
"model = polynomial_regression(data, deg=2)",
"Inspect learned parameters",
"print_coefficients(model)",
"Form and plot our predictions along a grid of x values:",
"plot_poly_predictions(data,model)",
"Fit a degree-4 polynomial",
"model = polynomial_regression(data, deg=4)\nprint_coefficients(model)\nplot_poly_predictions(data,model)",
"Fit a degree-16 polynomial",
"model = polynomial_regression(data, deg=16)\nprint_coefficients(model)",
"Woah!!!! Those coefficients are crazy! On the order of 10^6.",
"plot_poly_predictions(data,model)",
"Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.\n\n\n# \n# \nRidge Regression\nRidge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\\|w\\|$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controled by a parameter lambda (here called \"L2_penalty\").\nDefine our function to solve the ridge objective for a polynomial regression model of any degree:",
"def polynomial_ridge_regression(data, deg, l2_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n return model",
"Perform a ridge fit of a degree-16 polynomial using a very small penalty strength",
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Perform a ridge fit of a degree-16 polynomial using a very large penalty strength",
"model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Let's look at fits for a sequence of increasing lambda values",
"for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:\n model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)\n print 'lambda = %.2e' % l2_penalty\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('Ridge, lambda = %.2e' % l2_penalty)",
"Perform a ridge fit of a degree-16 polynomial using a \"good\" penalty strength\nWe will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider \"leave one out\" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.",
"# LOO cross validation -- return the average MSE\ndef loo(data, deg, l2_penalty_values):\n # Create polynomial features\n polynomial_features(data, deg)\n \n # Create as many folds for cross validatation as number of data points\n num_folds = len(data)\n folds = graphlab.cross_validation.KFold(data,num_folds)\n \n # for each value of l2_penalty, fit a model for each fold and compute average MSE\n l2_penalty_mse = []\n min_mse = None\n best_l2_penalty = None\n for l2_penalty in l2_penalty_values:\n next_mse = 0.0\n for train_set, validation_set in folds:\n # train model\n model = graphlab.linear_regression.create(train_set,target='Y', \n l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n \n # predict on validation set \n y_test_predicted = model.predict(validation_set)\n # compute squared error\n next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()\n \n # save squared error in list of MSE for each l2_penalty\n next_mse = next_mse/num_folds\n l2_penalty_mse.append(next_mse)\n if min_mse is None or next_mse < min_mse:\n min_mse = next_mse\n best_l2_penalty = l2_penalty\n \n return l2_penalty_mse,best_l2_penalty",
"Run LOO cross validation for \"num\" values of lambda, on a log scale",
"l2_penalty_values = numpy.logspace(-4, 10, num=10)\nl2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)",
"Plot results of estimating LOO for each value of lambda",
"plt.plot(l2_penalty_values,l2_penalty_mse,'k-')\nplt.xlabel('$\\L2_penalty$')\nplt.ylabel('LOO cross validation error')\nplt.xscale('log')\nplt.yscale('log')",
"Find the value of lambda, $\\lambda_{\\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit",
"best_l2_penalty\n\nmodel = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)",
"Lasso Regression\nLasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called \"L1_penalty\"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\\|w\\|$.\nDefine our function to solve the lasso objective for a polynomial regression model of any degree:",
"def polynomial_lasso_regression(data, deg, l1_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,\n l1_penalty=l1_penalty,\n validation_set=None, \n solver='fista', verbose=False,\n max_iterations=3000, convergence_threshold=1e-10)\n return model",
"Explore the lasso solution as a function of a few different penalty strengths\nWe refer to lambda in the lasso case below as \"l1_penalty\"",
"for l1_penalty in [0.0001, 0.01, 0.1, 10]:\n model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)\n print 'l1_penalty = %e' % l1_penalty\n print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))",
"Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tlapicka/IPythonNotebooks
|
Harmonicka_analyza--trojuhelnik.ipynb
|
gpl-2.0
|
[
"Součet harmonických složek\nMějme následující časový průběh:\n$$ u(t)=\\frac{8}{\\pi^2} \\cos(\\omega t - {\\pi\\over 2}) + \\frac{8}{(3\\pi)^2} \\cos(3\\omega t +{\\pi\\over 2}) + \\frac{8}{(5\\pi)^2} \\cos(5\\omega t -{\\pi\\over 2})+ \\frac{8}{(7\\pi)^2} \\cos(7\\omega t+{\\pi\\over 2}) + \\dots$$\nČasové průběhy\nvytvoříme jednotlivé harnonocké složky",
"from pylab import *  # assumed environment: provides linspace, pi, cos, figure, plot, ...\n\nt=linspace(0,1,1000)\nu1=8./pi/pi*cos(2*pi*1*t-pi/2)\nu3=8./3/3/pi/pi*cos(2*pi*3*t+pi/2)\nu5=8./5/5/pi/pi*cos(2*pi*5*t-pi/2)\nu7=8./7/7/pi/pi*cos(2*pi*7*t+pi/2)",
"vykreslíme jednotlivé harmonické složky",
"figure(figsize=(10,7))\nplot(t,u1)\nplot(t,u3)\nplot(t,u5,t,u7)\nminorticks_on()\nmatplotlib.ticker.AutoMinorLocator(9)\nxlabel(r'$\\rightarrow$ \\\\t [s]',fontsize=16, x=0.9 )\nylabel(r'u [V] $\\uparrow$',fontsize=16, y=0.9, rotation=0)\ngrid(True, 'major', linewidth=1)\ngrid(True, 'minor', linewidth=0.5)",
"Vykreslíme celkový časový průběh",
"u=u1+u3+u5+u7\n\nfigure(figsize=(10,7))\nplot(t,u,linewidth=2)\nminorticks_on()\nxlabel(r'$\\rightarrow$ \\\\t [s]',fontsize=16, x=0.9 )\nylabel(r'u [V] $\\uparrow$',fontsize=16, y=0.9, rotation=0)\ngrid(True,'both')",
"Vydíme, že složením několika napětí tvaru funkce sinus dostáváme tvar trojúhelníku -- tedy lineární funkce.\nAmplitudové spektrum",
"U= 8./pi/pi, 8./pi/pi/3/3 , 8/pi/pi/5/5 , 8./pi/pi/7/7\nf=(1,3,5,7)\nfigure(figsize=(10,5))\nstem(f,U,markerfmt='o' )\nxlim((0,8))\nxticks(arange(0,8))\nminorticks_on()\nxlabel(r'$\\rightarrow$ \\\\f [Hz]', x=0.9 )\nylabel('U [V] $\\uparrow$',fontsize=16, y=0.9, rotation=0)\ntitle(u'Amplitudové frekvenční spektrum')\ngrid(True)",
"Fázové spektrum",
"figure(figsize(10,5))\nfi= 2 * ( -pi/2, pi/2 )\nstem(f,fi, basefmt='k.')\nxlim(0,8)\nyticks((-pi,-pi/2,0,pi/2, pi),(r'$-\\pi$',r'$-{\\pi \\over 2}$', '0', r'{$\\pi \\over 2$}', r'$\\pi$') )\nxlabel(r'$\\rightarrow$ \\\\f [Hz]', x=0.9 )\nylabel(r'$\\varphi$ [rad] $\\uparrow$',fontsize=16, y=0.9, rotation=0)\ntitle(u'Fázové frekvenční spektrum')\ngrid(1)\n\nU= 8./pi/pi, 8./pi/pi/3/3 , 8/pi/pi/5/5 , 8./pi/pi/7/7\nfi= 2 * ( -pi/2, pi/2 )\nf=(1,3,5,7)\n\n\nfigure(figsize=(10,8))\nsubplot(211)\n\nstem(f,U,markerfmt='o' )\nxlim((0,8))\nxticks(arange(0,8))\nminorticks_on()\nxlabel(r'$\\rightarrow$ \\\\f [Hz]', x=0.9 )\nylabel('U [V] $\\uparrow$',fontsize=16, y=0.9, rotation=0)\ntitle(u'Amplitudové frekvenční spektrum')\ngrid(True)\n\nsubplot(212)\n\nstem(f,fi, basefmt='k.')\nxlim(0,8)\nyticks((-pi,-pi/2,0,pi/2, pi),(r'$-\\pi$',r'$-{\\pi \\over 2}$', '0', r'{$\\pi \\over 2$}', r'$\\pi$') )\nxlabel(r'$\\rightarrow$ \\\\f [Hz]', x=0.9 )\nylabel(r'$\\varphi$ [rad] $\\uparrow$',fontsize=16, y=0.9, rotation=0)\ntitle(u'Fázové frekvenční spektrum')\ngrid(1)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nicjhan/MOM6-examples
|
ocean_only/double_gyre/Visualizing and animating sea-surface height.ipynb
|
gpl-3.0
|
[
"We will use matplotlib.pyplot for plotting and scipy's netcdf package for reading the model output. The %pylab inline causes figures to appear in the page and conveniently alias pyplot to plt (which is becoming a widely used alias).\nThis analysis assumes you changed DAYMAX to some multiple of 5 so that there are multiple time records in the model output.\nTo see this notebook with figures, see https://gist.github.com/adcroft/2a2b91d66625fd534372.",
"%pylab inline\nimport scipy.io.netcdf",
"We first create a netcdf object, or \"handle\", to the netcdf file. We'll also list all the objects in the netcdf object.",
"prog_file = scipy.io.netcdf_file('prog__0001_006.nc')\nprog_file.variables",
"Now we will create a variable object for the \"e\" variable in the file. Again, I'm labelling it as a handle to distinguish it from a numpy array or raw data.\nWe'll also look at an \"attribute\" and print the shape of the data.",
"e_handle = prog_file.variables['e']\nprint('Description =', e_handle.long_name)\nprint('Shape =',e_handle.shape)",
"\"e\" is 4-dimensional. netcdf files and objects are index [n,k,j,i] for the time-, vertical-, meridional-, zonal-axes.\nLet's take a quick look at the first record [n=0] of the top interface [k=0].",
"plt.pcolormesh( e_handle[0,0] )",
"The data looks OKish. No scale! And see that \"<matplotlib...>\" line? That's a handle returned by the matplotlib function. Hide it with a semicolon. Let's add a scale and change the colormap.",
"plt.pcolormesh( e_handle[0,0], cmap=cm.seismic ); plt.colorbar();",
"We have 4D data but can only visualize by projecting on a 2D medium (the page). Let's solve that by going interactive!",
"import ipywidgets",
"We'll need to know the range to fix the color scale...",
"[e_handle[:,0].min(), e_handle[:,0].max()]",
"We define a simple function that takes the record number as an argument and then plots the top interface (k=0) for that record. We then use the interact() function to do some magic!",
"def plot_ssh(record):\n plt.pcolormesh( e_handle[record,0], cmap=cm.spectral )\n plt.clim(-.5,.8) # Fixed scale here\n plt.colorbar()\n\nipywidgets.interact(plot_ssh, record=(0,e_handle.shape[0]-1,1));",
"Unable to scroll the slider steadily enough? We'll use a loop to redraw for us...",
"from IPython import display\n\nfor n in range( e_handle.shape[0]):\n display.display(plt.gcf())\n plt.clf()\n plot_ssh(n)\n display.clear_output(wait=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ChadFulton/statsmodels
|
examples/notebooks/tsa_arma_0.ipynb
|
bsd-3-clause
|
[
"Autoregressive Moving Average (ARMA): Sunspots data",
"%matplotlib inline\n\nfrom __future__ import print_function\nimport numpy as np\nfrom scipy import stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\n\nfrom statsmodels.graphics.api import qqplot",
"Sunpots Data",
"print(sm.datasets.sunspots.NOTE)\n\ndta = sm.datasets.sunspots.load_pandas().data\n\ndta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))\ndel dta[\"YEAR\"]\n\ndta.plot(figsize=(12,8));\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)\n\narma_mod20 = sm.tsa.ARMA(dta, (2,0)).fit(disp=False)\nprint(arma_mod20.params)\n\narma_mod30 = sm.tsa.ARMA(dta, (3,0)).fit(disp=False)\n\nprint(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)\n\nprint(arma_mod30.params)\n\nprint(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)",
"Does our model obey the theory?",
"sm.stats.durbin_watson(arma_mod30.resid.values)\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nax = arma_mod30.resid.plot(ax=ax);\n\nresid = arma_mod30.resid\n\nstats.normaltest(resid)\n\nfig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nfig = qqplot(resid, line='q', ax=ax, fit=True)\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)\n\nr,q,p = sm.tsa.acf(resid.values.squeeze(), qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))",
"This indicates a lack of fit.\n\n\nIn-sample dynamic prediction. How good does our model do?",
"predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True)\nprint(predict_sunspots)\n\nfig, ax = plt.subplots(figsize=(12, 8))\nax = dta.loc['1950':].plot(ax=ax)\nfig = arma_mod30.plot_predict('1990', '2012', dynamic=True, ax=ax, plot_insample=False)\n\ndef mean_forecast_err(y, yhat):\n return y.sub(yhat).mean()\n\nmean_forecast_err(dta.SUNACTIVITY, predict_sunspots)",
"Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order)\nSimulated ARMA(4,1): Model Identification is Difficult",
"from statsmodels.tsa.arima_process import arma_generate_sample, ArmaProcess\n\nnp.random.seed(1234)\n# include zero-th lag\narparams = np.array([1, .75, -.65, -.55, .9])\nmaparams = np.array([1, .65])",
"Let's make sure this model is estimable.",
"arma_t = ArmaProcess(arparams, maparams)\n\narma_t.isinvertible\n\narma_t.isstationary",
"What does this mean?",
"fig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nax.plot(arma_t.generate_sample(nsample=50));\n\narparams = np.array([1, .35, -.15, .55, .1])\nmaparams = np.array([1, .65])\narma_t = ArmaProcess(arparams, maparams)\narma_t.isstationary\n\narma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5)\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2)",
"For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags. \nThe partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags.",
"arma11 = sm.tsa.ARMA(arma_rvs, (1,1)).fit(disp=False)\nresid = arma11.resid\nr,q,p = sm.tsa.acf(resid, qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))\n\narma41 = sm.tsa.ARMA(arma_rvs, (4,1)).fit(disp=False)\nresid = arma41.resid\nr,q,p = sm.tsa.acf(resid, qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))",
"Exercise: How good of in-sample prediction can you do for another series, say, CPI",
"macrodta = sm.datasets.macrodata.load_pandas().data\nmacrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))\ncpi = macrodta[\"cpi\"]",
"Hint:",
"fig = plt.figure(figsize=(12,8))\nax = fig.add_subplot(111)\nax = cpi.plot(ax=ax);\nax.legend();",
"P-value of the unit-root test, resoundingly rejects the null of a unit-root.",
"print(sm.tsa.adfuller(cpi)[1])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lorgor/vulnmine
|
vulnmine/ipynb/170523-train-vendor-match.ipynb
|
gpl-3.0
|
[
"Train vendor matching algorithm\nThe ML classification algorithm has to be retrained from time to time e.g. when scikit-learn undergoes a major release upgrade.\nThe specific algorithm used is a RandomForest classifier. In initial testing using a k-fold cross-validation approach, this algorithm outperformed several other simple classification algorithms\nThe training proceeds as follows:\n\nFirst the algorithm is tuned for typical data by using a grid search.\nNext the ML classifier is run on the training data using the optimum parameters.\nFinally the trained model is stored for future use.",
"# Initialize\n\nimport pandas as pd\nimport numpy as np\nimport pip #needed to use the pip functions\n\n# Show versions of all installed software to help debug incompatibilities.\n\nfor i in pip.get_installed_distributions(local_only=True):\n print(i)",
"Read in the vendor training data\nRead in the manually labelled vendor training data.\nFormat it and convert to two numpy arrays for input to the scikit-learn ML algorithm.",
"try:\n df_label_vendors = pd.io.parsers.read_csv(\n \"/home/jovyan/work/shared/data/csv/label_vendors.csv\",\n error_bad_lines=False,\n warn_bad_lines=True,\n quotechar='\"',\n encoding='utf-8')\nexcept IOError as e:\n print('\\n\\n***I/O error({0}): {1}\\n\\n'.format(\n e.errno, e.strerror))\n\n# except ValueError:\n# self.logger.critical('Could not convert data to an integer.')\nexcept:\n print(\n '\\n\\n***Unexpected error: {0}\\n\\n'.format(\n sys.exc_info()[0]))\n raise\n\n# Number of records / columns\n\ndf_label_vendors.shape\n\n# Print out some sample values\n\ndf_label_vendors.sample(5)\n\n# Check that all rows are labelled\n\n# (Should return \"False\")\n\ndf_label_vendors['match'].isnull().any()\n\n# Format training data as \"X\" == \"features, \"y\" == target.\n# The target value is the 1st column.\ndf_match_train1 = df_label_vendors[['match','fz_ptl_ratio', 'fz_ptl_tok_sort_ratio', 'fz_ratio', 'fz_tok_set_ratio', 'fz_uwratio','ven_len', 'pu0_len']]\n\n# Convert into 2 numpy arrays for the scikit-learn ML classification algorithms.\nnp_match_train1 = np.asarray(df_match_train1)\nX, y = np_match_train1[:, 1:], np_match_train1[:, 0]\n\nprint(X.shape, y.shape)",
"Use a grid search to tune the ML algorithm\nOnce the best algorithm has been determined, it should be tuned for optimal performance with the data.\nThis is done using a grid search. From the scikit-learn documentation:\n\nParameters that are not directly learnt within estimators can be set by searching a parameter space for the best Cross-validation: evaluating estimator performance score... Any parameter provided when constructing an estimator may be optimized in this manner.\n\nRather than do a compute-intensive search of the entire parameter space, a randomized search is done to find reasonably efficient parameters.\nThis code was modified from the scikit-learn sample code.",
"#\tNow find optimum parameters for model using Grid Search\n\nfrom time import time\nfrom scipy.stats import randint as sp_randint\n\nfrom sklearn.model_selection import RandomizedSearchCV\nfrom sklearn.ensemble import RandomForestClassifier\n\n# build a classifier\nclf = RandomForestClassifier()\n\n# Utility function to report best scores\ndef report(results, n_top=3):\n for i in range(1, n_top + 1):\n candidates = np.flatnonzero(results['rank_test_score'] == i)\n for candidate in candidates:\n print(\"Model with rank: {0}\".format(i))\n print(\"Mean validation score: {0:.3f} (std: {1:.3f})\".format(\n results['mean_test_score'][candidate],\n results['std_test_score'][candidate]))\n print(\"Parameters: {0}\".format(results['params'][candidate]))\n print(\"\")\n \n\n# specify parameters and distributions to sample from\nparam_dist = {\"n_estimators\": sp_randint(20, 100),\n \"max_depth\": [3, None],\n \"max_features\": sp_randint(1,7),\n \"min_samples_split\": sp_randint(2,7),\n \"min_samples_leaf\": sp_randint(1, 7),\n \"bootstrap\": [True, False],\n \"class_weight\": ['auto', None],\n \"criterion\": [\"gini\", \"entropy\"]}\n\n# run randomized search\nn_iter_search = 40\nrandom_search = RandomizedSearchCV(clf, param_distributions=param_dist,\n n_iter=n_iter_search)\n\nstart = time()\nrandom_search.fit(X, y)\nprint(\"RandomizedSearchCV took %.2f seconds for %d candidates\"\n \" parameter settings.\" % ((time() - start), n_iter_search))\nreport(random_search.cv_results_)",
"Run the ML classifier with optimum parameters on the test data\nBased on the above, and ignoring default values, the optimum set of parameters would be something like the following:\n'bootstrap':True, 'min_samples_leaf': 2, 'n_estimators': 40, 'min_samples_split': 4, 'criterion':'entropy', 'max_features': 3, 'max_depth: 3, 'class_weight': None\n\nThe RandomForest classifier is now trained on the test data to produce the model.",
"clf = RandomForestClassifier(\n bootstrap=True,\n min_samples_leaf=2,\n n_estimators=40,\n min_samples_split=4,\n criterion='entropy',\n max_features=3,\n max_depth=3,\n class_weight=None\n)\n\n# Train model on original training data\nclf.fit(X, y)\n\n# save model for future use\n\nfrom sklearn.externals import joblib\njoblib.dump(clf, '/home/jovyan/work/shared/data/models/vendor_classif_trained_Rdm_Forest.pkl.z') \n\n# Test loading\n\nclf = joblib.load('/home/jovyan/work/shared/data/models/vendor_classif_trained_Rdm_Forest.pkl.z' )"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kit-cel/wt
|
sigNT/tutorial/ls_polynomial.ipynb
|
gpl-2.0
|
[
"Content and Objective\n\n\nShow result of LS estimator for polynomials:\n\nGiven $(x_i, y_i), i=1,...,N$ \nAssume polynomial model (plus awgn) to be valid\nGet LS estimate for polynomial coefficients and show result\n\n\n\nMethod: Sample groups and get estimator",
"# importing\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# showing figures inline\n%matplotlib inline\n\n# plotting options \nfont = {'size' : 30}\nplt.rc('font', **font)\nplt.rc('text', usetex=True)\n\nmatplotlib.rc('figure', figsize=(30, 15) )",
"Parameters",
"# define number of samples\nN = 20\n\n# define degrees of polynomials\nK_actual = 8\nK_est = 2\n\n# randomly sample coefficients of polynomial and \"noise-it\"\ncoeffs = np.random.rand( K_actual ) * ( -1 )**np.random.randint( 2, size=K_actual )\ncoeffs /= np.linalg.norm( coeffs, 1 )\nf = np.polynomial.polynomial.Polynomial( coeffs )\n\nx_fine = np.linspace( 0, 1, 100)\n\n# define variance of noise\nsigma2 = .0\n\n# define random measuring points\nx_sample = np.sort( np.random.choice( x_fine, N, replace=False) )\nf_sample = f( x_sample ) + np.sqrt( sigma2 ) * np.random.randn( x_sample.size )",
"Do LS Estimation",
"X_LS = np.zeros( ( N, K_est ) )\n\nfor _n in range( N ):\n for _k in range( K_est ):\n X_LS[ _n, _k ] = ( x_sample[ _n ] )** _k\n \na_LS = np.matmul (np.linalg.pinv( X_LS ), f_sample )\n\nf_LS = np.polynomial.polynomial.Polynomial( a_LS )\n\n\nprint( 'Actual coefficients:\\n{}\\n'.format( coeffs ) )\n\nprint( 'LS estimation:\\n{}'.format( a_LS ) )",
"Plotting",
"# plot results\nplt.plot( x_fine, f( x_fine ), label='$f(x)$' )\nplt.plot( x_sample, f_sample, '-x', ms=12, label='$(x_i, y_i)$' )\n\n\nplt.plot( x_fine, f_LS( x_fine ), ms=12, label='$f_{LS}(x)$' )\n\nplt.grid( True ) \nplt.legend()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
scraperwiki/databaker
|
databaker/tutorial/Introduction.ipynb
|
agpl-3.0
|
[
"Introduction\nDatabaker is an Open Source Python library for converting semi-structured spreadsheets into computer-friendly datatables. The resulting data can be stored into Pandas data tables or the ONS-specific WDA format.\nThe system is embedded into the interactive programming environment called Jupyter for fast prototyping and development, and depends for its spreadsheet processing on messytables and xypath.\nInstall it with the command:\n\npip3 install databaker\n\nYour main interaction with databaker is through the Jupyter notebook interface. There are many tutorials to show you how to master this system elsewhere on-line. \nOnce you've have a working program to converts a particular spreadsheet style into the output which you want, there are ways to rerun the notebook on other spreadsheets externally or from the command line. \nExample\nAlthough Databaker can handle spreadsheets of any size, here is a tiny example from the tutorials to illustrate what it does.",
"from databaker.framework import *\n\ntab = loadxlstabs(\"example1.xls\", \"beatles\", verbose=False)[0]\nsavepreviewhtml(tab, verbose=False)\n",
"Conversion segments\nDatabaker gives you tools to help you write the code to navigate around the spreadsheet and select the cells and their correspondences. \nWhen you are done your code will look like the following. \nYou can click on the OBS (observation) cells to see how they connect to the headings.",
"r1 = tab.excel_ref('B3').expand(RIGHT)\nr2 = tab.excel_ref('A3').fill(DOWN)\ndimensions = [ \n HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE), \n HDim(r1, \"Vehicles\", DIRECTLY, ABOVE), \n HDim(r2, \"Name\", DIRECTLY, LEFT), \n HDimConst(\"Category\", \"Beatles\")\n]\nobservations = tab.excel_ref('B4').expand(DOWN).expand(RIGHT).is_not_blank().is_not_whitespace()\nc1 = ConversionSegment(observations, dimensions)\nsavepreviewhtml(c1)\n",
"Output in pandas\nPandas data tables provides an enormous scope for further processing and cleaning of the data. \nTo make full use of its power you should become familiar with its Time series functionality, which will allows you to plot, resample and align multple data sources at once.",
"c1.topandas()",
"Output in WDA Observation File\nThe WDA system in the ONS has been the primary use for this library. If you need output into WDA the result would look like the following:",
"print(writetechnicalCSV(None, c1))",
"Further notes\nDatabaker has been developed by the Sensible Code Company on contract from the Office of National Statistics.\nThe first version was written in 2014 and ran only as a command line script where previews were made by via a coloured Excel spreadsheet. This version still exists under the version 1.2.0 tag and the documentation is hosted here.\nThis new version was developed at the end of 2015 to take advantage of the interactive programming capabilities of Jupyter and the freedom not to maintain backward compatibility.\nSee the remaining tutorial notebooks for more details."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
thehackerwithin/berkeley
|
code_examples/spring17_survey/survey.ipynb
|
bsd-3-clause
|
[
"The Hacker Within Spring 2017 survey\nby R. Stuart Geiger, freely licensed CC-BY 4.0, MIT license\nImporting and processing data\nImporting libraries",
"import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Importing data and previewing",
"df = pd.read_csv(\"survey.tsv\",sep=\"\\t\")\ndf[0:4]",
"Creating two dataframes: df_topics for interest/experience about topics and df_meta for questions about THW",
"df_topics = df\ndf_topics = df_topics.drop(['opt_out', 'Skill level', 'Personal experience', 'Presentation style'], axis=1)\n\ndf_meta = df\ndf_meta = df[['Skill level', 'Personal experience', 'Presentation style']]",
"Topic interest\nEach topic (e.g. Python, R, GitHub) has one cell, with a list based on the items checked. \n\nIf someone clicked \"I want this at THW\", there will be a 1. \nIf someone clicked \"I really want this at THW,\" there will be a 2. \nIf someone clicked \"I know something about this...\" there will be a 3. \n\nThese are mutually independent -- if someone clicked all of them, the value would be \"1, 2, 3\" and so on.\nAssumptions for calculating interest: If someone clicked that they just wanted a topic, add 1 to the topic's score. If someone clicked that they really wanted it, add 3 to the topic's score. If they clicked both, just add 3, not 4.",
"topic_interest = {}\ntopic_teaching = {}\n\nfor topic in df_topics:\n \n topic_interest[topic] = 0\n topic_teaching[topic] = 0\n\n for row in df_topics[topic]:\n \n # if row contains only value 1, increment interest dict by 1\n if str(row).find('1')>=0 and str(row).find('2')==-1:\n topic_interest[topic] += 1\n \n # if row contains value 2, increment interest dict by 3\n if str(row).find('2')>=0:\n topic_interest[topic] += 3\n \n if str(row).find('3')>=0:\n topic_teaching[topic] += 1 ",
"Results",
"topic_interest_df = pd.DataFrame.from_dict(topic_interest, orient=\"index\")\ntopic_interest_df.sort_values([0], ascending=False)\n\ntopic_interest_df = topic_interest_df.sort_values([0], ascending=True)\ntopic_interest_df.plot(figsize=[8,14], kind='barh', fontsize=20)",
"Topic expertise",
"topic_teaching_df = pd.DataFrame.from_dict(topic_teaching, orient=\"index\")\ntopic_teaching_df = topic_teaching_df[topic_teaching_df[0] != 0]\ntopic_teaching_df.sort_values([0], ascending=False)\n\ntopic_teaching_df = topic_teaching_df.sort_values([0], ascending=True)\ntopic_teaching_df.plot(figsize=[8,10], kind='barh', fontsize=20)",
"Meta questions about THW",
"df_meta['Personal experience'].replace([1, 2, 3], ['1: Beginner', '2: Intermediate', '3: Advanced'], inplace=True)\ndf_meta['Skill level'].replace([1, 2, 3], ['1: Beginner', '2: Intermediate', '3: Advanced'], inplace=True)\ndf_meta['Presentation style'].replace([1,2,3,4,5], [\"1: 100% presentation / 0% hackathon\", \"2: 75% presentation / 25% hackathon\", \"3: 50% presentation / 50% hackathon\", \"4: 25% presentation / 75% hackathon\", \"5: 100% hackathon\"], inplace = True)\n\ndf_meta = df_meta.dropna()\ndf_meta[0:4]",
"Personal experience with scientific computing",
"pe_df = df_meta['Personal experience'].value_counts(sort=False).sort_index(ascending=False)\npe_plot = pe_df.plot(kind='barh', fontsize=20, figsize=[8,4])\nplt.title(\"What is your personal experience with scientific computing?\", size=20)",
"What skill level should we aim for?",
"skill_df = df_meta['Skill level'].value_counts(sort=False).sort_values(ascending=False)\nskill_plot = skill_df.plot(kind='barh', fontsize=20, figsize=[8,4])\nplt.title(\"What skill level should we aim for?\", size=20)",
"What should our sessions look like?",
"style_df = df_meta['Presentation style'].value_counts(sort=False).sort_index(ascending=False)\nstyle_plot = style_df.plot(kind='barh', fontsize=20, figsize=[8,4])\nplt.title(\"Session format\", size=20)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
flowmatters/veneer-py
|
doc/examples/results/functional-unit-results.ipynb
|
isc
|
[
"Retrieving results from functional units\nRetrieving results for functional units has been problematic because Functional unit results don't fit neatly within the three main criteria (NetworkElement - which maps to catchment, RecordingElement and RecordingVariable).\nLong story short - the typical behaviour is that you get N subcatchments * M functional units * M functional units.\nSo, in the example here, 10 catchments, 5 functional units leads to 10 x 5 x 5 = 250 time series being retrieved, rather than the expected 50.\nThis can partly be dealt with by using an appropriate column naming function (veneer.name_for_fu_and_sc) and then filtering the results, but its still annoying, slow and wastes memory.\nIn recent versions of Veneer, you can now request results for a specific Functional Unit. You need to call v.retrieve_multiple_time_series multiple times, but you end up with just what you're after",
"import veneer\n\nv = veneer.Veneer()\n\nset(v.model.catchment.get_functional_unit_types())\n\nv.drop_all_runs()\n\nv.configure_recording(enable=[{'RecordingVariable':'Total Flow'}])\n\nv.run_model()\n",
"Standard behaviour\nFunctional units results repeated.",
"res = v.retrieve_multiple_time_series(criteria={'RecordingVariable':'Total Flow'})\nres[:10]\n\nlen(res.columns)\n\nres.columns",
"Slight improvement\nAdd functional unit to the column name. Same amount of data, but at least you can distinguish the data",
"res = v.retrieve_multiple_time_series(criteria={'RecordingVariable':'Total Flow'},name_fn=veneer.name_for_fu_and_sc)\nres[:10]\n\nres.columns",
"Better - use FunctionalUnit criteria\nRecent versions of Veneer support the FunctionalUnit criteria. Note this is only available on retrieve_multiple_time_series - not available on configure_recording.\nThis version is bundled with Source 4.6 onwards and available in custom Veneer builds for earlier versions of Source",
"v.retrieve_multiple_time_series?\n\nres = v.retrieve_multiple_time_series(criteria={'RecordingVariable':'Total Flow','FunctionalUnit':'GeneratedFunctionalUnit0'},\n name_fn=veneer.name_for_fu_and_sc)\nres[:10]\n\nres.columns"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs
|
development/examples/requiv_max_limit.ipynb
|
gpl-3.0
|
[
"jktebop: requiv_max_limit\nHere we'll examine how well jktebop agrees with PHOEBE with increased distortion.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.4,<2.5\"\n\nimport phoebe\n\nb = phoebe.default_binary()\n\nb.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101))",
"In order to allow jktebop to compute models, we'll set requiv_max_limit=1.0, effectively disabling the error that would otherwise be raised at a default factor of 0.5 by b.run_checks_compute.",
"b.add_compute('jktebop', requiv_max_limit=1.0)",
"And to avoid any issues with falling outside the atmosphere grids, we'll set a simple flat limb-darkening model and disable irradiation.",
"b.set_value_all('ld_mode', 'manual')\nb.set_value_all('ld_func', 'linear')\nb.set_value_all('ld_coeffs', [0.5])\nb.set_value_all('irrad_method', 'none')",
"For PHOEBE, we'll use blackbody atmospheres (again to avoid any issues of falling out of the grid). For jktebop, we'll keep 'ck2004' - this will only be used to compute the flux-scaling factor based on mean stellar values, so should not fall outside the grid.\nAt a quick glance, we can see the jktebop agrees quite well at a factor of 0.6, but noticeable differences appear by 0.7 (keep in mind, the default value before an error will be raise within PHOEBE is 0.5, but this can be adjusted as necessary, with caution).",
"requiv_max = b.get_value('requiv_max', component='primary', context='component')\nfor requiv_max_factor in [0.6, 0.7, 0.8, 0.9, 1.0]:\n b.set_value('requiv', component='primary', value=requiv_max_factor*requiv_max)\n \n b.run_compute(kind='phoebe', atm='blackbody', model='phoebe2_model', overwrite=True)\n b.run_compute(kind='jktebop', model='jktebop_model', overwrite=True)\n \n _ = b.plot(context='model', \n title='requiv = {:0.1f} / requiv_max'.format(requiv_max_factor), \n draw_title=True, \n legend=True, show=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gasabr/AtoD
|
experiments/winrate_matrix.ipynb
|
mit
|
[
"import json\nimport numpy as np\nimport pandas as pd\nfrom itertools import combinations, product\n\nfrom atod import Hero, Heroes\n\nn_heroes = 115\n\nwith open('data/players_in_matches.json', 'r') as fp:\n players_in_matches = json.load(fp)\n\n# TODO:\n# Print some info about dataset:\n# * first match date\n# * last match date\n# * number of matches\n\n# TODO:\n# def how_many_nans(X: pd.DataFrame) -> int:\n\nmatches = dict()\n\nfor record in players_in_matches:\n # create match in matches dictionary with arrays for\n # winners and losers ids\n matches.setdefault(str(record['match_id']), \n {\n 'winners': [],\n 'loosers': [],\n }\n )\n if record['win']:\n # add hero to winners of this match\n matches[str(record['match_id'])]['winners'].append(record['hero_id'])\n else:\n # add hero to losers\n matches[str(record['match_id'])]['loosers'].append(record['hero_id'])\n\n# length of matches should be 10 times smaller than length of players...\n# since there are 10 players in each match\nassert len(matches), len(players_in_matches) / 10\n\n# crete and fill \n# TODO: rename matrices\nmatches_together = np.zeros((n_heroes, n_heroes))\nmatches_won = np.zeros((n_heroes, n_heroes))\nmatches_lost = np.zeros((n_heroes, n_heroes))\nmatches_against = np.zeros((n_heroes, n_heroes))\n\nfor match in matches.values():\n # for winners\n # sorting is needed to have upper traingular matrix\n # combinations produces all heroes pairs with smaller id first\n for hero1, hero2 in combinations(sorted(match['winners']), 2):\n matches_together[hero1][hero2] += 1\n matches_won[hero1][hero2] += 1\n \n for hero1, hero2 in combinations(sorted(match['loosers']), 2):\n matches_together[hero1][hero2] += 1\n \n for looser, winner in product(match['loosers'], match['winners']):\n matches_against[looser][winner] += 1\n matches_against[winner][looser] += 1\n matches_lost[looser][winner] += 1\n\n# minimum number of matches for pair of heroes to be included in dataset\nmin_matches_played = 10\nmax_winrate = .65\nmax_matches_together = max([max(a) for a in matches_together])\nwere_nulls = sum([a.shape[0] - np.count_nonzero(a) for a in matches_together])\n\n# if combination of 2 heroes were used less than `min_matches` times,\n# don't count their win(lose)rate (it would be NaN in result matrix)\nmatches_together[matches_together < min_matches_played] = np.NaN\nmatches_together[matches_together > max_winrate] = max_winrate\nmatches_against[matches_against < min_matches_played] = np.NaN\n\nbecome_nulls = sum([a.shape[0] - np.count_nonzero(a) for a in matches_together])\n\nprint(become_nulls - were_nulls)\n\n# find maximum amount of matches played by 2 heroes\nmax_matches_played = np.nanmax([np.nanmax(hero) \n for hero in matches_together])\n\n# some combinations were played more than another, so\n# there is more confidence in picking this kind of heroes (tiny-wi)\n\nwinrate_ = (matches_won / matches_together) * (1 + matches_together / max_matches_played)\nwinrate = pd.DataFrame(winrate_)\nwinrate.dropna(axis=0, how='all', inplace=True)\nwinrate.dropna(axis=1, how='all', inplace=True)\nwinrate.head()\n\nlose_rate_ = matches_lost / matches_against\nlose_rate = pd.DataFrame(lose_rate_)\nlose_rate.dropna(axis=0, how='all', inplace=True)\nlose_rate.dropna(axis=1, how='all', inplace=True)\nlose_rate.head()\n\nn = winrate.shape[0]\n# how many heroes pairs don't have enough matches to have\n# meaningful winrate\nn_bad_pairs = n**2 - winrate.count().sum() - (n**2 - n)/2\nn_pairs = (n**2 - n)/2\nprint('Percent of pairs with not enough matches 
to count them:', \n n_bad_pairs / n_pairs)",
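"# A minimal sketch for the `how_many_nans` TODO above - added for\n# illustration, not part of the original notebook.\ndef how_many_nans(X: pd.DataFrame) -> int:\n    # total number of NaN cells in the frame\n    return int(X.isnull().sum().sum())\n\n# for example, count the missing entries of the winrate matrix\nhow_many_nans(winrate)",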
"Building a pick\nIdea: user gives 2 heroes as input, after that algorithms searches for the best next hero till there are 5 of them. The best hero would be choosen by maximazing the weight of edges in heroes graph. Heroes graph -- vertices are rows in winrate matrix and edges are winrates of heroes pairs.",
"def get_next_hero(pick, against=[], ban=[]):\n best_connection = -100\n next_pick = 0\n\n for next_hero_id in winrate.index:\n # if this hero is not in the opening\n if next_hero_id not in pick and next_hero_id not in ban \\\n and next_hero_id not in against:\n \n total_connection = 0\n for picked_hero in pick:\n hero1, hero2 = sorted([next_hero_id, picked_hero])\n total_connection += winrate.loc[hero1][hero2]\n \n for enemy in against:\n total_connection -= lose_rate.loc[next_hero_id][enemy]\n\n if total_connection > best_connection:\n best_hero = next_hero_id\n best_connection = total_connection\n\n return best_hero.item()\n\npick = Heroes()\npick.add(Hero.from_name(''))\n\nban = Heroes()\nban.add(Hero.from_name('Shadow Fiend'))\nban.add(Hero.from_name('Invoker'))\n\nagainst = Heroes()\nagainst.add(Hero.from_name('Slardar'))\nagainst.add(Hero.from_name('Witch Doctor'))\n\nwhile len(pick) < 5:\n next_hero = get_next_hero(list(pick.get_ids()),\n ban=list(ban.get_ids()),\n against=list(against.get_ids()))\n pick.add(Hero(next_hero))\n \nprint(pick.get_names())",
"A lot of attempts to build a pick from a random hero gave me the next thought: maximum weighted winrate should be limited by some value. Because otherwise, same combinations of heroes will appear over and over again. For example, all the values in winrate matrix more than .6 should be equal to .6 or weights should be somehow.\nFirst idea really improves performance!",
"h1 = Hero(4)\nh2 = Hero(108)\nprint(h1.name, h2.name)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
mitdbg/modeldb
|
client/workflows/demos/registry/pytorch-fashion-mnist-end-to-end.ipynb
|
mit
|
[
"Deploying Tensorflow models on Verta\nWithin Verta, a \"Model\" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc); a function (e.g., squaring a number, making a DB function etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application.) See more here.\nThis notebook provides an example of how to deploy a PyTorch model on Verta as a Verta Standard Model either via convenience functions (for Keras) or by extending VertaModelBase.\n0. Imports",
"import torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom torchvision import datasets\nfrom torchvision.transforms import ToTensor, Lambda, Compose\nimport matplotlib.pyplot as plt",
"0.1 Verta import and setup",
"# restart your notebook if prompted on Colab\ntry:\n import verta\nexcept ImportError:\n !pip install verta\n\nimport os\n\n# Ensure credentials are set up, if not, use below\n# os.environ['VERTA_EMAIL'] = \n# os.environ['VERTA_DEV_KEY'] = \n# os.environ['VERTA_HOST'] =\n\nfrom verta import Client\n\nclient = Client(os.environ['VERTA_HOST'])",
"1. Model Training\n1.1 Load training data",
"training_data = datasets.FashionMNIST(\n root=\"data\",\n train=True,\n download=True,\n transform=ToTensor(),\n)\n\n# Download test data from open datasets.\ntest_data = datasets.FashionMNIST(\n root=\"data\",\n train=False,\n download=True,\n transform=ToTensor(),\n)\n\nbatch_size = 64\n\n# Create data loaders.\ntrain_dataloader = DataLoader(training_data, batch_size=batch_size)\ntest_dataloader = DataLoader(test_data, batch_size=batch_size)\n\nfor X, y in test_dataloader:\n print(\"Shape of X [N, C, H, W]: \", X.shape)\n print(\"Shape of y: \", y.shape, y.dtype)\n break\n",
"1.2 Define network",
"# Get cpu or gpu device for training.\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\nprint(\"Using {} device\".format(device))\n\n# Define model\nclass NeuralNetwork(nn.Module):\n def __init__(self):\n super(NeuralNetwork, self).__init__()\n self.flatten = nn.Flatten()\n self.linear_relu_stack = nn.Sequential(\n nn.Linear(28*28, 512),\n nn.ReLU(),\n nn.Linear(512, 512),\n nn.ReLU(),\n nn.Linear(512, 10),\n nn.ReLU()\n )\n\n def forward(self, x):\n x = self.flatten(x)\n logits = self.linear_relu_stack(x)\n return logits\n\nmodel = NeuralNetwork().to(device)\nprint(model)\n\nloss_fn = nn.CrossEntropyLoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=1e-3)",
"1.3 Train/test code",
"def train(dataloader, model, loss_fn, optimizer):\n size = len(dataloader.dataset)\n for batch, (X, y) in enumerate(dataloader):\n X, y = X.to(device), y.to(device)\n \n # Compute prediction error\n pred = model(X)\n loss = loss_fn(pred, y)\n \n # Backpropagation\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n if batch % 100 == 0:\n loss, current = loss.item(), batch * len(X)\n print(f\"loss: {loss:>7f} [{current:>5d}/{size:>5d}]\")\n\n\ndef test(dataloader, model, loss_fn):\n size = len(dataloader.dataset)\n num_batches = len(dataloader)\n model.eval()\n test_loss, correct = 0, 0\n with torch.no_grad():\n for X, y in dataloader:\n X, y = X.to(device), y.to(device)\n pred = model(X)\n test_loss += loss_fn(pred, y).item()\n correct += (pred.argmax(1) == y).type(torch.float).sum().item()\n test_loss /= num_batches\n correct /= size\n print(f\"Test Error: \\n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \\n\")\n\n\nepochs = 5\nfor t in range(epochs):\n print(f\"Epoch {t+1}\\n-------------------------------\")\n train(train_dataloader, model, loss_fn, optimizer)\n test(test_dataloader, model, loss_fn)\nprint(\"Done!\")\n\ntorch.save(model.state_dict(), \"model.pth\")\nprint(\"Saved PyTorch Model State to model.pth\")\n\nmodel = NeuralNetwork()\nmodel.load_state_dict(torch.load(\"model.pth\"))\n\nclasses = [\n \"T-shirt/top\",\n \"Trouser\",\n \"Pullover\",\n \"Dress\",\n \"Coat\",\n \"Sandal\",\n \"Shirt\",\n \"Sneaker\",\n \"Bag\",\n \"Ankle boot\",\n]\n\nmodel.eval()\nx, y = test_data[0][0], test_data[0][1]\nwith torch.no_grad():\n pred = model(x)\n predicted, actual = classes[pred[0].argmax(0)], classes[y]\n print(f'Predicted: \"{predicted}\", Actual: \"{actual}\"')",
"2. Register Model for deployment",
"registered_model = client.get_or_create_registered_model(\n name=\"fashion-mnist\", labels=[\"computer-vision\", \"pytorch\"])",
"2.1 Register from the model object\nIf you are in the same file where you have the model object handy, use the code below to package the model",
"from verta.environment import Python\n\nmodel_version = registered_model.create_standard_model_from_torch(\n model,\n environment=Python(requirements=[\"torch\", \"torchvision\"]),\n name=\"v1\",\n)",
"2.2 (OR) Register a serialized version of the model using the VertaModelBase",
"from verta.registry import VertaModelBase\n\nclass FashionMNISTClassifier(VertaModelBase):\n def __init__(self, artifacts):\n self.model = NeuralNetwork()\n model.load_state_dict(torch.load(artifacts[\"model.pth\"]))\n \n def predict(self, batch_input):\n results = []\n for one_input in batch_input:\n with torch.no_grad():\n pred = model(x)\n results.append(pred)\n return results\n\nmodel_version = registered_model.create_standard_model(\n model_cls=FashionMNISTClassifier,\n environment=Python(requirements=[\"torch\", \"torchvision\"]),\n artifacts={\"model.pth\" : \"model.pth\"},\n name=\"v2\"\n)",
"3. Deploy model to endpoint",
"fashion_mnist_endpoint = client.get_or_create_endpoint(\"fashion-mnist\")\nfashion_mnist_endpoint.update(model_version, wait=True)\n\ndeployed_model = fashion_mnist_endpoint.get_deployed_model()\ndeployed_model.predict([test_data[0][0]])",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
staeiou/wiki-stat-notebooks
|
retention_20180712/wiki_edit_counts_retention.ipynb
|
mit
|
[
"Visualizations of editing activity in en.wikipedia.org\nBy Stuart Geiger, Berkeley Institute for Data Science\n(C) 2016, Released under The MIT license.\nThis data is collected and aggregated by Erik Zachte, which is here for the English Wikipedia. I have just copied that data from HTML tables into a CSV (which is not done here), then imported it into Pandas dataframes, and plotted it with matplotlib.\nProcessing and cleaning data",
"import pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import ScalarFormatter\nimport numpy as np\n%matplotlib inline\nmatplotlib.style.use('ggplot')\n\n\n# Data by Erik Zachte at https://stats.wikimedia.org/EN/TablesWikipediaEN.htm\ncounts = pd.read_csv(\"edit_counts.tsv\", sep=\"\\t\")\n\n\n# Data by Stuart Geiger\n# Random sample of 50000 users who have registered and made a userpage\n# Then, how many edits did they make 1 to 2 years after their registration date\n\nretention = pd.read_csv(\"retention.tsv\", sep=\"\\t\")\nretention.reg_date=pd.to_datetime(retention.reg_date,format=\"%Y%m%d%H%M%S\")\n\ndef survived(edits):\n if edits > 0:\n return 1\n else:\n return 0\n\nretention['survival'] = retention.edits_1yr.apply(survived)\n\nretention[0:10]\n\n# Convert dates to datetimes\ncounts.date=pd.to_datetime(counts.date,infer_datetime_format=True)\n\n\n# Peek at the dataset\ncounts[0:10]",
"Some of the columns use 'k' for thousands and 'M' for millions, so we need to convert them.",
"\ndef units_convert(s):\n \"\"\"\n Convert cells with k and M to times 1,000 and 1,000,000 respectively\n \n I got this solution from \n http://stackoverflow.com/questions/14218728/converting-string-of-numbers-and-letters-to-int-float-in-pandas-dataframe\n \"\"\"\n \n powers = {'k': 1000, 'M': 10 ** 6}\n\n if(s[-1] == 'k' or s[-1] == 'M'):\n try:\n power = s[-1]\n return float(s[:-1]) * powers[power]\n except TypeError:\n return float(s)\n else:\n return float(s)\n \n\n\n# Apply this function to the columns that have 'k' or 'M' units, store them as new _float columns\ncounts['edits_float']=counts.edits.apply(units_convert)\ncounts['article_count_float']=counts['article count'].apply(units_convert)\n\n# Make sure we've got data types figured out\ncounts.dtypes\n\n# Set date column as index\n\ncounts.set_index(['date'])\n\n# Calculate some ratios\n\ncounts['highly_active_to_newcomer_ratio']=counts['>100 edits']/counts['new accts']\ncounts['active_to_newcomer_ratio']=counts['>5 edits']/counts['new accts']\ncounts['highly_active_to_active_ratio']=counts['>100 edits']/(counts['>5 edits']-counts['>100 edits'])\n",
"Graphs\nEditor retention",
"import datetime\ndef dt_to_yearmofirst(dt):\n \"\"\"\n Adding one year to the reg date, because we are looking at if people who registered were still editing 1 year later\n \"\"\"\n year= dt.year + 1\n month= dt.month\n return datetime.datetime(year=year,month=month,day=1)\n\nretention['reg_mo_first'] = retention.reg_date.apply(dt_to_yearmofirst)\n\nretention[0:10]\n\nretention_group = retention.groupby([\"reg_mo_first\"])\nmonthly_averages = retention_group.aggregate({\"survival\":np.mean})\n\ndef add_year(dt):\n year = dt.year + 1\n month = dt.month\n day = dt.day\n return datetime.datetime(year, month, day)",
"Filter because there is missing data before 2003 and counts after July 2014 aren't accurate",
"\nmonthly_avg = monthly_averages[monthly_averages.index>datetime.datetime(2004,1,1)]\nmonthly_avg= monthly_avg[monthly_avg.index<datetime.datetime(2014,9,1)]\n\n\nmonthly_avg.plot()",
"Number of editors",
"matplotlib.style.use(['bmh'])\nfont = {'weight' : 'regular',\n 'size' : 16}\n\nmatplotlib.rc('font', **font)\n\nax1 = counts.plot(x='date',y=['>5 edits', '>100 edits'], figsize=(12,4), \n label=\"Users making >5 edits in a month\", color=\"r\")\n\nax1.set_xlabel(\"Year\")\nax1.set_ylabel(\"Number of users\")\n\nax2 = counts.plot(x='date',y='>100 edits', figsize=(12,4), \n label=\"Users making >100 edits in a month\",color=\"g\")\nax2.set_xlabel(\"Year\")\nax2.set_ylabel(\"Number of editors\")\n\nax3 = counts.plot(x='date',y='new accts', figsize=(12,4), \n label=\"New users making >10 edits in a month\",color=\"b\")\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Number of editors\")\nax3.yaxis.set_major_formatter(ScalarFormatter())\n\nmatplotlib.style.use(['bmh'])\nfont = {'weight' : 'regular',\n 'size' : 16}\n\nmatplotlib.rc('font', **font)\n\nax1 = counts.plot(x='date',y=['>5 edits', '>100 edits'], figsize=(12,7), \n label=\"Users making >5 edits in a month\", color=\"bg\")\n\nax1.set_xlabel(\"Year\")\nax1.set_ylabel(\"Number of users\")\nax1.set_ylim(0,70000)\n\nax2 = ax1.twinx()\n\nax2 = monthly_avg.plot(ax=ax2,secondary_y=True,color=\"r\")\n\nax1.set_xlim(372,519)\n\nax1.legend(loc='upper center', bbox_to_anchor=(0.35, 1.15),\n ncol=2, fancybox=True, shadow=True)\nax2.legend(loc='upper center', bbox_to_anchor=(0.75, 1.15),\n ncol=1, fancybox=True, shadow=True)\nax2.set_ylabel(\"1 year newcomer survival\")\n\n\n\nmatplotlib.style.use(['bmh'])\nfont = {'weight' : 'regular',\n 'size' : 16}\n\nmatplotlib.rc('font', **font)\n\nax1 = counts.plot(x='date',y=['>5 edits', '>100 edits'], figsize=(12,7), \n label=\"Users making >5 edits in a month\", color=\"bg\")\n\nax1.set_xlabel(\"Year\")\nax1.set_ylabel(\"Number of users making >x edits/month\")\nax1.set_ylim(0,60000)\n\n\nax1.set_xlim(372,529)\n\nax1.legend(loc='upper center', bbox_to_anchor=(0.35, 1.15),\n ncol=2, fancybox=True, shadow=True)\n\n\n\nax1 = counts.plot(x='date',y=['>5 edits','>100 edits','new accts'], figsize=(12,4), \n label=\"Users making >5 edits in a month\",color=['r','g','b'])\n\nax1.set_xlabel(\"Year\")\nax1.set_ylabel(\"Number of users\")\nax1.yaxis.set_major_formatter(ScalarFormatter())\n\nax1 = counts.plot(x='date',y=['>5 edits','>100 edits','new accts'], figsize=(12,4), \n label=\"Users making >5 edits in a month\",logy=True, color=['r','g','b'])\n\nax1.set_xlabel(\"Year\")\nax1.set_ylabel(\"Number of users\")\nax1.yaxis.set_major_formatter(ScalarFormatter())\n\n\nax3 = counts.plot(x='date',y='highly_active_to_active_ratio', figsize=(12,4), \n label=\"Highly active users to active users ratio\",color=\"k\")\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Ratio\")\n\n\nax3 = counts.plot(x='date',y='highly_active_to_newcomer_ratio', figsize=(12,4), \n label=\"Highly active users to newcomers ratio\",color=\"k\")\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Ratio\")\n\n\nax3 = counts.plot(x='date',y='active_to_newcomer_ratio', figsize=(12,4), \n label=\"Active users to newcomers ratio\",color=\"k\")\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Ratio\")\n\n\nax3 = counts.plot(x='date',y='edits_float', figsize=(12,4), \n label=\"Number of edits per month\",color=\"k\")\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Number of editors\")\n\n\nax3 = counts.plot(x='date',y='new per day', figsize=(12,4), \n label=\"New articles written per day\",color=\"k\")\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Number of articles\")\n\n\nax3 = counts.plot(x='date',y='article_count_float', figsize=(12,4), \n label=\"Number of 
articles\",color=\"k\")\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Number of articles\")\n\nax3 = counts.plot(x='date',y='article_count_float', figsize=(12,4), \n label=\"Number of articles\",color=\"k\",logy=True)\nax3.set_xlabel(\"Year\")\nax3.set_ylabel(\"Number of articles\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
patrickmineault/xcorr-notebooks
|
notebooks/Contextual bandits with Thompson sampling.ipynb
|
mit
|
[
"Contextual bandits\nImagine the following scenarios:\n\nyou have a tDCS stimulator that has a series of knobs - frequency, amplitude, location of electrodes, etc. - and you want to find a stimulation protocol that \"just works\" and improves people's performance on a psychophysical task.\nyou're about to run a drug trial for a really rare, deadly disease - so deadly that it would be unethical to keep people in the placebo arm when you know they would be better in the treatment arm. But it doesn't work for everybody, and the factors that make it work on not aren't fully known. You want to direct the right people to the treatment arm as you're learning about which factors cause the drug to work or not.\nMore prosaically, you're on Tinder and you want to optimize your photo selection, your profile text, etc. so you can most efficiently find love. Time's a wasting!\n\nEach of these scenarios has characteristics of a regression problem - learning the underlying factors that cause a measurement. But they also have characteristics of an exploration/exploitation problem - balancing learning about a system and exploiting your current knowledge to maximize your expected long-term outcome.\n<p style=\"text-align: center\"><a href=\"https://en.wikipedia.org/wiki/File:Las_Vegas_slot_machines.jpg#/media/File:Las_Vegas_slot_machines.jpg\"><img src=\"https://upload.wikimedia.org/wikipedia/en/8/82/Las_Vegas_slot_machines.jpg\" alt=\"Las Vegas slot machines.jpg\" height=\"447\" width=\"640\"></a><br><a href=\"http://creativecommons.org/licenses/by-sa/3.0/\" title=\"Attribution-ShareAlike 3.0\">CC BY-SA 3.0</a>, https://en.wikipedia.org/w/index.php?curid=5709790</p>\n\nMulti-armed bandits are a formalization of the exploration / exploitation problem. Imagine you have a series of one-armed bandits - slot machines - that give you a stochastic reward when you pull them. You have some number of trials to maximize your total reward. You can think of all kinds of heuristics for solving this - pull an arm until it doesn't give you a reward, then switch, etc. \nIn fact, the optimal solution to this problem is, by and large, intractable - you can find an optimal policy for binary rewards using dynamic programming but it doesn't scale to large state spaces and many actions. People have come up with all sorts of heuristics to solve this problem, which arent't quite optimal, but they work. This includes approaches like epsilon-greedy, Gibbs sampling, UCB, EXP4, etc. See this excellent post by Ian Osband for a more thorough introduction. \nThe best-in-class approaches focus on representing the uncertainty in the reward distributions and decreasing it. The answer, as always, is to be Bayesian. One of the best known solutions is to always pick the arm with the highest upper confidence bound (UCB) at every trial. Another one, which I'm very fond of, is Thompson sampling. You sample from the posterior for you model parameters, and then you act optimally, assuming that these are the true parameters. I'll explain this in more detail later. \nBayesian GLMs for the contextual bandit\nCombining elements of the multi-armed bandit with regression, as in the problems above, yields the problem class known as contextual bandits. You can use different formulations for the reward distribution - Gaussian processes, Bayesian neural nets - as long as you can sample from the posterior of the model parameters. Here we'll focus on the simplest class - Bayesian GLMs. 
Let's assume that the mean of the reward on a given trial is given by:\n$$\\mu = f(\\mathbf{x}^T \\mathbf{w})$$\nAnd that the actual reward $y$ is taken from a distribution:\n$$y = \\text{Distribution}(\\mu, \\zeta)$$\nHere $\\mathbf{x}$ is the design for the trial, $w$ are the weights corresponding to the factors of interest, $f$ is a nonlinearity, and $\\zeta$ are parameters of the noise distribution. \nThis generalized linear model (GLM) formulation can accomodate continuous rewards (identity-normal), binary rewards (logistic-binomial), integer rewards (exponential-Poisson), etc. \nDesigning design matrices\nEvery trial, you learn a little more about $\\mathbf{w}$, which helps you pick the design $\\mathbf{x}$ for the next trial. In the tDCS example, $\\mathbf{x}$ could encode:\n\nbinary variables: whether user receives real or sham stimulation\ncategorical variables: whether you use a square wave, sinusoid, or random noise stimulation\n(constrained) continuous variables: the frequency and amplitude of stimulation\n\nSome other elements of the design matrix can be set in stone - the age of a subject, for example. We can't change them, but they will influence our optimal action selection, e.g. whether they should be redirected to the experimental arm in the medical trial example.\nPrior information\nWe need to represent uncertainty to sample from our posterior, and we'll need a proper prior for our weights:\n$$\\mathbf{w} \\sim \\text{Normal}(0, \\text{precision} = \\Lambda)$$\nThen we turn the Bayesian crank on this GLM to estimate the model parameters. For the logistic-binomial model, for example, we know how to solve for the MAP value of $\\mathbf{w}$, by iterating:\n$$\\mathbf{w} \\leftarrow \\mathbf{w} - \\mathbf{H}^{-1}\\mathbf{g}$$\n$$\\mathbf{H} = \\mathbf{X}^T\\text{diag}(\\mu(1-\\mu))\\mathbf{X} + \\lambda \\mathbf{I}$$\n$$\\mathbf{g} = \\mathbf{X}^T (\\mathbf{y} - \\mathbf{\\mu})$$\nHere $\\mathbf{X}$ is a collection of all trials (one row per trial).\nThompson sampling\nEvery time that we get a new dump of data, we update our estimates of model parameters. But how should we act on this? We ought to select our next trial design so that the expected reward (psychophysical performance, survivals, swipes, etc.) at the end of our trials is maximized. But we can't simply pick the best current experimental parameters and use that for the next trial; then we'll never get any information about other variations!\nSo we have to balance exploiting (showing what is known to work) and exploring (presenting new things that have a shot of working). One that works really well in practice is Thompson sampling. It's really quite simple - take a sample from the posterior distribution of $\\mathbf{\\hat{w}} \\sim p(\\mathbf{w})$. Find the design that maximizes:\n$$\\mathbf{x}^T \\mathbf{\\hat{w}}$$\nUse that for each trial. So, sampling and a really simple maximization. What's nice about it, apart from the fact that it works really well in practice and it's dead simple to implement, is that it's a stochastic policy, and that means it's robust to delayed updates. \nIn the tDCS example, let's say that you run a batch of subjects, and you only analyze your data at the end of the day. Under a deterministic policy, every subject on that day would have the same stimulation protocol. That's really inefficient, and it breaks deterministic policies like UCB. Under Thompson sampling, however, every subject would undergo a slightly different protocol, and we'll keep learning. 
It's not quite as efficient as an instantaneous update, but it can be a lot more practical.\nLaplace approximation\nThere's a slight hiccup in the Thompson sampling formulation - how do we sample from the posterior? If our value function was continuous, and we assume normal noise, our posterior would be exactly Gaussian, so that we could trivially sample from it. For non-normal GLMs (binomial, poisson, etc.), that's not the case, however. We can, however, /approximate/ our posterior as Gaussian. There's a lot of different ways of doing this (VB, EP, etc.), but here we'll stick to the simplest, the Laplace approximation. The Laplace approximation approximates a log posterior by a quadratic function matching the one of the real posterior. Hence, we'll have:\n$$p(\\mathbf{w}) \\approx \\text{Normal}(\\mathbf{w}{MAP}, \\text{precision} = \\mathbf{H}{MAP})$$\nWe know how to sample from a multivariate normal distribution, so that solves our sampling problem. Another nice thing we can do is to use sequential updating rather than full updating. Every time we refit the model, we just replace our prior with the Laplace approximation from the last time we fit. Of course, if you only have a small number of data points, this is wasteful - but for closed-loop experiments in neuroscience, where you might have a trial every second, or even a trial every frame, it's essential.\nNon-stationarity\nA nice side-effect of sequential updating is that it makes it easy to offer some robustness against non-stationarity - in the tDCS example, maybe your first subjects (the experimenters) are different than your later subjects (naive undergrads); the protocol optimization you've done in the early stages might be counterproductive for later stages. \nYou can artificially broaden the covariance of the prior distribution before doing an update. Doing so will make the model partially forgetful, and will allow adjustments for non-stationarities. This might sound hacky, but you can get this exact solution if you cast the parameters as a dynamical system. With normal dynamics, a drift term, and nonlinear observation dynamics, you can use the Extended Kalman filter to solve for the parameter updates, and you get a solution which looks exactly like broadening the prior. Neat!\nEnough talk - let's get right to it!",
"%matplotlib inline\nfrom __future__ import division\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.signal",
"Let's first implement sequential updates for GLMs.",
"def logistic(x):\n \"\"\"Returns the logistic of the numeric argument to the function.\"\"\" \n return 1 / (1 + np.exp(-x))\n\ndef estimate_glm_params(X, \n y, \n prior_mean, \n prior_precision,\n model = 'binomial',\n niters = 1):\n \"\"\"Estimate model parameters for a GLM. \n \n Find MAP estimate of a GLM with a normal prior on the model parameters. \n \n Args:\n X: an NxM matrix - the design matrix\n y: a N length vector - the measured outcome\n prior_mean: an M length vector - the prior mean\n prior_precision: an MxM matrix - the prior precision\n model: a string - accepts normal, binomial, poisson. \n Uses canonical links. Gaussian assumes observation noise has variance 1.\n niters: the number of Newton iterations to do.\n \n Returns:\n (w_MAP, precision_MAP): the MAP parameter and its precision \n ( == the Hessian at the MAP) \n \"\"\"\n w = prior_mean\n for i in range(niters):\n eta = X.dot(w)\n if model == 'normal':\n mu = eta\n H = X.T.dot(X) + prior_precision\n elif model == 'binomial':\n mu = logistic(eta)\n H = X.T.dot(((1 - mu) *mu).reshape((-1,1)) * X) + prior_precision\n elif model == 'poisson':\n mu = np.exp(eta)\n H = X.T.dot(mu.reshape((-1,1)) * X) + prior_precision\n else:\n raise ValueError('Model should be one of normal, binomial, poisson')\n g = X.T.dot(mu - y)\n Hg, _, _, _ = np.linalg.lstsq(H, g)\n w = w - Hg\n return w, H\n\n# Check that one-shot estimation works.\nndata_points = 10000\nndims = 10\nX = np.random.randn(ndata_points, ndims)\nprior_precision = 100*np.eye(10)\nw = np.random.randn((ndims))*.1\n\nthreshold = .95\nfor family in ['normal', 'binomial', 'poisson']:\n w_mean = np.zeros((ndims))\n if family == 'normal':\n mu = X.dot(w)\n y = mu + np.random.randn(mu.size)\n elif family == 'binomial':\n mu = logistic(X.dot(w))\n y = np.random.binomial(1, mu)\n elif family == 'poisson':\n mu = np.exp(X.dot(w))\n y = np.random.poisson(mu) \n\n w_est, H_est = estimate_glm_params(X, y, w_mean, prior_precision, family, niters= 10)\n assert np.corrcoef(w_est, w)[0, 1] > threshold\n w_est0 = w_est.copy()\n\n # Check that sequential estimation works\n nbatches = 100\n w_est = w_mean.copy()\n prior_precision_est = prior_precision.copy()\n for n in range(nbatches):\n rg = slice( int(n / nbatches), int((n+1)*ndata_points / nbatches))\n w_est, prior_precision_est = estimate_glm_params(X[rg,:], y[rg], w_est, prior_precision_est, family)\n\n assert np.corrcoef(w_est0, w)[0, 1] > threshold\n assert np.corrcoef(w_est0, w_est)[0, 1] > threshold\nprint \"Sequential estimation in GLMs is working.\"",
"Let's make a function which can sample from a multivariate normal distribution.",
"def sample_normal_mean_precision(mean, precision, N_samples = 1):\n \"\"\"Samples from a normal distribution with a mean and precision.\n \n Uses eigenvalue decomposition to sample from the right distribution.\n \n Reference: \n https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Drawing_values_from_the_distribution\n \n Args:\n mean: an M-length vector, the mean of the normal.\n precision: an MxM matrix, the precision of the normal.\n N_samples: the number of samples.\n \n Returns:\n An MxN sample matrix.\n \"\"\"\n S, U = np.linalg.eig(precision)\n noise_vector = np.random.randn(precision.shape[1], N_samples)\n projection_matrix = (U * (S ** (-1/2)).reshape((1, -1)))\n sample = mean.reshape((-1, 1)) + projection_matrix.dot(noise_vector)\n return sample\n\nX = np.random.randn(100, 10)\nS, U = np.linalg.eig(X.T.dot(X))\nS = S ** 3\ncov = U.dot(S.reshape((-1, 1)) * U.T)\nprecision = np.linalg.inv(cov)\n\nsamples = sample_normal_mean_precision(np.zeros(precision.shape[0]), precision, 1000)\ncov_est = samples.dot(samples.T) / samples.shape[1]\n\nassert abs((cov_est - cov) / cov.max()).max() < .1",
"Let's makes some classes to represent different kinds of knobs - we'll just implement fixed and categorical (including binary) knobs here, but of course you can implement other ones.",
"class FixedKnob(object):\n def __init__(self):\n \"\"\"Defines a fixed knob\"\"\"\n self.dim = 1\n\n def optimal_design(self, knob_values):\n \"\"\" Returns the optimal design contingent on the knob values.\"\"\"\n return np.ones_like(knob_values)\n\nclass CategoricalKnob(object):\n def __init__(self, nclasses = 2):\n \"\"\"Defines a categorical knob. With nclasses = 2, this becomes a \n binary knob.\"\"\"\n self.dim = nclasses - 1\n \n def optimal_design(self, knob_values):\n if self.dim == 1:\n return (1 * (knob_values > 0)).reshape((-1, 1))\n else:\n max_vals = 1 * (knob_values == knob_values.max(axis = 1).reshape((-1, 1)))\n # De-dup in case of ties\n max_vals = max_vals * (np.cumsum(max_vals, axis = 1) == 1)\n return 1 * (knob_values > 0) * max_vals\n\n# Check that de-duping works\nknob = CategoricalKnob(3)\noptimal_design = knob.optimal_design(np.array([.5, .5]).reshape((1, -1)))\nassert np.allclose(optimal_design, np.array([1, 0]))\n\n# Check that it selects the default category when all the parameters\n# are negative.\noptimal_design = knob.optimal_design(np.array([-.5, -.5]).reshape((1, -1)))\nassert np.allclose(optimal_design, np.array([0, 0]))",
"And now, to sample and optimize these knobs...",
"def thompson_sampling(knobs, prior_mean, prior_precision, N_samples):\n \"\"\"\n Do Thompson sampling for the posterior distribution of the parameters\n of the knobs. \n \n Args:\n knobs: a list of knobs\n prior_mean: a M-length vector of means\n prior_precision: an MxM matrix of means\n N_samples: the number of samples to take\n \n Returns:\n (sampled_params, optimal_design) the sampled parameters (M x N_samples) \n and the optimal design (N_samples X N) corresponding to each draw from \n the sampled params.\n \"\"\"\n sampled_params = sample_normal_mean_precision(prior_mean, prior_precision, N_samples)\n X = []\n start_block = 0\n # Sample from each knob in sequence.\n for knob in knobs:\n rg = slice(start_block, start_block + knob.dim)\n X.append(knob.optimal_design(sampled_params[rg,:].T))\n start_block += knob.dim\n \n return sampled_params, np.hstack(X)\n\nknobs = [FixedKnob(), \n CategoricalKnob(2)]\n\n# All these knobs are good, so we expect a matrix of ones.\nw, X = sample_optimize_knobs(knobs, \n np.ones(2),\n np.eye((2))*100,\n 10)\nassert X.shape[0] == 10\nassert X.shape[1] == 2\nassert np.allclose(X, np.ones((10, 2)))\n\nknobs = [CategoricalKnob(3)]\n# Check that we get roughly the same number of 1's in each column\nw, X = thompson_sampling(knobs, \n np.ones(2),\n np.eye((2))*100,\n 1000)\nassert X.mean(0)[0] > .45 and X.mean(0)[1] < .55",
"Now we have all the pieces that we need to do contextual bandit. Let's run a binomial contextual bandit with a bunch of binary knobs, each of which can have a modest effect on the reward - in the range of 10%. We'll run 10 trials per batch, and run this for a number of batches.",
"def simulate_binomial_bandit(true_parameters, \n knobs,\n prior_mean,\n prior_precision,\n batch_size, \n N_batches):\n \"\"\" Run the binomial contextual bandit with Thompson sampling policy.\n \"\"\"\n rewards = np.zeros((N_batches,batch_size))\n for i in range(N_batches):\n # Get a design matrix for this batch\n _, X = thompson_sampling(knobs, prior_mean, prior_precision, batch_size)\n \n # Simulate rewards\n reward_rate = logistic(X.dot(true_parameters))\n batch_rewards = np.random.binomial(1, reward_rate)\n \n # Update the matrix.\n prior_mean, prior_precision = estimate_glm_params(X, \n batch_rewards, \n prior_mean, \n prior_precision)\n \n # Store the outcome.\n rewards[i, :] = batch_rewards\n \n return rewards, prior_mean\n\ndef logit(p):\n return np.log(p / (1 - p))\n\nbaseline_rate = .5\nbeta = logit(baseline_rate)\nbeta_sd = .2\n\nN_knobs = 10\nknob_sd = .5\n\nbatch_size = 10\nN_batches = 50\n\nprior_mean = np.hstack((beta, np.zeros(N_knobs)))\nprior_precision = np.diag(np.hstack((1 / beta_sd**2,\n np.ones(N_knobs) / knob_sd**2)))\n\n# Pick the parameters from the prior distribution.\ntrue_parameters = sample_normal_mean_precision(prior_mean, prior_precision).squeeze()\n\nknobs = [FixedKnob()]\nfor i in range(N_knobs):\n knobs.append(CategoricalKnob())\n\nrewards, _ = simulate_binomial_bandit(true_parameters,\n knobs,\n prior_mean,\n prior_precision,\n batch_size,\n N_batches)\n\nreward_sequence = rewards.ravel()\nplt.figure(figsize=(13, 5))\n\n# And also plot a smoother version\nsigma = 3\nrg = np.arange(-int(3*sigma), int(3*sigma) + 1)\nthefilt = np.exp(-(rg**2) / 2 / sigma**2)\nthefilt = thefilt / thefilt.sum()\nsmoothed_sequence = scipy.signal.convolve(reward_sequence, thefilt, 'same')\nsmoothed_sequence /= scipy.signal.convolve(np.ones_like(reward_sequence), thefilt, 'same')\nplt.plot(smoothed_sequence)\nplt.axis('tight')\nplt.box('off')\n\n# And show the optimal average reward\n_, opt_design = sample_optimize_knobs(knobs, true_parameters, prior_precision*10000, 1)\nopt_reward = logistic(opt_design.dot(true_parameters))\nplt.plot([0, N_batches*batch_size], [opt_reward, opt_reward], 'r-')\nplt.text(0, opt_reward, 'Best attainable average reward')\nplt.xlabel('Trial #')\nplt.title('Smoothed reward')",
"We see that the contextual bandit with Thompson sampling converges to the optimal policy quite rapidly - in a few hundred trials. Not a bad feat for such a simple method!\nWhere's the context?\nOur formulation of the contextual bandit was appropriate for optimizing discrete knobs which were entirely under our control. It's less obvious how to apply this method to problems where:\n\nVariables are numerous, continuous and have nonlinear relationships with the reward.\nThere are variables we observe, in addition to variables we can change.\n\nThe first problem can be tackled by replacing GLMs with Bayesian neural nets or Gaussian processes - we'll leave that for another day. The second problem is, however, quite tractable.\nConsider the medical treatment example. Let's say that we have two variables we care about - sex and age - and we want to optimize under which arm (control or experiment) each new enrolled patient is put. Let's write a generative model for the mean treatment effect:\nThen we might have a model where the probability of being healthy at the end of the treatment period is given by:\n$$\\mu = \\text{logistic}(\\kappa \\cdot (\\mathbf{z}^T \\mathbf{v} + \\beta) + \\alpha)$$\nHere:\n\n$\\kappa$: +1 if in the treatment arm, 0 in the control arm\n$\\mathbf{z}$: a vector of measurements from a patient, in this case a 2-element vector containing age and sex (variables centered around 0 for ease of interpretation).\n$\\alpha$: logit of the baseline reward - i.e. the spontaneous recovery rate\n$\\beta$: logit of the incremental reward for treatment, unconditioned on covariates - i.e. how much giving the pill helps for the average person.\n\nNow we collect $\\mathbf{v}$, $\\beta$, $\\alpha$ into a vector $\\mathbf{w}$, and $\\kappa \\mathbf{z}$, $\\kappa$, 1 into a vector $\\mathbf{x}$. Then we can apply the contextual bandit just like we did previously; we simply have to implement a new FactorizedKnob class and modify our simulation framework to sample from sex and age whenever a new patient is encountered. We leave this as an exercise to reader."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pastas/pasta
|
concepts/pastas_timeseries.ipynb
|
mit
|
[
"Pastas TimeSeries\nDeveloped by Raoul Collenteur \nIn this Jupyter Notebook, the concept of the pastas.TimeSeries class is explained in full detail. \nObjective of the Pastas TimeSeries class:\n\"To create one class that deals with all user-provided time series and the manipulations of the series while maintaining the original series.\"\nDesired Capabilities:\nThe central idea behind the TimeSeries class is to solve all data manipulations in a single class while maintaining the original time series. While manipulating the TimeSeries when working with your Pastas model, the original data are to be maintained such that only the settings and the original series can be stored. \n- Validate user-provided time series\n- Extend before and after\n- Fill nan-values\n- Change frequency\n - Upsample\n - Downsample \n- Normalize values\nResources\nThe definition of the class can be found on Github (https://github.com/pastas/pastas/blob/master/pastas/timeseries.py)\nDocumentation on the Pandas Series can be found here: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html",
"# Import some packages\nimport pastas as ps\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nps.show_versions()",
"1. Importing groundwater time series\nLet's first import some time series so we have some data to play around with. We use Pandas read_csv method and obtain a Pandas Series object, pandas data structure to efficiently deal with 1D Time Series data. By default, Pandas adds a wealth of functionalities to a Series object, such as descriptive statistics (e.g. series.describe()) and plotting funtionality.",
"gwdata = pd.read_csv('../examples/data/head_nb1.csv', parse_dates=['date'],\n index_col='date', squeeze=True)\ngwdata.plot(figsize=(15,4));",
"2. Creating a Pastas TimeSeries object\nThe user will provide time series data when creating a model instance, or one of the stressmodels found in stressmodels.py. Pastas expects Pandas Series as a standard format in which time series are provided, but will internally transform these to Pastas TimeSeries objects to add the necessary funtionality. It is therefore also possible to provide a TimeSeries object directly instead of a Pandas Series object.\nWe will now create a TimeSeries object for the groundwater level (gwdata). When creating a TimeSeries object the time series that are provided are validated, such that Pastas can use the provided time series for simulation without errors. The time series are checked for:\n\nBeing actual Pandas Series object;\nMaking sure the indices are all TimeStamps;\nMaking sure the indices are ordered in time;\nDropping any nan-values before and after the first and final valid value;\nFrequency of the Series is inferred, or otherwise the user-provided value for \"freq\" is applied;\nNan-values within the series are handled, depending on the value for the \"fill_nan\" argument;\nDuplicate indices are dropped from the series.\n\nIf all of the above is OK, a TimeSeries object is returned. When valid time series are provided all of the above checks are no problem and no settings are required. However, all too often this is not the case and at least \"fill_nan\" and \"freq\" are required. The first argument tells the TimeSeries object how to handle nan-values, and the freq argument provides the frequency of the original time series (by default, freq=D, fill_nan=\"interpolate\").",
"oseries = ps.TimeSeries(gwdata, name=\"Groundwater Level\")\n\n# Plot the new time series and the original\nplt.figure(figsize=(10,4))\noseries.plot(label=\"pastas timeseries\")\ngwdata.plot(label=\"original\")\nplt.legend()",
"3. Configuring a TimeSeries object\nSo let's see how we can configure a TimeSeries object. In the case of the observed groundwater levels (oseries) as in the example above, interpolating between observations might not be the preffered method to deal with gaps in your data. In fact, the do not have to be constant for simulation, one of the benefits of the method of impulse response functions. The nan-values can simply be dropped. To configure a TimeSeries object the user has three options:\n\nUse a predefined configuration by providing a string to the settings argument\nManually set all or some of the settings by providing a dictonary to the settings argument\nProviding the arguments as keyword arguments to the TimeSeries object (not recommended)\n\nFor example, when creating a TimeSeries object for the groundwater levels consider the three following examples for setting the fill_nan option:",
"# Options 1\noseries = ps.TimeSeries(gwdata, name=\"Groundwater Level\", settings=\"oseries\")\nprint(oseries.settings)\n\n# Option 2\noseries = ps.TimeSeries(gwdata, name=\"Groundwater Level\", settings=dict(fill_nan=\"drop\"))\nprint(oseries.settings)\n\n# Options 3\noseries = ps.TimeSeries(gwdata, name=\"Groundwater Level\", fill_nan=\"drop\")\nprint(oseries.settings)",
"Predefined settings\nAll of the above methods yield the same result. It is up to the user which one is preferred. \nA question that may arise with options 1, is what the possible strings for settings are and what configuration is then used.\nThe TimeSeries class contains a dictionary with predefined settings that are used often. You can ask the TimeSeries class this question:",
"pd.DataFrame(ps.TimeSeries._predefined_settings).T",
"4. Let's explore the possibilities\nAs said, Pastas TimeSeries are capable of handling time series in a way that is convenient for Pastas. \n\nChanging the frequency of the time series (sample_up, sameple_down)\nExtending the time series (fill_before and fill_after)\nNormalizing the time series (norm *not fully supported yet)\n\nWe will now import some precipitation series measured at a daily frequency and show how the above methods work",
"# Import observed precipitation series\nprecip = pd.read_csv('../examples/data/rain_nb1.csv', parse_dates=['date'],\n index_col='date', squeeze=True)\nprec = ps.TimeSeries(precip, name=\"Precipitation\", settings=\"prec\")\n\n# fig, ax = plt.subplots(2, 1, figsize=(10,8))\n# prec.update_series(freq=\"D\")\n# prec.series.plot.bar(ax=ax[0])\n# prec.update_series(freq=\"7D\")\n# prec.series.plot.bar(ax=ax[1])\n\n# import matplotlib.dates as mdates\n# ax[1].fmt_xdata = mdates.DateFormatter('%m')\n# fig.autofmt_xdate()",
"Wait, what?\nWe just changed the frequency of the TimeSeries. When reducing the frequency, the values were summed into the new bins. Conveniently, all pandas methods are still available and functional, such as the great plotting functionalities of Pandas.\nAll this happened inplace, meaning the same object just took another shape based on the new settings. Moreover, it performed those new settings (freq=\"W\" weekly values) on the original series. This means that going back and forth between frequencies does not lead to any information loss. \nWhy is this so important? Because when solving or simulating a model, the Model will ask every member of the TimeSeries family to prepare itself with the necessary settings (e.g. new freq) and perform that operation only once. When asked for a time series, the TimeSeries object will \"be\" in that new shape.\nSome more action\nLet's say, we want to simulate the groundwater series for a period where no data is available for the time series, but we need some kind of value for the warmup period to prevent things from getting messy. The TimeSeries object can easily extend itself, as the following example shows.",
"prec.update_series(tmin=\"2011\")\nprec.plot()\nprec.settings",
"5. Exporting the TimeSeries\nWhen done, we might want to store the TimeSeries object for later use. A to_dict method is built-in to export the original time series to a json format, along with its current settings and name. This way the original data is maintained and can easily be recreated from a json file.",
"data = prec.to_dict()\nprint(data.keys())\n\n# Tadaa, we have our extended time series in weekly frequency back!\nts = ps.TimeSeries(**data)\nts.plot()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NSLS-II-HXN/PyXRF
|
examples/batch_mode_fit_AND_combine_data_for_recon.ipynb
|
bsd-3-clause
|
[
"%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport pandas as pd\nfrom pyxrf.model.command_tools import fit_pixel_data_and_save\nfrom pyxrf.api import *",
"Batch mode to fit data from multiple runs\nUsers need to have .json file ready in order to do batch mode fitting.",
"# Define working directory and json file\nworking_dir = '/data/users/2016Q3/Gill_2016Q3/' \n\n# Define a list of h5 files which should stay in the working directory.\ndatalist = np.arange(16715, 16808)\nfilelist = ['scan2D_'+str(n)+'.h5' for n in datalist] \n\n# Parameter file to fit all the data.\nparam_file = 'parameter_data.json' \n\n# Pixel fitting for all the files. If ic_name is given, \n# data will be also normalized based on ion chamber value. \nfor fname in filelist:\n fit_pixel_data_and_save(working_dir, fname, \n param_file_name=param_file, \n save_txt=True, ic_name='sclr1_ch4')",
"Combine data into 3D array for reconstruction",
"element_list = ['Cu_K','Fe_K']\nd3 = combine_data_to_recon(element_list, datalist, working_dir)\n\nd3.keys()\n\n# create movie for Fe\ncreate_movie(d3['Fe_K'], 'data3d_Fe.mp4')\n\n# create movie for Cu\ncreate_movie(d3['Cu_K'], 'data3d_Cu.mp4')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
mica5/mica-dlnd-p2
|
dlnd_image_classification.ipynb
|
mit
|
[
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n return x / 255\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
"from sklearn.preprocessing import LabelBinarizer\n\nencoder = None\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return\n a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n global encoder\n if encoder is None:\n encoder = LabelBinarizer()\n encoder.fit(x)\n\n return encoder.transform(x)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n return tf.placeholder(\n tf.float32,\n shape=[None, *image_shape],\n name='x'\n )\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n return tf.placeholder(\n tf.float32,\n shape=[None, n_classes],\n name='y'\n )\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n return tf.placeholder(\n tf.float32,\n name='keep_prob'\n )\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.",
"from math import ceil\n\ndef conv2d_maxpool(x_tensor, conv_num_outputs=32, conv_ksize=[4,4],\n conv_strides=[3,3], pool_ksize=[2,2], pool_strides=[2,2]):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the\n convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the\n convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and\n max pooling of x_tensor\n \"\"\"\n\n W = tf.Variable(conv2d_normal_distribution((\n *conv_ksize,\n int(x_tensor.shape[3]),\n conv_num_outputs\n ), stddev=conv2d_stddev))\n\n bias = conv2d_bias(shape=(conv_num_outputs,))\n\n conv = tf.nn.conv2d(\n x_tensor,\n W,\n strides=[1, *conv_strides, 1],\n padding='SAME'\n )\n conv_w_bias = conv + bias\n a = tf.nn.relu(conv_w_bias)\n return tf.nn.max_pool(\n a,\n [1, *pool_ksize, 1],\n strides=[1, *pool_strides, 1],\n padding='SAME'\n )\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"from functools import reduce\nfrom operator import mul\n\ndef flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n return tf.reshape(\n x_tensor,\n [tf.shape(x_tensor)[0], int(reduce(mul, x_tensor.shape[1:]))]\n )\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n W = tf.random_normal((int(x_tensor.shape[1]), num_outputs), stddev=0.1)\n W = tf.Variable(W)\n\n bias = tf.Variable(tf.zeros([num_outputs]))\n\n h = tf.matmul(x_tensor, W) + bias\n a = tf.nn.relu(h)\n return a\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.",
"def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n W = tf.random_normal([int(x_tensor.shape[1]), num_outputs], stddev=0.1)\n W = tf.Variable(W)\n\n bias = tf.Variable(tf.zeros([num_outputs]))\n\n h = tf.matmul(x_tensor, W) + bias\n return h\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.",
"# condensed parameter list\nconv2d_normal_distribution = tf.truncated_normal\nconv2d_stddev = 0.1\ndef conv2d_bias(shape):\n # fill, zeros, random_normal, truncated_normal\n return tf.Variable(tf.zeros(shape))\n\ndef conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # image is 32x32x3\n x = conv2d_maxpool(\n x, conv_num_outputs=16, conv_ksize=[5,5],\n conv_strides=[2,2], pool_ksize=[1,1], pool_strides=[1,1]\n )\n x = tf.nn.dropout(x, keep_prob)\n x = conv2d_maxpool(\n x, conv_num_outputs=32, conv_ksize=[3,3],\n conv_strides=[1,1], pool_ksize=[2,2], pool_strides=[2,2]\n )\n x = tf.nn.dropout(x, keep_prob)\n x = conv2d_maxpool(\n x, conv_num_outputs=48, conv_ksize=[3,3],\n conv_strides=[1,1], pool_ksize=[1,1], pool_strides=[1,1]\n )\n x = flatten(x)\n\n x = fully_conn(x, 864)\n x = tf.nn.dropout(x, keep_prob)\n x = fully_conn(x, 864)\n x = tf.nn.dropout(x, keep_prob)\n x = output(x, 10)\n return x\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.",
"def train_neural_network(session, optimizer, keep_probability,\n feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n feed_dict = {\n x: feature_batch,\n y: label_batch,\n keep_prob: keep_probability\n }\n session.run(\n optimizer,\n feed_dict=feed_dict\n )\n \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n feed_dict = {\n x: feature_batch,\n y: label_batch,\n keep_prob: 1.0\n }\n loss = session.run(cost, feed_dict=feed_dict)\n feed_dict = {\n x: valid_features,\n y: valid_labels,\n keep_prob: 1.0\n }\n acc = session.run(accuracy, feed_dict=feed_dict)\n print('loss: {:6.4f} accuracy:'.format(loss), acc)\n",
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout",
"# TODO: Tune Parameters\nepochs = 20\nbatch_size = 256\nkeep_probability = 0.7",
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()",
"Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lisitsyn/shogun
|
doc/ipython-notebooks/multiclass/Tree/TreeEnsemble.ipynb
|
bsd-3-clause
|
[
"Ensemble of Decision Trees\nBy Parijat Mazumdar (GitHub ID: mazumdarparijat)\nThis notebook illustrates the use of Random Forests in Shogun for classification and regression. We will understand the functioning of Random Forests, discuss about the importance of its various parameters and appreciate the usefulness of this learning method. \nWhat is Random Forest?\nRandom Forest is an ensemble learning method in which a collection of decision trees are grown during training and the combination of the outputs of all the individual trees are considered during testing or application. The strategy for combination can be varied but generally, in case of classification, the mode of the output classes is used and, in case of regression, the mean of the outputs is used. The randomness in the method, as the method's name suggests, is infused mainly by the random subspace sampling done while training individual trees. While choosing the best split during tree growing, only a small randomly chosen subset of all the features is considered. The subset size is a user-controlled parameter and is usually the square root of the total number of available features. The purpose of the random subset sampling method is to decorrelate the individual trees in the forest, thus making the overall model more generic; i.e. decrease the variance without increasing the bias (see bias-variance trade-off). The purpose of Random Forest, in summary, is to reduce the generalization error of the model as much as possible. \nRandom Forest vs Decision Tree\nIn this section, we will appreciate the importance of training a Random Forest over a single decision tree. In the process, we will also learn how to use Shogun's Random Forest class. For this purpose, we will use the letter recognition dataset. This dataset contains pixel information (16 features) of 20000 samples of the English alphabet. This is a 26-class classification problem where the task is to predict the alphabet given the 16 pixel features. We start by loading the training dataset.",
"import os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../../data')\nfrom shogun import CSVFile,features,MulticlassLabels\n\ndef load_file(feat_file,label_file):\n feats=features(CSVFile(feat_file))\n labels=MulticlassLabels(CSVFile(label_file))\n return (feats, labels)\n\ntrainfeat_file=os.path.join(SHOGUN_DATA_DIR, 'uci/letter/train_fm_letter.dat')\ntrainlab_file=os.path.join(SHOGUN_DATA_DIR, 'uci/letter/train_label_letter.dat')\ntrain_feats,train_labels=load_file(trainfeat_file,trainlab_file)",
"Next, we decide the parameters of our Random Forest.",
"from shogun import RandomForest, MajorityVote\nfrom numpy import array\n\ndef setup_random_forest(num_trees,rand_subset_size,combination_rule,feature_types):\n rf=RandomForest(rand_subset_size,num_trees)\n rf.put('combination_rule', combination_rule)\n rf.set_feature_types(feature_types)\n\n return rf\n\ncomb_rule=MajorityVote()\nfeat_types=array([False]*16)\nrand_forest=setup_random_forest(10,4,comb_rule,feat_types)",
"In the above code snippet, we decided to create a forest using 10 trees in which each split in individual trees will be using a randomly chosen subset of 4 features. Note that 4 here is the square root of the total available features (16) and is hence the usually chosen value as mentioned in the introductory paragraph. The strategy for combination chosen is Majority Vote which, as the name suggests, chooses the mode of all the individual tree outputs. The given features are all continuous in nature and hence feature types are all set false (i.e. not nominal). Next, we train our Random Forest and use it to classify letters in our test dataset.",
"# train forest\nrand_forest.put('labels', train_labels)\nrand_forest.train(train_feats)\n\n# load test dataset\ntestfeat_file= os.path.join(SHOGUN_DATA_DIR, 'uci/letter/test_fm_letter.dat')\ntestlab_file= os.path.join(SHOGUN_DATA_DIR, 'uci/letter/test_label_letter.dat')\ntest_feats,test_labels=load_file(testfeat_file,testlab_file)\n\n# apply forest\noutput_rand_forest_train=rand_forest.apply_multiclass(train_feats)\noutput_rand_forest_test=rand_forest.apply_multiclass(test_feats)",
"We have with us the labels predicted by our Random Forest model. Let us also get the predictions made by a single tree. For this purpose, we train a CART-flavoured decision tree.",
"from shogun import CARTree, PT_MULTICLASS\n\ndef train_cart(train_feats,train_labels,feature_types,problem_type):\n c=CARTree(feature_types,problem_type,2,False)\n c.put('labels', train_labels)\n c.train(train_feats)\n \n return c\n\n# train CART\ncart=train_cart(train_feats,train_labels,feat_types,PT_MULTICLASS)\n\n# apply CART model\noutput_cart_train=cart.apply_multiclass(train_feats)\noutput_cart_test=cart.apply_multiclass(test_feats)",
"With both results at our disposal, let us find out which one is better.",
"from shogun import MulticlassAccuracy\n\naccuracy=MulticlassAccuracy()\n\nrf_train_accuracy=accuracy.evaluate(output_rand_forest_train,train_labels)*100\nrf_test_accuracy=accuracy.evaluate(output_rand_forest_test,test_labels)*100\n\ncart_train_accuracy=accuracy.evaluate(output_cart_train,train_labels)*100\ncart_test_accuracy=accuracy.evaluate(output_cart_test,test_labels)*100\n\nprint('Random Forest training accuracy : '+str(round(rf_train_accuracy,3))+'%')\nprint('CART training accuracy : '+str(round(cart_train_accuracy,3))+'%')\nprint\nprint('Random Forest test accuracy : '+str(round(rf_test_accuracy,3))+'%')\nprint('CART test accuracy : '+str(round(cart_test_accuracy,3))+'%')",
"As it is clear from the results above, we see a significant improvement in the predictions. The reason for the improvement is clear when one looks at the training accuracy. The single decision tree was over-fitting on the training dataset and hence was not generic. Random Forest on the other hand appropriately trades off training accuracy for the sake of generalization of the model. Impressed already? Let us now see what happens if we increase the number of trees in our forest.\nRandom Forest parameters : Number of trees and random subset size\nIn the last section, we trained a forest of 10 trees. What happens if we make our forest with 20 trees? Let us try to answer this question in a generic way.",
"def get_rf_accuracy(num_trees,rand_subset_size):\n rf=setup_random_forest(num_trees,rand_subset_size,comb_rule,feat_types)\n rf.put('labels', train_labels)\n rf.train(train_feats)\n out_test=rf.apply_multiclass(test_feats)\n acc=MulticlassAccuracy()\n return acc.evaluate(out_test,test_labels)",
"The method above takes the number of trees and subset size as inputs and returns the evaluated accuracy as output. Let us use this method to get the accuracy for different number of trees keeping the subset size constant at 4.",
"import matplotlib.pyplot as plt\n% matplotlib inline\n\nnum_trees4=[5,10,20,50,100]\nrf_accuracy_4=[round(get_rf_accuracy(i,4)*100,3) for i in num_trees4]\n\nprint('Random Forest accuracies (as %) :' + str(rf_accuracy_4))\n\n# plot results\n\nx4=[1]\ny4=[86.48] # accuracy for single tree-CART\nx4.extend(num_trees4)\ny4.extend(rf_accuracy_4)\nplt.plot(x4,y4,'--bo')\nplt.xlabel('Number of trees')\nplt.ylabel('Multiclass Accuracy (as %)')\nplt.xlim([0,110])\nplt.ylim([85,100])\nplt.show()",
"NOTE : The above code snippet takes about a minute to execute. Please wait patiently.\nWe see from the above plot that the accuracy of the model keeps on increasing as we increase the number of trees on our Random Forest and eventually satarates at some value. Extrapolating the above plot qualitatively, the saturation value will be somewhere around 96.5%. The jump of accuracy from 86.48% for a single tree to 96.5% for a Random Forest with about 100 trees definitely highlights the importance of the Random Forest algorithm.\nThe inevitable question at this point is whether it is possible to achieve higher accuracy saturation by working with lesser (or greater) random feature subset size. Let us figure this out by repeating the above procedure for random subset size as 2 and 8.",
"# subset size 2\n\nnum_trees2=[10,20,50,100]\nrf_accuracy_2=[round(get_rf_accuracy(i,2)*100,3) for i in num_trees2]\n\nprint('Random Forest accuracies (as %) :' + str(rf_accuracy_2))\n\n# subset size 8\n\nnum_trees8=[5,10,50,100]\nrf_accuracy_8=[round(get_rf_accuracy(i,8)*100,3) for i in num_trees8]\n\nprint('Random Forest accuracies (as %) :' + str(rf_accuracy_8))",
"NOTE : The above code snippets take about a minute each to execute. Please wait patiently.\nLet us plot all the results together and then comprehend the results.",
"x2=[1]\ny2=[86.48]\nx2.extend(num_trees2)\ny2.extend(rf_accuracy_2)\n\nx8=[1]\ny8=[86.48]\nx8.extend(num_trees8)\ny8.extend(rf_accuracy_8)\n\nplt.plot(x2,y2,'--bo',label='Subset Size = 2')\nplt.plot(x4,y4,'--r^',label='Subset Size = 4')\nplt.plot(x8,y8,'--gs',label='Subset Size = 8')\nplt.xlabel('Number of trees')\nplt.ylabel('Multiclass Accuracy (as %) ')\nplt.legend(bbox_to_anchor=(0.92,0.4))\nplt.xlim([0,110])\nplt.ylim([85,100])\nplt.show()",
"As we can see from the above plot, the subset size does not have a major impact on the saturated accuracy obtained in this particular dataset. While this is true in many datasets, this is not a generic observation. In some datasets, the random feature sample size does have a measurable impact on the test accuracy. A simple strategy to find the optimal subset size is to use cross-validation. But with Random Forest model, there is actually no need to perform cross-validation. Let us see how in the next section. \nOut-of-bag error\nThe individual trees in a Random Forest are trained over data vectors randomly chosen with replacement. As a result, some of the data vectors are left out of training by each of the individual trees. These vectors form the out-of-bag (OOB) vectors of the corresponding trees. A data vector can be part of OOB classes of multiple trees. While calculating OOB error, a data vector is applied to only those trees of which it is a part of OOB class and the results are combined. This combined result averaged over similar estimate for all other vectors gives the OOB error. The OOB error is an estimate of the generalization bound of the Random Forest model. Let us see how to compute this OOB estimate in Shogun.",
"rf=setup_random_forest(100,2,comb_rule,feat_types)\nrf.put('labels', train_labels)\nrf.train(train_feats)\n \n# set evaluation strategy\neval=MulticlassAccuracy()\noobe=rf.get_oob_error(eval)\n\nprint('OOB accuracy : '+str(round(oobe*100,3))+'%')",
"The above OOB accuracy calculated is found to be slighly less than the test error evaluated in the previous section (see plot for num_trees=100 and rand_subset_size=2). This is because of the fact that the OOB estimate depicts the expected error for any generalized set of data vectors. It is only natural that for some set of vectors, the actual accuracy is slightly greater than the OOB estimate while in some cases the accuracy observed in a bit lower.\nLet us now apply the Random Forest model to the wine dataset. This dataset is different from the previous one in the sense that this dataset is small and has no separate test dataset. Hence OOB (or equivalently cross-validation) is the only viable strategy available here. Let us read the dataset first.",
"trainfeat_file= os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')\ntrainlab_file= os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')\ntrain_feats,train_labels=load_file(trainfeat_file,trainlab_file)",
"Next let us find out the appropriate feature subset size. For this we will make use of OOB error.",
"import matplotlib.pyplot as plt\n\ndef get_oob_errors_wine(num_trees,rand_subset_size):\n feat_types=array([False]*13)\n rf=setup_random_forest(num_trees,rand_subset_size,MajorityVote(),feat_types)\n rf.put('labels', train_labels)\n rf.train(train_feats)\n eval=MulticlassAccuracy()\n return rf.get_oob_error(eval) \n\nsize=[1,2,4,6,8,10,13]\noobe=[round(get_oob_errors_wine(400,i)*100,3) for i in size]\n\nprint('Out-of-box Accuracies (as %) : '+str(oobe))\n\nplt.plot(size,oobe,'--bo')\nplt.xlim([0,14])\nplt.xlabel('Random subset size')\nplt.ylabel('Multiclass accuracy')\nplt.show()",
"From the above plot it is clear that subset size of 2 or 3 produces maximum accuracy for wine classification. At this value of subset size, the expected classification accuracy is of the model is 98.87%. Finally, as a sanity check, let us plot the accuracy vs number of trees curve to ensure that 400 is indeed a sufficient value ie. the oob error saturates before 400.",
"size=[50,100,200,400,600]\noobe=[round(get_oob_errors_wine(i,2)*100,3) for i in size]\n\nprint('Out-of-box Accuracies (as %) : '+str(oobe))\n\nplt.plot(size,oobe,'--bo')\nplt.xlim([40,650])\nplt.ylim([95,100])\nplt.xlabel('Number of trees')\nplt.ylabel('Multiclass accuracy')\nplt.show()",
"We see from the above plot that the accuracy remains constant beyond 100. Hence 400 is a sufficient value. In-fact, values just above 100 would have been ideal because of the lower training time associated with them. \nReferences\n[1] Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science\n[2] Leo Breiman. 2001. Random Forests. Mach. Learn. 45, 1 (October 2001), 5-32. DOI=10.1023/A:1010933404324 http://dx.doi.org/10.1023/A:1010933404324"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
inkenbrandt/WellApplication
|
docs/UMAR_Chem_Compile.ipynb
|
mit
|
[
"import pandas as pd\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport urllib2\nfrom pyproj import Proj, transform\nimport xmltodict\nimport mechanize\nimport sys\nimport platform\nimport datetime\nimport statsmodels.api as sm\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 15, 10\n%matplotlib inline\n\nprint(\"Operating System \" + platform.system() + \" \" + platform.release())\nprint(\"Python Version \" + str(sys.version))\nprint(\"Pandas Version \" + str(pd.__version__))\nprint(\"Numpy Version \" + str(np.__version__))\nprint(\"Matplotlib Version \" + str(matplotlib.__version__))\n\nimport wellapplication\n\nimport wellapplication.chem as wc\n\nwellapplication.__version__",
"Import and Standardize Data",
"#rootname = \"/media/p/5F5B-8FCB/PROJECTS/UMAR/Data/chem/\" #thumb on ubuntu\nrootname = \"E:\\\\PROJECTS\\\\UMAR\\\\Data\\\\chem\\\\\" #thumb on windows\n\nWQPResultsFile = rootname + \"result.csv\"\nWQPStationFile = rootname + \"station.csv\"\nSDWISFile = rootname + \"SDWIS_Cache.txt\"\nAGStationsFile = rootname + \"AG_Stations_Cache.csv\"\nAGResultsFile = rootname + \"AG_byparam.csv\"\nUGSFile = rootname + \"UGS_Cache.txt\"\nSTORLegStatFile = rootname + \"UT_Cache_sta_001.txt\"\nSTORLegResFile = rootname + \"UT_Cache_res_001.txt\"\nSTORParamFile = rootname + \"parameter.txt\"\n\nfielddata = rootname + \"FieldData.xlsx\"\nstatelabresults0 = rootname + \"utgs1402.txt\"\nstatelabresults1 = rootname + \"utgs1403.txt\"\nstatelabresults2 = rootname + \"utgs1501.txt\"\nstatelabstations = rootname + \"UtahStateLabStations.xlsx\"\n\ndf = wc.WQP.WQPimportRes(WQPResultsFile)\ndf = wc.WQP.WQPmassageResults(df)\ndf",
"WQP\nThe following html addresses are REST-based queries to download WQP data from the <a href=\"http://www.waterqualitydata.us/portal/\">WQP portal</a>. If you click on them, they will produce zipped csv files that can be opened and processed with the code below. Originally, the code directly applied these links, but the files are large and take a lot of time to download.\nStation data address:\nhttp://waterqualitydata.us/Station/search?statecode=US%3A49&countycode=US%3A49%3A005&sampleMedia=Water&characteristicType=Information%3BInorganics%2C+Major%2C+Metals%3BInorganics%2C+Major%2C+Non-metals%3BInorganics%2C+Minor%2C+Metals%3BInorganics%2C+Minor%2C+Non-metals%3BNot+Assigned%3BNutrient%3BPhysical%3BStable+Isotopes&mimeType=csv&zip=yes&sorted=no\nResult data address:\nhttp://waterqualitydata.us/Result/search?statecode=US%3A49&countycode=US%3A49%3A005&sampleMedia=Water&characteristicType=Information%3BInorganics%2C+Major%2C+Metals%3BInorganics%2C+Major%2C+Non-metals%3BInorganics%2C+Minor%2C+Metals%3BInorganics%2C+Minor%2C+Non-metals%3BNot+Assigned%3BNutrient%3BPhysical%3BStable+Isotopes&mimeType=csv&zip=yes&sorted=no\nWQP Results\nDefine data type of each field in the WQP database. This allows for easy import of data. Everything under this header can be acheived using wc.WQP.WQPimportRes(WQPResultsFile) then wc.WQP.WQPmassageResults(df)",
"Rdtypes = {\"OrganizationIdentifier\":np.str_, \"OrganizationFormalName\":np.str_, \"ActivityIdentifier\":np.str_, \n \"ActivityStartTime/Time\":np.str_,\n \"ActivityTypeCode\":np.str_, \"ActivityMediaName\":np.str_, \"ActivityMediaSubdivisionName\":np.str_, \n \"ActivityStartDate\":np.str_, \"ActivityStartTime/Time\":np.str_, \"ActivityStartTime/TimeZoneCode\":np.str_, \n \"ActivityEndDate\":np.str_, \"ActivityEndTime/Time\":np.str_, \"ActivityEndTime/TimeZoneCode\":np.str_, \n \"ActivityDepthHeightMeasure/MeasureValue\":np.float16, \"ActivityDepthHeightMeasure/MeasureUnitCode\":np.str_, \n \"ActivityDepthAltitudeReferencePointText\":np.str_, \"ActivityTopDepthHeightMeasure/MeasureValue\":np.float16, \n \"ActivityTopDepthHeightMeasure/MeasureUnitCode\":np.str_, \n \"ActivityBottomDepthHeightMeasure/MeasureValue\":np.float16, \n \"ActivityBottomDepthHeightMeasure/MeasureUnitCode\":np.str_, \n \"ProjectIdentifier\":np.str_, \"ActivityConductingOrganizationText\":np.str_, \n \"MonitoringLocationIdentifier\":np.str_, \"ActivityCommentText\":np.str_, \n \"SampleAquifer\":np.str_, \"HydrologicCondition\":np.str_, \"HydrologicEvent\":np.str_, \n \"SampleCollectionMethod/MethodIdentifier\":np.str_, \"SampleCollectionMethod/MethodIdentifierContext\":np.str_, \n \"SampleCollectionMethod/MethodName\":np.str_, \"SampleCollectionEquipmentName\":np.str_, \n \"ResultDetectionConditionText\":np.str_, \"CharacteristicName\":np.str_, \"ResultSampleFractionText\":np.str_, \n \"ResultMeasureValue\":np.str_, \"ResultMeasure/MeasureUnitCode\":np.str_, \"MeasureQualifierCode\":np.str_, \n \"ResultStatusIdentifier\":np.str_, \"StatisticalBaseCode\":np.str_, \"ResultValueTypeName\":np.str_, \n \"ResultWeightBasisText\":np.str_, \"ResultTimeBasisText\":np.str_, \"ResultTemperatureBasisText\":np.str_, \n \"ResultParticleSizeBasisText\":np.str_, \"PrecisionValue\":np.str_, \"ResultCommentText\":np.str_, \n \"USGSPCode\":np.str_, \"ResultDepthHeightMeasure/MeasureValue\":np.float16, \n \"ResultDepthHeightMeasure/MeasureUnitCode\":np.str_, \"ResultDepthAltitudeReferencePointText\":np.str_, \n \"SubjectTaxonomicName\":np.str_, \"SampleTissueAnatomyName\":np.str_, \n \"ResultAnalyticalMethod/MethodIdentifier\":np.str_, \"ResultAnalyticalMethod/MethodIdentifierContext\":np.str_, \n \"ResultAnalyticalMethod/MethodName\":np.str_, \"MethodDescriptionText\":np.str_, \"LaboratoryName\":np.str_, \n \"AnalysisStartDate\":np.str_, \"ResultLaboratoryCommentText\":np.str_, \n \"DetectionQuantitationLimitTypeName\":np.str_, \"DetectionQuantitationLimitMeasure/MeasureValue\":np.str_, \n \"DetectionQuantitationLimitMeasure/MeasureUnitCode\":np.str_, \"PreparationStartDate\":np.str_, \n \"ProviderName\":np.str_} \n\ndt = [6,56,61]",
"Read csv data into python.",
"WQP = pd.read_csv(WQPResultsFile, dtype=Rdtypes, parse_dates=dt)",
"Rename columns to match with other data later.",
"ResFieldDict = {\"AnalysisStartDate\":\"AnalysisDate\", \"ResultAnalyticalMethod/MethodIdentifier\":\"AnalytMeth\", \n \"ResultAnalyticalMethod/MethodName\":\"AnalytMethId\", \"ResultDetectionConditionText\":\"DetectCond\", \n \"ResultLaboratoryCommentText\":\"LabComments\", \"LaboratoryName\":\"LabName\", \n \"DetectionQuantitationLimitTypeName\":\"LimitType\", \"DetectionQuantitationLimitMeasure/MeasureValue\":\"MDL\", \n \"DetectionQuantitationLimitMeasure/MeasureUnitCode\":\"MDLUnit\", \"MethodDescriptionText\":\"MethodDescript\", \n \"OrganizationIdentifier\":\"OrgId\", \"OrganizationFormalName\":\"OrgName\", \"CharacteristicName\":\"Param\", \n \"ProjectIdentifier\":\"ProjectId\", \"MeasureQualifierCode\":\"QualCode\", \"ResultCommentText\":\"ResultComment\", \n \"ResultStatusIdentifier\":\"ResultStatus\", \"ResultMeasureValue\":\"ResultValue\", \n \"ActivityCommentText\":\"SampComment\", \"ActivityDepthHeightMeasure/MeasureValue\":\"SampDepth\", \n \"ActivityDepthAltitudeReferencePointText\":\"SampDepthRef\", \n \"ActivityDepthHeightMeasure/MeasureUnitCode\":\"SampDepthU\", \"SampleCollectionEquipmentName\":\"SampEquip\", \n \"ResultSampleFractionText\":\"SampFrac\", \"ActivityStartDate\":\"SampleDate\", \"ActivityIdentifier\":\"SampleId\", \n \"ActivityStartTime/Time\":\"SampleTime\", \"ActivityMediaSubdivisionName\":\"SampMedia\", \n \"SampleCollectionMethod/MethodIdentifier\":\"SampMeth\", \"SampleCollectionMethod/MethodName\":\"SampMethName\", \n \"ActivityTypeCode\":\"SampType\", \"MonitoringLocationIdentifier\":\"StationId\", \n \"ResultMeasure/MeasureUnitCode\":\"Unit\", \"USGSPCode\":\"USGSPCode\",\n \"ActivityStartDate\":\"StartDate\",\"ActivityStartTime/Time\":\"StartTime\"}\n\nWQP.rename(columns=ResFieldDict,inplace=True)",
"Define unneeded columns that will be dropped to save memory.",
"resdroplist = [\"ActivityBottomDepthHeightMeasure/MeasureUnitCode\", \"ActivityBottomDepthHeightMeasure/MeasureValue\", \n \"ActivityConductingOrganizationText\", \"ActivityEndDate\", \"ActivityEndTime/Time\", \n \"ActivityEndTime/TimeZoneCode\", \"ActivityMediaName\", \"ActivityStartTime/TimeZoneCode\", \n \"ActivityTopDepthHeightMeasure/MeasureUnitCode\", \"ActivityTopDepthHeightMeasure/MeasureValue\", \n \"HydrologicCondition\", \"HydrologicEvent\", \"PrecisionValue\", \"PreparationStartDate\", \"ProviderName\", \n \"ResultAnalyticalMethod/MethodIdentifierContext\", \"ResultDepthAltitudeReferencePointText\", \n \"ResultDepthHeightMeasure/MeasureUnitCode\", \"ResultDepthHeightMeasure/MeasureValue\", \n \"ResultParticleSizeBasisText\", \"ResultTemperatureBasisText\", \n \"ResultTimeBasisText\", \"ResultValueTypeName\", \"ResultWeightBasisText\", \"SampleAquifer\", \n \"SampleCollectionMethod/MethodIdentifierContext\", \"SampleTissueAnatomyName\", \"StatisticalBaseCode\", \n \"SubjectTaxonomicName\",\"StartTime\",\"StartDate\",\"StartTime\",\"StartDate\"]",
"Define a function to fix funky dates found in the WQP database. This includes impossible dates or dates with too many numbers.",
"def datetimefix(x,format):\n '''\n This script cleans date-time errors\n \n input\n x = date-time string\n format = format of date-time string\n \n output \n formatted datetime type\n '''\n d = str(x[0]).lstrip().rstrip()[0:10]\n t = str(x[1]).lstrip().rstrip()[0:5].zfill(5)\n try:\n int(d[0:2])\n except(ValueError,TypeError,NameError):\n return np.nan\n try:\n int(t[0:2])\n int(t[3:5])\n except(ValueError,TypeError,NameError):\n t = \"00:00\"\n \n if int(t[0:2])>23:\n t = \"00:00\"\n elif int(t[3:5])>59:\n t = \"00:00\"\n else:\n t = t[0:2].zfill(2) + \":\" + t[3:5]\n return datetime.datetime.strptime(d + \" \" + t, format)\n\nWQP[\"SampleDate\"] = WQP[[\"StartDate\",\"StartTime\"]].apply(lambda x: datetimefix(x,\"%Y-%m-%d %H:%M\"),1)",
"Drop unwanted fields",
"WQP.drop(resdroplist,inplace=True,axis=1)",
"Convert result values and the MDL values to numeric fields from string fields.",
"WQP['ResultValue'] = WQP['ResultValue'].convert_objects(convert_numeric=True)\nWQP['MDL'] = WQP['MDL'].convert_objects(convert_numeric=True)",
"Remove station duplicates by removing the preceding 'WQX' found in the station id field.",
"WQP['StationId'] = WQP['StationId'].str.replace('_WQX-','-')",
"Standardize all ug/l data to mg/l by changing out the unit descriptor and dividing by 1000.",
"#standardize all ug/l data to mg/l\ndef unitfix(x):\n z = str(x).lower()\n if z == \"ug/l\":\n return \"mg/l\"\n elif z == \"mg/l\":\n return \"mg/l\"\n else:\n return x\n\nWQP.Unit = WQP.Unit.apply(lambda x: str(x).rstrip(), 1)\nWQP.ResultValue = WQP[[\"ResultValue\",\"Unit\"]].apply(lambda x: x[0]/1000 if str(x[1]).lower()==\"ug/l\" else x[0], 1)\nWQP.Unit = WQP.Unit.apply(lambda x: unitfix(x),1)",
"Normalize common nutrients so that they are all in the same type of units. For example, sometimes nitrate is reported \"as nitrogen\" and sometimes it is reported \"as nitrate\". The difference between the two types of reporting is a factor of 4.427!",
"def parnorm(x):\n p = str(x[0]).rstrip().lstrip().lower()\n u = str(x[2]).rstrip().lstrip().lower()\n if p == 'nitrate' and u == 'mg/l as n':\n return 'Nitrate', x[1]*4.427, 'mg/l'\n elif p == 'nitrite' and u == 'mg/l as n':\n return 'Nitrite', x[1]*3.285, 'mg/l'\n elif p == 'ammonia-nitrogen' or p == 'ammonia-nitrogen as n' or p == 'ammonia and ammonium':\n return 'Ammonium', x[1]*1.288, 'mg/l'\n elif p == 'ammonium' and u == 'mg/l as n':\n return 'Ammonium', x[1]*1.288, 'mg/l'\n elif p == 'sulfate as s':\n return 'Sulfate', x[1]*2.996, 'mg/l'\n elif p in ('phosphate-phosphorus', 'phosphate-phosphorus as p','orthophosphate as p'):\n return 'Phosphate', x[1]*3.066, 'mg/l'\n elif (p == 'phosphate' or p == 'orthophosphate') and u == 'mg/l as p':\n return 'Phosphate', x[1]*3.066, 'mg/l'\n elif u == 'ug/l':\n return x[0], x[1]/1000, 'mg/l'\n else:\n return x[0], x[1], str(x[2]).rstrip()\n\nWQP['Param'], WQP['ResultValue'], WQP['Unit'] = zip(*WQP[['Param','ResultValue','Unit']].apply(lambda x: parnorm(x),1))",
"WQP Stations\nRead in WQP station data.",
"WQPStat = pd.read_csv(WQPStationFile)",
"Rename and simplify station data column names for later compilation.",
"StatFieldDict = {\"MonitoringLocationIdentifier\":\"StationId\", \"AquiferName\":\"Aquifer\", \"AquiferTypeName\":\"AquiferType\", \n \"ConstructionDateText\":\"ConstDate\", \"CountyCode\":\"CountyCode\", \"WellDepthMeasure/MeasureValue\":\"Depth\", \n \"WellDepthMeasure/MeasureUnitCode\":\"DepthUnit\", \"VerticalMeasure/MeasureValue\":\"Elev\", \n \"VerticalAccuracyMeasure/MeasureValue\":\"ElevAcc\", \"VerticalAccuracyMeasure/MeasureUnitCode\":\"ElevAccUnit\", \n \"VerticalCollectionMethodName\":\"ElevMeth\", \"VerticalCoordinateReferenceSystemDatumName\":\"ElevRef\", \n \"VerticalMeasure/MeasureUnitCode\":\"ElevUnit\", \"FormationTypeText\":\"FmType\", \n \"WellHoleDepthMeasure/MeasureValue\":\"HoleDepth\", \"WellHoleDepthMeasure/MeasureUnitCode\":\"HoleDUnit\", \n \"HorizontalAccuracyMeasure/MeasureValue\":\"HorAcc\", \"HorizontalAccuracyMeasure/MeasureUnitCode\":\"HorAccUnit\", \n \"HorizontalCollectionMethodName\":\"HorCollMeth\", \"HorizontalCoordinateReferenceSystemDatumName\":\"HorRef\", \n \"HUCEightDigitCode\":\"HUC8\", \"LatitudeMeasure\":\"Lat_Y\", \"LongitudeMeasure\":\"Lon_X\", \n \"OrganizationIdentifier\":\"OrgId\", \"OrganizationFormalName\":\"OrgName\", \"StateCode\":\"StateCode\", \n \"MonitoringLocationDescriptionText\":\"StationComment\", \"MonitoringLocationName\":\"StationName\", \n \"MonitoringLocationTypeName\":\"StationType\"}\n\nWQPStat.rename(columns=StatFieldDict,inplace=True)",
"Define the fields to drop to save memory.",
"statdroplist = [\"ContributingDrainageAreaMeasure/MeasureUnitCode\", \"ContributingDrainageAreaMeasure/MeasureValue\", \n \"DrainageAreaMeasure/MeasureUnitCode\", \"DrainageAreaMeasure/MeasureValue\", \"CountryCode\", \"ProviderName\", \n \"SourceMapScaleNumeric\"]\nWQPStat.drop(statdroplist,inplace=True,axis=1)",
"Make station types in the StationType field consistent for easier summary and compilation later on.",
"TypeDict = {\"Stream: Canal\":\"Stream\", \"River/Stream\":\"Stream\", \n \"Stream: Canal\":\"Stream\", \"Well: Test hole not completed as a well\":\"Well\"}\nWQPStat.StationType = WQPStat[\"StationType\"].apply(lambda x: TypeDict.get(x,x),1)\nWQPStat.Elev = WQPStat.Elev.apply(lambda x: np.nan if x==0.0 else round(x,1), 1)",
"Remove preceding WQX from StationId field to remove duplicate station data created by legacy database.",
"WQPStat['StationId'] = WQPStat['StationId'].str.replace('_WQX-','-')\nWQPStat.drop_duplicates(subset=['StationId'],inplace=True)",
"SDWIS\nSDWIS data were extracted from the Utah SDWIS database into ArcGIS 10.3.2 using the following SQL query. NED 10m elevation and UTM coordinates were appended using ArcGIS.\nSQL\nSELECT UTV80.TINWSF.EXTERNAL_SYS_NUM AS \"FED_NM\", UTV80.TINWSF.ST_ASGN_IDENT_CD AS \"ST_ID\", UTV80.TINWSF.TYPE_CODE, UTV80.TINWSYS.NAME AS \"SYS_NM\", UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM AS \"COUNTY\", UTV80.TINWSF.NAME AS \"FAC_NM\", UTV80.TINWSF.TINWSYS_IS_NUMBER AS \"SY_NBR\", UTV80.TINLOC.LATITUDE_MEASURE AS \"Y\", UTV80.TINLOC.LONGITUDE_MEASURE AS \"X\", UTV80.TINLOC.VERTICAL_MEASURE AS \"Z\", UTV80.TSASAMPL.COLLLECTION_END_DT AS \"DTE\", UTV80.TSAANLYT.NAME AS \"ANLY_NM\", UTV80.TSASAR.CONCENTRATION_MSR AS \"CONC_MSR\", UTV80.TSASAR.TSASAR_IS_NUMBER AS \"ID_NUM\", UTV80.TSASAR.UOM_CODE, UTV80.TSASAR.DETECTN_LIMIT_NUM AS \"DET_LIM\", UTV80.TSASAR.DETECTN_LIM_UOM_CD AS \"DET_UOM\" FROM UTV80.TINWSF INNER JOIN UTV80.TINWSYS ON UTV80.TINWSF.TINWSYS_IS_NUMBER = UTV80.TINWSYS.TINWSYS_IS_NUMBER INNER JOIN UTV80.TINLOC ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TINLOC.TINWSF_IS_NUMBER INNER JOIN UTV80.TSASMPPT ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TSASMPPT.TINWSF0IS_NUMBER INNER JOIN UTV80.TSASAMPL ON UTV80.TSASMPPT.TSASMPPT_IS_NUMBER = UTV80.TSASAMPL.TSASMPPT_IS_NUMBER INNER JOIN UTV80.TSASAR ON UTV80.TSASAMPL.TSASAMPL_IS_NUMBER = UTV80.TSASAR.TSASAMPL_IS_NUMBER INNER JOIN UTV80.TSAANLYT ON UTV80.TSASAR.TSAANLYT_IS_NUMBER = UTV80.TSAANLYT.TSAANLYT_IS_NUMBER WHERE (UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM LIKE '%CACHE COUNTY%') AND (UTV80.TSAANLYT.NAME LIKE '%NITRATE%' OR UTV80.TSAANLYT.NAME LIKE '%NITRITE%' OR UTV80.TSAANLYT.NAME LIKE '%AMMONI%' OR UTV80.TSAANLYT.NAME LIKE '%SULFATE%' OR UTV80.TSAANLYT.NAME LIKE '%TDS%' OR UTV80.TSAANLYT.NAME LIKE '%SODIUM%' OR UTV80.TSAANLYT.NAME LIKE '%FLUORIDE%' OR UTV80.TSAANLYT.NAME LIKE '%MAGNESIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SELENIUM%' OR UTV80.TSAANLYT.NAME LIKE '%CALCIUM%' OR UTV80.TSAANLYT.NAME LIKE '%CHLORIDE%' OR UTV80.TSAANLYT.NAME LIKE '%POTASSIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SELENIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SILICA%' OR UTV80.TSAANLYT.NAME LIKE '%IRON %' OR UTV80.TSAANLYT.NAME LIKE '%ALKA %' OR UTV80.TSAANLYT.NAME LIKE '%CONDUCTIVITY%' OR UTV80.TSAANLYT.NAME LIKE '%PH %' OR UTV80.TSAANLYT.NAME LIKE '%TEMP%' OR UTV80.TSAANLYT.NAME LIKE '%ARSENIC%' OR UTV80.TSAANLYT.NAME LIKE '%CARBON%' OR UTV80.TSAANLYT.NAME LIKE '%TRITIUM%' OR UTV80.TSAANLYT.NAME LIKE '%COPPER%' OR UTV80.TSAANLYT.NAME LIKE '%LEAD%' OR UTV80.TSAANLYT.NAME LIKE '%NITROGEN%' OR UTV80.TSAANLYT.NAME LIKE '%PHOSPHATE%' OR UTV80.TSAANLYT.NAME LIKE '%TDS%' OR UTV80.TSAANLYT.NAME LIKE '%ZINC%' OR UTV80.TSAANLYT.NAME LIKE '%IRON%' OR UTV80.TSAANLYT.NAME LIKE '%CHROMIUM%' ) ORDER BY UTV80.TINWSF.ST_ASGN_IDENT_CD\nSQL\nSELECT UTV80.TINWSF.EXTERNAL_SYS_NUM AS \"FED_NM\", UTV80.TINWSF.ST_ASGN_IDENT_CD AS \"ST_ID\", UTV80.TINWSF.TYPE_CODE, UTV80.TINWSYS.NAME AS \"SYS_NM\", UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM AS \"COUNTY\", UTV80.TINWSF.NAME AS \"FAC_NM\", UTV80.TINWSF.TINWSYS_IS_NUMBER AS \"SY_NBR\", UTV80.TINLOC.LATITUDE_MEASURE AS \"Y\", UTV80.TINLOC.LONGITUDE_MEASURE AS \"X\", UTV80.TINLOC.VERTICAL_MEASURE AS \"Z\", UTV80.TSASAMPL.COLLLECTION_END_DT AS \"DTE\", UTV80.TSAANLYT.NAME AS \"ANLY_NM\", UTV80.TSASAR.CONCENTRATION_MSR AS \"CONC_MSR\", UTV80.TSASAR.TSASAR_IS_NUMBER AS \"ID_NUM\", UTV80.TSASAR.UOM_CODE, UTV80.TSASAR.DETECTN_LIMIT_NUM AS \"DET_LIM\", UTV80.TSASAR.DETECTN_LIM_UOM_CD AS \"DET_UOM\" FROM UTV80.TINWSF INNER JOIN UTV80.TINWSYS ON UTV80.TINWSF.TINWSYS_IS_NUMBER = 
UTV80.TINWSYS.TINWSYS_IS_NUMBER INNER JOIN UTV80.TINLOC ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TINLOC.TINWSF_IS_NUMBER INNER JOIN UTV80.TSASMPPT ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TSASMPPT.TINWSF0IS_NUMBER INNER JOIN UTV80.TSASAMPL ON UTV80.TSASMPPT.TSASMPPT_IS_NUMBER = UTV80.TSASAMPL.TSASMPPT_IS_NUMBER INNER JOIN UTV80.TSASAR ON UTV80.TSASAMPL.TSASAMPL_IS_NUMBER = UTV80.TSASAR.TSASAMPL_IS_NUMBER INNER JOIN UTV80.TSAANLYT ON UTV80.TSASAR.TSAANLYT_IS_NUMBER = UTV80.TSAANLYT.TSAANLYT_IS_NUMBER WHERE UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM LIKE '%CACHE COUNTY%' AND (UTV80.TINWSYS.NAME IN('%PROVID%','%MILL%','%LOG%','%NIB%', ORDER BY UTV80.TINWSF.ST_ASGN_IDENT_CD\nRead in the queried SDWIS data and make a StationId and StationName field. Make field names consistent with those applied to WQP data above so that compilation is easier later.",
"SDWIS = pd.read_csv(SDWISFile)\n\ndef sampid(x):\n return \"SDWIS\" + str(x[0]) + str(x[1]) + str(x[2])[:-7]\n\ndef statid(x):\n return \"SDWIS\" + str(x[0]) + str(x[1]) \n\ndef statnm(x):\n return str(str(x[0]) + \" \" + str(x[1])).title()\n\nSDWIS[\"StationId\"] = SDWIS[[\"FED_NM\",\"ST_ID\"]].apply(lambda x: statid(x),1)\nSDWIS[\"StationName\"] = SDWIS[[\"SYS_NM\",\"FAC_NM\"]].apply(lambda x: statnm(x),1)\nSDWIS[\"SampleId\"] = SDWIS[[\"FED_NM\",\"ST_ID\",\"DTE\"]].apply(lambda x: sampid(x),1)\nSDWIS[\"OrgId\"] = \"UDDW\"\nSDWIS[\"OrgName\"] = \"Utah Division of Drinking Water\"\nSDWIS[\"Elev\"] = SDWIS[\"Z\"].apply(lambda x: round(x*3.2808,1),1)\nSDWIS[\"Unit\"] = SDWIS[\"UOM_CODE\"].apply(lambda x: str(x).lower(),1)\nSDWIS[\"MDLUnit\"] = SDWIS[\"DET_UOM\"].apply(lambda x: str(x).lower(),1)\nSDWIS[\"Param\"] = SDWIS[\"ANLY_NM\"].apply(lambda x: str(x).title().rstrip(),1)\n\nSDWISFields ={\"DTE\":\"SampleDate\", \"TYPE_CODE\":\"StationType\",\n \"CONC_MSR\":\"ResultValue\", \"DET_LIM\":\"MDL\",\n \"Y\":\"Lat_Y\", \"X\":\"Lon_X\"} \nSDWIS.rename(columns=SDWISFields,inplace=True)\n\ndef datetimefixSDWIS(x,format):\n d = str(x).lstrip().rstrip()\n try:\n return datetime.datetime.strptime(d, \"%m/%d/%Y %H:%M:%S\")\n except(ValueError):\n return datetime.datetime.strptime(d, \"%Y-%m-%d %H:%M:%S\")\n\nSDWIS[\"SampleDate\"] = SDWIS[\"SampleDate\"].apply(lambda x: datetimefixSDWIS(x,\"%m/%d/%Y %H:%M:%S\"),1)",
"Normalize units and nutrient data so that they are consistent with the WQP data. This includes standardizing ug/l to mg/l",
"print sorted(list(SDWIS.Param.unique()))\n\ndef parnormSDWIS(x):\n p = str(x[0]).rstrip().lstrip().lower()\n u = str(x[2]).rstrip().lstrip().lower()\n if p == 'nitrate':\n return 'Nitrate', x[1]*4.427, 'mg/l'\n elif p == 'nitrite':\n return 'Nitrite', x[1]*3.285, 'mg/l'\n elif p == 'nitrogen-ammonia as (n)':\n return 'Ammonium', x[1]*1.288, 'mg/l'\n elif u == 'ug/l':\n return x[0], x[1]/1000, 'mg/l'\n else:\n return x[0], x[1], str(x[2]).rstrip()\n \nSDWIS['Param'], SDWIS['ResultValue'], SDWIS['Unit'] = zip(*SDWIS[['Param','ResultValue','Unit']].apply(lambda x: parnormSDWIS(x),1))",
"Drop unneeded SDWIS fields to save memory and reduce confusion.",
"SDWIS.drop([\"FED_NM\", \"DET_UOM\", \"UOM_CODE\",\"ANLY_NM\", \"FAC_NM\", \"ST_ID\", \n \"SYS_NM\", \"COUNTY\", \"SY_NBR\", \"Z\", \"ID_NUM\"],inplace=True, axis=1)",
"Rename chemical parameters in the SDWIS Param field to match those of the WQP data.",
"SDWISPmatch = {\"Ph\":\"pH\",\"Tds\":\"TDS\",\"Nitrogen-Ammonia As (N)\":\"Nitrogen-Ammonia as (N)\",\n \"Hydroxide As Calcium Carbonate\":\"Hydroxide as Calcium Carbonate\",\n \"Bicarbonate As Hco3\":\"Bicarbonate as HCO3\"}\nSDWIS[\"Param\"] = SDWIS[\"Param\"].apply(lambda x: SDWISPmatch.get(x,x))\nSDWIS[\"StationName\"] = SDWIS[\"StationName\"].apply(lambda x: x.replace(\"Wtp\",\"WTP\"))\n\nSDWIS[\"ResultValue\"] = SDWIS[[\"ResultValue\",\"Unit\"]].apply(lambda x: x[0]/1000 if x[1]==\"ug/L\" else x[0], 1)",
"Make station types consistent with the WQP data.",
"SDWISType = {\"SP\":\"Spring\",\"WL\":\"Well\",\"TP\":\"Facility Other\",\"IN\":\"Stream\",\"CC\":\"Connection\",\"WH\":\"Well\"}\nSDWIS.StationType = SDWIS.StationType.apply(lambda x: SDWISType.get(x,x),1)",
"SDWIS facility type code (FacTypeCode): CC, Consecutive_connection;\nCH, Common_headers; CS, Cistern; CW, Clear_well;\nDS, Distribution_system_zone; IG, Infiltration_gallery; IN,\nIntake; NP, Non-piped; OT, Other; PC, Pressure_control;\nPF, Pump_facility; RC, Roof_catchment; RS, Reservoir; SI,\nSurface_impoundment; SP, Spring; SS, Sampling_station; ST,\nStorage; TM, Transmission_main; TP, Treatment_plant; WH,\nWell_head; WL, Well.\nCreate a SDWIS stations file from the SDWIS data. Drop unneeded fields from the station file. Remove duplication stations.",
"SDWISSta = SDWIS.drop([u'SampleDate', u'ResultValue', u'MDL', u'SampleId', u'Unit', u'MDLUnit', u'Param'], axis=1)\nSDWISSta.drop_duplicates(inplace=True)",
"Create SDWIS results file from the SDWIS data. Drop unneeded fields from the results file. These are fields that are in the station field and apply to stations.",
"SDWISRes = SDWIS.drop([u'StationType', u'Lat_Y', u'Lon_X', u'StationName', u'Elev'], axis=1)",
"Create a sample media field and populate it with the value Groundwater.",
"SDWISRes[\"SampMedia\"] = \"Groundwater\"",
"UDAF\nUDAF Stations\nImport Utah Department of Food and Agriculture data from the data file. These data were compiled from <a href=http://ag.utah.gov/conservation-environmental/ground-water.html>reports available of the UDAF website</a>. Once the data are imported, rename the fields to match the above SDWIS and WQP data.",
"AGStat = pd.read_csv(AGStationsFile)\nAGStat[\"StationType\"] = \"Well\"\nAGStatFields = {\"SITEID\":\"StationId\",\"FINISHEDDE\":\"Depth\",\"POINT_Y\":\"Lat_Y\",\n \"POINT_X\":\"Lon_X\",\"ELEV_FT\":\"Elev\",\"ACCURACY\":\"HorAcc\"}\nAGStat.rename(columns=AGStatFields,inplace=True)",
"Drop unneeded fields to save memory.",
"AGStat.drop([\"OBJECTID_1\", \"OBJECTID\", \"PUB_YR\", \"SAMPLENO\", \"WLDATE\", \"WLDEPTH\"], inplace=True, axis=1)",
"Add UDAF prefix to the station identification field (StationId) to make station ids unique.",
"AGStat.StationId = AGStat.StationId.apply(lambda x: \"UDAF-\"+str(int(x)).zfill(5),1)",
"UDAF Results\nImport data Utah Department of Food and Agriculture data from the data file. These data were compiled from reports available of the UDAF website. Once the data are imported, rename the fields to match the above SDWIS and WQP data.",
"names = [\"SampleId\",\"ResultValue\", \"ParAbb\", \"Unit\", \"Param\", \"MDL\",\"BelowLim\",\"TestNo\",\n \"StationId\",\"SampleDate\",\"SampYear\"]\nAGRes = pd.read_csv(AGResultsFile, names=names, index_col=10)",
"Create a detection condition field and populate it based on values in the imported data.",
"AGRes[\"DetectCond\"] = AGRes[\"BelowLim\"].apply(lambda x: 'Not Detected' if x=='Y' else np.nan,1)",
"Fill null result values with zeros when data are reported as below detection limit.",
"AGRes.ResultValue = AGRes[[\"BelowLim\",\"ResultValue\"]].apply(lambda x: np.nan if x[0]==\"Y\" or x[1] == 0.0 else x[1], 1)",
"Make data consistent by cleaning up parameter descriptions.",
"def parnormAG(x):\n p = str(x[0]).rstrip().lstrip().lower()\n u = str(x[2]).rstrip().lstrip().lower()\n if p == 'nitrate-n':\n return 'Nitrate', x[1]*4.427, 'mg/l'\n elif u == 'ug/l':\n return x[0], x[1]/1000, 'mg/l'\n else:\n return x[0], x[1], str(x[2]).rstrip()\n \nAGRes['Param'], AGRes['ResultValue'], AGRes['Unit'] = zip(*AGRes[['Param','ResultValue','Unit']].apply(lambda x: parnormAG(x),1))\n\nAGRes.Unit.unique()\n\nAGRes.dropna(subset=[\"StationId\",\"ResultValue\"], how=\"any\", inplace=True)\n\nAGRes.StationId = AGRes.StationId.apply(lambda x: \"UDAF-\"+str(int(x)).zfill(5),1)\n\nAGStAv = list(AGStat.StationId.values)\nAGRes = AGRes[AGRes.StationId.isin(AGStAv)]\nAGRes[\"SampMedia\"] = \"Groundwater\"\n\nAGStat['OrgId']='UDAF'",
"STORET Legacy\nLegacy EPA data are kept in the <a href=ftp://ftp.epa.gov/storet/exports/>STORET Legacy Database</a>.",
"STORLegSta = pd.read_table(STORLegStatFile, skiprows=[1])\nSTORLegRes = pd.read_table(STORLegResFile, skiprows=[1])\nSTORParam = pd.read_table(STORParamFile)",
"Parse choppy text data from the STORET Legacy database.",
"rescol = list(STORLegRes.columns)\nj = []\nfor i in rescol:\n j.append(i.rstrip(\"\\t\").rstrip().lstrip().replace(\" \",\"\"))\nresdict = dict(zip(rescol,j))\nSTORLegRes.rename(columns=resdict,inplace=True)\n\nstatcol = list(STORLegSta.columns)\nk = []\nfor i in statcol:\n k.append(i.rstrip(\"\\t\").rstrip().lstrip().replace(\" \",\"\"))\nstatdict = dict(zip(statcol,k))\nSTORLegSta.rename(columns=statdict,inplace=True)\n\nSTORLegRes[\"SampleDate\"] = STORLegRes[[\"StartDate\",\"StartTime\"]].apply(lambda x: datetimefix(x,\"%Y-%m-%d %H:%M\"),1)\nSTORLegRes = STORLegRes[STORLegRes.SecondaryActivityCategory.isin(['Water',np.nan])]\nSTORParamDict = dict(zip(STORParam['Parameter No.'].values, STORParam['Full Name'].values))\nSTORLegRes.Param = STORLegRes.Param.apply(lambda x: STORParamDict.get(x),1)\n\nSTORResField = {\"Agency\":\"OrgId\",\"AgencyName\":\"OrgName\",\"Station\":\"StationId\",\"SampleDepth\":\"SampDepth\"}\nSTORLegRes.rename(columns=STORResField,inplace=True)\n\nSTORLegRes.drop([\"StateName\", \"CountyName\", \"HUC\", \"EndDate\", \"UMK\", \"CS\", \"ReplicateNumber\",\n \"COMPOSITE_GRAB_NUMBER\",\"CM\",\"PrimaryActivityCategory\",\"PrimaryActivityCategory\",\n \"SecondaryActivityCategory\",\n \"EndTime\", \"StartDate\", \"StartTime\", \"Latitude\", \"Longitude\"],inplace=True,axis=1)\n\nSTORLegRes[\"SampleId\"] = STORLegRes[[\"StationId\",\"SampleDate\"]].apply(lambda x: str(x[0]) + \"-\" + str(x[1]),1 )\nSTORLegRes[\"StationId\"] = STORLegRes[\"StationId\"].apply(lambda x: \"EPALeg-\" + x, 1)\n\nSTORLegRes.Param = STORLegRes.Param.apply(lambda x: str(x).title(),1)\n\nSTORLegRes.columns\n\ndef parnormSTOR(x):\n p = str(x[0]).rstrip().lstrip().lower()\n if p == 'nitrate nitrogen, total (mg/L as n)' or p== 'nitrate nitrogen, total':\n return 'Nitrate', x[1]*4.427, 'mg/l'\n elif p == 'nitrite nitrogen, total (mg/l as n)':\n return 'Nitrite', x[1]*3.285, 'mg/l'\n elif p == 'nitrogen, ammonia, total (mg/l as n)':\n return 'Ammonium', x[1]*1.288, 'mg/l'\n elif p == 'sulfate (as s) whole water, mg/L':\n return 'Sulfate', x[1]*2.996, 'mg/l'\n elif p in ('phosphorus, dissolved orthophosphate (mg/l as p)'):\n return 'Phosphate', x[1]*3.066, 'mg/l'\n else:\n return x[0], x[1], np.nan\n\nSTORLegRes['Param'], STORLegRes['ResultValue'], STORLegRes['Unit'] = zip(*STORLegRes[['Param','ResultValue']].apply(lambda x: parnormSTOR(x),1))\n\nSTORKeepers = ['Temperature, Water (Degrees Centigrade)',\n 'Temperature, Water (Degrees Fahrenheit)', \n 'Specific Conductance,Field (Umhos/Cm @ 25C)',\n 'Specific Conductance (Umhos/Cm @ 25C)', \n 'Sulfate (As S) Whole Water, Mg/L',\n 'Oxygen, Dissolved Mg/L',\n 'Oxygen, Dissolved, Percent Of Saturation %',\n 'Bod, 5 Day, 20 Deg C Mg/L',\n 'Ph (Standard Units)', 'Ph, Lab, Standard Units Su',\n 'Carbon Dioxide (Mg/L As Co2)', 'Alkalinity,Total,Low Level Gran Analysis Ueq/L',\n 'Alkalinity, Total (Mg/L As Caco3)', 'Bicarbonate Ion (Mg/L As Hco3)', 'Carbonate Ion (Mg/L As Co3)',\n 'Nitrogen, Ammonia, Total (Mg/L As N)', 'Ammonia, Unionzed (Mg/L As N)',\n 'Nitrite Nitrogen, Total (Mg/L As N)', 'Ammonia, Unionized (Calc Fr Temp-Ph-Nh4) (Mg/L)',\n 'Nitrate Nitrogen, Total (Mg/L As N)', 'Nitrogen, Kjeldahl, Total, (Mg/L As N)',\n 'Nitrite Plus Nitrate, Total 1 Det. 
(Mg/L As N)', 'Phosphorus (P), Water, Total Recoverable Ug/L',\n 'Phosphorus, Total (Mg/L As P)', 'Phosphorus, Dissolved Orthophosphate (Mg/L As P)',\n 'Carbon, Dissolved Organic (Mg/L As C)',\n 'Carbon, Dissolved Inorganic (Mg/L As C)',\n 'Hardness, Total (Mg/L As Caco3)', 'Calcium (Mg/L As Caco3)',\n 'Calcium, Dissolved (Mg/L As Ca)',\n 'Magnesium, Dissolved (Mg/L As Mg)',\n 'Sodium, Dissolved (Mg/L As Na)',\n 'Potassium, Dissolved (Mg/L As K)',\n 'Chloride, Dissolved In Water Mg/L',\n 'Sulfate, Dissolved (Mg/L As So4)',\n 'Fluoride, Dissolved (Mg/L As F)',\n 'Silica, Dissolved (Mg/L As Si02)',\n 'Arsenic, Dissolved (Ug/L As As)', 'Arsenic, Total (Ug/L As As)',\n 'Barium, Dissolved (Ug/L As Ba)', 'Barium, Total (Ug/L As Ba)',\n 'Beryllium, Total (Ug/L As Be)', 'Boron, Dissolved (Ug/L As B)',\n 'Boron, Total (Ug/L As B)', 'Cadmium, Dissolved (Ug/L As Cd)',\n 'Cadmium, Total (Ug/L As Cd)', 'Chromium, Dissolved (Ug/L As Cr)',\n 'Chromium, Hexavalent (Ug/L As Cr)', 'Chromium, Total (Ug/L As Cr)',\n 'Copper, Dissolved (Ug/L As Cu)', 'Copper, Total (Ug/L As Cu)',\n 'Iron, Dissolved (Ug/L As Fe)', 'Lead, Dissolved (Ug/L As Pb)',\n 'Lead, Total (Ug/L As Pb)', 'Manganese, Total (Ug/L As Mn)',\n 'Manganese, Dissolved (Ug/L As Mn)', 'Thallium, Total (Ug/L As Tl)',\n 'Nickel, Dissolved (Ug/L As Ni)', 'Nickel, Total (Ug/L As Ni)',\n 'Silver, Dissolved (Ug/L As Ag)', 'Silver, Total (Ug/L As Ag)',\n 'Zinc, Dissolved (Ug/L As Zn)', 'Zinc, Total (Ug/L As Zn)',\n 'Antimony, Total (Ug/L As Sb)', 'Aluminum, Total (Ug/L As Al)',\n 'Selenium, Dissolved (Ug/L As Se)', 'Selenium, Total (Ug/L As Se)',\n 'Tritium (1H3),Total (Picocuries/Liter)',\n 'Hardness, Ca Mg Calculated (Mg/L As Caco3)',\n 'Chlorine, Total Residual (Mg/L)',\n 'Residue,Total Filtrable (Dried At 180C),Mg/L',\n 'Nitrate Nitrogen, Dissolved (Mg/L As No3)', 'Iron (Ug/L As Fe)',\n 'Phosphorus, Total, As Po4 - Mg/L', 'Mercury, Total (Ug/L As Hg)']\nSTORLegRes = STORLegRes[STORLegRes.Param.isin(STORKeepers)]\n\ndef parsplit(x,p):\n x = str(x).rstrip().lstrip()\n if p == \"Un\":\n z = -1\n x = str(x).replace(\"Mg/L\", \"mg/l\")\n x = str(x).replace(\"Ug/L\", \"ug/l\")\n x = str(x).replace(\"o\", \"O\")\n x = str(x).replace(\"c\", \"C\")\n x = str(x).replace(\"TOtal ReCOverable\",\"Total Recoverable\") \n x = str(x).replace(\"UmhOs\", \"umhos\")\n x = str(x).replace(\"TOtal\",\"Total\")\n elif p== \"Par\":\n z = 0\n x = str(x).replace(\", Standard Units\",\"\")\n x = str(x).replace(\", Unionized\",\"\")\n x = str(x).replace(\", Unionzed\",\"\")\n x = str(x).replace(\",Low Level Gran Analysis\",\"\")\n x = str(x).replace(\" Ion\",\"\")\n x = str(x).replace(\",Total\",\", Total\")\n if x == \"Ph\" or x == \"Ph, Lab\":\n x = str(x).replace(\"Ph\",\"pH\")\n if \"(\" in x:\n x = str(x).replace(\" As \", \" as \")\n return str(x).split(\" (\")[z].rstrip(\")\").rstrip().lstrip()\n else:\n return str(x).split(\" \")[z].rstrip().lstrip()\n\ndef splitmore(x):\n if \"NO3\" in x:\n return x\n elif \" as \" in x:\n return x.split(\" as \")[0]\n elif x == \"As S) WhOle Water, mg/l\" or x == \"Dried At 180C),mg/l\" or x==\"PhOsphOrus, Total, As PO4 - mg/l\":\n return \"mg/l\"\n elif x == \"P), Water, Total Recoverable ug/l\":\n return \"ug/l\"\n else:\n return x\n\ndef unitconv(x):\n if x[1]==\"ug/l\":\n return x[0]/1000\n elif x[1]==\"Degrees Fahrenheit\":\n return (float(x[0])-32.0)*(5.0/9.0)\n else:\n return x[0]\n\nSTORLegRes[\"Unit\"] = STORLegRes[\"Param\"].apply(lambda x: parsplit(x,\"Un\"), 1)\nSTORLegRes[\"Param\"] = STORLegRes[\"Param\"].apply(lambda x: 
parsplit(x,\"Par\"), 1)\nSTORLegRes[\"Unit\"] = STORLegRes[\"Unit\"].apply(lambda x: splitmore(x), 1)\nSTORLegRes[\"ResultValue\"] = STORLegRes[[\"ResultValue\",\"Unit\"]].apply(lambda x: unitconv(x), 1)\nSTORLegRes[\"Unit\"] = STORLegRes[\"Unit\"].apply(lambda x: \"mg/l\" if x==\"ug/l\" else x, 1)\nSTORLegRes[\"Unit\"] = STORLegRes[\"Unit\"].apply(lambda x: \"Degrees Centigrade\" if x==\"Degrees Fahrenheit\" else x, 1)\n\nSTORStaField = {\"Agency\":\"OrgId\",\"AgencyName\":\"OrgName\",\"Station\":\"StationId\", \"DepthUnits\":\"DepthUnit\",\n \"Latitude\":\"Lat_Y\", \"Longitude\":\"Lon_X\", \"HUC\":\"HUC8\", \"StationDepth\":\"Depth\"}\nSTORLegSta.rename(columns=STORStaField,inplace=True)\n\nSTORLegSta.columns\n\nSTORLegSta.drop([\"RchmileSegment\", \"MilesUpReach\", \"Rchonoff\", \"Description\", \"G\", \"S\", \"StationAlias\",\n \"Rchname\", \"StateName\", \"CountyName\"], inplace=True, axis=1)\n\nSTORLegSta.StationType = STORLegSta.StationType.apply(lambda x: str(x).rstrip(\" \").strip(\"/SUPPLY\").split(\"/\")[-1].title(),1)\n\nLegTypeDict = {\"We\":\"Well\"}\nSTORLegSta.StationType = STORLegSta.StationType.apply(lambda x: LegTypeDict.get(x,x),1)\n\nSTORLegSta.StationId = STORLegSta[\"StationId\"].apply(lambda x: \"EPALeg-\" + x, 1)",
"UGS Data",
"UGSfield = pd.read_excel(fielddata,\"FieldChem\") #Field data\nUGSNO3 = pd.read_excel(fielddata,\"Nitrate\") #Nitrate data provided by Millville City\n\nUGS = pd.read_csv(UGSFile, engine=\"python\")\nUGS[\"StationId\"] = UGS[\"SITE\"].apply(lambda x:\"UGS-\"+str(x).zfill(4),1)\n\nUGSSta = UGS.drop([u'OBJECTID_1',u'SITE', u'TDS', u'Temp', u'Cond', u'CO2', u'HCO3', \n u'CO3',u'Na', u'pH', u'Ca', u'SO4', u'NO3', u'As_', u'Cl', u'K',\n u'Mg', u'Hard', u'NH4'], axis=1)\n\nUGSRe = UGS.drop([u'OBJECTID_1',u'SITE',u'StationType', u'Geology', u'Elev', u'Lat_Y', u'Lon_X', u'StationName', \n u'OrgId', u'WRNUM', u'SITE', u'UTM_X', u'UTM_Y', u'Depth_ft'], axis=1)\n\nUGSRe[\"SampleId\"] = UGSRe.index\n\nUGSRe.reset_index(inplace=True)\nUGSRe.set_index([\"StationId\",\"SampleId\"], inplace=True)\n\nUGSRe.drop(UGSRe.columns[0],inplace=True,axis=1)\n\nUGSStack = UGSRe.stack().to_frame()\nUGSStack.columns = [\"ResultValue\"]\n\nUGSStack.reset_index(inplace=True)\n\nUGSStack.columns=[\"StationId\",\"SampleId\",\"Param\",\"ResultValue\"]\n\ndef unitcon(x):\n if x==\"pH\":\n return \"\"\n elif x==\"Temp\":\n return \"C\"\n elif x==\"Cond\":\n return \"uS/cm\"\n else:\n return \"mg/l\"\n \nUGSStack[\"Unit\"] = UGSStack[\"Param\"].apply(lambda x: unitcon(x),1)\nUGSStack[\"ParAbb\"] = UGSStack[\"Param\"]\nUGSStack[\"OrgId\"] = \"UGS\"\nUGSStack[\"OrgName\"] = \"Utah Geological Survey\"\nUGSStack[\"ResultValue\"] = UGSStack[['Param','ResultValue']].apply(lambda x: x[1]*1.288 if x[0]=='Ammonia as N' else x[1],1)\nUGSStack[\"Param\"] = UGSStack['Param'].apply(lambda x: 'Ammonia' if x=='Ammonia as N' else x, 1)\nUGSStack[\"ResultValue\"] = UGSStack[['Param','ResultValue']].apply(lambda x: x[1]*3.066 if x[0]=='Phosphate, Tot. Dig. (as P)' else x[1],1)\nUGSStack[\"Param\"] = UGSStack['Param'].apply(lambda x: 'Phosphate' if x=='Phosphate, Tot. Dig. (as P)' else x, 1)",
"State Lab\nThese are raw data results sent to the UGS via Tab-delimited tables from the Utah State Health Laboratory. They make up the bulk of results of data collected for this study. They are supplimented with field data translated to spreadsheets.",
"SLSampMatch = pd.read_excel(fielddata,\"StateLabMatch\")\nSLStat = pd.read_excel(fielddata,\"Stations\")\n#SLStat = pd.merge(SLSampMatch, SLStations, on='StationId', how='outer')\n#SLStat.reset_index(inplace=True)\n\nSLStat\n\nSL0 = pd.read_table(statelabresults0, sep=\"\\t\", lineterminator=\"\\n\", error_bad_lines=False)\nSL0 = SL0[SL0['Collector']=='PI']\nSL1 = pd.read_table(statelabresults1, sep=\"\\t\", lineterminator=\"\\n\", error_bad_lines=False)\nSL1 = SL1[SL1['Collector']=='PI']\nSL2 = pd.read_table(statelabresults2, sep=\"\\t\", lineterminator=\"\\n\", error_bad_lines=False)\nSL2 = SL2[SL2['Collector']=='PI']\n\nSL = pd.concat([SL0,SL1,SL2])\n\nSL[\"OrgId\"] = \"UGS\"\nSL[\"OrgName\"] = \"Utah Geological Survey\"\nSL['DetectCond'] = SL['Problem#Identifier'].apply(lambda x: 'Not Detected' if str(x).rstrip()=='<' else np.nan,1)\nSL['SampleDate'] = SL[['Sample#Date','Sample#Time']].apply(lambda x: datetimefix(x,\"%m/%d/%y %H:%M\"),1)\n\nSLHead = {'Sample#Number':'SampleId', 'Param#Description':'Param', 'Result#Value':'ResultValue','Units':'Unit', \n 'Lower#Report#Limit':'MDL','Method#ID':'SampMeth','Analysis#Date':'AnalysisDate'}\nSL.rename(columns=SLHead,inplace=True)\n\nSL['Sample#Description'].unique()\n\nSL.drop([u'Lab#Code', u'Station#ID', u'Source#Code', u'Sample#Date',\n u'Sample#Time', u'Sample#Type', u'Cost#Code', u'Billing#Code',\n u'Agency#Bill#Code', u'Trip#ID', u'Sample#Description', u'Collector',\n u'Sample#Recieved#Date', u'Chain#of#Custody#Ind.', u'Replicate#Number',\n u'Sample#Comment', u'Method#Number', u'Method#Agency',\n u'Method#Description', u'Param#Number', u'CAS#Number',\n u'Matrix#Number', u'Matrix#Description', u'Preparation#Date',\n u'Problem#Identifier', u'Result#Code', \n u'Upper#Quant#Limit', u'Method#Detect#Limit',\n u'Confidence#Limit', u'%#Confidence#Limit',u'Dilution#Factor',\n u'Batch#Number',u'Comment#Number', u'Comment#Text'], inplace=True, axis=1)\n\nSL.columns\n\nSLRes = pd.merge(SL, SLSampMatch, on='SampleId', how='left')\n\nSLStat.drop_duplicates(inplace=True)\n\ndef SLparnorm(x):\n p = str(x[0]).rstrip().lstrip().lower()\n u = str(x[2]).rstrip().lstrip().lower()\n if p == 'nitrate nitrogen, total (mg/l as n)':\n return 'Nitrate', x[1]*4.427, 'mg/l'\n elif p == 'nitrite nitrogen, total (mg/l as n)':\n return 'Nitrite', x[1]*3.285, 'mg/l'\n elif p == 'ammonia as n':\n return 'Ammonium', x[1]*1.288, 'mg/l'\n elif p == 'sulfate (as s) whole water, mg/L':\n return 'Sulfate', x[1]*2.996, 'mg/l'\n elif p in ('phosphate, tot. dig. (as p)', 'phosphate-phosphorus as p','orthophosphate as p'):\n return 'Phosphate', x[1]*3.066, 'mg/l'\n elif u == 'ug/l':\n return x[0], x[1]/1000, 'mg/l'\n else:\n return x[0], x[1], str(x[2]).rstrip()\n\ndef MDLfix(x):\n u = str(x[1]).rstrip().lstrip().lower()\n if np.isfinite(x[2]):\n return x[0]\n elif u=='ug/l':\n return x[0]/1000\n else:\n return x[0]\n\nSLRes['MDL'] = SLRes[['MDL','Unit','ResultValue']].apply(lambda x: MDLfix(x),1)\nSLRes['Param'], SLRes['ResultValue'], SLRes['Unit'] = zip(*SLRes[['Param','ResultValue','Unit']].apply(lambda x: parnorm(x),1))\n\nSLRes.StationId.unique()",
"Combine Data",
"Res = pd.concat([STORLegRes,AGRes,SDWISRes,WQP,UGSStack,SLRes,UGSfield,UGSNO3])\n\nRes = Res[~Res[\"Unit\"].isin(['ueq/L','Ueq/L','ueq/l','tons/ac ft','tons/day','meq/L'])]\nRes = Res[~Res[\"Param\"].isin([\"Heptachlorobiphenyl\", \"Hydrocarbons\", \"Hydroxide\", \"Ionic strength\",\n \"Floating debris, severity\", \"Carbon Tetrachloride\", \"Trichlorobiphenyl\",\n \"Vinyl Chloride\", \"True color\", \"Color\", \"Trash, Debris, Floatables\",\n \"Total volatile solids\", \"Temperature, air\", \"Residue, Total Filtrable\",\n \"Pentachlorobiphenyl\", \"Odor threshold number\", \"Odor, atmospheric\",\n \"Instream features, est. stream width\", \"Hydroxide\", \n \"Light, transmissivity\",\"Algae, floating mats (severity)\"])]\nlen(Res)\n\nRes[[\"Param\",\"Unit\",\"USGSPCode\"]].drop_duplicates(subset=[\"Param\",\"Unit\"]).sort_values(by=[\"Param\"]).to_clipboard()\n\nStat = pd.concat([STORLegSta, AGStat, SDWISSta, WQPStat, SLStat, UGSSta])\n\nparmatch = pd.read_excel(rootname + \"Aquachem.xlsx\")\n\nparmatchdict = dict(zip(parmatch.Param.values, parmatch.ParrAbb.values))\nRes[\"ParAbb\"] = Res[[\"ParAbb\",\"Param\"]].apply(lambda x: parmatchdict.get(x[1],x[0]),1)\n\nresults = Res.dropna(subset=[\"StationId\",\"Param\",\"SampleId\"], how=\"any\")\n\nStat.loc[:,\"StationName\"] = Stat[\"StationName\"].apply(lambda x: str(x).strip().lstrip().rstrip(),1)\nStat.loc[:,\"StationId\"] = Stat[\"StationId\"].apply(lambda x: str(x).strip().lstrip().rstrip(),1)\nRes.loc[:,\"StationId\"] = Res[\"StationId\"].apply(lambda x: str(x).strip().lstrip().rstrip(),1)\n\nresults.loc[:,\"Unit\"] = results[[\"ParAbb\",\"Unit\"]].apply(lambda x: \"C\" if x[0]==\"Temp\" else x[1],1)\nresults.loc[:,\"Unit\"] = results[[\"ParAbb\",\"Unit\"]].apply(lambda x: \"umhos/cm\" if x[0]==\"Cond\" else x[1],1)\nresults.loc[:,\"Unit\"] = results[[\"ParAbb\",\"Unit\"]].apply(lambda x: \"\" if x[0]==\"pH\" else x[1],1)\n\nresults.drop([\"AnalysisDate\",\"AnalytMeth\",\"SampType\",\"AnalytMethId\", \"BelowLim\", \"StationName\",\n \"MethodDescript\", \"LabComments\", \"LabName\", \"LimitType\", \"ProjectId\", \"QualCode\",\n \"OrgName\",\"R\", \"ResultComment\",\"ResultStatus\",\"SampComment\", \"SampEquip\", \n \"SampDepthRef\", \"SampDepthU\",\"SampDepth\",\"SampType\", \"USGSPCode\",\n \"SampMeth\", \"SampMethName\",\"SampYear\",\"TestNo\"],inplace=True,axis=1)",
"Clean Up Non Detects",
"NDs = {'Not Detected':'<', 'Present Above Quantification Limit':'<', 'ND ':'<', '*Present >QL ':'>',\n 'Present Below Quantification Limit':'<', '*Non-detect ':'<', 'Detected Not Quantified':'<', \n 'Systematic Contamination':'<'}\nresults.DetectCond = results.DetectCond.apply(lambda x: NDs.get(x,np.nan),1)\n\ndef is_nan(x): \n '''\n this function identifies nan values\n Source: http://stackoverflow.com/questions/944700/how-to-check-for-nan-in-python\n '''\n try: \n return math.isnan(x) \n except: \n return False \n\ndef detected(x):\n '''\n Finds nondetects and fixes units and values\n '''\n if x[1]=='<' and np.isfinite(x[0]):\n return x[1]+str(x[0]) \n elif x[1]=='<' and np.isfinite(x[2]):\n if str(x[3]).rstrip().lower() == 'ug/l':\n return x[1]+str(x[2]/1000)\n else:\n return x[1]+str(x[2])\n else:\n return x[0]\n\nresults.ResultValue = results[['ResultValue','DetectCond','MDL','MDLUnit']].apply(lambda x: detected(x),1)\n\ndef MDLfill(x):\n if x[0] <= 0 and x[1]>0:\n return 0\n elif x[2] == '<':\n return 0\n elif x[0] < x[1]:\n return 0\n else:\n return 1\nresults.loc[:,'ResValue'] = pd.to_numeric(results['ResultValue'], errors='coerce') \nresults.loc[:,'Censored'] = results[['ResValue','MDL','DetectCond']].apply(lambda x: MDLfill(x),1)\n\nmatchDict = {'414143111495501':'USGS-414143111495501','414115111490301':'USGS-414115111490301',\n 'SDWIS3117.0WS004':'USGS-414115111490301', \n 'EPALeg-0301203':'414029111483501','SDWIS3116.0WS003':'414029111483501',\n 'EPALeg-0301201':'USGS-414024111481101','SDWIS5435.0WS001':'USGS-414024111481101',\n '414024111481101':'USGS-414024111481101','EPALeg-0300101':'SDWIS5411.0WS001',\n 'EPALeg-0300102':'SDWIS5412.0WS002', 'EPALeg-0300103':'SDWIS5413.0WS003', \n 'UGS-107.5':'SDWIS3143.0WS001','UDAF-01492':'UGS-0412', 'UDAF-03165':'UGS-106.5',\n 'SDWIS3126.0WS002':'USGS-414216111485201', 'EPALeg-0301702':'USGS-414216111485201',\n 'EPALeg-0301901':'USGS-414328111493001', 'SDWIS3131.0WS001':'USGS-414328111493001', \n 'EPALeg-0301005':'SDWIS3112.0WS005', 'EPALeg-0301002':'USGS-414417111484301', \n 'SDWIS3109.0WS002':'USGS-414417111484301', 'SDWIS3113.0WS006':'USGS-414459111493601', \n 'SDWIS3127.0WS003':'414213111493101', 'SDWIS3159.0WS003':'SDWIS3157.0WS001','UDAF-01500':'UGS-63.5',\n 'SDWIS3111.0WS004':'USGS-414441111490701', 'EPALeg-0301904':'SDWIS3133.0WS004',\n 'EPALeg-0301004':'USGS-414441111490701','EPALeg-0301502':'SDWIS3118.0WS002',\n 'UDAF-01589':'UDAF-01568','UDAF-01586':'UDAF-01566','UGS-0050':'UDAF-01566',\n 'EPALeg-0300104':'SDWIS3088.0WS004', 'UDAF-01585':'UGS-0032', 'UDAF-01565':'UGS-0032',\n 'EPALeg-0300201':'SDWIS3091.0WS001', 'EPALeg-0300204':'SDWIS3094.0WS004',\n 'EPALeg-0301803':'SDWIS3129.0WS003','EPALeg-0300405':'SDWIS5418.0WS005',\n 'EPALeg-0300404':'SDWIS5417.0WS004', 'EPALeg-0300403':'SDWIS5416.0WS003',\n 'SDWIS5439.0WS003':'SDWIS5416.0WS003', 'SDWIS5460.0WS003':'SDWIS5416.0WS003',\n 'SDWIS5414.0WS001':'SDWIS5458.0WS001', 'SDWIS5437.0WS001':'SDWIS5458.0WS001',\n 'EPALeg-0308601':'USGS-415828111460001', 'SDWIS5487.0WS001':'USGS-415828111460001',\n 'SDWIS5430.0WS002':'USGS-415828111460001', 'SDWIS5423.0WS003':'USGS-415828111460001',\n 'SDWIS5421.0WS001':'USGS-415836111464701', 'EPALeg-0304901':'SDWIS5479.0WS001', \n 'EPALeg-0303201':'SDWIS5470.0WS001', 'SDWIS5432.0WS001':'USGS-414535111423001',\n 'EPALeg-0303001':'SDWIS5469.0WS001','EPALeg-0307701':'SDWIS5485.0WS001',\n 'EPALeg-0308301':'SDWIS5486.0WS001', 'EPALeg-0301501':'SDWIS5445.0WS001',\n 'EPALeg-0300701':'USGS-415120111440001', 'SDWIS5424.0WS001':'USGS-415120111440001',\n 
'EPALeg-0302001':'SDWIS5455.0WS001', 'EPALeg-0301101':'SDWIS5433.0WS001', \n 'EPALeg-0301102':'SDWIS5434.0WS002'}\n\n\nStat.loc[:,'StationId'] = Stat['StationId'].apply(lambda x: matchDict.get(x,x),1)\nresults.loc[:,'StationId'] = results['StationId'].apply(lambda x: matchDict.get(x,x),1) \nresults.loc[:,'SampleDate'] = pd.to_datetime(results.SampleDate)\n\ndef depthFill(x):\n if x > 0:\n return x\n \ndef depthUnitFill(x):\n if x > 0:\n return 'ft'\n\nStat.Depth = Stat['Depth_ft'].apply(lambda x: depthFill(x),1)\nStat.DepthUnit = Stat['Depth_ft'].apply(lambda x: depthUnitFill(x),1)\n\nWINdict = {'SDWIS3180.0WS001':435116, 'UGS-47.5':32700, 'UDAF-01566':30211, 'UGS-46.5':12420, \n 'USGS-414525111503705':427268, 'UDAF-01569':28327, 'SDWIS3112.0WS005':2694,\n 'UDAF-03162':434818, 'USGS-414328111493001':2823, 'USGS-414332111491001':2836,\n 'SDWIS3133.0WS004':2848, 'UGS-91.5':28647, 'UGS-95.5':35814, 'SDWIS3128.0WS004':18590,\n 'UGS-0102':426853, 'USGS-414115111490301':2722, 'UT4140521114843201':32975, '414029111483501':2721,\n 'SDWIS3088.0WS004':2741,'UGS-63.5':7126, 'UGS-0084':9639, 'USGS-414134111544701':434098, 'UGS-0070':35061,\n 'UGS-0029':32851,'UGS-0030':26663,'UGS-0034':29110, 'UGS-0055':3728, 'UGS-61.5':9280, \n 'SDWIS3129.0WS003':2816, 'UGS-0043':29329, 'UGS-0889':24493, 'UGS-44.5':28333}\n \nStat.WIN = Stat['StationId'].apply(lambda x: WINdict.get(x,x),1)\n\nresults.SampleId = results.SampleId.apply(lambda x: str(x).replace(' ',''),1)\nresults.StationId = results.StationId.apply(lambda x: str(x).replace(' ',''),1)\nStat.StationId = Stat.StationId.apply(lambda x: str(x).replace(' ',''),1)\n\nresults.drop_duplicates(subset = ['SampleId','ParAbb'],inplace=True)\nStat.drop_duplicates(subset = ['StationId'],inplace=True)\n\nresultsNoND = results[(~results['DetectCond'].isin(['<','>']))]",
"Pivot Data",
"datap = resultsNoND.pivot(index='SampleId', columns='ParAbb', values='ResValue')\n\ndatap.dropna(subset=['SO4','Cond','Temp','TDS','pH_field'],how='all',inplace=True)\ndatap.drop(datap.columns[[0]], axis=1, inplace=True)\n\nresults.columns\n\nresdrop = [ 'DetectCond', u'Comment#Number.1', u'Comment#Text.1', 'ResultValue', 'ResValue',\n 'MDL', 'MDLUnit', 'OrgId', 'Param', 'ResultValue', 'SampFrac',\n 'SampMedia', 'Unit', 'ParAbb']\nresPivot = results.drop(resdrop, axis=1)\n\ndatapiv = pd.merge(datap, resPivot, left_index=True, right_on='SampleId',how='left')\ndatapiv.drop_duplicates(subset=['SampleId'],inplace=True)",
"Add GIS Information",
"def projy(x):\n inProj = Proj(init='epsg:4326') #WGS84\n outProj = Proj(init='epsg:2152') #NAD83(CSRS98) / UTM zone 12N\n x2,y2 = transform(inProj,outProj,x[0],x[1])\n return y2\n\ndef projx(x):\n inProj = Proj(init='epsg:4326') #WGS84\n outProj = Proj(init='epsg:2152') #NAD83(CSRS98) / UTM zone 12N\n x2,y2 = transform(inProj,outProj,x[0],x[1])\n return x2\n\ndef getelev(x):\n elev = \"http://ned.usgs.gov/epqs/pqs.php?x=\"+str(x[0])+\"&y=\"+str(x[1])+\"&units=Meters&output=xml\"\n response = urllib2.urlopen(elev)\n html = response.read()\n d = xmltodict.parse(html)\n return float(d['USGS_Elevation_Point_Query_Service']['Elevation_Query']['Elevation'])\n\nStat.loc[:,'UTM_X'] = Stat[['Lon_X','Lat_Y']].apply(lambda x: projx(x),1)\nStat.loc[:,'UTM_Y'] = Stat[['Lon_X','Lat_Y']].apply(lambda x: projy(x),1)\nStat.loc[:,'Elev'] = Stat[['Lon_X','Lat_Y']].apply(lambda x: getelev(x),1)\n\npivStats = Stat.drop(['Aquifer', 'ConstDate', 'Depth', 'DepthUnit','AquiferType', 'HorCollMeth', 'Geology',\n 'HoleDUnit', 'HoleDepth', 'HUC8', 'HorAccUnit', 'HoleDUnit', 'SCREENDEPT',\n 'ElevUnit', 'ElevRef', 'ElevAcc', 'ElevMeth','CountyCode', 'ElevAccUnit',\n 'HorAcc', 'StateCode', 'HorRef',\n 'OrgId', 'StationComment'], axis=1)\n\npivStats.reset_index(inplace=True)\npivStats.set_index(\"StationId\",inplace=True)\n\npivdata = pd.merge(datapiv, pivStats, left_on=\"StationId\", right_index=True, how='left')\n\npivdata.drop_duplicates(subset=['SampleId'],inplace=True)",
"Convert and Balance Samples",
"alkmatch = pivdata[(pivdata['Meas_Alk']>0)&(pivdata['HCO3']>0)]\n\nx = [np.float64(i) for i in alkmatch['Meas_Alk'].values]\ny = [np.float64(i) for i in alkmatch['HCO3'].values]\n\nX = sm.add_constant(x)\nres = sm.RLM(y,X).fit()\nb = res.params[0]\nm = res.params[1]\nprint m\nprint b\n\nplt.figure()\nplt.scatter(x,y)\nplt.plot(x, res.fittedvalues, color='red')\n\ndef HCO3fix(x):\n if x[0]>0:\n return x[0]\n elif x[1]>0:\n return x[1]*m+b\n else:\n pass\n\npivdata['HCO3'] = pivdata[['HCO3','Meas_Alk']].apply(lambda x: HCO3fix(x),1)\n\nparlist = ['Ca','Mg','Na','K','Cl','HCO3','CO3','SO4','NO3','NO2','CO2','TDS','Si','Zn_tot','As_tot']\n\ndef removeInf(x):\n if x <= 0:\n return np.nan\n else:\n return np.log(x)\n\nfor i in parlist:\n if i in pivdata.columns:\n pivdata[i+'Ln'] = pivdata[i].apply(lambda x: removeInf(x),1)\n\nd = {'Ca':0.04990269, 'Mg':0.082287595, 'Na':0.043497608, 'K':0.02557656, 'Cl':0.028206596, 'HCO3':0.016388838, 'CO3':0.033328223, 'SO4':0.020833333, 'NO2':0.021736513, 'NO3':0.016129032}\nchemlist = ['Ca','Mg','Na','K','Cl','HCO3','CO3','SO4','NO3','NO2']\n\nfor i in chemlist:\n if i in pivdata.columns:\n pivdata[i+'Meq'] = pivdata.loc[:,i] * d[i]\n\npivdata.drop_duplicates(subset = ['StationId','SampleDate'], inplace=True)\n\ndef sumIons(x):\n b = 0\n for i in x:\n if i>0:\n b = i + b\n else:\n b = b\n return b\n\npivdata['Anions'] = pivdata[['ClMeq','HCO3Meq','SO4Meq','CO3Meq']].apply(lambda x: sumIons(x),1)\npivdata['Cations'] = pivdata[['KMeq','MgMeq','NaMeq','CaMeq']].apply(lambda x: sumIons(x),1)\npivdata['EC'] = pivdata['Anions'] - pivdata['Cations'] \npivdata['CBE'] = ((pivdata['Cations']-np.abs(pivdata['Anions']))/(pivdata['Cations']+np.abs(pivdata['Anions'])))*100",
"Subset Data",
"#piperdata = pivdata.dropna(subset = ['Ca','Na','Cl','Mg','SO4','HCO3'], how='any')\n#piperdata.drop_duplicates(subset=['SampleId'], inplace=True)\n\nprint(len(pivdata))\n\npivgrps = pivdata.groupby(['StationId']).median()\npivGoodData = pivdata[abs(pivdata.CBE)<=5]\npipergrps = pivGoodData.groupby(['StationId']).median()\npipergrps['sampCount'] = pivGoodData.groupby(['StationId'])['CBE'].agg({'cnt':(lambda x: np.count_nonzero(~np.isnan(x)))}).reset_index#squeeze=True\n\npivgrp = pd.merge(pivgrps, pivStats, left_index=True, right_index=True, how='left')\npipergrp = pd.merge(pipergrps, pivStats, left_index=True, right_index=True, how='left')\npipergrp.drop_duplicates(inplace=True)\npivgrp = pivgrp.reset_index().drop_duplicates(subset=['StationId']).set_index('StationId')\n\nprincpiv = pivGoodData[(pivGoodData.SampleDate < datetime.datetime(2014,3,10))&(pivGoodData.UTM_X < 435000) & (pivGoodData.UTM_X > 422000) \\\n & (pivGoodData.UTM_Y > 4608000) & (pivGoodData.UTM_Y < 4634000) & (pivGoodData.StationType=='Well')]\nprincpiv.drop_duplicates(subset = ['SampleId'],inplace=True)\n\nResOldPrinc = resultsNoND[(resultsNoND.SampleId.isin(princpiv.SampleId))]\n\nGWStat = Stat[Stat.StationType.isin(['Well','Spring'])]\nGWRes = results[results.StationId.isin(list(GWStat.StationId))]\n\nNitrate = GWRes[GWRes['ParAbb'].isin(['N','NO2','NO3','NH4'])]\nNitrateStat = GWStat[GWStat.StationId.isin(list(Nitrate.StationId))]",
"Summarize & Plot Data",
"ParrAbbSummary = ResOldPrinc.groupby('ParAbb')['ResValue'].agg({'min':np.min, 'mean':np.mean,\n 'qrt5':(lambda x: np.percentile(x,q=5)),\n 'qrt95':(lambda x: np.percentile(x,q=95)),\n 'range':(lambda x: np.max(x)-np.min(x)), \n 'lqrt':(lambda x: np.percentile(x,q=25)),\n 'median':np.median, \n 'uqrt':(lambda x: np.percentile(x,q=75)), \n 'max':np.max, 'std':np.std, \n 'cnt':(lambda x: np.count_nonzero(~np.isnan(x)))}).reset_index()\n\nParrAbbSummary\n\nmanyPars = list(ParrAbbSummary[ParrAbbSummary['cnt'] >= 30]['ParAbb'])\n\nResOldPrinc = ResOldPrinc[ResOldPrinc['ParAbb'].isin(manyPars)] \nsummaryStats = ParrAbbSummary[ParrAbbSummary['ParAbb'].isin(manyPars)]\n\nsummaryStats\n\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 15, 10\n\n\nparLabCounts = ParrAbbSummary.reset_index()\nparLabCounts = parLabCounts.set_index(['ParAbb'])\nplt.figure()\nboxres= ResOldPrinc[ResOldPrinc['ParAbb'].isin(['pH_lab','pH_field'])] \nboxres.boxplot(column='ResValue', by='ParAbb',vert=False)\n\nplt.title('Boxplot of Principal Aquifer pH')\nplt.yticks([1,2],['Field pH (n = %s)'%(parLabCounts.loc['pH_field','cnt']),'Lab pH (n = %s)'%(parLabCounts.loc['pH_lab','cnt'])])\nplt.xlim(6,9)\nplt.xticks(np.arange(6,9.25,0.25))\nplt.xlabel('pH')\nplt.savefig(rootname+\"pHBoxplot.svg\")\nplt.savefig(rootname+\"pHBoxplot.pdf\")\n\nplt.figure()\nboxres= ResOldPrinc[ResOldPrinc['ParAbb'].isin(['Temp'])] \nboxres.boxplot(column='ResValue', by='ParAbb',vert=False)\n\nplt.title('Boxplot of Principal Aquifer Temperature')\nplt.yticks([1],['Temperature (deg. C) (n = %s)'%(parLabCounts.loc['Temp','cnt'])])\nplt.xticks(np.arange(5,30,1))\nplt.xlabel('Temp. (deg. C)')\nplt.savefig(rootname+\"pHBoxplot.pdf\")\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nsns.set(style=\"whitegrid\")\nrcParams['figure.figsize'] = 15, 20\n\n\nparLabCounts = ParrAbbSummary.reset_index()\nparLabCounts = parLabCounts.set_index(['ParAbb'])\n\nparlist = ['Mg','Ca','Na','Cl','SO4','HCO3','Si','K','NO3','TDS','N']\nboxres = ResOldPrinc[ResOldPrinc['ParAbb'].isin(parlist)] \nplt.figure()\nsns.violinplot(x=\"ResValue\", y='ParAbb', data=boxres, palette=\"Set3\", scale='width', cut=0)\nplt.xlabel('mg/L')\nplt.xlim(0,1200)\nplt.ylabel('Chemical Constituent')\nplt.savefig(rootname+'violinMajor.pdf')\n\nparLabCounts = ParrAbbSummary.reset_index()\nparLabCounts = parLabCounts.set_index(['ParAbb'])\n\n\ndef parboxplot(parlist):\n plt.figure()\n boxres= ResOldPrinc[ResOldPrinc['ParAbb'].isin(parlist)] \n boxres.boxplot(column='ResValue', by='ParAbb',vert=False)\n #labs = [str(parlist[i]) + \" (n= %s)\"%(parLabCounts.loc[parlist[i],'cnt']) for i in range(len(parlist))]\n #tickloc = [b+1 for b in range(len(parlist))]\n #plt.yticks(tickloc,labs)\n \n\nparlist = ['pH_lab','pH_field']\nparboxplot(parlist)\nplt.xlabel('pH')\nplt.savefig(rootname+'pHBoxplot')\n\nparlist = ['Mg','Ca','Na','Cl','SO4','HCO3','Si','K','NO3','TDS','N']\nparboxplot(parlist)\nplt.title('Major Ions')\nplt.xlabel('mg/L')\nplt.grid(which='both',axis='both')\nplt.xscale('log')\nplt.xlim(0.1,1000)\n\nplt.savefig(rootname+'MajorIonsBoxplot.pdf')\n\n#plt.xlim(0.00001,1000)\n#plt.xscale('log')",
"Export data",
"pipergrps.to_csv(rootname+'avgpiper.csv',index_label='StationId')\npivdata.to_csv(rootname+'pivotdata.csv',index_label='OBJECTID')\nprincpiv.to_csv(rootname+'PrincAquiferData.csv',index_label='OBJECTID')\npivgrp.to_csv(rootname+'pivgrps.csv',index_label='StationId')\n\nNitrate.to_csv(rootname+'NitrateResults.csv')\nNitrateStat.to_csv(rootname+'NitrateStations.csv')\n\nsummaryStats.to_csv(rootname+'PrincAquifStats.csv')\n\nsummaryStats.to_clipboard()\n\nGWStat.to_csv(rootname+'GWStations.csv',index_label='ObjectID')\nGWRes.to_csv(rootname+'GWResults.csv',index_label='ObjectID')\n\nwriter = pd.ExcelWriter(rootname + \"combined_out.xlsx\", engine=\"xlsxwriter\")\nStat.to_excel(writer, \"stations\", index=False)\nresults.to_excel(writer, \"results\", index=False)\nGWStat.to_excel(writer, 'GWStations',index=False)\nGWRes.to_excel(writer, 'GWResults',index=False)\npipergrps.to_excel(writer,'avgpiper')\npivdata.to_excel(writer,'pivotdata')\nsummaryStats.to_excel(writer,'princaquifstats')\nwriter.save()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Danghor/Algorithms
|
Python/Chapter-04/Radix-Sort.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)",
"Radix Sort\nAs <em style=\"color:blue\">radix sort</em> is based on <em style=\"color:blue\">counting sort</em>, we have to start our implementation of radix sort by defining the function countingSort that we have already discussed previously. The easiest way to do this is by using the %run magic that Juypter notebooks provide.",
"%run Counting-Sort.ipynb",
"The function $\\texttt{extractByte}(n, k)$ takes a natural number $n < 2^{32}$ and a number $k\\in {1,2,3,4}$ as arguments. It returns the $k$-th byte of $n$.",
"def extractByte(n, k):\n return n >> (8 * (k-1)) & 0b1111_1111\n\nn = 123456789\nB = [extractByte(n, k) for k in [1, 2, 3, 4]]\nprint(B)\nassert n == sum([B[k] * 256 ** k for k in [0, 1, 2, 3]])",
"The function $\\texttt{radixSort}(L)$ sorts a list $L$ of unsigned 32 bit integers and returns the sorted list.\nThe idea is to sort these numbers by first sorting them with respect to their last byte, then to sort the list with respect to the second byte, then with respect to the third byte, and finally with respect to the most important byte.\nThese four sorts are done using <em style=\"color:blue\">counting sort</em>.\nThe fact that <em style=\"color:blue\">counting sort</em> is <em style=\"color:blue\">stable</em> guarantees that when we sort with respect to the second byte, numbers that have the same second byte will still be sorted with respect to the first byte.",
"def radixSort(L):\n L = [(n, 0) for n in L]\n for k in range(1, 4+1):\n L = [(n, extractByte(n, k)) for (n, _) in L]\n L = countingSort(L)\n return [n for (n, _) in L]",
"Testing",
"import random as rnd\n\ndef demo():\n L = [ rnd.randrange(1, 1000) for n in range(1, 16) ]\n print(\"L = \", L)\n S = radixSort(L)\n print(\"S = \", S)\n\ndemo()\n\ndef isOrdered(L):\n for i in range(len(L) - 1):\n assert L[i] <= L[i+1], f'L = {L}, i = {i}' \n\nfrom collections import Counter\n\ndef sameElements(L, S):\n assert Counter(L) == Counter(S)",
"The function $\\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.",
"def testSort(n, k):\n for i in range(n):\n L = [ rnd.randrange(2**31) for x in range(k) ]\n oldL = L[:]\n L = radixSort(L)\n isOrdered(L)\n sameElements(oldL, L)\n print('.', end='')\n print()\n print(\"All tests successful!\")\n\n%%time\ntestSort(100, 20000)\n\n%%timeit\nk = 1_000_000\nL = [ rnd.randrange(2*k) for x in range(k) ]\nS = radixSort(L)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
a378ec99/bcn
|
bcn/examples/Example.ipynb
|
mit
|
[
"from __future__ import division, absolute_import\n\nimport numpy as np\n\nfrom bcn.bias import guess_func\nfrom bcn.data import DataSimulated, estimate_partial_signal_characterists\nfrom bcn.cost import Cost\nfrom bcn.solvers import ConjugateGradientSolver\nfrom bcn.linear_operators import LinearOperatorCustom, LinearOperatorEntry, possible_measurements\nfrom bcn.utils.visualization import *\n\nnp.random.seed(seed=42)",
"Case study: Data normalization for unknown confounding factors!\nOutline\n\nDescription of common use cases.\nCreation of two corrupted datasets (A and B).\nApplication of matrix recovery.\nDataset A with entry sensing. \nDataset B with blind normalization.\n\n\nDiscussion of results and advantages.\n\n1. Use cases\nYou have to work with two datasets that are corrupted by an unknown number of confounding factors. Both datasets consist of a matrix of samples and features. For example, customers and products with a rating of satisfaction, or location and time of temperature measurements across the globe, or the change in value of stocks at closing time at the financial markets. Thus, values in the matrix are continuous and can range from negative to positive.\nLuckily, for dataset A you were able to determine the true values for a small subset of matrix entries, for example through quantitative measurement standards or time intensive in-depth analysis. Thus, the recovery of confounding factors is similar to a matrix recovery problem solvable through entry sensing, as the observed values subtracted by the true values give the necessary entries for the bias matrix of confounding factors to be recovered. \nFor dataset B it is more challenging, as you were not able to determine any true values for its entries. However, instead you know with certainty that several of the samples and several of the features are strongly correlated and that you are likely to be able to identify those, as the corruption through the confounding factors is not stronger than the underlying signal. Thus the problem can be approached by blind normalization.\nIn order to remove the unknown confounding factors several assumptions have to be satisfied for dataset A and B. First of all, the to be recovered bias matrix must lie on a sufficiently low dimensional manifold, such as one modelled by a low rank matrix. Secondly, the dataset must satisfy certain incoherence requirements. If both assumptions are satisfied, the otherwise NP-HARD recovery problem can be solved efficiently in the framework of compressed sensing. \n2. Creation of dataset A and B",
"# Setup of general parameters for the recovery experiment.\nn_restarts = 10\nrank = 6\nn_measurements = 2800\nshape = (50, 70) # samples, features\nmissing_fraction = 0.1\nnoise_amplitude = 2.0\nm_blocks_size = 5 # size of each block\ncorrelation_threshold = 0.75\ncorrelation_strength = 1.0\nbias_model = 'image'\n\n# Creation of the true signal for both datasets.\ntruth = DataSimulated(shape, rank, bias_model=bias_model, correlation_threshold=correlation_threshold, m_blocks_size=m_blocks_size, noise_amplitude=noise_amplitude, correlation_strength=correlation_strength, missing_fraction=missing_fraction)\n\ntrue_bias = truth.d['sample']['true_bias']\ntrue_bias_unshuffled = truth.d['sample']['true_bias_unshuffled']\ntrue_signal = truth.d['sample']['signal']\ntrue_signal_unshuffled = truth.d['sample']['signal_unshuffled']\n\ntrue_correlations = {'sample': truth.d['sample']['true_correlations'], 'feature': truth.d['feature']['true_correlations']}\ntrue_correlations_unshuffled = {'sample': truth.d['sample']['true_correlations_unshuffled'], 'feature': truth.d['feature']['true_correlations_unshuffled']}\ntrue_pairs = {'sample': truth.d['sample']['true_pairs'], 'feature': truth.d['feature']['true_pairs']}\ntrue_pairs_unshuffled = {'sample': truth.d['sample']['true_pairs_unshuffled'], 'feature': truth.d['feature']['true_pairs_unshuffled']}\ntrue_directions = {'sample': truth.d['sample']['true_directions'], 'feature': truth.d['feature']['true_directions']}\ntrue_stds = {'sample': truth.d['sample']['true_stds'], 'feature': truth.d['feature']['true_stds']}\n\n# Creation of the corrupted signal for both datasets.\n\nmixed = truth.d['sample']['mixed']\n\nshow_absolute(true_signal_unshuffled, kind='Signal', unshuffled=True, map_backward=truth.map_backward)\n\nshow_absolute(true_signal, kind='Signal')\n\nshow_dependence_structure(true_correlations, 'sample')\n\nshow_dependence_structure(true_correlations_unshuffled, 'sample', unshuffled=True, map_backward=truth.map_backward)\n\nshow_dependence_structure(true_correlations, 'feature')\n\nshow_dependence_structure(true_correlations_unshuffled, 'feature', unshuffled=True, map_backward=truth.map_backward)\n\nshow_dependences(true_signal, true_pairs, 'sample')\n\nshow_dependences(true_signal, true_pairs, 'feature')\n\nshow_independences(true_signal, true_pairs, 'sample')\n\nshow_independences(true_signal, true_pairs, 'feature')\n\nshow_absolute(true_bias_unshuffled, unshuffled=True, map_backward=truth.map_backward_bias, kind='Bias', vmin=-1.5, vmax=1.5)\n\nshow_absolute(true_bias, kind='Bias', vmin=-1.5, vmax=1.5)\n\n# Here the white dots are missing values as common in real data.\nshow_absolute(mixed, kind='Mixed')\n\nshow_dependences(mixed, true_pairs, 'sample')\n\nshow_dependences(mixed, true_pairs, 'feature')",
"3. Normalization of dataset A with entry sensing",
"# Construct measurements from known entries.\noperator = LinearOperatorEntry(n_measurements)\nmeasurements = operator.generate(true_bias)\n\n# Construct cost function.\ncost = Cost(measurements['A'], measurements['y'], sparsity=1)\n\n# Recover the bias.\nsolver = ConjugateGradientSolver(mixed, cost.cost_func, guess_func, rank, guess_noise_amplitude=noise_amplitude, verbosity=0)\nresults = solver.recover()\n\n# Recovery performance statistics.\nrecovery_performance(mixed, cost.cost_func, truth.d['sample']['true_bias'], results['estimated_signal'], truth.d['sample']['signal'], results['estimated_bias'])\n\nshow_absolute(results['estimated_bias'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-1.5, vmax=1.5)\n\nshow_absolute(results['guess_X'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-0.1, vmax=0.1)",
"3. Normalization of dataset B with blind normalization",
"possible_measurements(shape, missing_fraction, m_blocks_size=m_blocks_size)\n\n# Prior information estimated from the corrputed signal and used for blind recovery.\nsignal_characterists = estimate_partial_signal_characterists(mixed, correlation_threshold, true_pairs=true_pairs, true_directions=true_directions, true_stds=true_stds)\n\nestimated_correlations = {'sample': signal_characterists['sample']['estimated_correlations'], 'feature': signal_characterists['feature']['estimated_correlations']}\n\nshow_threshold(estimated_correlations, correlation_threshold, 'sample')\n\nshow_threshold(estimated_correlations, correlation_threshold, 'feature')\n\n# Construct measurements from corrupted signal and its estimated partial characteristics.\noperator = LinearOperatorCustom(n_measurements)\nmeasurements = operator.generate(signal_characterists)\n\n# Construct cost function.\ncost = Cost(measurements['A'], measurements['y'], sparsity=2)\n\n# Recover the bias.\nsolver = ConjugateGradientSolver(mixed, cost.cost_func, guess_func, rank, guess_noise_amplitude=noise_amplitude, verbosity=0)\nresults = solver.recover()\n\nrecovery_performance(mixed, cost.cost_func, truth.d['sample']['true_bias'], results['estimated_signal'], truth.d['sample']['signal'], results['estimated_bias'])\n\nshow_absolute(results['estimated_bias'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-1.5, vmax=1.5)\n\nshow_absolute(results['guess_X'], kind='Bias', unshuffle=True, map_backward=truth.map_backward_bias, vmin=-0.1, vmax=0.1)\n\n# Here red dots are the corrupted signal, green the clean signal, and blue the recovered signal.\n# Note: missing blue dots indicate missing values.\nshow_recovery(mixed, results['guess_X'], true_signal, results['estimated_signal'], true_pairs['sample'], signal_characterists['sample']['estimated_pairs'], true_stds['sample'], signal_characterists['sample']['estimated_stds'], true_directions['sample'], signal_characterists['sample']['estimated_directions'], n_pairs=5, n_points=50)",
"4. Discussion\nIt can be observed that both recovery approaches are effective in a setting where the underling signal contains strong correlations (which can be accurately estimated) or known values are avaible for some of the bias matrix entries. Notably, the signal is not modelled explicitly but the systematics bias affecting it is modelled instead, allowing for complex non-linear signals to be normalized.\nThe corruptions is modelled here as low dimensional manifold, specifically a low rank matrix, which is a flexible model that can effectively approximate any combination of confounding factors. The systematic bias recovered with the two approaches can be further analysed to understand what confounding factors are important contributors to the corruption affecting the signal.\nThe more inaccurate the estimates of correlations and standard deviations become, the more challenging becomes the recovery for the blind normalization approach. However, incorrectly estimated correlated pairs of features (or samples) are less important for solver convergence. It remains to be determined to what extend the estimated standard deviations and directions must correspond the a sub-gaussian distribution in practice, for recovery to be successfull."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cattoire/sparksamples
|
streaming-twitter/notebook/Twitter + Watson Tone Analyzer Part 2.ipynb
|
apache-2.0
|
[
"Twitter + Watson Tone Analyzer Sample Notebook\nIn this sample notebook, we show how to load and analyze data from the Twitter + Watson Tone Analyzer Spark sample application (code can be found here https://github.com/ibm-cds-labs/spark.samples/tree/master/streaming-twitter). The tweets data has been enriched with scores from various Sentiment Tone (e.g Anger, Cheerfulness, etc...).",
"# Import SQLContext and data types\nfrom pyspark.sql import SQLContext\nfrom pyspark.sql.types import *\n\n# sc is an existing SparkContext.\nsqlContext = SQLContext(sc)",
"Load the data\nIn this section, we load the data from a parquet file that has been saved from a scala notebook (see tutorial here...) and create a SparkSQL DataFrame that contains all the data.",
"parquetFile = sqlContext.read.parquet(\"swift://notebooks.spark/tweetsFull.parquet\")\nprint parquetFile\n\nparquetFile.registerTempTable(\"tweets\");\nsqlContext.cacheTable(\"tweets\")\ntweets = sqlContext.sql(\"SELECT * FROM tweets\")\nprint tweets.count()\ntweets.cache()",
"Compute the distribution of tweets by sentiments > 60%\nIn this section, we demonstrate how to use SparkSQL queries to compute for each tone that number of tweets that are greater than 60%",
"#create an array that will hold the count for each sentiment\nsentimentDistribution=[0] * 9\n#For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60%\n#Store the data in the array\nfor i, sentiment in enumerate(tweets.columns[-9:]):\n sentimentDistribution[i]=sqlContext.sql(\"SELECT count(*) as sentCount FROM tweets where \" + sentiment + \" > 60\")\\\n .collect()[0].sentCount\n\n%matplotlib inline\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nind=np.arange(9)\nwidth = 0.35\nbar = plt.bar(ind, sentimentDistribution, width, color='g', label = \"distributions\")\n\nparams = plt.gcf()\nplSize = params.get_size_inches()\nparams.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )\nplt.ylabel('Tweet count')\nplt.xlabel('Tone')\nplt.title('Distribution of tweets by sentiments > 60%')\nplt.xticks(ind+width, tweets.columns[-9:])\nplt.legend()\n\nplt.show()\n\nfrom operator import add\nimport re\ntagsRDD = tweets.flatMap( lambda t: re.split(\"\\s\", t.text))\\\n .filter( lambda word: word.startswith(\"#\") )\\\n .map( lambda word : (word, 1 ))\\\n .reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a))\ntop10tags = tagsRDD.take(10)\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nparams = plt.gcf()\nplSize = params.get_size_inches()\nparams.set_size_inches( (plSize[0]*2, plSize[1]*2) )\n\nlabels = [i[0] for i in top10tags]\nsizes = [int(i[1]) for i in top10tags]\ncolors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', \"beige\", \"paleturquoise\", \"pink\", \"lightyellow\", \"coral\"]\n\nplt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90)\n\nplt.axis('equal')\n\nplt.show()",
"Breakdown of the top 5 hashtags by sentiment scores\nIn this section, we demonstrate how to build a more complex analytic which decompose the top 5 hashtags by sentiment scores. The code below computes the mean of all the sentiment scores and visualize them in a multi-series bar chart",
"cols = tweets.columns[-9:]\ndef expand( t ):\n ret = []\n for s in [i[0] for i in top10tags]:\n if ( s in t.text ):\n for tone in cols:\n ret += [s + u\"-\" + unicode(tone) + \":\" + unicode(getattr(t, tone))]\n return ret \ndef makeList(l):\n return l if isinstance(l, list) else [l]\n\n#Create RDD from tweets dataframe\ntagsRDD = tweets.map(lambda t: t )\n\n#Filter to only keep the entries that are in top10tags\ntagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) )\n\n#Create a flatMap using the expand function defined above, this will be used to collect all the scores \n#for a particular tag with the following format: Tag-Tone-ToneScore\ntagsRDD = tagsRDD.flatMap( expand )\n\n#Create a map indexed by Tag-Tone keys \ntagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(\":\")[0], float( fullTag.split(\":\")[1]) ))\n\n#Call combineByKey to format the data as follow\n#Key=Tag-Tone\n#Value=(count, sum_of_all_score_for_this_tone)\ntagsRDD = tagsRDD.combineByKey((lambda x: (x,1)),\n (lambda x, y: (x[0] + y, x[1] + 1)),\n (lambda x, y: (x[0] + y[0], x[1] + y[1])))\n\n#ReIndex the map to have the key be the Tag and value be (Tone, Average_score) tuple\n#Key=Tag\n#Value=(Tone, average_score)\ntagsRDD = tagsRDD.map(lambda (key, ab): (key.split(\"-\")[0], (key.split(\"-\")[1], round(ab[0]/ab[1], 2))))\n\n#Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples\ntagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) )\n\n#Sort the (Tone,average_score) tuples alphabetically by Tone\ntagsRDD = tagsRDD.mapValues( lambda x : sorted(x) )\n\n#Format the data as expected by the plotting code in the next cell. \n#map the Values to a tuple as follow: ([list of tone], [list of average score])\n#e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0])\ntagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) )\n\n#Use custom sort function to sort the entries by order of appearance in top10tags\ndef customCompare( key ):\n for (k,v) in top10tags:\n if k == key:\n return v\n return 0\ntagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare)\n\n#Take the mean tone scores for the top 10 tags\ntop10tagsMeanScores = tagsRDD.take(10)\n\n\n%matplotlib inline\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nparams = plt.gcf()\nplSize = params.get_size_inches()\nparams.set_size_inches( (plSize[0]*3, plSize[1]*2) )\n\ntop5tagsMeanScores = top10tagsMeanScores[:5]\nwidth = 0\nind=np.arange(9)\n(a,b) = top5tagsMeanScores[0]\nlabels=b[0]\ncolors = [\"beige\", \"paleturquoise\", \"pink\", \"lightyellow\", \"coral\", \"lightgreen\", \"gainsboro\", \"aquamarine\",\"c\"]\nidx=0\nfor key, value in top5tagsMeanScores:\n plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key)\n width += 0.15\n idx += 1\nplt.xticks(ind+0.3, labels)\nplt.ylabel('AVERAGE SCORE')\nplt.xlabel('TONES')\nplt.title('Breakdown of top hashtags by sentiment tones')\n\nplt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode=\"expand\", borderaxespad=0.)\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
uber/pyro
|
tutorial/source/gmm.ipynb
|
apache-2.0
|
[
"Gaussian Mixture Model\nThis is tutorial demonstrates how to marginalize out discrete latent variables in Pyro through the motivating example of a mixture model. We'll focus on the mechanics of parallel enumeration, keeping the model simple by training a trivial 1-D Gaussian model on a tiny 5-point dataset. See also the enumeration tutorial for a broader introduction to parallel enumeration.\nTable of contents\n\nOverview\nTraining a MAP estimator\nServing the model: predicting membership\nPredicting membership using discrete inference\nPredicting membership by enumerating in the guide\nMCMC",
"import os\nfrom collections import defaultdict\nimport torch\nimport numpy as np\nimport scipy.stats\nfrom torch.distributions import constraints\nfrom matplotlib import pyplot\n%matplotlib inline\n\nimport pyro\nimport pyro.distributions as dist\nfrom pyro import poutine\nfrom pyro.infer.autoguide import AutoDelta\nfrom pyro.optim import Adam\nfrom pyro.infer import SVI, TraceEnum_ELBO, config_enumerate, infer_discrete\n\nsmoke_test = ('CI' in os.environ)\nassert pyro.__version__.startswith('1.7.0')",
"Overview\nPyro's TraceEnum_ELBO can automatically marginalize out variables in both the guide and the model. When enumerating guide variables, Pyro can either enumerate sequentially (which is useful if the variables determine downstream control flow), or enumerate in parallel by allocating a new tensor dimension and using nonstandard evaluation to create a tensor of possible values at the variable's sample site. These nonstandard values are then replayed in the model. When enumerating variables in the model, the variables must be enumerated in parallel and must not appear in the guide. Mathematically, guide-side enumeration simply reduces variance in a stochastic ELBO by enumerating all values, whereas model-side enumeration avoids an application of Jensen's inequality by exactly marginalizing out a variable.\nHere is our tiny dataset. It has five points.",
"data = torch.tensor([0., 1., 10., 11., 12.])",
"Training a MAP estimator\nLet's start by learning model parameters weights, locs, and scale given priors and data. We will learn point estimates of these using an AutoDelta guide (named after its delta distributions). Our model will learn global mixture weights, the location of each mixture component, and a shared scale that is common to both components. During inference, TraceEnum_ELBO will marginalize out the assignments of datapoints to clusters.",
"K = 2 # Fixed number of components.\n\n@config_enumerate\ndef model(data):\n # Global variables.\n weights = pyro.sample('weights', dist.Dirichlet(0.5 * torch.ones(K)))\n scale = pyro.sample('scale', dist.LogNormal(0., 2.))\n with pyro.plate('components', K):\n locs = pyro.sample('locs', dist.Normal(0., 10.))\n\n with pyro.plate('data', len(data)):\n # Local variables.\n assignment = pyro.sample('assignment', dist.Categorical(weights))\n pyro.sample('obs', dist.Normal(locs[assignment], scale), obs=data)",
"To run inference with this (model,guide) pair, we use Pyro's config_enumerate() handler to enumerate over all assignments in each iteration. Since we've wrapped the batched Categorical assignments in a pyro.plate indepencence context, this enumeration can happen in parallel: we enumerate only 2 possibilites, rather than 2**len(data) = 32. Finally, to use the parallel version of enumeration, we inform Pyro that we're only using a single plate via max_plate_nesting=1; this lets Pyro know that we're using the rightmost dimension plate and that Pyro can use any other dimension for parallelization.",
"optim = pyro.optim.Adam({'lr': 0.1, 'betas': [0.8, 0.99]})\nelbo = TraceEnum_ELBO(max_plate_nesting=1)",
"Before inference we'll initialize to plausible values. Mixture models are very succeptible to local modes. A common approach is choose the best among many randomly initializations, where the cluster means are initialized from random subsamples of the data. Since we're using an AutoDelta guide, we can initialize by defining a custom init_loc_fn().",
"def init_loc_fn(site):\n if site[\"name\"] == \"weights\":\n # Initialize weights to uniform.\n return torch.ones(K) / K\n if site[\"name\"] == \"scale\":\n return (data.var() / 2).sqrt()\n if site[\"name\"] == \"locs\":\n return data[torch.multinomial(torch.ones(len(data)) / len(data), K)]\n raise ValueError(site[\"name\"])\n\ndef initialize(seed):\n global global_guide, svi\n pyro.set_rng_seed(seed)\n pyro.clear_param_store()\n global_guide = AutoDelta(poutine.block(model, expose=['weights', 'locs', 'scale']),\n init_loc_fn=init_loc_fn)\n svi = SVI(model, global_guide, optim, loss=elbo)\n return svi.loss(model, global_guide, data)\n\n# Choose the best among 100 random initializations.\nloss, seed = min((initialize(seed), seed) for seed in range(100))\ninitialize(seed)\nprint('seed = {}, initial_loss = {}'.format(seed, loss))",
"During training, we'll collect both losses and gradient norms to monitor convergence. We can do this using PyTorch's .register_hook() method.",
"# Register hooks to monitor gradient norms.\ngradient_norms = defaultdict(list)\nfor name, value in pyro.get_param_store().named_parameters():\n value.register_hook(lambda g, name=name: gradient_norms[name].append(g.norm().item()))\n\nlosses = []\nfor i in range(200 if not smoke_test else 2):\n loss = svi.step(data)\n losses.append(loss)\n print('.' if i % 100 else '\\n', end='')\n\npyplot.figure(figsize=(10,3), dpi=100).set_facecolor('white')\npyplot.plot(losses)\npyplot.xlabel('iters')\npyplot.ylabel('loss')\npyplot.yscale('log')\npyplot.title('Convergence of SVI');\n\npyplot.figure(figsize=(10,4), dpi=100).set_facecolor('white')\nfor name, grad_norms in gradient_norms.items():\n pyplot.plot(grad_norms, label=name)\npyplot.xlabel('iters')\npyplot.ylabel('gradient norm')\npyplot.yscale('log')\npyplot.legend(loc='best')\npyplot.title('Gradient norms during SVI');",
"Here are the learned parameters:",
"map_estimates = global_guide(data)\nweights = map_estimates['weights']\nlocs = map_estimates['locs']\nscale = map_estimates['scale']\nprint('weights = {}'.format(weights.data.numpy()))\nprint('locs = {}'.format(locs.data.numpy()))\nprint('scale = {}'.format(scale.data.numpy()))",
"The model's weights are as expected, with about 2/5 of the data in the first component and 3/5 in the second component. Next let's visualize the mixture model.",
"X = np.arange(-3,15,0.1)\nY1 = weights[0].item() * scipy.stats.norm.pdf((X - locs[0].item()) / scale.item())\nY2 = weights[1].item() * scipy.stats.norm.pdf((X - locs[1].item()) / scale.item())\n\npyplot.figure(figsize=(10, 4), dpi=100).set_facecolor('white')\npyplot.plot(X, Y1, 'r-')\npyplot.plot(X, Y2, 'b-')\npyplot.plot(X, Y1 + Y2, 'k--')\npyplot.plot(data.data.numpy(), np.zeros(len(data)), 'k*')\npyplot.title('Density of two-component mixture model')\npyplot.ylabel('probability density');",
"Finally note that optimization with mixture models is non-convex and can often get stuck in local optima. For example in this tutorial, we observed that the mixture model gets stuck in an everthing-in-one-cluster hypothesis if scale is initialized to be too large.\nServing the model: predicting membership\nNow that we've trained a mixture model, we might want to use the model as a classifier. \nDuring training we marginalized out the assignment variables in the model. While this provides fast convergence, it prevents us from reading the cluster assignments from the guide. We'll discuss two options for treating the model as a classifier: first using infer_discrete (much faster) and second by training a secondary guide using enumeration inside SVI (slower but more general).\nPredicting membership using discrete inference\nThe fastest way to predict membership is to use the infer_discrete handler, together with trace and replay. Let's start out with a MAP classifier, setting infer_discrete's temperature parameter to zero. For a deeper look at effect handlers like trace, replay, and infer_discrete, see the effect handler tutorial.",
"guide_trace = poutine.trace(global_guide).get_trace(data) # record the globals\ntrained_model = poutine.replay(model, trace=guide_trace) # replay the globals\n \ndef classifier(data, temperature=0):\n inferred_model = infer_discrete(trained_model, temperature=temperature,\n first_available_dim=-2) # avoid conflict with data plate\n trace = poutine.trace(inferred_model).get_trace(data)\n return trace.nodes[\"assignment\"][\"value\"]\n\nprint(classifier(data))",
"Indeed we can run this classifer on new data",
"new_data = torch.arange(-3, 15, 0.1)\nassignment = classifier(new_data)\npyplot.figure(figsize=(8, 2), dpi=100).set_facecolor('white')\npyplot.plot(new_data.numpy(), assignment.numpy())\npyplot.title('MAP assignment')\npyplot.xlabel('data value')\npyplot.ylabel('class assignment');",
"To generate random posterior assignments rather than MAP assignments, we could set temperature=1.",
"print(classifier(data, temperature=1))",
"Since the classes are very well separated, we zoom in to the boundary between classes, around 5.75.",
"new_data = torch.arange(5.5, 6.0, 0.005)\nassignment = classifier(new_data, temperature=1)\npyplot.figure(figsize=(8, 2), dpi=100).set_facecolor('white')\npyplot.plot(new_data.numpy(), assignment.numpy(), 'bx', color='C0')\npyplot.title('Random posterior assignment')\npyplot.xlabel('data value')\npyplot.ylabel('class assignment');",
"Predicting membership by enumerating in the guide\nA second way to predict class membership is to enumerate in the guide. This doesn't work well for serving classifier models, since we need to run stochastic optimization for each new input data batch, but it is more general in that it can be embedded in larger variational models.\nTo read cluster assignments from the guide, we'll define a new full_guide that fits both global parameters (as above) and local parameters (which were previously marginalized out). Since we've already learned good values for the global variables, we will block SVI from updating those by using poutine.block.",
"@config_enumerate\ndef full_guide(data):\n # Global variables.\n with poutine.block(hide_types=[\"param\"]): # Keep our learned values of global parameters.\n global_guide(data)\n\n # Local variables.\n with pyro.plate('data', len(data)):\n assignment_probs = pyro.param('assignment_probs', torch.ones(len(data), K) / K,\n constraint=constraints.unit_interval)\n pyro.sample('assignment', dist.Categorical(assignment_probs))\n\noptim = pyro.optim.Adam({'lr': 0.2, 'betas': [0.8, 0.99]})\nelbo = TraceEnum_ELBO(max_plate_nesting=1)\nsvi = SVI(model, full_guide, optim, loss=elbo)\n\n# Register hooks to monitor gradient norms.\ngradient_norms = defaultdict(list)\nsvi.loss(model, full_guide, data) # Initializes param store.\nfor name, value in pyro.get_param_store().named_parameters():\n value.register_hook(lambda g, name=name: gradient_norms[name].append(g.norm().item()))\n\nlosses = []\nfor i in range(200 if not smoke_test else 2):\n loss = svi.step(data)\n losses.append(loss)\n print('.' if i % 100 else '\\n', end='')\n\npyplot.figure(figsize=(10,3), dpi=100).set_facecolor('white')\npyplot.plot(losses)\npyplot.xlabel('iters')\npyplot.ylabel('loss')\npyplot.yscale('log')\npyplot.title('Convergence of SVI');\n\npyplot.figure(figsize=(10,4), dpi=100).set_facecolor('white')\nfor name, grad_norms in gradient_norms.items():\n pyplot.plot(grad_norms, label=name)\npyplot.xlabel('iters')\npyplot.ylabel('gradient norm')\npyplot.yscale('log')\npyplot.legend(loc='best')\npyplot.title('Gradient norms during SVI');",
"We can now examine the guide's local assignment_probs variable.",
"assignment_probs = pyro.param('assignment_probs')\npyplot.figure(figsize=(8, 3), dpi=100).set_facecolor('white')\npyplot.plot(data.data.numpy(), assignment_probs.data.numpy()[:, 0], 'ro',\n label='component with mean {:0.2g}'.format(locs[0]))\npyplot.plot(data.data.numpy(), assignment_probs.data.numpy()[:, 1], 'bo',\n label='component with mean {:0.2g}'.format(locs[1]))\npyplot.title('Mixture assignment probabilities')\npyplot.xlabel('data value')\npyplot.ylabel('assignment probability')\npyplot.legend(loc='center');",
"MCMC\nNext we'll explore the full posterior over component parameters using collapsed NUTS, i.e. we'll use NUTS and marginalize out all discrete latent variables.",
"from pyro.infer.mcmc.api import MCMC\nfrom pyro.infer.mcmc import NUTS\npyro.set_rng_seed(2)\nkernel = NUTS(model)\nmcmc = MCMC(kernel, num_samples=250, warmup_steps=50)\nmcmc.run(data)\nposterior_samples = mcmc.get_samples()\n\nX, Y = posterior_samples[\"locs\"].t()\n\npyplot.figure(figsize=(8, 8), dpi=100).set_facecolor('white')\nh, xs, ys, image = pyplot.hist2d(X.numpy(), Y.numpy(), bins=[20, 20])\npyplot.contour(np.log(h + 3).T, extent=[xs.min(), xs.max(), ys.min(), ys.max()],\n colors='white', alpha=0.8)\npyplot.title('Posterior density as estimated by collapsed NUTS')\npyplot.xlabel('loc of component 0')\npyplot.ylabel('loc of component 1')\npyplot.tight_layout()",
"Note that due to nonidentifiability of the mixture components the likelihood landscape has two equally likely modes, near (11,0.5) and (0.5,11). NUTS has difficulty switching between the two modes.",
"pyplot.figure(figsize=(8, 3), dpi=100).set_facecolor('white')\npyplot.plot(X.numpy(), color='red')\npyplot.plot(Y.numpy(), color='blue')\npyplot.xlabel('NUTS step')\npyplot.ylabel('loc')\npyplot.title('Trace plot of loc parameter during NUTS inference')\npyplot.tight_layout()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nonotone79/investigativ
|
13 lösungen/03 Python II Homework.ipynb
|
mit
|
[
"Python II Hausaufgaben\n1.Baue eine Funktion mit dem Namen 'double', der die Zahl 5 vedoppelt:",
"def double(number):\n bigger = number * 2\n return bigger\n\ndouble(5)",
"2.Baue einen for-loop, der durch vordefinierte Zahlen-list geht, und mithilfe der eben kreierten eigenen Funktion, alle Resultate verdoppelt ausdruckt.",
"lst = list(range(1,5))\n\nfor n in lst:\n print(double(n))",
"3.Entwickle einen Code, der den Nutzer nach der Länge seinem Namen fragt, und ihm dann sagt, wieviele Zeichen sein Name hat.",
"elem = input('Wie heisst Du?')\nlength = len(elem)\nprint('Hallo '+ elem+ ','+ ' Dein Name hat '+ str(length)+ ' Zeichen.')",
"4.Entwickle eine Funktion mit dem Namen km_rechner, der für die untenaufgeführten automatisch die Umrechung von Meilen in km durchführt und gerundet auf eine Kommastelle anzeigt.",
"def km_rechner(m):\n m = m * 1.60934\n return round(m, 3)\n\nkm_rechner(5)\nkm_rechner(123)\nkm_rechner(53)",
"5.Wir haben in einem Dictionary mit Massen, die mit ganz unterschiedlichen Formaten daherkommen. Entwickle eine Funktion namens, die diese Formate berücksichtigt, und in Meter umwandelt.",
"#Unsere Formate\nvar_first = { 'measurement': 3.4, 'scale': 'kilometer' }\nvar_second = { 'measurement': 9.1, 'scale': 'mile' }\nvar_third = { 'measurement': 2.0, 'scale': 'meter' }\nvar_fourth = { 'measurement': 9.0, 'scale': 'inches' }\n\ndef m_converter(measurement):\n if measurement['scale'] == 'kilometer':\n return measurement['measurement'] * 1000\n if measurement['scale'] == 'mile':\n return measurement['measurement'] * 1600\n if measurement['scale'] == 'inches':\n return measurement['measurement'] * 0.0254\n else:\n return measurement['measurement']\n\nprint(m_converter(var_first))\nprint(m_converter(var_second))\nprint(m_converter(var_third))\nprint(m_converter(var_fourth))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jseabold/statsmodels
|
examples/notebooks/deterministics.ipynb
|
bsd-3-clause
|
[
"Deterministic Terms in Time Series Models",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nplt.rc(\"figure\", figsize=(16, 9))\nplt.rc(\"font\", size=16)",
"Basic Use\nBasic configurations can be directly constructed through DeterministicProcess. These can include a constant, a time trend of any order, and either a seasonal or a Fourier component.\nThe process requires an index, which is the index of the full-sample (or in-sample).\nFirst, we initialize a deterministic process with a constant, a linear time trend, and a 5-period seasonal term. The in_sample method returns the full set of values that match the index.",
"from statsmodels.tsa.deterministic import DeterministicProcess\n\nindex = pd.RangeIndex(0, 100)\ndet_proc = DeterministicProcess(\n index, constant=True, order=1, seasonal=True, period=5\n)\ndet_proc.in_sample()",
"The out_of_sample returns the next steps values after the end of the in-sample.",
"det_proc.out_of_sample(15)",
"range(start, stop) can also be used to produce the deterministic terms over any range including in- and out-of-sample.\nNotes\n\nWhen the index is a pandas DatetimeIndex or a PeriodIndex, then start and stop can be date-like (strings, e.g., \"2020-06-01\", or Timestamp) or integers.\nstop is always included in the range. While this is not very Pythonic, it is needed since both statsmodels and Pandas include stop when working with date-like slices.",
"det_proc.range(190, 210)",
"Using a Date-like Index\nNext, we show the same steps using a PeriodIndex.",
"index = pd.period_range(\"2020-03-01\", freq=\"M\", periods=60)\ndet_proc = DeterministicProcess(index, constant=True, fourier=2)\ndet_proc.in_sample().head(12)\n\ndet_proc.out_of_sample(12)",
"range accepts date-like arguments, which are usually given as strings.",
"det_proc.range(\"2025-01\", \"2026-01\")",
"This is equivalent to using the integer values 58 and 70.",
"det_proc.range(58, 70)",
"Advanced Construction\nDeterministic processes with features not supported directly through the constructor can be created using additional_terms which accepts a list of DetermisticTerm. Here we create a deterministic process with two seasonal components: day-of-week with a 5 day period and an annual captured through a Fourier component with a period of 365.25 days.",
"from statsmodels.tsa.deterministic import Fourier, Seasonality, TimeTrend\n\nindex = pd.period_range(\"2020-03-01\", freq=\"D\", periods=2 * 365)\ntt = TimeTrend(constant=True)\nfour = Fourier(period=365.25, order=2)\nseas = Seasonality(period=7)\ndet_proc = DeterministicProcess(index, additional_terms=[tt, seas, four])\ndet_proc.in_sample().head(28)",
"Custom Deterministic Terms\nThe DetermisticTerm Abstract Base Class is designed to be subclassed to help users write custom deterministic terms. We next show two examples. The first is a broken time trend that allows a break after a fixed number of periods. The second is a \"trick\" deterministic term that allows exogenous data, which is not really a deterministic process, to be treated as if was deterministic. This lets use simplify gathering the terms needed for forecasting.\nThese are intended to demonstrate the construction of custom terms. They can definitely be improved in terms of input validation.",
"from statsmodels.tsa.deterministic import DeterministicTerm\n\n\nclass BrokenTimeTrend(DeterministicTerm):\n def __init__(self, break_period: int):\n self._break_period = break_period\n\n def __str__(self):\n return \"Broken Time Trend\"\n\n def _eq_attr(self):\n return (self._break_period,)\n\n def in_sample(self, index: pd.Index):\n nobs = index.shape[0]\n terms = np.zeros((nobs, 2))\n terms[self._break_period :, 0] = 1\n terms[self._break_period :, 1] = np.arange(\n self._break_period + 1, nobs + 1\n )\n return pd.DataFrame(\n terms, columns=[\"const_break\", \"trend_break\"], index=index\n )\n\n def out_of_sample(\n self, steps: int, index: pd.Index, forecast_index: pd.Index = None\n ):\n # Always call extend index first\n fcast_index = self._extend_index(index, steps, forecast_index)\n nobs = index.shape[0]\n terms = np.zeros((steps, 2))\n # Assume break period is in-sample\n terms[:, 0] = 1\n terms[:, 1] = np.arange(nobs + 1, nobs + steps + 1)\n return pd.DataFrame(\n terms, columns=[\"const_break\", \"trend_break\"], index=fcast_index\n )\n\nbtt = BrokenTimeTrend(60)\ntt = TimeTrend(constant=True, order=1)\nindex = pd.RangeIndex(100)\ndet_proc = DeterministicProcess(index, additional_terms=[tt, btt])\ndet_proc.range(55, 65)",
"Next, we write a simple \"wrapper\" for some actual exogenous data that simplifies constructing out-of-sample exogenous arrays for forecasting.",
"class ExogenousProcess(DeterministicTerm):\n def __init__(self, data):\n self._data = data\n\n def __str__(self):\n return \"Custom Exog Process\"\n\n def _eq_attr(self):\n return (id(self._data),)\n\n def in_sample(self, index: pd.Index):\n return self._data.loc[index]\n\n def out_of_sample(\n self, steps: int, index: pd.Index, forecast_index: pd.Index = None\n ):\n forecast_index = self._extend_index(index, steps, forecast_index)\n return self._data.loc[forecast_index]\n\nimport numpy as np\n\ngen = np.random.default_rng(98765432101234567890)\nexog = pd.DataFrame(\n gen.integers(100, size=(300, 2)), columns=[\"exog1\", \"exog2\"]\n)\nexog.head()\n\nep = ExogenousProcess(exog)\ntt = TimeTrend(constant=True, order=1)\n# The in-sample index\nidx = exog.index[:200]\ndet_proc = DeterministicProcess(idx, additional_terms=[tt, ep])\n\ndet_proc.in_sample().head()\n\ndet_proc.out_of_sample(10)",
"Model Support\nThe only model that directly supports DeterministicProcess is AutoReg. A custom term can be set using the deterministic keyword argument. \nNote: Using a custom term requires that trend=\"n\" and seasonal=False so that all deterministic components must come from the custom deterministic term.\nSimulate Some Data\nHere we simulate some data that has an weekly seasonality captured by a Fourier series.",
"gen = np.random.default_rng(98765432101234567890)\nidx = pd.RangeIndex(200)\ndet_proc = DeterministicProcess(idx, constant=True, period=52, fourier=2)\ndet_terms = det_proc.in_sample().to_numpy()\nparams = np.array([1.0, 3, -1, 4, -2])\nexog = det_terms @ params\ny = np.empty(200)\ny[0] = det_terms[0] @ params + gen.standard_normal()\nfor i in range(1, 200):\n y[i] = 0.9 * y[i - 1] + det_terms[i] @ params + gen.standard_normal()\ny = pd.Series(y, index=idx)\nax = y.plot()",
"The model is then fit using the deterministic keyword argument. seasonal defaults to False but trend defaults to \"c\" so this needs to be changed.",
"from statsmodels.tsa.api import AutoReg\n\nmod = AutoReg(y, 1, trend=\"n\", deterministic=det_proc)\nres = mod.fit()\nprint(res.summary())",
"We can use the plot_predict to show the predicted values and their prediction interval. The out-of-sample deterministic values are automatically produced by the deterministic process passed to AutoReg.",
"fig = res.plot_predict(200, 200 + 2 * 52, True)\n\nauto_reg_forecast = res.predict(200, 211)\nauto_reg_forecast",
"Using with other models\nOther models do not support DeterministicProcess directly. We can instead manually pass any deterministic terms as exog to model that support exogenous values.\nNote that SARIMAX with exogenous variables is OLS with SARIMA errors so that the model is \n$$\n\\begin{align}\n\\nu_t & = y_t - x_t \\beta \\\n(1-\\phi(L))\\nu_t & = (1+\\theta(L))\\epsilon_t.\n\\end{align}\n$$\nThe parameters on deterministic terms are not directly comparable to AutoReg which evolves according to the equation\n$$\n(1-\\phi(L)) y_t = x_t \\beta + \\epsilon_t.\n$$\nWhen $x_t$ contains only deterministic terms, these two representation are equivalent (assuming $\\theta(L)=0$ so that there is no MA).",
"from statsmodels.tsa.api import SARIMAX\n\ndet_proc = DeterministicProcess(idx, period=52, fourier=2)\ndet_terms = det_proc.in_sample()\n\nmod = SARIMAX(y, order=(1, 0, 0), trend=\"c\", exog=det_terms)\nres = mod.fit(disp=False)\nprint(res.summary())",
"The forecasts are similar but differ since the parameters of the SARIMAX are estimated using MLE while AutoReg uses OLS.",
"sarimax_forecast = res.forecast(12, exog=det_proc.out_of_sample(12))\ndf = pd.concat([auto_reg_forecast, sarimax_forecast], axis=1)\ndf.columns = columns = [\"AutoReg\", \"SARIMAX\"]\ndf"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kuni88/tutorial_python
|
text/Chapter2.ipynb
|
mit
|
[
"Chapter2: 制御とデータ構造\n\n制御\n標準的なデータ構造\n文字列\nリスト\n辞書\n\n\n練習問題\n\n2-1. 制御\nループ\n```python\nN = 10 # 繰り返す回数\nfor i in range(N):\n # 繰り返す処理\nwhile 条件:\n # 条件が成立するときの処理\n```\n条件分岐\npython\nif 条件1:\n # 条件1が成立しているときの処理\nelif 条件2:\n # 条件1が成立せず,条件2が成立するときの処理\nelse:\n # 条件1も条件2も成立しないときの処理\n2-2. 標準的なデータ構造\n2-2-1. 文字列",
"s = 'Hello'\nprint(s[1]) # indexを指定して,表示する\nprint(len(s)) # 文字列sの長さを表示する\nprint(s+' world') # 文字列を連結する\n\ns + 1.5 # 文字列と数値の和をとることはできない\n\ns + str(1.5) # キャストすることで連結ができる\n\nprint(s.upper()) # すべて大文字にする\nprint(s.lower()) # すべて小文字にする\nprint(s[1:]) # 部分文字列\n\ntext = \"\"\"I never think of the future. \nIt comes soon enough.\"\"\"\n# このように文字列を定義すると複数行でも扱える\nprint(text)\n\nprint(text.split(' '))# スペースで分割する",
"2-2-2. リスト",
"countries = ['USA', 'UK', 'Japan']\nfor i in range(3):\n print(countries[i]) # indexを指定して値を表示\n\nfor country in countries:\n print(country) # リストの要素を引き出すことができる\n\nprint(len(countries)) # リストの長さ\nif 'USA' in countries: # 'USA'がcountriesの中にある場合\n print('OK') \nelse:\n print('NG')",
"リストのメソッド\n\nappend\nsort\nremove",
"countries = ['USA', 'UK', 'Japan']\ncountries.append('China') # リストに追加\nprint(countries)\nprint(countries.index('UK')) # indexを求める\nprint(countries[1:-1]) # 部分リストを表示\n\ncountries.sort() # 辞書順にソート\nprint(countries)\n\ncountries.remove('UK') # リストにある値を消去\nprint(countries)",
"内包表現\npython\narray = [i for i in range(N)] # 0からN-1までのリストを作る",
"array = [i for i in range(10)]\nprint(\"0から9までのリスト: \", array)\n\nodd_array = [x for x in array if x%2 != 0]\nprint(\"奇数のリスト: \", odd_array)",
"注意点: 参照渡しと値渡し\nリストの値が渡されるのではなく,参照が渡される. \nコピーする時は気をつけてコピーする必要がある. \n参考資料: Pythonでリストの深いコピー",
"b = countries # 参照をコピーする\nb.pop() # bにのみ変更を加える\nprint(b)\nprint(countries) # countriesにも影響がある\n\na = countries[:] # 参照ではなく,値をコピーする\na.pop() # aにのみ変更を加える\nprint(a)\nprint(countries)\n\n# 深いコピーの例\nimport copy\nobj = [{'a': 10}]\ndeepcopied = copy.deepcopy(obj)\nprint('obj :', id(obj[0]))\nprint('copied:', id(deepcopied[0]))",
"2-2-3. 辞書\nPythonにおける連想配列を「辞書」と呼ぶ.\nkeyとvalueをペアにして保存するデータ構造である.",
"d = {}\nd['a'] = 'art' # dict[key] = value という形で辞書に追加\nd['b'] = 'beam'\nd['c'] = 'circuit'\nprint(d)\n\nprint(d.get('a')) # dからkeyが'a'となるものを取り出す\nprint(d['a']) # dからkeyが'a'となるものを取り出す(2)\n\nprint(d.keys()) # keyのリスト\nprint(d.values()) # valueのリスト\n\nfor key, value in d.items():\n print(key + ':' + value)\n\ndel d['b'] # keyが'b'となるエントリを消す\nprint(d)",
"2-3. 練習問題"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mikekestemont/lot2016
|
Chapter 3 - Conditions.ipynb
|
mit
|
[
"Chapter 3: Conditions\nSimple conditions\nA lot of programming has to do with executing code if a particular condition holds. Here we give a brief overview of how you can express certain conditions in Python. Can you figure our what all of the conditions do?",
"print(2 < 5)\nprint(2 <= 5)\nprint(3 > 7)\nprint(3 >= 7)\nprint(3 == 3)\nprint(\"school\" == \"school\")\nprint(\"Python\" != \"perl\")",
"The relevant 'logical operators' that we used here are: <, <=, >,>=,==,!=. In Python-speak, we say that such logical expression gets 'evaluated' when you run the code. The outcome of such an evaluation is a 'binary value' or a so-called 'boolean' that can take only two possible values: True or False. You can assign such a boolean to a variable:",
"greater = (5 > 2)\nprint(greater)\ngreater = 5 < 2\nprint(greater)\nprint(type(greater))",
"if, elif and else\nAt the end of the previous chapter, we have talked about dictionaries, which are a kind of data structure which you will need a lot when writing Python. Recall our collection of good_reads from the previous chapter. Recall that we could use the key of an entry to retrieve the score of a book in our collection:",
"good_reads = {\"Emma\":8, \"Pride and Prejudice\":10, \"Sense and Sensibility\":7, \"Northanger Abbey\":3}\nscore = good_reads[\"Sense and Sensibility\"]\nprint(score)",
"At some point, however, we might forget which books we have already added to our collection. What happens if we try to get the score of a book that is not in our collection?",
"score = good_reads[\"Moby Dick\"]\nprint(score)",
"We get an error, and more specifically a KeyError, which basically means: \"the key you asked me to look up is not in the dictionary...\". We will learn a lot more about error handling later, but for now we would like to prevent our program from giving it in the first place. Let's write a little program that prints \"X is in the collection\" if a particular book is in the collection and \"X is NOT in the collection\" if it is not.",
"book = \"Moby Dick\"\nif book in good_reads:\n print(book + \" is in the collection\")\nelse:\n print(book + \" is NOT in the collection\")",
"A lot of new syntax here. Let's go through it step by step. First we check whether the value we assigned to book is in our collection. The part after if is a logical expression which will be True or False:",
"print(book in good_reads)",
"Because our book is not in the collection, Python returns False. Let's do the same thing for a book that we know is in the collection:",
"print(\"Emma\" in good_reads)",
"Indeed, it is in the collection! Back to our if statement. If the expression after if evaluates to True, our program will go on to the next line and print book + \" is in the collection\". Let's try that as well:",
"if \"Emma\" in good_reads:\n print(\"Found it!\")\n\nif book in good_reads:\n print(\"Found it!\")",
"Notice that the last print statement is not executed. That is because the value we assigned to book is not in our collection and thus the part after if did not evaluate to True. In our little program above we used another statement besides if, namely else. It shouldn't be too hard to figure out what's going on here. The part after else will be evaluated if the if statement evaluated to False. In English: if the book is not in the collection, print that is is not.\nIndentation!\nUnlike other languages, Python does not make use of curly braces to mark the start and end of pieces of code, like if-statements. The only delimiter is a colon (:) and the indentation of the code (i.e. the use of whitespace). This indentation must be used consistently throughout your code. The convention is to use 4 spaces as indentation. This means that after you have used a colon (such as in our if statement) the next line should be indented by four spaces. (The shortcut for typing these 4 spaces in many editors is inserting a TAB.)\nSometimes we have various conditions that should all evaluate to something different. For that Python provides the elif statement. We use it similar to if and else. Note however that you can only use elif after an if statement! Above we asked whether a book was in the collection. We can do the same thing for parts of strings or for items in a list. For example we could test whether the letter a is in the word banana:",
"print(\"a\" in \"banana\")",
"Likewise the following evaluates to False:",
"print(\"z\" in \"banana\")",
"Let's use this in an if-elif-else combination, a very common way to implement 'decision trees' in Python:",
"word = \"rocket science\"\nif \"a\" in word:\n print(word + \" contains the letter a\")\nelif \"s\" in word:\n print(word + \" contains the letter s\")\nelif \"d\" in word:\n print(word + \" contains the letter s\")\nelif \"c\" in word:\n print(word + \" contains the letter c\")\nelse:\n print(\"What a weird word!\")",
"First the if statement will be evaluated. Only if that statement turns out to be False the computer will proceed to evaluate the elif statement. If the elif statement in turn would prove to be False, the machine will proceed and execute the lines of code associated with the else statement. You can think of this coding structure as a decision tree! Remember: if somewhere along the tree, your machine comes across a logical expression which is true, it won't bother anymore to evaluate the remaining options!\n\nDIY\nLet's practice our new condition skills a little. Write a small program that defines a variable weight. If the weight is > 50 pounds, print \"There is a $25 charge for luggage that heavy.\" If it is not, print: \"Thank you for your business.\" If the weight is exactly 50, print: \"Pfiew! The weight is just right!\". Change the value of weight a couple of times to check whether your code works. (Tip: make use of the logical operators and if-elif-else tree! Make sure you use the correct indentation.)",
"# insert your code here\n",
"and, or, not\nUntil now, our conditions consisted of single logical expresssions. However, quite often we would like to test for multiple conditions: for instance, you would like to tell your computer to do something if this and this were but this and that were not true. Python provides a number of ways to do that. The first is with the and statement which allows us to combine two expressions that need both to be true in order for the combination to be true. Let's see how that works:",
"word = \"banana\"\nif (\"a\" in word) or (\"b\" in word):\n print(\"Both a and b are in \" + word)",
"Note how we can use round brackets to make the code more readable (but you can just as easily leave them out):",
"word = \"banana\"\nif (\"a\" in word) and (\"b\" in word):\n print(\"Both a and b are in \" + word)",
"If one of the expressions evaluates to False, nothing will be printed:",
"if (\"a\" in word) and (\"z\" in word):\n print(\"Both a and z are in \" + word)",
"Now you know that the and operator exists in Python, you won't be too surprised to learn that there is also an or operator in Python that you can use. Replace and with or in the if statement below. Can you deduce what happens?",
"word = \"banana\"\nif (\"a\" in word) or (\"z\" in word):\n print(\"Both a and b are in \" + word)",
"In the code block below, can you add an else statement that prints that none of the letters were found?",
"if (\"a\" in word) and (\"z\" in word):\n print(\"a or z are in \" + word)\nelse:\n print(\"None of these were found...\")\n# insert your code here",
"Finally we can use not to test for conditions that are not true.",
"if (\"z\" not in word):\n print(\"z is not in \" + word)",
"Objects, such as strings or integers of lists are True, simply because they exist. Empty strings, lists, dictionaries etc on the other hand are False because in a way they do not exist -- an empty list is not really a list, right? This principle is often by programmers to, for example, only execute a piece of code if a certain list contains anything at all:",
"numbers = [1, 2, 3, 4]\nif numbers:\n print(\"I found some numbers!\")",
"Now if our list were empty, Python wouldn't print anything:",
"numbers = [9,999]\nif numbers:\n print(\"I found some numbers!\")",
"DIY\n\nCan you write code that prints \"This is an empty list\" if the provided list does not contain any values?",
"numbers = []\n# insert your code here\nif not numbers:\n print(\"Is an empty list\")",
"Can you do the same thing, but this time using the function len()?",
"# insert your code here",
"What we have learnt\nTo finish this section, here is an overview of the new functions, statements and concepts we have learnt. Go through them and make sure you understand what their purpose is and how they are used.\n\nconditions\nindentation\nif\nelif\nelse\nTrue\nFalse\nempty objects are false\nnot\nin\nand\nor\nmultiple conditions\n==\n<\n>\n!=\nKeyError\n\n\nFinal Exercises Chapter 3\nInspired by Think Python by Allen B. Downey (http://thinkpython.com), Introduction to Programming Using Python by Y. Liang (Pearson, 2013). Some exercises below have been taken from: http://www.ling.gu.se/~lager/python_exercises.html.\n\nCan you implement the following grading scheme in Python?\n<img src=\"https://raw.githubusercontent.com/mikekestemont/python-course/master/images/grade.png\">",
"# grading system",
"Can you spot the reasoning error in the following code?",
"score = 98.0\nif score >= 60.0:\n grade = 'D'\nelif score >= 70.0:\n grade = 'C'\nelif score >= 80.0:\n grade = 'B'\nelif score >= 90.0:\n grade = 'A'\nelse:\n grade = 'F'\nprint(grade)",
"Write Python code that defines two numbers and prints the largest one of them. Use an if-then-else tree.",
"# code",
"Congrats: you've reached the end of Chapter 3! Ignore the code block below; it's only here to make the page prettier.",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/nasa-giss/cmip6/models/sandbox-1/ocnbgchem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: NASA-GISS\nSource ID: SANDBOX-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:21\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-1', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DavidObando/carnd
|
Term1/Labs/CarND-LeNet-Lab/LeNet-Lab-Solution.ipynb
|
apache-2.0
|
[
"LeNet Lab Solution\n\nSource: Yan LeCun\nLoad Data\nLoad the MNIST data, which comes pre-loaded with TensorFlow.\nYou do not need to modify this section.",
"from tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"MNIST_data/\", reshape=False)\nX_train, y_train = mnist.train.images, mnist.train.labels\nX_validation, y_validation = mnist.validation.images, mnist.validation.labels\nX_test, y_test = mnist.test.images, mnist.test.labels\n\nassert(len(X_train) == len(y_train))\nassert(len(X_validation) == len(y_validation))\nassert(len(X_test) == len(y_test))\n\nprint()\nprint(\"Image Shape: {}\".format(X_train[0].shape))\nprint()\nprint(\"Training Set: {} samples\".format(len(X_train)))\nprint(\"Validation Set: {} samples\".format(len(X_validation)))\nprint(\"Test Set: {} samples\".format(len(X_test)))",
"The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.\nHowever, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.\nIn order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).\nYou do not need to modify this section.",
"import numpy as np\n\n# Pad images with 0s\nX_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')\n \nprint(\"Updated Image Shape: {}\".format(X_train[0].shape))",
"Visualize Data\nView a sample from the dataset.\nYou do not need to modify this section.",
"import random\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nindex = random.randint(0, len(X_train))\nimage = X_train[index].squeeze()\n\nplt.figure(figsize=(1,1))\nplt.imshow(image, cmap=\"gray\")\nprint(y_train[index])",
"Preprocess Data\nShuffle the training data.\nYou do not need to modify this section.",
"from sklearn.utils import shuffle\n\nX_train, y_train = shuffle(X_train, y_train)",
"Setup TensorFlow\nThe EPOCH and BATCH_SIZE values affect the training speed and model accuracy.\nYou do not need to modify this section.",
"import tensorflow as tf\n\nEPOCHS = 10\nBATCH_SIZE = 128",
"SOLUTION: Implement LeNet-5\nImplement the LeNet-5 neural network architecture.\nThis is the only cell you need to edit.\nInput\nThe LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.\nArchitecture\nLayer 1: Convolutional. The output shape should be 28x28x6.\nActivation. Your choice of activation function.\nPooling. The output shape should be 14x14x6.\nLayer 2: Convolutional. The output shape should be 10x10x16.\nActivation. Your choice of activation function.\nPooling. The output shape should be 5x5x16.\nFlatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you.\nLayer 3: Fully Connected. This should have 120 outputs.\nActivation. Your choice of activation function.\nLayer 4: Fully Connected. This should have 84 outputs.\nActivation. Your choice of activation function.\nLayer 5: Fully Connected (Logits). This should have 10 outputs.\nOutput\nReturn the result of the 2nd fully connected layer.",
"from tensorflow.contrib.layers import flatten\n\ndef LeNet(x): \n # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n mu = 0\n sigma = 0.1\n \n # SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n # SOLUTION: Activation.\n conv1 = tf.nn.relu(conv1)\n\n # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(16))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n \n # SOLUTION: Activation.\n conv2 = tf.nn.relu(conv2)\n\n # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.\n conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Flatten. Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n \n # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(120))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b\n \n # SOLUTION: Activation.\n fc1 = tf.nn.relu(fc1)\n\n # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(84))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n \n # SOLUTION: Activation.\n fc2 = tf.nn.relu(fc2)\n\n # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(10))\n logits = tf.matmul(fc2, fc3_W) + fc3_b\n \n return logits",
"Features and Labels\nTrain LeNet to classify MNIST data.\nx is a placeholder for a batch of input images.\ny is a placeholder for a batch of output labels.\nYou do not need to modify this section.",
"x = tf.placeholder(tf.float32, (None, 32, 32, 1))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 10)",
"Training Pipeline\nCreate a training pipeline that uses the model to classify MNIST data.\nYou do not need to modify this section.",
"rate = 0.001\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)",
"Model Evaluation\nEvaluate how well the loss and accuracy of the model for a given dataset.\nYou do not need to modify this section.",
"correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples",
"Train the Model\nRun the training data through the training pipeline to train the model.\nBefore each epoch, shuffle the training set.\nAfter each epoch, measure the loss and accuracy of the validation set.\nSave the model after training.\nYou do not need to modify this section.",
"with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_validation, y_validation)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n saver.save(sess, 'lenet')\n print(\"Model saved\")",
"Evaluate the Model\nOnce you are completely satisfied with your model, evaluate the performance of the model on the test set.\nBe sure to only do this once!\nIf you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.\nYou do not need to modify this section.",
"with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n\n test_accuracy = evaluate(X_test, y_test)\n print(\"Test Accuracy = {:.3f}\".format(test_accuracy))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
faneshion/MatchZoo
|
tutorials/model_tuning.ipynb
|
apache-2.0
|
[
"Model Tuning",
"import matchzoo as mz\ntrain_raw = mz.datasets.toy.load_data('train')\ndev_raw = mz.datasets.toy.load_data('dev')\ntest_raw = mz.datasets.toy.load_data('test')",
"basic Usage\nA couple things are needed by the tuner:\n - a model with a parameters filled\n - preprocessed training data\n - preprocessed testing data\nSince MatchZoo models have pre-defined hyper-spaces, the tuner can start tuning right away once you have the data ready.\nprepare the data",
"preprocessor = mz.models.DenseBaseline.get_default_preprocessor()\ntrain = preprocessor.fit_transform(train_raw, verbose=0)\ndev = preprocessor.transform(dev_raw, verbose=0)\ntest = preprocessor.transform(test_raw, verbose=0)",
"prepare the model",
"model = mz.models.DenseBaseline()\nmodel.params['input_shapes'] = preprocessor.context['input_shapes']\nmodel.params['task'] = mz.tasks.Ranking()",
"start tuning",
"tuner = mz.auto.Tuner(\n params=model.params,\n train_data=train,\n test_data=dev,\n num_runs=5\n)\nresults = tuner.tune()",
"view the best hyper-parameter set",
"results['best']\n\nresults['best']['params'].to_frame()",
"understading hyper-space\nmodel.params.hyper_space reprensents the model's hyper-parameters search space, which is the cross-product of individual hyper parameter's hyper space. When a Tuner builds a model, for each hyper parameter in model.params, if the hyper-parameter has a hyper-space, then a sample will be taken in the space. However, if the hyper-parameter does not have a hyper-space, then the default value of the hyper-parameter will be used.",
"model.params.hyper_space",
"In a DenseBaseline model, only mlp_num_units, mlp_num_layers, and mlp_num_fan_out have pre-defined hyper-space. In other words, only these hyper-parameters will change values during a tuning. Other hyper-parameters, like mlp_activation_func, are fixed and will not change.",
"def sample_and_build(params):\n sample = mz.hyper_spaces.sample(params.hyper_space)\n print('if sampled:', sample, '\\n')\n params.update(sample)\n print('the built model will have:\\n')\n print(params, '\\n\\n\\n')\n\nfor _ in range(3):\n sample_and_build(model.params)",
"This is similar to the process of a tuner sampling model hyper-parameters, but with one key difference: a tuner's hyper-space is suggestive. This means the sampling process in a tuner is not truely random but skewed. Scores of the past samples affect future choices: a tuner with more runs knows better about its hyper-space, and take samples in a way that will likely yields better scores.\nFor more details, consult tuner's backend: hyperopt, and the search algorithm tuner uses: Tree of Parzen Estimators (TPE)\nHyper-spaces can also be represented in a human-readable format.",
"print(model.params.get('mlp_num_units').hyper_space)\n\nmodel.params.to_frame()[['Name', 'Hyper-Space']]",
"setting hyper-space\nWhat if I want the tuner to choose optimizer among adam, adagrad, and rmsprop?",
"model.params.get('optimizer').hyper_space = mz.hyper_spaces.choice(['adam', 'adagrad', 'rmsprop'])\n\nfor _ in range(10):\n print(mz.hyper_spaces.sample(model.params.hyper_space))",
"What about setting mlp_num_layers to a fixed value of 2?",
"model.params['mlp_num_layers'] = 2\nmodel.params.get('mlp_num_layers').hyper_space = None\n\nfor _ in range(10):\n print(mz.hyper_spaces.sample(model.params.hyper_space))",
"using callbacks\nTo save the model during the tuning process, use mz.auto.tuner.callbacks.SaveModel.",
"tuner.num_runs = 2\ntuner.callbacks.append(mz.auto.tuner.callbacks.SaveModel())\nresults = tuner.tune()",
"This will save all built models to your mz.USER_TUNED_MODELS_DIR, and can be loaded by:",
"best_model_id = results['best']['model_id']\nmz.load_model(mz.USER_TUNED_MODELS_DIR.joinpath(best_model_id))",
"To load a pre-trained embedding layer into a built model during a tuning process, use mz.auto.tuner.callbacks.LoadEmbeddingMatrix.",
"toy_embedding = mz.datasets.toy.load_embedding()\npreprocessor = mz.models.DUET.get_default_preprocessor()\ntrain = preprocessor.fit_transform(train_raw, verbose=0)\ndev = preprocessor.transform(dev_raw, verbose=0)\nparams = mz.models.DUET.get_default_params()\nparams['task'] = mz.tasks.Ranking()\nparams.update(preprocessor.context)\nparams['embedding_output_dim'] = toy_embedding.output_dim\n\nembedding_matrix = toy_embedding.build_matrix(preprocessor.context['vocab_unit'].state['term_index'])\nload_embedding_matrix_callback = mz.auto.tuner.callbacks.LoadEmbeddingMatrix(embedding_matrix)\n\ntuner = mz.auto.tuner.Tuner(\n params=params,\n train_data=train,\n test_data=dev,\n num_runs=1\n)\ntuner.callbacks.append(load_embedding_matrix_callback)\nresults = tuner.tune()",
"make your own callbacks\nTo build your own callbacks, inherit mz.auto.tuner.callbacks.Callback and overrides corresponding methods.\nA run proceeds in the following way:\n\nrun start (callback)\nbuild model\nbuild end (callback)\nfit and evaluate model\ncollect result\nrun end (callback)\n\nThis process is repeated for num_runs times in a tuner.\nFor example, say I want to verify if my embedding matrix is correctly loaded.",
"import numpy as np\n\nclass ValidateEmbedding(mz.auto.tuner.callbacks.Callback):\n def __init__(self, embedding_matrix):\n self._matrix = embedding_matrix\n \n def on_build_end(self, tuner, model):\n loaded_matrix = model.get_embedding_layer().get_weights()[0]\n if np.isclose(self._matrix, loaded_matrix).all():\n print(\"Yes! The my embedding is correctly loaded!\")\n\nvalidate_embedding_matrix_callback = ValidateEmbedding(embedding_matrix)\n\ntuner = mz.auto.tuner.Tuner(\n params=params,\n train_data=train,\n test_data=dev,\n num_runs=1,\n callbacks=[load_embedding_matrix_callback, validate_embedding_matrix_callback]\n)\ntuner.callbacks.append(load_embedding_matrix_callback)\nresults = tuner.tune()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hayatoy/dataflow-tutorial
|
Dataflow_Tutorial1.ipynb
|
apache-2.0
|
[
"Cloud Dataflow Tutorial\n事前準備\n\nGoogle Cloud Platform の課金設定\nDataflow APIの有効化\nGCSのBucketを作る\nBigQueryにtestdatasetというデータセットを作る\nDatalabを起動\n\nThat's it!\nこのNotebookをコピーするには\nDatalabを開いたら、Notebookを新規に開いてください。\nその後、セルに次のコードを入力して実行してください。\n!git clone https://github.com/hayatoy/dataflow-tutorial.git\n先頭の\" ! \"を忘れずに入力してください。\n実行する前に・・\nProject名を変更してください。Esc->Fで一括置換できます。\n<font color=\"red\">注意:runAllを実行しないでください。全部実行するのに時間がかかります。</font>\nこのNotebookはDatalab (Dataflow 0.6.0)用です。\nDataflow 2.0.0以降で使う場合は beam.utils の部分を beam.options に変更してください。\nApache Beamのインポート",
"import apache_beam as beam",
"Dataflowの基本設定\nジョブ名、プロジェクト名、一時ファイルの置き場を指定します。",
"options = beam.utils.pipeline_options.PipelineOptions()\ngcloud_options = options.view_as(\n beam.utils.pipeline_options.GoogleCloudOptions)\ngcloud_options.job_name = 'dataflow-tutorial1'\ngcloud_options.project = 'PROJECTID'\ngcloud_options.staging_location = 'gs://PROJECTID/staging'\ngcloud_options.temp_location = 'gs://PROJECTID/temp'",
"Dataflowのスケール設定\nWorkerの最大数や、マシンタイプ等を設定します。\nWorkerのDiskサイズはデフォルトで250GB(Batch)、420GB(Streaming)と大きいので、ここで必要サイズを指定する事をオススメします。",
"worker_options = options.view_as(beam.utils.pipeline_options.WorkerOptions)\nworker_options.disk_size_gb = 20\nworker_options.max_num_workers = 2\n# worker_options.num_workers = 2\n# worker_options.machine_type = 'n1-standard-8'\n# worker_options.zone = 'asia-northeast1-a'",
"実行環境の切り替え\n\nDirectRunner: ローカルマシンで実行します\nDataflowRunner: Dataflow上で実行します",
"options.view_as(beam.utils.pipeline_options.StandardOptions).runner = 'DirectRunner'\n# options.view_as(beam.utils.pipeline_options.StandardOptions).runner = 'DataflowRunner'",
"準備は完了、以下パイプラインの例\n<br/>\n<br/>\n<br/>\n<br/>\n<br/>\n<br/>\nパイプラインその1\nGCSからファイルを読み込み、GCSにその内容を書き込むだけ \n+----------------+\n| |\n| Read GCS File |\n| |\n+-------+--------+\n |\n v\n+-------+--------+\n| |\n| Write GCS File |\n| |\n+----------------+",
"p1 = beam.Pipeline(options=options)\n\n(p1 | 'read' >> beam.io.ReadFromText('gs://dataflow-samples/shakespeare/kinglear.txt')\n | 'write' >> beam.io.WriteToText('gs://PROJECTID/test.txt', num_shards=1)\n )\n\np1.run().wait_until_finish()",
"パイプラインその2\nBigQueryからデータを読み込み、GCSにその内容を書き込むだけ\nBigQueryのデータセットは以下\nhttps://bigquery.cloud.google.com/table/bigquery-public-data:samples.shakespeare \n+----------------+\n| |\n| Read BigQuery |\n| |\n+-------+--------+\n |\n v\n+-------+--------+\n| |\n| Write GCS File |\n| |\n+----------------+",
"p2 = beam.Pipeline(options=options)\n\nquery = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'\n(p2 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))\n | 'write' >> beam.io.WriteToText('gs://PROJECTID/test2.txt', num_shards=1)\n )\n\np2.run().wait_until_finish()",
"パイプラインその3\nBigQueryからデータを読み込み、BigQueryにデータを書き込む \n+----------------+\n| |\n| Read BigQuery |\n| |\n+-------+--------+\n |\n v\n+-------+--------+\n| |\n| Write BigQuery |\n| |\n+----------------+",
"p3 = beam.Pipeline(options=options)\n\n# 注意:データセットを作成しておく\nquery = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'\n(p3 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))\n | 'write' >> beam.io.Write(beam.io.BigQuerySink(\n 'testdataset.testtable1',\n schema='corpus_date:INTEGER, corpus:STRING, word:STRING, word_count:INTEGER',\n create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,\n write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))\n )\n\np3.run().wait_until_finish()",
"パイプラインその4\n\nBigQueryからデータを読み込み\nデータを加工して\nBigQueryに書き込む\n\n+----------------+\n| |\n| Read BigQuery |\n| |\n+-------+--------+\n |\n v\n+-------+--------+\n| |\n| Modify Element |\n| |\n+----------------+\n |\n v\n+-------+--------+\n| |\n| Write BigQuery |\n| |\n+----------------+",
"def modify_data1(element):\n # beam.Mapは1行の入力に対し1行の出力をする場合に使う\n # element = {u'corpus_date': 0, u'corpus': u'sonnets', u'word': u'LVII', u'word_count': 1}\n\n corpus_upper = element['corpus'].upper()\n word_len = len(element['word'])\n\n return {'corpus_upper': corpus_upper,\n 'word_len': word_len\n }\n\n\np4 = beam.Pipeline(options=options)\n\nquery = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'\n(p4 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))\n | 'modify' >> beam.Map(modify_data1)\n | 'write' >> beam.io.Write(beam.io.BigQuerySink(\n 'testdataset.testtable2',\n schema='corpus_upper:STRING, word_len:INTEGER',\n create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,\n write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))\n )\n\np4.run().wait_until_finish()",
"パイプラインその5\nブランチを分ける例\n+----------------+\n| |\n| Read BigQuery |\n| |\n+-------+--------+\n |\n +---------------------+\n | |\n+-------v--------+ +-------v--------+\n| | | |\n| Modify Element | | Modify Element |\n| | | |\n+-------+--------+ +-------+--------+\n | |\n +---------------------+\n |\n+-------v--------+\n| |\n| Flatten |\n| |\n+-------+--------+\n |\n |\n+-------v--------+\n| |\n| Save BigQuery |\n| |\n+----------------+",
"def modify1(element):\n # element = {u'corpus_date': 0, u'corpus': u'sonnets', u'word': u'LVII', u'word_count': 1}\n word_count = len(element['corpus'])\n count_type = 'corpus only'\n\n return {'word_count': word_count,\n 'count_type': count_type\n }\n\n\ndef modify2(element):\n # element = {u'corpus_date': 0, u'corpus': u'sonnets', u'word': u'LVII', u'word_count': 1}\n word_count = len(element['word'])\n count_type = 'word only'\n\n return {'word_count': word_count,\n 'count_type': count_type\n }\n\n\np5 = beam.Pipeline(options=options)\n\nquery = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'\nquery_results = p5 | 'read' >> beam.io.Read(beam.io.BigQuerySource(\n project='PROJECTID', use_standard_sql=False, query=query))\n\n# BigQueryの結果を二つのブランチに渡す\nbranch1 = query_results | 'modify1' >> beam.Map(modify1)\nbranch2 = query_results | 'modify2' >> beam.Map(modify2)\n\n# ブランチからの結果をFlattenでまとめる\n((branch1, branch2) | beam.Flatten()\n | 'write' >> beam.io.Write(beam.io.BigQuerySink(\n 'testdataset.testtable3',\n schema='word_count:INTEGER, count_type:STRING',\n create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,\n write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))\n )\n\np5.run().wait_until_finish()",
"パイプラインその6\nGroupbyを使う",
"def modify_data2(kvpair):\n # groupbyによりkeyとそのkeyを持つデータのリストのタプルが渡される\n # kvpair = (u'word only', [4, 4, 6, 6, 7, 7, 7, 7, 8, 9])\n\n return {'count_type': kvpair[0],\n 'sum': sum(kvpair[1])\n }\n\n\np6 = beam.Pipeline(options=options)\n\nquery = 'SELECT * FROM [PROJECTID:testdataset.testtable3] LIMIT 20'\n(p6 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))\n | 'pair' >> beam.Map(lambda x: (x['count_type'], x['word_count']))\n | \"groupby\" >> beam.GroupByKey()\n | 'modify' >> beam.Map(modify_data2)\n | 'write' >> beam.io.Write(beam.io.BigQuerySink(\n 'testdataset.testtable4',\n schema='count_type:STRING, sum:INTEGER',\n create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,\n write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))\n )\n\np6.run().wait_until_finish()",
"パイプラインその7\nWindowでGroupByの区間を区切る",
"def assign_timevalue(v):\n # pcollectionのデータにタイムスタンプを付加する\n # 後段のwindowはこのタイムスタンプを基準に分割される\n # ここでは適当に乱数でタイムスタンプを入れている\n import apache_beam.transforms.window as window\n import random\n import time\n return window.TimestampedValue(v, int(time.time()) + random.randint(0, 1))\n\n\ndef modify_data3(kvpair):\n # groupbyによりkeyとそのkeyを持つデータのリストのタプルが渡される\n # windowで分割されているのでデータ数が少なくなる\n # kvpair = (u'word only', [4, 4, 6, 6, 7])\n\n return {'count_type': kvpair[0],\n 'sum': sum(kvpair[1])\n }\n\n\np7 = beam.Pipeline(options=options)\n\nquery = 'SELECT * FROM [PROJECTID:testdataset.testtable3] LIMIT 20'\n(p7 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))\n | \"assign tv\" >> beam.Map(assign_timevalue)\n | 'window' >> beam.WindowInto(beam.window.FixedWindows(1))\n | 'pair' >> beam.Map(lambda x: (x['count_type'], x['word_count']))\n | \"groupby\" >> beam.GroupByKey()\n | 'modify' >> beam.Map(modify_data3)\n | 'write' >> beam.io.Write(beam.io.BigQuerySink(\n 'testdataset.testtable5',\n schema='count_type:STRING, sum:INTEGER',\n create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,\n write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))\n )\n\np7.run().wait_until_finish()",
"終わり\nDataflowRunnerに変えて実行してみよう\nまた、p*.run().wait_until_finish() を p*.run()にしてください。wait_until_finish()が入っていると、かなり待つことになります!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
steinam/teacher
|
jup_notebooks/data-science-ipython-notebooks-master/matplotlib/04.12-Three-Dimensional-Plotting.ipynb
|
mit
|
[
"<!--BOOK_INFORMATION-->\n<img align=\"left\" style=\"padding-right:10px;\" src=\"figures/PDSH-cover-small.png\">\nThis notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.\nThe text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!\nNo changes were made to the contents of this notebook from the original.\n<!--NAVIGATION-->\n< Customizing Matplotlib: Configurations and Stylesheets | Contents | Geographic Data with Basemap >\nThree-Dimensional Plotting in Matplotlib\nMatplotlib was initially designed with only two-dimensional plotting in mind.\nAround the time of the 1.0 release, some three-dimensional plotting utilities were built on top of Matplotlib's two-dimensional display, and the result is a convenient (if somewhat limited) set of tools for three-dimensional data visualization.\nthree-dimensional plots are enabled by importing the mplot3d toolkit, included with the main Matplotlib installation:",
"from mpl_toolkits import mplot3d",
"Once this submodule is imported, a three-dimensional axes can be created by passing the keyword projection='3d' to any of the normal axes creation routines:",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = plt.axes(projection='3d')",
"With this three-dimensional axes enabled, we can now plot a variety of three-dimensional plot types. \nThree-dimensional plotting is one of the functionalities that benefits immensely from viewing figures interactively rather than statically in the notebook; recall that to use interactive figures, you can use %matplotlib notebook rather than %matplotlib inline when running this code.\nThree-dimensional Points and Lines\nThe most basic three-dimensional plot is a line or collection of scatter plot created from sets of (x, y, z) triples.\nIn analogy with the more common two-dimensional plots discussed earlier, these can be created using the ax.plot3D and ax.scatter3D functions.\nThe call signature for these is nearly identical to that of their two-dimensional counterparts, so you can refer to Simple Line Plots and Simple Scatter Plots for more information on controlling the output.\nHere we'll plot a trigonometric spiral, along with some points drawn randomly near the line:",
"ax = plt.axes(projection='3d')\n\n# Data for a three-dimensional line\nzline = np.linspace(0, 15, 1000)\nxline = np.sin(zline)\nyline = np.cos(zline)\nax.plot3D(xline, yline, zline, 'gray')\n\n# Data for three-dimensional scattered points\nzdata = 15 * np.random.random(100)\nxdata = np.sin(zdata) + 0.1 * np.random.randn(100)\nydata = np.cos(zdata) + 0.1 * np.random.randn(100)\nax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Greens');",
"Notice that by default, the scatter points have their transparency adjusted to give a sense of depth on the page.\nWhile the three-dimensional effect is sometimes difficult to see within a static image, an interactive view can lead to some nice intuition about the layout of the points.\nThree-dimensional Contour Plots\nAnalogous to the contour plots we explored in Density and Contour Plots, mplot3d contains tools to create three-dimensional relief plots using the same inputs.\nLike two-dimensional ax.contour plots, ax.contour3D requires all the input data to be in the form of two-dimensional regular grids, with the Z data evaluated at each point.\nHere we'll show a three-dimensional contour diagram of a three-dimensional sinusoidal function:",
"def f(x, y):\n return np.sin(np.sqrt(x ** 2 + y ** 2))\n\nx = np.linspace(-6, 6, 30)\ny = np.linspace(-6, 6, 30)\n\nX, Y = np.meshgrid(x, y)\nZ = f(X, Y)\n\nfig = plt.figure()\nax = plt.axes(projection='3d')\nax.contour3D(X, Y, Z, 50, cmap='binary')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z');",
"Sometimes the default viewing angle is not optimal, in which case we can use the view_init method to set the elevation and azimuthal angles. In the following example, we'll use an elevation of 60 degrees (that is, 60 degrees above the x-y plane) and an azimuth of 35 degrees (that is, rotated 35 degrees counter-clockwise about the z-axis):",
"ax.view_init(60, 35)\nfig",
"Again, note that this type of rotation can be accomplished interactively by clicking and dragging when using one of Matplotlib's interactive backends.\nWireframes and Surface Plots\nTwo other types of three-dimensional plots that work on gridded data are wireframes and surface plots.\nThese take a grid of values and project it onto the specified three-dimensional surface, and can make the resulting three-dimensional forms quite easy to visualize.\nHere's an example of using a wireframe:",
"fig = plt.figure()\nax = plt.axes(projection='3d')\nax.plot_wireframe(X, Y, Z, color='black')\nax.set_title('wireframe');",
"A surface plot is like a wireframe plot, but each face of the wireframe is a filled polygon.\nAdding a colormap to the filled polygons can aid perception of the topology of the surface being visualized:",
"ax = plt.axes(projection='3d')\nax.plot_surface(X, Y, Z, rstride=1, cstride=1,\n cmap='viridis', edgecolor='none')\nax.set_title('surface');",
"Note that though the grid of values for a surface plot needs to be two-dimensional, it need not be rectilinear.\nHere is an example of creating a partial polar grid, which when used with the surface3D plot can give us a slice into the function we're visualizing:",
"r = np.linspace(0, 6, 20)\ntheta = np.linspace(-0.9 * np.pi, 0.8 * np.pi, 40)\nr, theta = np.meshgrid(r, theta)\n\nX = r * np.sin(theta)\nY = r * np.cos(theta)\nZ = f(X, Y)\n\nax = plt.axes(projection='3d')\nax.plot_surface(X, Y, Z, rstride=1, cstride=1,\n cmap='viridis', edgecolor='none');",
"Surface Triangulations\nFor some applications, the evenly sampled grids required by the above routines is overly restrictive and inconvenient.\nIn these situations, the triangulation-based plots can be very useful.\nWhat if rather than an even draw from a Cartesian or a polar grid, we instead have a set of random draws?",
"theta = 2 * np.pi * np.random.random(1000)\nr = 6 * np.random.random(1000)\nx = np.ravel(r * np.sin(theta))\ny = np.ravel(r * np.cos(theta))\nz = f(x, y)",
"We could create a scatter plot of the points to get an idea of the surface we're sampling from:",
"ax = plt.axes(projection='3d')\nax.scatter(x, y, z, c=z, cmap='viridis', linewidth=0.5);",
"This leaves a lot to be desired.\nThe function that will help us in this case is ax.plot_trisurf, which creates a surface by first finding a set of triangles formed between adjacent points (remember that x, y, and z here are one-dimensional arrays):",
"ax = plt.axes(projection='3d')\nax.plot_trisurf(x, y, z,\n cmap='viridis', edgecolor='none');",
"The result is certainly not as clean as when it is plotted with a grid, but the flexibility of such a triangulation allows for some really interesting three-dimensional plots.\nFor example, it is actually possible to plot a three-dimensional Möbius strip using this, as we'll see next.\nExample: Visualizing a Möbius strip\nA Möbius strip is similar to a strip of paper glued into a loop with a half-twist.\nTopologically, it's quite interesting because despite appearances it has only a single side!\nHere we will visualize such an object using Matplotlib's three-dimensional tools.\nThe key to creating the Möbius strip is to think about it's parametrization: it's a two-dimensional strip, so we need two intrinsic dimensions. Let's call them $\\theta$, which ranges from $0$ to $2\\pi$ around the loop, and $w$ which ranges from -1 to 1 across the width of the strip:",
"theta = np.linspace(0, 2 * np.pi, 30)\nw = np.linspace(-0.25, 0.25, 8)\nw, theta = np.meshgrid(w, theta)",
"Now from this parametrization, we must determine the (x, y, z) positions of the embedded strip.\nThinking about it, we might realize that there are two rotations happening: one is the position of the loop about its center (what we've called $\\theta$), while the other is the twisting of the strip about its axis (we'll call this $\\phi$). For a Möbius strip, we must have the strip makes half a twist during a full loop, or $\\Delta\\phi = \\Delta\\theta/2$.",
"phi = 0.5 * theta",
"Now we use our recollection of trigonometry to derive the three-dimensional embedding.\nWe'll define $r$, the distance of each point from the center, and use this to find the embedded $(x, y, z)$ coordinates:",
"# radius in x-y plane\nr = 1 + w * np.cos(phi)\n\nx = np.ravel(r * np.cos(theta))\ny = np.ravel(r * np.sin(theta))\nz = np.ravel(w * np.sin(phi))",
"Finally, to plot the object, we must make sure the triangulation is correct. The best way to do this is to define the triangulation within the underlying parametrization, and then let Matplotlib project this triangulation into the three-dimensional space of the Möbius strip.\nThis can be accomplished as follows:",
"# triangulate in the underlying parametrization\nfrom matplotlib.tri import Triangulation\ntri = Triangulation(np.ravel(w), np.ravel(theta))\n\nax = plt.axes(projection='3d')\nax.plot_trisurf(x, y, z, triangles=tri.triangles,\n cmap='viridis', linewidths=0.2);\n\nax.set_xlim(-1, 1); ax.set_ylim(-1, 1); ax.set_zlim(-1, 1);",
"Combining all of these techniques, it is possible to create and display a wide variety of three-dimensional objects and patterns in Matplotlib.\n<!--NAVIGATION-->\n< Customizing Matplotlib: Configurations and Stylesheets | Contents | Geographic Data with Basemap >"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gtzan/mir_book
|
Clustering.ipynb
|
cc0-1.0
|
[
"import matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nimport numpy as np\nfrom sklearn import datasets\nfrom sklearn.mixture import GaussianMixture\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.cluster import KMeans\nfrom sklearn.cluster import MiniBatchKMeans\nfrom sklearn.cluster import SpectralClustering\n",
"Clustering\nIn this notebook we explore clustering otherwise known as unsupervised learning. In order to examine the effects of clustering it is useful to have a data-set that is labeled with class labels as then we can visualize observe how well the identified clusters correspond to the original clusters. Notice that the clustering algorithms we will explore do NOT take into account the class labels. They are simply used to color the points in the visualizations. Let's start by loading and visualizing a data set consisting of audio features representing music tracks. There are 3 genres (classical, jazz, heavy metal) with 100 instances each. The first data-set we will examine consists of 2 features per track to make visualization easy. Let's look at a scatter plot of the points corresponding to each music track and use color to indicate what is the original class. The scikit-learn documentation has a lot of examples of clustering with interesting plots and visualizations.",
"(X, y) = datasets.load_svmlight_file(\"data/3genres.arff.libsvm\")\nX = X.toarray()\nX = MinMaxScaler().fit_transform(X)\ny = y.astype(int)\ntarget_names = ['classical', 'jazz', 'metal']\n\nprint(X.shape)\nprint(y.shape)\ncolors = ['navy', 'turquoise', 'darkorange']\n\nfor n, color in enumerate(colors):\n data = X[y == n]\n # Plot the training data \n plt.scatter(data[:, 0], data[:, 1], s=5, color=color,\n label=target_names[n])\n plt.title('Scatter plot of audio features and genres')\n plt.legend(scatterpoints=1, loc='upper right', prop=dict(size=10))\n\nplt.show()\n",
"K-means clustering\nWe can perform k-means clustering on this data with 3 clusters and using the resulting clusters as a way to predict a class label. The fit_predict function of clustering algorithms does just that. Notice that the clustering assign each point to a cluster in an unsupervised manner i.e it only takes into account X not y like classifiers do. Comparing the original scatter plot with the resulting predictions of the clustering algorithm shows where potential erros can happen. As you can see the dark blue cluster \"takes over\" some of the light blue cluster. \nWe can also look at other clustering methods such as spectral clustering and Gaussian Mixture Models.",
"random_state = 170\ncluster_names = ['cluster 1', 'cluster 2', 'cluster 3']\n\ndef plot_clustering(X,y,y_pred): \n fig = plt.figure(figsize=(12,4))\n for n, color in enumerate(colors):\n data = X[y == n]\n # Plot the training data \n plt.subplot(131)\n plt.subplots_adjust(bottom=.01, top=0.95, hspace=.15, wspace=.15, left=.01, right=.99)\n plt.scatter(data[:, 0], data[:, 1], s=20, color=color,label=target_names[n])\n plt.legend(scatterpoints=1, loc='upper right', prop=dict(size=10))\n \n # Plot the cluster predictions \n data = X[y_pred == n]\n h = plt.subplot(132)\n plt.scatter(data[:, 0], data[:, 1], s=20, marker='x',color=color,label=cluster_names[n])\n plt.legend(scatterpoints=1, loc='upper right', prop=dict(size=10))\n plt.subplot(133)\n \n data = X[y == n]\n plt.scatter(data[:, 0], data[:, 1], s=20, color=color,\n label=target_names[n]) \n data = X[y_pred == n]\n\n plt.scatter(data[:, 0], data[:, 1], s=40, marker='x',color=color,\n label=cluster_names[n])\n plt.legend(scatterpoints=1, loc='upper right', prop=dict(size=10))\n\n train_accuracy = np.mean(y_pred.ravel() == y.ravel()) * 100\n plt.text(0.7, 0.5, 'Accuracy: %.1f' % train_accuracy,\n transform=h.transAxes)\n plt.show()\n\n\ny_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X)\nplot_clustering(X,y,y_pred)\ny_pred = SpectralClustering(n_clusters=3, eigen_solver='arpack',\n random_state=random_state,assign_labels = 'kmeans', \n affinity='nearest_neighbors').fit_predict(X)\nplot_clustering(X,y,y_pred)\ngmm = GaussianMixture(n_components=3, random_state=50, max_iter=100)\ngmm.fit(X)\ny_pred = gmm.predict(X)\nplot_clustering(X,y,y_pred)\n\n# rerrange cluster numbers to make things work \n#cluster_mapping = {} \n#cluster_mapping[0] = 0\n#cluster_mapping[1] = 2 \n#cluster_mapping[2] = 1 \ny_mapped_predict = np.array([cluster_mapping[i] for i in y_pred])\ny_mapped_predict[y_pred == 0] = 0\ny_mapped_predict[y_pred == 1] = 2 \ny_mapped_predict[y_pred == 2] = 1\nplot_clustering(X,y,y_mapped_predict)\n",
"Clustering with k-means visualized in 3 dimensions\nIf we increase the number of feature we use to represent each music track to 3 then we can do scatter plots and perform clustering in the resulting 3D space. In most actual applications clustering is performed in high-dimensional feature spaces so visualization is not easy. At the end of this notebook we explore Principal Component Analysis a methodology for dimensionality reduction.",
"# Modified for documentation by Jaques Grobler\n# License: BSD 3 clause\n\n\nfrom sklearn.cluster import KMeans\nfrom sklearn import datasets\n\nnp.random.seed(5)\n\n(X, y) = datasets.load_svmlight_file(\"data/3genres_4features.arff.libsvm\")\nX = X.toarray()\nX = MinMaxScaler().fit_transform(X)\ntarget_names = ['classical', 'jazz', 'metal']\ny = y.astype(int)\n\nestimators = [('k_means_8', KMeans(n_clusters=8)),\n ('k_means_3', KMeans(n_clusters=3)),\n ]\n\nfignum = 1\ntitles = ['8 clusters', '3 clusters']\nfor name, est in estimators:\n fig = plt.figure(fignum, figsize=(7, 6))\n ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=28, azim=34)\n est.fit(X)\n labels = est.labels_\n\n ax.scatter(X[:, 3], X[:, 0], X[:, 2],\n c=labels.astype(np.float), edgecolor='k')\n\n ax.w_xaxis.set_ticklabels([])\n ax.w_yaxis.set_ticklabels([])\n ax.w_zaxis.set_ticklabels([])\n ax.set_xlabel('Feature 1')\n ax.set_ylabel('Feature 2')\n ax.set_zlabel('Feature 3')\n ax.set_title(titles[fignum - 1])\n ax.dist = 12\n fignum = fignum + 1\n\n# Plot the ground truth\nfig = plt.figure(fignum, figsize=(7, 6))\nax = Axes3D(fig, rect=[0, 0, .95, 1], elev=28, azim=34)\n\nfor name, label in [('classical', 0),\n ('jazz', 1),\n ('metal', 2)]:\n ax.text3D(X[y == label, 3].mean(),\n X[y == label, 0].mean(),\n X[y == label, 2].mean()-1.5, name,\n horizontalalignment='center',\n bbox=dict(alpha=.2, edgecolor='w', facecolor='w'))\n# Reorder the labels to have colors matching the cluster results\ny = np.choose(y, [1, 2, 0]).astype(np.float)\nax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y, edgecolor='k')\n\nax.w_xaxis.set_ticklabels([])\nax.w_yaxis.set_ticklabels([])\nax.w_zaxis.set_ticklabels([])\nax.set_xlabel('Feature 1')\nax.set_ylabel('Feature 2')\nax.set_zlabel('Feature 3')\nax.set_title('Ground Truth')\nax.dist = 12\nplt.show()\n\n",
"Clustering usig GMMs\nThe idea behind clustering using Gaussian Mixture Models is that each cluster will correspond to a different \nGaussian shaped component. When visualized in 2D these components can be represented as ellipses. When the covariance matrix is diagonal then these ellipses will be aligned with the axis. When they are spherical that means that the standard deviation of the features in each dimensions is considered equal. In the tied configuration the covariance matrices of each mixture component are tied to be equal. The most flexible case is when full covariance matrices are used.",
"colors = ['navy', 'turquoise', 'darkorange']\n\ndef make_ellipses(gmm, ax):\n for n, color in enumerate(colors):\n if gmm.covariance_type == 'full':\n covariances = gmm.covariances_[n][:2, :2]\n elif gmm.covariance_type == 'tied':\n covariances = gmm.covariances_[:2, :2]\n elif gmm.covariance_type == 'diag':\n covariances = np.diag(gmm.covariances_[n][:2])\n elif gmm.covariance_type == 'spherical':\n covariances = np.eye(gmm.means_.shape[1]) * gmm.covariances_[n]\n v, w = np.linalg.eigh(covariances)\n u = w[0] / np.linalg.norm(w[0])\n angle = np.arctan2(u[1], u[0])\n angle = 180 * angle / np.pi # convert to degrees\n v = 2. * np.sqrt(2.) * np.sqrt(v)\n ell = mpl.patches.Ellipse(gmm.means_[n, :2], v[0], v[1],\n 180 + angle, color=color)\n ell.set_clip_box(ax.bbox)\n ell.set_alpha(0.5)\n ax.add_artist(ell)\n\n \n(X, y) = datasets.load_svmlight_file(\"data/3genres.arff.libsvm\")\nX = X.toarray()\nX = MinMaxScaler().fit_transform(X)\ntarget_names = ['classical', 'jazz', 'metal']\n\n\n# Break up the dataset into non-overlapping training (75%) and testing\n# (25%) sets.\nskf = StratifiedKFold(n_splits=4)\n# Only take the first fold.\ntrain_index, test_index = next(iter(skf.split(X, y)))\n\nX_train = X[train_index]\ny_train = y[train_index]\nX_test = X[test_index]\ny_test = y[test_index]\n\nn_classes = len(np.unique(y_train))\n\n# Try GMMs using different types of covariances.\nestimators = dict((cov_type, GaussianMixture(n_components=n_classes,\n covariance_type=cov_type, max_iter=100, random_state=0))\n for cov_type in ['spherical', 'diag', 'tied', 'full'])\n\nn_estimators = len(estimators)\n\nfig = plt.figure(figsize=(6 * n_estimators // 2, 12))\nplt.subplots_adjust(bottom=.01, top=0.95, hspace=.15, wspace=.05, left=.01, right=.99)\n\nfor index, (name, estimator) in enumerate(estimators.items()):\n # Since we have class labels for the training data, we can\n # initialize the GMM parameters in a supervised manner.\n estimator.means_init = np.array([X_train[y_train == i].mean(axis=0)\n for i in range(n_classes)])\n\n # Train the other parameters using the EM algorithm.\n estimator.fit(X_train)\n\n h = plt.subplot(2, n_estimators // 2, index + 1)\n text= h.text(0,0, \"\", va=\"bottom\", ha=\"left\")\n make_ellipses(estimator, h)\n\n for n, color in enumerate(colors):\n data = X[y == n]\n # Plot the training data \n plt.scatter(data[:, 0], data[:, 1], s=0.8, color=color,\n label=target_names[n])\n # Plot the test data with crosses\n for n, color in enumerate(colors):\n data = X_test[y_test == n]\n plt.scatter(data[:, 0], data[:, 1], marker='x', color=color)\n\n y_train_pred = estimator.predict(X_train)\n train_accuracy = np.mean(y_train_pred.ravel() == y_train.ravel()) * 100\n plt.text(0.05, 0.9, 'Train accuracy: %.1f' % train_accuracy,\n transform=h.transAxes)\n\n y_test_pred = estimator.predict(X_test)\n test_accuracy = np.mean(y_test_pred.ravel() == y_test.ravel()) * 100\n plt.text(0.05, 0.8, 'Test accuracy: %.1f' % test_accuracy,\n transform=h.transAxes)\n\n plt.xticks(())\n plt.yticks(())\n plt.title(name)\n\nplt.legend(scatterpoints=1, loc='lower right', prop=dict(size=12))\nplt.show() \n\n ",
"Principal component analysis\nPCA is a tecnique for dimensionality reduction. In this example we take as input a similar audio feature data-set for the 3 genres we have been exploring but with 124 features per track instead. Using PCA we can reduce the dimensionality to 3 dimensions.",
"(X, y) = datasets.load_svmlight_file(\"data/3genres_full.arff.libsvm\")\nprint(X.shape)\nX = X.toarray()\nX = MinMaxScaler().fit_transform(X)\ntarget_names = ['classical', 'jazz', 'metal']\n\n# To getter a better understanding of interaction of the dimensions\n# plot the first three PCA dimensions\nfig = plt.figure(1, figsize=(8, 6))\nax = Axes3D(fig, elev=-150, azim=110)\nX_reduced = PCA(n_components=3).fit_transform(X)\nax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=y,\n cmap=plt.cm.Set1, edgecolor='k', s=40)\nax.set_title(\"First three PCA directions\")\nax.set_xlabel(\"1st eigenvector\")\nax.w_xaxis.set_ticklabels([])\nax.set_ylabel(\"2nd eigenvector\")\nax.w_yaxis.set_ticklabels([])\nax.set_zlabel(\"3rd eigenvector\")\nax.w_zaxis.set_ticklabels([])\nplt.show()\n",
"We can use the reduced PCA vectors as a way to visualize the results of clustering in the high-dimensional (124) original feature space.",
"y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X)\n# To getter a better understanding of interaction of the dimensions\n# plot the first three PCA dimensions\nfig = plt.figure(1, figsize=(8, 6))\nax = Axes3D(fig, elev=-150, azim=110)\nX_reduced = PCA(n_components=3).fit_transform(X)\nax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=y_pred,\n cmap=plt.cm.Set1, edgecolor='k', s=40)\nax.set_title(\"First three PCA directions\")\nax.set_xlabel(\"1st eigenvector\")\nax.w_xaxis.set_ticklabels([])\nax.set_ylabel(\"2nd eigenvector\")\nax.w_yaxis.set_ticklabels([])\nax.set_zlabel(\"3rd eigenvector\")\nax.w_zaxis.set_ticklabels([])\nplt.show()\n",
"We can compare the accuracy of clustering using the original ground truth labels based on the 124 features vs the accuracy using the 3-dimensional features reduced by PCA. As can be seen the performance is similar - the increase might be due to reduction of noise in the features.",
"y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X)\ntrain_accuracy = np.mean(y_pred.ravel() == y.ravel()) * 100\nprint(train_accuracy)\n\ny_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X_reduced)\ntrain_accuracy = np.mean(y_pred.ravel() == y.ravel()) * 100\nprint(train_accuracy)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
joaoandre/algorithms
|
intro-python-data-science/week2_assignment2.ipynb
|
mit
|
[
"You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nAssignment 2 - Pandas Introduction\nAll questions are weighted the same in this assignment.\nPart 1\nThe following code loads the olympics dataset (olympics.csv), which was derrived from the Wikipedia entry on All Time Olympic Games Medals, and does some basic data cleaning. Use this dataset to answer the questions below.",
"import pandas as pd\n\ndf = pd.read_csv('olympics.csv', index_col=0, skiprows=1)\n\nfor col in df.columns:\n if col[:2]=='01':\n df.rename(columns={col:'Gold'+col[4:]}, inplace=True)\n if col[:2]=='02':\n df.rename(columns={col:'Silver'+col[4:]}, inplace=True)\n if col[:2]=='03':\n df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)\n if col[:1]=='№':\n df.rename(columns={col:'#'+col[1:]}, inplace=True)\n\nnames_ids = df.index.str.split('\\s\\(') # split the index by '('\n\ndf.index = names_ids.str[0] # the [0] element is the country name (new index) \ndf['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)\n\ndf = df.drop('Totals')\ndf.head()",
"Question 0 (Example)\nWhat is the first country in df?\nThis function should return a Series.",
"# You should write your whole answer within the function provided. The autograder will call\n# this function and compare the return value against the correct solution value\ndef answer_zero():\n # This function returns the row for Afghanistan, which is a Series object. The assignment\n # question description will tell you the general format the autograder is expecting\n return df.iloc[0]\n\n# You can examine what your function returns by calling it in the cell. If you have questions\n# about the assignment formats, check out the discussion forums for any FAQs\nanswer_zero() ",
"Question 1\nWhich country has won the most gold medals in summer games?\nThis function should return a single string value.",
"def answer_one():\n return df['Gold'].idxmax()\nanswer_one()",
"Question 2\nWhich country had the biggest difference between their summer and winter gold medal counts?\nThis function should return a single string value.",
"def answer_two():\n return (df['Gold'] - df['Gold.1']).idxmax()\n\nanswer_two()",
"Question 3\nWhich country has the biggest difference between their summer and winter gold medal counts relative to their total gold medal count? Only include countries that have won at least 1 gold in both summer and winter.\nThis function should return a single string value.",
"def answer_three():\n tmp_df = df[(df['Gold.1'] > 0) & (df['Gold'] > 0)]\n return ((tmp_df['Gold'] - tmp_df['Gold.1']) / ((tmp_df['Gold'] + tmp_df['Gold.1']))).idxmax()\nanswer_three()\n",
"Question 4\nWrite a function to update the dataframe to include a new column called \"Points\" which is a weighted value where each gold medal counts for 3 points, silver medals for 2 points, and bronze mdeals for 1 point. The function should return only the column (a Series object) which you created.\nThis function should return a Series named Points of length 146",
"def answer_four():\n Points = 3*df['Gold.2'] + 2*df['Silver.2'] + 1*df['Bronze.2']\n\n return Points\n\nanswer_four()",
"Part 2\nFor the next set of questions, we will be using census data from the United States Census Bureau. Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. See this document for a description of the variable names.\nThe census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate.\nQuestion 5\nWhich state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)\nThis function should return a single string value.",
"census_df = pd.read_csv('census.csv')\ncensus_df.columns\n\ndef answer_five():\n return census_df.groupby(['STNAME']).size().idxmax()\nanswer_five()",
"Question 6\nOnly looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)?\nThis function should return a list of string values.",
"def answer_six():\n t = census_df[census_df['SUMLEV'] == 50]\n t = t.sort_values(by=['STNAME', 'CENSUS2010POP'], ascending=False).groupby(['STNAME']).head(3)\n return list(t.groupby(['STNAME']).sum().sort_values(by='CENSUS2010POP', ascending=False).head(3).index)\n\nanswer_six()",
"Question 7\nWhich county has had the largest change in population within the five year period (hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all five columns)?\nThis function should return a single string value.",
"def answer_seven():\n tmp_df = census_df[census_df['SUMLEV'] == 50]\n tmp_df['2011'] = tmp_df['POPESTIMATE2011'] - tmp_df['POPESTIMATE2010']\n tmp_df['2012'] = tmp_df['POPESTIMATE2012'] - tmp_df['POPESTIMATE2011']\n tmp_df['2013'] = tmp_df['POPESTIMATE2013'] - tmp_df['POPESTIMATE2012']\n tmp_df['2014'] = tmp_df['POPESTIMATE2014'] - tmp_df['POPESTIMATE2013']\n tmp_df['2015'] = tmp_df['POPESTIMATE2015'] - tmp_df['POPESTIMATE2014']\n tmp_df['max'] = tmp_df[['2011', '2012', '2013', '2014', '2015']].max(axis=1)\n return tmp_df.sort_values(by='max', ascending=False).iloc[0].CTYNAME\nanswer_seven()\n",
"Question 8\nIn this datafile, the United States is broken up into four regions using the \"REGION\" column. \nCreate a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.\nThis function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).",
"def answer_eight():\n result = census_df[(census_df[\"REGION\"].isin([1,2])) & (census_df['CTYNAME'].str.startswith('Washington')) & (census_df['POPESTIMATE2015'] > census_df['POPESTIMATE2014'])]\n return result[['STNAME', 'CTYNAME']]\nanswer_eight()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
agiovann/Constrained_NMF
|
demos/notebooks/demo_motion_correction.ipynb
|
gpl-2.0
|
[
"Motion Correction demo\nThis notebook demonstrates the various routines for motion correction in the CaImAn package. It demonstrates the usage of rigid and piecewise rigid motion correction on a two-photon calcium imaging dataset using the NoRMCorre algorithm [1], as well as several measures for quality assessment. This notebook should be interpreted more as a tutorial of the various methods. In practice, you can use either rigid or piecewise rigid motion correction depending on the motion of the dataset.\nThe dataset used in this notebook is provided by Sue Ann Koay and David Tank, Princeton University. This is a two photon calcium imaging dataset. For motion correction of one photon microendoscopic data the procedure is similar, with the difference, that the shifts are inferred on high pass spatially filtered version of the data. For more information check the demos for one photon data in the CaImAn package.\nMore information about the NoRMCorre algorithm can be found in the following paper:\n<a name=\"normcorre\"></a>[1] Pnevmatikakis, E.A., and Giovannucci A. (2017). NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data. Journal of Neuroscience Methods, 291:83-92 [paper]",
"from builtins import zip\nfrom builtins import str\nfrom builtins import map\nfrom builtins import range\nfrom past.utils import old_div\n\nimport cv2\nimport glob\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport psutil\nimport scipy\nfrom skimage.external.tifffile import TiffFile\nimport sys\nimport time\nimport logging\n\ntry:\n cv2.setNumThreads(0)\nexcept:\n pass\n\ntry:\n if __IPYTHON__:\n get_ipython().magic('load_ext autoreload')\n get_ipython().magic('autoreload 2')\nexcept NameError:\n pass\n\nlogging.basicConfig(format=\n \"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s\",\n # filename=\"/tmp/caiman.log\",\n level=logging.DEBUG)\n\nimport caiman as cm\nfrom caiman.motion_correction import MotionCorrect, tile_and_correct, motion_correction_piecewise\nfrom caiman.utils.utils import download_demo",
"First download the file and load it in memory to view it. Note that it is not necessary to load the file in memory in order to perform motion correction. Here we load it to inspect it. Viewing the file occurs with OpenCV and will a open a new window. To exit click on the video and press q.\nThe download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data make sure to pass the complete path to your file(s). Remember to pass the fname variable as a list.",
"fnames = 'Sue_2x_3000_40_-46.tif'\nfnames = [download_demo(fnames)] # the file will be downloaded if it doesn't already exist\nm_orig = cm.load_movie_chain(fnames)\ndownsample_ratio = .2 # motion can be perceived better when downsampling in time\nm_orig.resize(1, 1, downsample_ratio).play(q_max=99.5, fr=30, magnification=2) # play movie (press q to exit)",
"Now set some parameters that are used for motion correction.",
"max_shifts = (6, 6) # maximum allowed rigid shift in pixels (view the movie to get a sense of motion)\nstrides = (48, 48) # create a new patch every x pixels for pw-rigid correction\noverlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps)\nnum_frames_split = 100 # length in frames of each chunk of the movie (to be processed in parallel)\nmax_deviation_rigid = 3 # maximum deviation allowed for patch with respect to rigid shifts\npw_rigid = False # flag for performing rigid or piecewise rigid motion correction\nshifts_opencv = True # flag for correcting motion using bicubic interpolation (otherwise FFT interpolation is used)\nborder_nan = 'copy' # replicate values along the boundary (if True, fill in with NaN)",
"Note that here the data presented here has been downsampled in space by a factor of 2 to reduce the file size. As a result the spatial resolution is coarser here (around 2 microns per pixel). If we were operating at the original resolution, several of the parameters above, e.g., max_shifts, strides, overlaps, max_deviation_rigid, could have been larger by a factor of 2.\nMotion correction is performed in parallel on chunks taken across times.\nWe first a cluster. The default backend mode for parallel processing is through the multiprocessing package. To make sure that this package is viewable from everywhere before starting the notebook these commands need to be executed from the terminal (in Linux and Windows):\nbash\n export MKL_NUM_THREADS=1\n export OPENBLAS_NUM_THREADS=1",
"#%% start the cluster (if a cluster already exists terminate it)\nif 'dview' in locals():\n cm.stop_server(dview=dview)\nc, dview, n_processes = cm.cluster.setup_cluster(\n backend='local', n_processes=None, single_thread=False)",
"We first need to create a motion correction object with the parameters specified above. We pass directly its input arguments in the constructor below. Alternatively, we can use the params object and construct it by passing the arguments of params.motion. See the notebook demo_pipeline.ipynb for an example of this usage.",
"# create a motion correction object\nmc = MotionCorrect(fnames, dview=dview, max_shifts=max_shifts,\n strides=strides, overlaps=overlaps,\n max_deviation_rigid=max_deviation_rigid, \n shifts_opencv=shifts_opencv, nonneg_movie=True,\n border_nan=border_nan)",
"<h1> Rigid motion correction</h1>\n<p> The original file exhibits a lot of motion. In order to correct for it we are first trying a simple rigid motion correction algorithm. This has already been selected by setting the parameter `pw_rigid=False` during the construction of the `MotionCorrect` object. The algorithm first creates a template by averaging frames from the video. It then tries to match each frame to this template. In addition the template will get updated during the matching process, resulting in a single precise template that is used for subpixel registration. </p>\n<img src=\"../../docs/img/rigidcorrection.png\" />",
"%%capture\n# correct for rigid motion correction and save the file (in memory mapped form)\nmc.motion_correct(save_movie=True)",
"The motion corrected file is automatically save as memory mapped file in the location given by mc.mmap_file. The rigid shifts are also save in mc.shifts_rig.",
"# load motion corrected movie\nm_rig = cm.load(mc.mmap_file)\nbord_px_rig = np.ceil(np.max(mc.shifts_rig)).astype(np.int)\n#%% visualize templates\nplt.figure(figsize = (20,10))\nplt.imshow(mc.total_template_rig, cmap = 'gray')\n\n#%% inspect movie\nm_rig.resize(1, 1, downsample_ratio).play(\n q_max=99.5, fr=30, magnification=2, bord_px = 0*bord_px_rig) # press q to exit",
"plot the shifts computed by rigid registration",
"#%% plot rigid shifts\nplt.close()\nplt.figure(figsize = (20,10))\nplt.plot(mc.shifts_rig)\nplt.legend(['x shifts','y shifts'])\nplt.xlabel('frames')\nplt.ylabel('pixels')",
"Piecewise rigid registration\nWhile rigid registration corrected for a lot of the movement, there is still non-uniform motion present in the registered file. To correct for that we can use piece-wise rigid registration directly in the original file by setting mc.pw_rigid=True. As before the registered file is saved in a memory mapped format in the location given by mc.mmap_file.",
"%%capture\n#%% motion correct piecewise rigid\nmc.pw_rigid = True # turn the flag to True for pw-rigid motion correction\nmc.template = mc.mmap_file # use the template obtained before to save in computation (optional)\n\nmc.motion_correct(save_movie=True, template=mc.total_template_rig)\nm_els = cm.load(mc.fname_tot_els)\nm_els.resize(1, 1, downsample_ratio).play(\n q_max=99.5, fr=30, magnification=2,bord_px = bord_px_rig)",
"Now concatenate all the movies (raw, rigid, and pw-rigid) for inspection",
"cm.concatenate([m_orig.resize(1, 1, downsample_ratio) - mc.min_mov*mc.nonneg_movie,\n m_rig.resize(1, 1, downsample_ratio), m_els.resize(\n 1, 1, downsample_ratio)], axis=2).play(fr=60, q_max=99.5, magnification=2, bord_px=bord_px_rig)",
"From the movie we can see that pw-rigid registration corrected for the non uniform motion of the data. This was done by estimating different displacement vectors for the different patches in the FOV. This can be visualized by plotting all the computed shifts were a dispersion in the shifts in the y direction is apparent. In this case, the shifts along the two axes are stored in mc.x_shifts_els and mc.y_shifts_els, respectively.",
"#%% visualize elastic shifts\nplt.close()\nplt.figure(figsize = (20,10))\nplt.subplot(2, 1, 1)\nplt.plot(mc.x_shifts_els)\nplt.ylabel('x shifts (pixels)')\nplt.subplot(2, 1, 2)\nplt.plot(mc.y_shifts_els)\nplt.ylabel('y_shifts (pixels)')\nplt.xlabel('frames')\n#%% compute borders to exclude\nbord_px_els = np.ceil(np.maximum(np.max(np.abs(mc.x_shifts_els)),\n np.max(np.abs(mc.y_shifts_els)))).astype(np.int)",
"The improvement in performance can also be seen by a more crisp summary statistic image. Below we plot the correlation images for the three datasets.",
"plt.figure(figsize = (20,10))\nplt.subplot(1,3,1); plt.imshow(m_orig.local_correlations(eight_neighbours=True, swap_dim=False))\nplt.subplot(1,3,2); plt.imshow(m_rig.local_correlations(eight_neighbours=True, swap_dim=False))\nplt.subplot(1,3,3); plt.imshow(m_els.local_correlations(eight_neighbours=True, swap_dim=False))\n\ncm.stop_server(dview=dview) # stop the server",
"Quality assessment\nApart from inspection, the performance of the registration methods can be quantified using several measures. Below we compute measures such as correlation of each frame with mean, crispness of summary image, and residual optical flow for all three cases. For more info see [1]. Note that computation of the residual optical flow can be computationally intensive.",
"%%capture\n#% compute metrics for the results (TAKES TIME!!)\nfinal_size = np.subtract(mc.total_template_els.shape, 2 * bord_px_els) # remove pixels in the boundaries\nwinsize = 100\nswap_dim = False\nresize_fact_flow = .2 # downsample for computing ROF\n\ntmpl_rig, correlations_orig, flows_orig, norms_orig, crispness_orig = cm.motion_correction.compute_metrics_motion_correction(\n fnames[0], final_size[0], final_size[1], swap_dim, winsize=winsize, play_flow=False, resize_fact_flow=resize_fact_flow)\n\ntmpl_rig, correlations_rig, flows_rig, norms_rig, crispness_rig = cm.motion_correction.compute_metrics_motion_correction(\n mc.fname_tot_rig[0], final_size[0], final_size[1],\n swap_dim, winsize=winsize, play_flow=False, resize_fact_flow=resize_fact_flow)\n\ntmpl_els, correlations_els, flows_els, norms_els, crispness_els = cm.motion_correction.compute_metrics_motion_correction(\n mc.fname_tot_els[0], final_size[0], final_size[1],\n swap_dim, winsize=winsize, play_flow=False, resize_fact_flow=resize_fact_flow)",
"Plot correlation with mean frame for each dataset",
"plt.figure(figsize = (20,10))\nplt.subplot(211); plt.plot(correlations_orig); plt.plot(correlations_rig); plt.plot(correlations_els)\nplt.legend(['Original','Rigid','PW-Rigid'])\nplt.subplot(223); plt.scatter(correlations_orig, correlations_rig); plt.xlabel('Original'); \nplt.ylabel('Rigid'); plt.plot([0.3,0.7],[0.3,0.7],'r--')\naxes = plt.gca(); axes.set_xlim([0.3,0.7]); axes.set_ylim([0.3,0.7]); plt.axis('square');\nplt.subplot(224); plt.scatter(correlations_rig, correlations_els); plt.xlabel('Rigid'); \nplt.ylabel('PW-Rigid'); plt.plot([0.3,0.7],[0.3,0.7],'r--')\naxes = plt.gca(); axes.set_xlim([0.3,0.7]); axes.set_ylim([0.3,0.7]); plt.axis('square');\n\n\n# print crispness values\nprint('Crispness original: '+ str(int(crispness_orig)))\nprint('Crispness rigid: '+ str(int(crispness_rig)))\nprint('Crispness elastic: '+ str(int(crispness_els)))\n\n#%% plot the results of Residual Optical Flow\nfls = [mc.fname_tot_els[0][:-4] + '_metrics.npz', mc.fname_tot_rig[0][:-4] +\n '_metrics.npz', mc.fname[0][:-4] + '_metrics.npz']\n\nplt.figure(figsize = (20,10))\nfor cnt, fl, metr in zip(range(len(fls)),fls,['pw_rigid','rigid','raw']):\n with np.load(fl) as ld:\n print(ld.keys())\n print(fl)\n print(str(np.mean(ld['norms'])) + '+/-' + str(np.std(ld['norms'])) +\n ' ; ' + str(ld['smoothness']) + ' ; ' + str(ld['smoothness_corr']))\n \n plt.subplot(len(fls), 3, 1 + 3 * cnt)\n plt.ylabel(metr)\n try:\n mean_img = np.mean(\n cm.load(fl[:-12] + 'mmap'), 0)[12:-12, 12:-12]\n except:\n try:\n mean_img = np.mean(\n cm.load(fl[:-12] + '.tif'), 0)[12:-12, 12:-12]\n except:\n mean_img = np.mean(\n cm.load(fl[:-12] + 'hdf5'), 0)[12:-12, 12:-12]\n \n lq, hq = np.nanpercentile(mean_img, [.5, 99.5])\n plt.imshow(mean_img, vmin=lq, vmax=hq)\n plt.title('Mean')\n plt.subplot(len(fls), 3, 3 * cnt + 2)\n plt.imshow(ld['img_corr'], vmin=0, vmax=.35)\n plt.title('Corr image')\n plt.subplot(len(fls), 3, 3 * cnt + 3)\n #plt.plot(ld['norms'])\n #plt.xlabel('frame')\n #plt.ylabel('norm opt flow')\n #plt.subplot(len(fls), 3, 3 * cnt + 3)\n flows = ld['flows']\n plt.imshow(np.mean(\n np.sqrt(flows[:, :, :, 0]**2 + flows[:, :, :, 1]**2), 0), vmin=0, vmax=0.3)\n plt.colorbar()\n plt.title('Mean optical flow') "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PhilipReschke/TensorFlow-Code-Examples
|
Multilayer Perceptron Example.ipynb
|
gpl-3.0
|
[
"TensorFlow - Multilayer Perceptron Example\nAbout this notebook\nAuthor: Philip Reschke (http://www.philipreschke.com)\nProject: https://github.com/PhilipReschke/TensorFlow-Code-Examples\nI will build a MLP network using two hidden layers to which I apply the ReLu activation function. I will be using a rather large network in terms of nodes to give the network a chance to learn as much as it can but at the same time apply dropout to decrease the chance of overfitting. The result should be about 95+% precition accuracy, which is decent given how simple the network is.\nRequirements\n\nPython 3.5\nTensorFlow 1.1\n\nImport dependencies",
"import tensorflow as tf",
"Import data\nI use the MNIST database of handwritten digits found @ http://yann.lecun.com/exdb/mnist/.",
"from tensorflow.examples.tutorials.mnist import input_data\nmnist_data = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)",
"Define parameters\nIn this section, I am defining:\n\nrelevant hypter parameters (optimizer and model parameters),\ndefine input placeholders, and\ndefine weights and biases",
"# Hyper parameters\ntraining_epochs = 100\nlearning_rate = 0.01\nbatch_size = 256\nprint_loss_for_each_epoch = 10\ntest_validation_size = 512 # validation images to use during training - solely for printing purposes\n\n# Network parameters\nn_input = 784 # MNIST length of 28 by 28 image when stored as a column vector\nn_hidden_layer_1 = 1024 # features in the 1st hidden layer\nn_hidden_layer_2 = 1024 # features in the 2nd hidden layer\nn_classes = 10 # total label classes (0-9 digits)\ndropout_keep_rate = 0.75 # only 25% of the hidden outputs are passed on\n\n# Graph input placeholders\nx = tf.placeholder(tf.float32, [None, n_input])\ny = tf.placeholder(tf.float32, [None, n_classes])\nkeep_prob = tf.placeholder(tf.float32)\n\n# Define weights and biases\nweights = {'hl_1': tf.Variable(tf.truncated_normal([n_input,n_hidden_layer_1])),\n 'hl_2': tf.Variable(tf.truncated_normal([n_hidden_layer_1,n_hidden_layer_2])),\n 'output': tf.Variable(tf.truncated_normal([n_hidden_layer_2,n_classes]))}\n\nbiases = {'hl_1': tf.Variable(0.01 * tf.truncated_normal([n_hidden_layer_1])),\n 'hl_2': tf.Variable(0.01 * tf.truncated_normal([n_hidden_layer_2])),\n 'output': tf.Variable(0.01 * tf.truncated_normal([n_classes]))}",
"Create TF Graph\nI build the TensorFlow graph using two hidden layers with dropout to prevent overfitting. I am using Cross Entropy to classify my logits and finally an Adam Optimizer to reduce the training error. I found that the Adam Optimizer converges faster than Stocastic Gradient Descent.",
"def multilayer_perceptron_network(x, weights, biases):\n # Hidden layer 1 with ReLu\n hidden_layer_1 = tf.add(tf.matmul(x, weights['hl_1']), biases['hl_1'])\n hidden_layer_1 = tf.nn.relu(hidden_layer_1)\n hidden_layer_1 = tf.nn.dropout(hidden_layer_1, keep_prob=keep_prob)\n\n # Hidden layer 2 with ReLu\n hidden_layer_2 = tf.add(tf.matmul(hidden_layer_1, weights['hl_2']), biases['hl_2'])\n hidden_layer_2 = tf.nn.relu(hidden_layer_2)\n hidden_layer_2 = tf.nn.dropout(hidden_layer_2, keep_prob=keep_prob)\n\n # Output layer with linear activation\n output_layer = tf.add(tf.matmul(hidden_layer_2, weights['output']), biases['output'])\n return output_layer\n\n# Construct model\nlogits = multilayer_perceptron_network(x, weights, biases)\n\n# Define cost and optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss=cost)\n\n# Define accuracy\ncorrect_prediction = tf.equal(tf.arg_max(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))",
"Launch TF Graph\nFinally, a simple output to calculate the train, validation and test accuracy. Not bad for a simple MLP net.",
"# Init variables\ninit = tf.global_variables_initializer()\n\n# Run Graph\nwith tf.Session() as sess:\n sess.run(init)\n\n # Training cycle\n for epoch in range(training_epochs):\n\n for batch in range(mnist_data.train.num_examples//batch_size):\n\n # Get x and y values for the given batch\n batch_x, batch_y = mnist_data.train.next_batch(batch_size)\n\n # Compute graph with respect to 'optimizer' and 'cost'\n _, loss, training_accuracy = sess.run([optimizer, cost, accuracy], feed_dict={x: batch_x,\n y: batch_y,\n keep_prob: dropout_keep_rate})\n\n # Compute graph with respect to validation data\n validation_accuracy = sess.run(accuracy, feed_dict={x: mnist_data.validation.images[:test_validation_size],\n y: mnist_data.validation.labels[:test_validation_size],\n keep_prob: 1.})\n\n # Display logs per epoch step\n if epoch % print_loss_for_each_epoch == 0:\n print('Epoch {:>2}, Batches {:>3}, Loss: {:>10.4f}, Train Accuracy: {:.4f}, Val Accuracy: {:.4f}'.format(\n epoch + 1, # epoch starts at 0\n batch + 1, # batch starts at 0\n loss,\n training_accuracy,\n validation_accuracy))\n\n print('Optimization Finished!')\n\n # Testing cycle\n test_accuracy = sess.run(accuracy, feed_dict={x: mnist_data.test.images,\n y: mnist_data.test.labels,\n keep_prob: 1.})\n print('Test accuracy: {:3f}'.format(test_accuracy))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ThyrixYang/LearningNotes
|
MOOC/stanford_cnn_cs231n/assignment2/Dropout.ipynb
|
gpl-3.0
|
[
"Dropout\nDropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.\n[1] Geoffrey E. Hinton et al, \"Improving neural networks by preventing co-adaptation of feature detectors\", arXiv 2012",
"# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)",
"Dropout forward pass\nIn the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.\nOnce you have done so, run the cell below to test your implementation.",
"np.random.seed(231)\nx = np.random.randn(500, 500) + 10\n\nfor p in [0.3, 0.6, 0.75]:\n out, _ = dropout_forward(x, {'mode': 'train', 'p': p})\n out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})\n\n print('Running tests with p = ', p)\n print('Mean of input: ', x.mean())\n print('Mean of train-time output: ', out.mean())\n print('Mean of test-time output: ', out_test.mean())\n print('Fraction of train-time output set to zero: ', (out == 0).mean())\n print('Fraction of test-time output set to zero: ', (out_test == 0).mean())\n print()",
"Dropout backward pass\nIn the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.",
"np.random.seed(231)\nx = np.random.randn(10, 10) + 10\ndout = np.random.randn(*x.shape)\n\ndropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}\nout, cache = dropout_forward(x, dropout_param)\ndx = dropout_backward(dout, cache)\ndx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)\n\nprint('dx relative error: ', rel_error(dx, dx_num))",
"Fully-connected nets with Dropout\nIn the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specificially, if the constructor the the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.",
"np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor dropout in [0, 0.25, 0.5]:\n print('Running check with dropout = ', dropout)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n weight_scale=5e-2, dtype=np.float64,\n dropout=dropout, seed=123)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n print()",
"Regularization experiment\nAs an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.",
"# Train two identical nets, one with dropout and one without\nnp.random.seed(231)\nnum_train = 500\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\ndropout_choices = [0, 0.75]\nfor dropout in dropout_choices:\n model = FullyConnectedNet([500], dropout=dropout)\n print(dropout)\n\n solver = Solver(model, small_data,\n num_epochs=25, batch_size=100,\n update_rule='adam',\n optim_config={\n 'learning_rate': 5e-4,\n },\n verbose=True, print_every=100)\n solver.train()\n solvers[dropout] = solver\n\n# Plot train and validation accuracies of the two models\n\ntrain_accs = []\nval_accs = []\nfor dropout in dropout_choices:\n solver = solvers[dropout]\n train_accs.append(solver.train_acc_history[-1])\n val_accs.append(solver.val_acc_history[-1])\n\nplt.subplot(3, 1, 1)\nfor dropout in dropout_choices:\n plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)\nplt.title('Train accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend(ncol=2, loc='lower right')\n \nplt.subplot(3, 1, 2)\nfor dropout in dropout_choices:\n plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)\nplt.title('Val accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend(ncol=2, loc='lower right')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Question\nExplain what you see in this experiment. What does it suggest about dropout?\nAnswer"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
quantopian/research_public
|
notebooks/lectures/Introduction_to_Research/notebook.ipynb
|
apache-2.0
|
[
"Introduction to the Research Environment\nThe research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here.\nCode Cells vs. Text Cells\nAs you can see, each cell can be either code or text. To select between them, choose from the 'Cell Type' dropdown menu on the top left.\nExecuting a Command\nA code cell will be evaluated when you press play, or when you press the shortcut, shift-enter. Evaluating a cell evaluates each line of code in sequence, and prints the results of the last line below the cell.",
"2 + 2",
"Sometimes there is no result to be printed, as is the case with assignment.",
"X = 2",
"Remember that only the result from the last line is printed.",
"2 + 2\n3 + 3",
"However, you can print whichever lines you want using the print statement.",
"print 2 + 2\n3 + 3",
"Knowing When a Cell is Running\nWhile a cell is running, a [*] will display on the left. When a cell has yet to be executed, [ ] will display. When it has been run, a number will display indicating the order in which it was run during the execution of the notebook [5]. Try on this cell and note it happening.",
"#Take some time to run something\nc = 0\nfor i in range(10000000):\n c = c + i\nc",
"Importing Libraries\nThe vast majority of the time, you'll want to use functions from pre-built libraries. You can't import every library on Quantopian due to security issues, but you can import most of the common scientific ones. Here I import numpy and pandas, the two most common and useful libraries in quant finance. I recommend copying this import statement to every new notebook.\nNotice that you can rename libraries to whatever you want after importing. The as statement allows this. Here we use np and pd as aliases for numpy and pandas. This is a very common aliasing and will be found in most code snippets around the web. The point behind this is to allow you to type fewer characters when you are frequently accessing these libraries.",
"import numpy as np\nimport pandas as pd\n\n# This is a plotting library for pretty pictures.\nimport matplotlib.pyplot as plt",
"Tab Autocomplete\nPressing tab will give you a list of IPython's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, IPython will fill that in for you. Try pressing tab very frequently, it will seldom fill in anything you don't want, as if there is ambiguity a list will be shown. This is a great way to see what functions are available in a library.\nTry placing your cursor after the . and pressing tab.",
"np.random.",
"Getting Documentation Help\nPlacing a question mark after a function and executing that line of code will give you the documentation IPython has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.",
"np.random.normal?",
"Sampling\nWe'll sample some random data using a function from numpy.",
"# Sample 100 points with a mean of 0 and an std of 1. This is a standard normal distribution.\nX = np.random.normal(0, 1, 100)",
"Plotting\nWe can use the plotting library we imported as follows.",
"plt.plot(X)",
"Squelching Line Output\nYou might have noticed the annoying line of the form [<matplotlib.lines.Line2D at 0x7f72fdbc1710>] before the plots. This is because the .plot function actually produces output. Sometimes we wish not to display output, we can accomplish this with the semi-colon as follows.",
"plt.plot(X);",
"Adding Axis Labels\nNo self-respecting quant leaves a graph without labeled axes. Here are some commands to help with that.",
"X = np.random.normal(0, 1, 100)\nX2 = np.random.normal(0, 1, 100)\n\nplt.plot(X);\nplt.plot(X2);\nplt.xlabel('Time') # The data we generated is unitless, but don't forget units in general.\nplt.ylabel('Returns')\nplt.legend(['X', 'X2']);",
"Generating Statistics\nLet's use numpy to take some simple statistics.",
"np.mean(X)\n\nnp.std(X)",
"Getting Real Pricing Data\nRandomly sampled data can be great for testing ideas, but let's get some real data. We can use get_pricing to do that. You can use the ? syntax as discussed above to get more information on get_pricing's arguments.",
"data = get_pricing('MSFT', start_date='2012-1-1', end_date='2015-6-1')",
"Our data is now a dataframe. You can see the datetime index and the colums with different pricing data.",
"data",
"This is a pandas dataframe, so we can index in to just get price like this. For more info on pandas, please click here.",
"X = data['price']",
"Because there is now also date information in our data, we provide two series to .plot. X.index gives us the datetime index, and X.values gives us the pricing values. These are used as the X and Y coordinates to make a graph.",
"plt.plot(X.index, X.values)\nplt.ylabel('Price')\nplt.legend(['MSFT']);",
"We can get statistics again on real data.",
"np.mean(X)\n\nnp.std(X)",
"Getting Returns from Prices\nWe can use the pct_change function to get returns. Notice how we drop the first element after doing this, as it will be NaN (nothing -> something results in a NaN percent change).",
"R = X.pct_change()[1:]",
"We can plot the returns distribution as a histogram.",
"plt.hist(R, bins=20)\nplt.xlabel('Return')\nplt.ylabel('Frequency')\nplt.legend(['MSFT Returns']);",
"Get statistics again.",
"np.mean(R)\n\nnp.std(R)",
"Now let's go backwards and generate data out of a normal distribution using the statistics we estimated from Microsoft's returns. We'll see that we have good reason to suspect Microsoft's returns may not be normal, as the resulting normal distribution looks far different.",
"plt.hist(np.random.normal(np.mean(R), np.std(R), 10000), bins=20)\nplt.xlabel('Return')\nplt.ylabel('Frequency')\nplt.legend(['Normally Distributed Returns']);",
"Generating a Moving Average\npandas has some nice tools to allow us to generate rolling statistics. Here's an example. Notice how there's no moving average for the first 60 days, as we don't have 60 days of data on which to generate the statistic.",
"# Take the average of the last 60 days at each timepoint.\nMAVG = pd.rolling_mean(X, window=60)\nplt.plot(X.index, X.values)\nplt.plot(MAVG.index, MAVG.values)\nplt.ylabel('Price')\nplt.legend(['MSFT', '60-day MAVG']);",
"This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rcrehuet/Python_for_Scientists_2017
|
notebooks/6_0_Numpy_exercices.ipynb
|
gpl-3.0
|
[
"Numpy exercices",
"import numpy as np",
"Working with arrays\nCreate the following array (without typing the elements!):\n array([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9]])\n\nExtending arrays\nBecause arrays do no have the append method np.concatenate, np.vstack and np.hstack functions are very useful. \n\nCreate: array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])\nadd the raw [10, 20, 30]\nadd [100, 200, 300, 400] as a column to the new a in 2.",
"a = np.arange(9).reshape(3,3)\n\na.mean?",
"Saving and retrieving arrays\nLet's create a couple of large arrays:\na=random.random([1000, 1000])\nb=arange([1000])\n\nSave them to disk with save with savez and with savetxt. Delete a and b (del a,b) and recover them from disk. Check the size of the saved text file. Now append a .gz to the filename when saving with savetxt and see what happens. How do you load this file?\nRemoving negative numbers with Fancy Indexing\nWe have an array that has small negative numbers and we want to remove them, converting them to zero. First, we generate a random array.",
"np.random.seed(3)\na = np.random.random((4,3))-.2\n\na",
"Find the indices where this array is negative:",
"index = #Finish",
"Convert those indices into a zero:\nWe can also generte the indices in the same expression where we set a",
"np.random.seed(3) #This way we can reproduce the same \"random\" array as before.\na = np.random.random((4,3))-.2\na[a<0] = 0\na",
"We could have also used the where method. But this method does not change a in place.",
"np.random.seed(3) #This way we can reproduce the same \"random\" array as before.\na = np.random.random((4,3))-.2\nnp.where(a<0,0, a) #This does not change a",
"More on slicing\nNow we have an array and we need to substract the minimum value of each of the columns.",
"a = np.arange(12).reshape((4,3))\na",
"we can use the 'min method (or numpy function)",
"amin = a.min(axis=0)\namin",
"Finally we substract this from a:\nImagine we want to do the same for rows:",
"a = np.arange(12).reshape((4,3))\na\n\namin = a.min(axis=1)\n\na-amin",
"How can we solve this? We could transpose the arrays to get the right dimensions for broadcasting, or extend the dimensions of amin get again the right dimensions for broadcastinc",
"#Transposing (use .T method):\n\n\n\n#Extending axis\n\n\n#Extending axis\n",
"dot product and Outer product\nThe dot product is so common that it is implemented in numpy. But if it was not there, could you code it?",
"a = np.arange(5)\nb = np.arange(5)+10\n\nresult = #Finish\n\nassert result == a.dot(b)",
"Let's check which is faster for large vectors:",
"a = np.random.random(10000)\nb = np.random.random(10000)\n\n%timeit a.dot(b)\n\n%timeit #Your code",
"Try writing the code in a for loop (if you haven't) and check its performance with the %timeit magic function.\nThe dot vector can be represented as $\\vec x^T \\vec y$ if vectors are matrices of dimensions $(N, 1)$. As such we could also mupliply them this way $\\vec x \\vec y^T$. This returns a matrix of dimensions $(N, N)$, where each element $i, j$ of this matrix is $x_i y_j$. Can you code that?",
"a = np.arange(5)\nb = np.arange(5)+10\na,b\n\nm=np.random.random((10,10))-np.random.normal(scale=0.2, size=(10,10))\n\nnp.random.normal?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Vvkmnn/books
|
ThinkBayes/13_Simulation.ipynb
|
gpl-3.0
|
[
"# format the book\n%matplotlib inline\nimport sys\nfrom __future__ import division, print_function\nimport sys\nsys.path.insert(0,'../code')\nimport book_format\nbook_format.load_style('../code')",
"Simulation\nIn this chapter I describe my solution to a problem posed by a patient\nwith a kidney tumor. I think the problem is important and relevant to\npatients with these tumors and doctors treating them.\nAnd I think the solution is interesting because, although it is a\nBayesian approach to the problem, the use of Bayes’s theorem is\nimplicit. I present the solution and my code; at the end of the chapter\nI will explain the Bayesian part.\nIf you want more technical detail than I present here, you can read my\npaper on this work at http://arxiv.org/abs/1203.6890.\nThe Kidney Tumor problem\nI am a frequent reader and occasional contributor to the online\nstatistics forum at http://reddit.com/r/statistics. In November 2011,\nI read the following message:\n\n“I have Stage IV Kidney Cancer and am trying to determine if the\ncancer formed before I retired from the military. ... Given the dates\nof retirement and detection is it possible to determine when there was\na 50/50 chance that I developed the disease? Is it possible to\ndetermine the probability on the retirement date? My tumor was 15.5 cm\nx 15 cm at detection. Grade II.”\n\nI contacted the author of the message and got more information; I\nlearned that veterans get different benefits if it is “more likely than\nnot” that a tumor formed while they were in military service (among\nother considerations).\nBecause renal tumors grow slowly, and often do not cause symptoms, they\nare sometimes left untreated. As a result, doctors can observe the rate\nof growth for untreated tumors by comparing scans from the same patient\nat different times. Several papers have reported these growth rates.\nI collected data from a paper by Zhang et al[^3]. I contacted the\nauthors to see if I could get raw data, but they refused on grounds of\nmedical privacy. Nevertheless, I was able to extract the data I needed\nby printing one of their graphs and measuring it with a ruler.\n\n[fig.kidney2]\nThey report growth rates in reciprocal doubling time (RDT), which is in\nunits of doublings per year. So a tumor with $RDT=1$ doubles in volume\neach year; with $RDT=2$ it quadruples in the same time, and with\n$RDT=-1$, it halves. Figure [fig.kidney2] shows the distribution of RDT\nfor 53 patients.\nThe squares are the data points from the paper; the line is a model I\nfit to the data. The positive tail fits an exponential distribution\nwell, so I used a mixture of two exponentials.\nA simple model\nIt is usually a good idea to start with a simple model before trying\nsomething more challenging. Sometimes the simple model is sufficient for\nthe problem at hand, and if not, you can use it to validate the more\ncomplex model.\nFor my simple model, I assume that tumors grow with a constant doubling\ntime, and that they are three-dimensional in the sense that if the\nmaximum linear measurement doubles, the volume is multiplied by eight.\nI learned from my correspondent that the time between his discharge from\nthe military and his diagnosis was 3291 days (about 9 years). So my\nfirst calculation was, “If this tumor grew at the median rate, how big\nwould it have been at the date of discharge?”\nThe median volume doubling time reported by Zhang et al is 811 days.\nAssuming 3-dimensional geometry, the doubling time for a linear measure\nis three times longer.",
"# time between discharge and diagnosis, in days \ninterval = 3291.0\n\n# doubling time in linear measure is doubling time in volume * 3\ndt = 811.0 * 3\n\n# number of doublings since discharge\ndoublings = interval / dt\n\n# how big was the tumor at time of discharge (diameter in cm)\nd1 = 15.5\nd0 = d1 / 2.0 ** doublings",
"You can download the code in this chapter from\nhttp://thinkbayes.com/kidney.py. For more information see\nSection [download].\nThe result, d0, is about 6 cm. So if this tumor formed\nafter the date of discharge, it must have grown substantially faster\nthan the median rate. Therefore I concluded that it is “more likely than\nnot” that this tumor formed before the date of discharge.\nIn addition, I computed the growth rate that would be implied if this\ntumor had formed after the date of discharge. If we assume an initial\nsize of 0.1 cm, we can compute the number of doublings to get to a final\nsize of 15.5 cm:",
"from math import log2\n\n# assume an initial linear measure of 0.1 cm\nd0 = 0.1\nd1 = 15.5\n\n# how many doublings would it take to get from d0 to d1\ndoublings = log2(d1 / d0)\n\n# what linear doubling time does that imply?\ndt = interval / doublings\n\n# compute the volumetric doubling time and RDT\nvdt = dt / 3\nrdt = 365 / vdt\nprint(vdt, rdt)",
"dt is linear doubling time, so vdt is\nvolumetric doubling time, and rdt is reciprocal doubling\ntime.\nThe number of doublings, in linear measure, is 7.3, which implies an RDT\nof 2.4. In the data from Zhang et al, only 20% of tumors grew this fast\nduring a period of observation. So again, I concluded that is “more\nlikely than not” that the tumor formed prior to the date of discharge.\nThese calculations are sufficient to answer the question as posed, and\non behalf of my correspondent, I wrote a letter explaining my\nconclusions to the Veterans’ Benefit Administration.\nLater I told a friend, who is an oncologist, about my results. He was\nsurprised by the growth rates observed by Zhang et al, and by what they\nimply about the ages of these tumors. He suggested that the results\nmight be interesting to researchers and doctors.\nBut in order to make them useful, I wanted a more general model of the\nrelationship between age and size.\nA more general model\nGiven the size of a tumor at time of diagnosis, it would be most useful\nto know the probability that the tumor formed before any given date; in\nother words, the distribution of ages.\nTo find it, I run simulations of tumor growth to get the distribution of\nsize conditioned on age. Then we can use a Bayesian approach to get the\ndistribution of age conditioned on size.\nThe simulation starts with a small tumor and runs these steps:\n\n\nChoose a growth rate from the distribution of RDT.\n\n\nCompute the size of the tumor at the end of an interval.\n\n\nRecord the size of the tumor at each interval.\n\n\nRepeat until the tumor exceeds the maximum relevant size.\n\n\nFor the initial size I chose 0.3 cm, because carcinomas smaller than\nthat are less likely to be invasive and less likely to have the blood\nsupply needed for rapid growth (see\nhttp://en.wikipedia.org/wiki/Carcinoma_in_situ).\nI chose an interval of 245 days (about 8 months) because that is the\nmedian time between measurements in the data source.\nFor the maximum size I chose 20 cm. In the data source, the range of\nobserved sizes is 1.0 to 12.0 cm, so we are extrapolating beyond the\nobserved range at each end, but not by far, and not in a way likely to\nhave a strong effect on the results.\n\n[fig.kidney4]\nThe simulation is based on one big simplification: the growth rate is\nchosen independently during each interval, so it does not depend on age,\nsize, or growth rate during previous intervals.\nIn Section [serial] I review these assumptions and consider more\ndetailed models. But first let’s look at some examples.\nFigure [fig.kidney4] shows the size of simulated tumors as a function of\nage. The dashed line at 10 cm shows the range of ages for tumors at that\nsize: the fastest-growing tumor gets there in 8 years; the slowest takes\nmore than 35.\nI am presenting results in terms of linear measurements, but the\ncalculations are in terms of volume. To convert from one to the other,\nagain, I use the volume of a sphere with the given diameter.\nImplementation\nHere is the kernel of the simulation:",
"from kidney import Volume\n\ndef MakeSequence(rdt_seq, v0=0.01, interval=0.67, vmax=Volume(20.0)):\n seq = v0,\n age = 0\n\n for rdt in rdt_seq:\n age += interval\n final, seq = ExtendSequence(age, seq, rdt, interval)\n if final > vmax:\n break\n\n return seq",
"rdt_seq is an iterator that yields random values from the CDF of\ngrowth rate. v0 is the initial volume in mL.\ninterval is the time step in years. vmax is\nthe final volume corresponding to a linear measurement of 20 cm.\nVolume converts from linear measurement in cm to volume in\nmL, based on the simplification that the tumor is a sphere:\ndef Volume(diameter, factor=4*math.pi/3):\n return factor * (diameter/2.0)**3\n\nExtendSequence computes the volume of the tumor at the end\nof the interval.",
"def ExtendSequence(age, seq, rdt, interval):\n initial = seq[-1]\n doublings = rdt * interval\n final = initial * 2**doublings\n new_seq = seq + (final,)\n cache.Add(age, new_seq, rdt)\n\n return final, new_seq",
"age is the age of the tumor at the end of the interval.\nseq is a tuple that contains the volumes so far.\nrdt is the growth rate during the interval, in doublings\nper year. interval is the size of the time step in years.\nThe return values are final, the volume of the tumor at the\nend of the interval, and new_seq, a new tuple containing the volumes\nin seq plus the new volume final.\nCache.Add records the age and size of each tumor at the end\nof each interval, as explained in the next section.\nCaching the joint distribution\n\n[fig.kidney8]\nHere’s how the cache works.\n```python\nclass Cache(object):\ndef __init__(self):\n self.joint = thinkbayes.Joint()\n\n```\njoint is a joint Pmf that records the frequency of each\nage-size pair, so it approximates the joint distribution of age and\nsize.\nAt the end of each simulated interval, ExtendSequence calls\nAdd:\n```python\n # class Cache\n def Add(self, age, seq):\n final = seq[-1]\n cm = Diameter(final)\n bucket = round(CmToBucket(cm))\n self.joint.Incr((age, bucket))\n\n```\nAgain, age is the age of the tumor, and seq is\nthe sequence of volumes so far.\n\n[fig.kidney6]\nBefore adding the new data to the joint distribution, we use\nDiameter to convert from volume to diameter in centimeters:\ndef Diameter(volume, factor=3/math.pi/4, exp=1/3.0):\n return 2 * (factor * volume) ** exp\n\nAnd CmToBucket to convert from centimeters to a discrete\nbucket number:",
"import math\n\ndef CmToBucket(x, factor=10):\n return factor * math.log(x)",
"The buckets are equally spaced on a log scale. Using\nfactor=10 yields a reasonable number of buckets; for\nexample, 1 cm maps to bucket 0 and 10 cm maps to bucket 23.\nAfter running the simulations, we can plot the joint distribution as a\npseudocolor plot, where each cell represents the number of tumors\nobserved at a given size-age pair. Figure [fig.kidney8] shows the joint\ndistribution after 1000 simulations.\nConditional distributions\n\n[fig.kidney7]\nBy taking a vertical slice from the joint distribution, we can get the\ndistribution of sizes for any given age. By taking a horizontal slice,\nwe can get the distribution of ages conditioned on size.\nHere’s the code that reads the joint distribution and builds the\nconditional distribution for a given size.\n```python\nclass Cache\ndef ConditionalCdf(self, bucket):\n pmf = self.joint.Conditional(0, 1, bucket)\n cdf = pmf.MakeCdf()\n return cdf\n\n```\nbucket is the integer bucket number corresponding to tumor size.\nJoint.Conditional computes the PMF of age conditioned on\nbucket. The result is the CDF of age conditioned on\nbucket.\nFigure [fig.kidney6] shows several of these CDFs, for a range of sizes.\nTo summarize these distributions, we can compute percentiles as a\nfunction of size.",
"from kidney import Cache\n\npercentiles = [95, 75, 50, 25, 5]\ncache = Cache()\n\nfor bucket in cache.GetBuckets():\n cdf = ConditionalCdf(bucket) \n ps = [cdf.Percentile(p) for p in percentiles]",
"Figure [fig.kidney7] shows these percentiles for each size bucket. The\ndata points are computed from the estimated joint distribution. In the\nmodel, size and time are discrete, which contributes numerical errors,\nso I also show a least squares fit for each sequence of percentiles.\nSerial Correlation\nThe results so far are based on a number of modeling decisions; let’s\nreview them and consider which ones are the most likely sources of\nerror:\n\n\nTo convert from linear measure to volume, we assume that tumors are\n approximately spherical. This assumption is probably fine for tumors\n up to a few centimeters, but not for very large tumors.\n\n\nThe distribution of growth rates in the simulations are based on a\n continuous model we chose to fit the data reported by Zhang et al,\n which is based on 53 patients. The fit is only approximate and, more\n importantly, a larger sample would yield a different distribution.\n\n\nThe growth model does not take into account tumor subtype or grade;\n this assumption is consistent with the conclusion of Zhang et al:\n “Growth rates in renal tumors of different sizes, subtypes and\n grades represent a wide range and overlap substantially.” But with a\n larger sample, a difference might become apparent.\n\n\nThe distribution of growth rate does not depend on the size of the\n tumor. This assumption would not be realistic for very small and\n very large tumors, whose growth is limited by blood supply.\nBut tumors observed by Zhang et al ranged from 1 to 12 cm, and they\nfound no statistically significant relationship between size and\ngrowth rate. So if there is a relationship, it is likely to be weak,\nat least in this size range.\n\n\nIn the simulations, growth rate during each interval is independent\n of previous growth rates. In reality it is plausible that tumors\n that have grown quickly in the past are more likely to grow quickly.\n In other words, there is probably a serial correlation in growth\n rate.\n\n\nOf these, the first and last seem the most problematic. I’ll investigate\nserial correlation first, then come back to spherical geometry.\nTo simulate correlated growth, I wrote a generator[^4] that yields a\ncorrelated series from a given Cdf. Here’s how the algorithm works:\n\n\nGenerate correlated values from a Gaussian distribution. This is\n easy to do because we can compute the distribution of the next value\n conditioned on the previous value.\n\n\nTransform each value to its cumulative probability using the\n Gaussian CDF.\n\n\nTransform each cumulative probability to the corresponding value\n using the given Cdf.\n\n\nHere’s what that looks like in code:",
"def CorrelatedGenerator(cdf, rho):\n x = random.gauss(0, 1)\n yield Transform(x)\n\n sigma = math.sqrt(1 - rho**2); \n while True:\n x = random.gauss(x * rho, sigma)\n yield Transform(x)",
"cdf is the desired Cdf; rho is the desired\ncorrelation. The values of x are Gaussian;\nTransform converts them to the desired distribution.\nThe first value of x is Gaussian with mean 0 and standard\ndeviation 1. For subsequent values, the mean and standard deviation\ndepend on the previous value. Given the previous x, the\nmean of the next value is x \\* rho, and the variance is\n1 - rho\\*\\*2.\nTransform maps from each Gaussian value, x, to\na value from the given Cdf, y.",
"def Transform(x):\n p = thinkbayes.GaussianCdf(x)\n y = cdf.Value(p)\n return y",
"GaussianCdf computes the CDF of the standard Gaussian\ndistribution at x, returning a cumulative probability.\nCdf.Value maps from a cumulative probability to the\ncorresponding value in cdf.\nDepending on the shape of cdf, information can be lost in\ntransformation, so the actual correlation might be lower than\nrho. For example, when I generate 10000 values from the\ndistribution of growth rates with rho=0.4, the actual\ncorrelation is 0.37. But since we are guessing at the right correlation\nanyway, that’s close enough.\nRemember that MakeSequence takes an iterator as an\nargument. That interface allows it to work with different generators:",
"from kidney import UncorrelatedGenerator, CorrelatedGenerator\n\niterator = UncorrelatedGenerator(cdf)\nseq1 = MakeSequence(iterator)\n\niterator = CorrelatedGenerator(cdf, rho)\nseq2 = MakeSequence(iterator)",
"In this example, seq1 and seq2 are drawn from\nthe same distribution, but the values in seq1 are\nuncorrelated and the values in seq2 are correlated with a\ncoefficient of approximately rho.\nNow we can see what effect serial correlation has on the results; the\nfollowing table shows percentiles of age for a 6 cm tumor, using the\nuncorrelated generator and a correlated generator with target\n$\\rho = 0.4$.\nCorrelation makes the fastest growing tumors faster and the slowest\nslower, so the range of ages is wider. The difference is modest for low\npercentiles, but for the 95th percentile it is more than 6 years. To\ncompute these percentiles precisely, we would need a better estimate of\nthe actual serial correlation.\nHowever, this model is sufficient to answer the question we started\nwith: given a tumor with a linear dimension of 15.5 cm, what is the\nprobability that it formed more than 8 years ago?\nHere’s the code:\n```python\nclass Cache\ndef ProbOlder(self, cm, age):\n bucket = CmToBucket(cm)\n cdf = self.ConditionalCdf(bucket)\n p = cdf.Prob(age)\n return 1-p\n\n```\ncm is the size of the tumor; age is the age\nthreshold in years. ProbOlder converts size to a bucket\nnumber, gets the Cdf of age conditioned on bucket, and computes the\nprobability that age exceeds the given value.\nWith no serial correlation, the probability that a 15.5 cm tumor is\nolder than 8 years is 0.999, or almost certain. With correlation 0.4,\nfaster-growing tumors are more likely, but the probability is still\n0.995. Even with correlation 0.8, the probability is 0.978.\nAnother likely source of error is the assumption that tumors are\napproximately spherical. For a tumor with linear dimensions 15.5 x 15\ncm, this assumption is probably not valid. If, as seems likely, a tumor\nthis size is relatively flat, it might have the same volume as a 6 cm\nsphere. With this smaller volume and correlation 0.8, the probability of\nage greater than 8 is still 95%.\nSo even taking into account modeling errors, it is unlikely that such a\nlarge tumor could have formed less than 8 years prior to the date of\ndiagnosis.\nDiscussion\nWell, we got through a whole chapter without using Bayes’s theorem or\nthe Suite class that encapsulates Bayesian updates. What\nhappened?\nOne way to think about Bayes’s theorem is as an algorithm for inverting\nconditional probabilities. Given $\\mathrm{p}(B|A)$, we can\ncompute $\\mathrm{p}(A|B)$, provided we know\n$\\mathrm{p}(A)$ and $\\mathrm{p}(B)$. Of course\nthis algorithm is only useful if, for some reason, it is easier to\ncompute $\\mathrm{p}(B|A)$ than\n$\\mathrm{p}(A|B)$.\nIn this example, it is. By running simulations, we can estimate the\ndistribution of size conditioned on age, or\n$\\mathrm{p}(size|age)$. But it is harder to get the\ndistribution of age conditioned on size, or\n$\\mathrm{p}(age|size)$. So this seems like a perfect\nopportunity to use Bayes’s theorem.\nThe reason I didn’t is computational efficiency. To estimate\n$\\mathrm{p}(size|age)$ for any given size, you have to run\na lot of simulations. Along the way, you end up computing\n$\\mathrm{p}(size|age)$ for a lot of sizes. In fact, you end\nup computing the entire joint distribution of size and age,\n$\\mathrm{p}(size, age)$.\nAnd once you have the joint distribution, you don’t really need Bayes’s\ntheorem, you can extract $\\mathrm{p}(age|size)$ by taking\nslices from the joint distribution, as demonstrated in\nConditionalCdf.\nSo we side-stepped Bayes, but he was with us in spirit."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fangohr/mpimag
|
sandbox/image_bluring/timings/speedup-image-blurring-in-parallel.ipynb
|
bsd-2-clause
|
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline",
"Figures demonstrating the speedup achieved for the image blurring function when executed in parallel.\nIn this example, two figures were used. A relatively small one of $500 \\times 300$ pixels and a larger one with $3000 \\times 3000$. Splitting of the image occurred along the y-axis.\nThe program was run on 1, 2, 4, 8, 16, 32 and 48 processes (the node had 48 processes, so avoided any interconnect).\nFunction Timings\nSpeed up and efficiency of the actual image blurring function. The program was run 10 times and the average taken for 1, 2, 4, 8, 16, 32 and 48 processes.\nn.b. The image bluring time on the larger image was around 8 minutes in serial, making it difficult to re-execute the script more than 10 times. For consistency, this was only 10 repeats were made for this smaller image.",
"# load the data files using pandas\n# small image\ndatafile = 'ngcm_halo_small.csv'\ndf = pd.read_csv(datafile, header=None, names=['Size','Iterations','Total','Ave'])\n\n# large image\ndatafileL = 'ngcm_halo_large.csv'\ndfL = pd.read_csv(datafileL, header=None, names=['Size','Iterations','Total','Ave'])\n\niterations = 10\niteration_values = df['Iterations'] == iterations\n\n# Small image\n# array containing number of processes in each \nsizes = df['Size'][iteration_values].values\n# Average times over 10 iterations when run with the different number of processes.\ntimes = df['Ave'][iteration_values].values\n\n# Calculate speed up and efficiency\nspeedup = times[0]/times\nefficiency = speedup / sizes\n\n# Large image\ntimesL = dfL['Ave'][iteration_values].values\n\nspeedupL = timesL[0]/timesL\nefficiencyL = speedupL / sizes\n\nplt.plot(sizes, speedup, '.', label='small image')\nplt.plot(sizes, speedupL, '.', label='large image')\nplt.xlabel('Num Procs')\nplt.ylabel('Speed up (1/s)')\nplt.title('Speed up of parallel image blurring')\nplt.legend(loc='upper left')\nplt.show()",
"The above plot shows the speed up. Increasing number of processes is beneficial. However above increasing number of processes above 32 sees a slight decrease in speed for the smaller image. This is due to the size of the image.\nThe plot below shows the efficiency, which gradually decreases as the number of processes increases. The blurring of the larger image was significantly better.",
"plt.plot(sizes, efficiency, '.', label='small image')\nplt.plot(sizes, efficiencyL, '.', label='large image')\nplt.xlabel('Num Procs')\nplt.ylabel('Efficiency\\n(num proc / speedup)')\nplt.title('Efficiency of parallel image blurring')\nplt.legend(loc='lower left')\nplt.show()",
"Scatterv Timings\nThe following graphs show the average times required to scatter the image from process 0 to all the other processes.\nOnly 10 repeats done: should do more for improve accuracy.",
"datafile = 'ngcm_halo_small_scatter.csv'\ndf = pd.read_csv(datafile, header=None, names=['Size','Total'])\n\ngrouped_df = df.groupby('Size')\n\n# large image\ndatafileL = 'ngcm_halo_large_scatter.csv'\ndfL = pd.read_csv(datafileL, header=None, names=['Size','Total'])\n\ngrouped_dfL = dfL.groupby('Size')\n\nsizes = df['Size'][0:7].values\ntimes = grouped_df.mean().values[:,0]\n\ntimesL = grouped_dfL.mean().values[:,0]\n\nplt.plot(sizes, times, '.', label='small image')\nplt.plot(sizes, timesL, 'g.', label='large image')\nplt.xlabel('Num Procs')\nplt.xlabel('Num Procs')\nplt.ylabel('time (s)')\nplt.legend(loc='upper left')\nplt.show()",
"The above shows the average time to scatter the image.\nThe below plot shows the average time per process to scatter the image.",
"f, ((ax0), (ax1))= plt.subplots(1, 2, figsize=(12, 4))\n\nax0.plot(sizes, times/sizes, '.', label='small image')\nax1.plot(sizes, timesL/sizes, 'g.', label='large image')\nax0.legend(loc='upper left')\nax1.legend()\nax0.set_xlabel('Num Procs')\nax1.set_xlabel('Num Procs')\nax0.set_ylabel('time / num proc')\nplt.show()",
"Gatherv Times\nPlots to demostrate the times required to gather the data onto a single process after the image blurring has been performed.\nOnly 10 repeats done. Should repeat more to improve accuracy.",
"datafile = 'ngcm_halo_small_gather.csv'\ndf = pd.read_csv(datafile, header=None, names=['Size','Total'])\n\ngrouped_df = df.groupby('Size')\n\ndatafileL = 'ngcm_halo_large_gather.csv'\ndfL = pd.read_csv(datafileL, header=None, names=['Size','Total'])\n\ngrouped_dfL = dfL.groupby('Size')\n\nsizes = df['Size'][0:7].values\n\ntimes = grouped_df.mean().values[:,0]\n\ntimesL = grouped_dfL.mean().values[:,0]\n\nf, ((ax0), (ax1))= plt.subplots(1, 2, figsize=(12, 4))\n\nax0.plot(sizes, times, '.', label='small image')\nax1.plot(sizes, timesL, 'g.', label='large image')\nax0.legend(loc='upper left')\nax1.legend(loc='upper left')\nax0.set_xlabel('Num Procs')\nax1.set_xlabel('Num Procs')\nax0.set_ylabel('num proc')\nplt.show()\n\nf, ((ax0), (ax1))= plt.subplots(1, 2, figsize=(12, 4))\n\nax0.plot(sizes, times/sizes, '.', label='small image')\nax1.plot(sizes, timesL/sizes, 'g.', label='large image')\nax0.legend()\nax1.legend()\nax0.set_xlabel('Num Procs')\nax1.set_xlabel('Num Procs')\nax0.set_ylabel('time / num proc')\nplt.show()",
"Halo vs. my method\nCouple of plots to show the time differences to",
"datafile = 'halo_times.csv'\ndf = pd.read_csv(datafile, header=None, names=['Size','Iterations', 'Halo', 'Me'])\n\nsizes = df['Size'].values\nhalo = df['Halo'].values\nme = df['Me'].values\n\nplt.plot(sizes, halo, '.', label='halo')\nplt.plot(sizes, me, '.', label='me')\nplt.ylabel('ave time (s)')\nplt.xlabel('num proc')\nplt.title('Halo vs. my exchange method')\nplt.legend(loc='upper left')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eblur/clarsach
|
notebooks/TestAthenaXIFU.ipynb
|
gpl-3.0
|
[
"Test Whether Clàrsach Works With Athena X-IFU Simulations\nLet's test whether I can make it work on Athena X-IFU simulations. This will be complicated by the fact that sherpa doesn't seem to read the RMF file:",
"%matplotlib notebook\nimport matplotlib.pyplot as plt\n\ntry:\n import seaborn as sns\nexcept ImportError:\n print(\"Seaborn not installed. Oh well.\")\n \nimport numpy as np\n\nimport astropy.io.fits as fits\nimport sherpa.astro.ui as ui\n\nfrom clarsach.respond import RMF, ARF",
"Let's load some data:",
"datadir = \"../data/athena/\"\ndata_file = \"26.pha\"\nrmf_file = \"athena_xifu_rmf_highres_v20150609.rmf\"\narf_file = \"athena_xifu_sixte_1469_onaxis_v20150402.arf\"",
"Let's load the data using Clàrsach:",
"hdulist = fits.open(datadir+data_file)\n\nhdulist.info()\n\ns = hdulist[\"SPECTRUM\"]\n\ns.columns\n\nchannel = s.data.field(\"CHANNEL\")\ncounts = s.data.field(\"COUNTS\")\n\nplt.figure(figsize=(10,5))\nplt.plot(channel, counts)",
"Let's also load the ARF and RMF:",
"arf = ARF(datadir+arf_file)\nrmf = RMF(datadir+rmf_file)",
"Let's make an empty model to divide out the responses:",
"resp_model = np.ones_like(counts)\nm_arf = arf.apply_arf(resp_model)\nm_rmf = rmf.apply_rmf(m_arf)\n\nc_deconv = counts/m_rmf\n\nplt.figure(figsize=(10, 5))\nplt.plot(channel, c_deconv)\nplt.xscale(\"log\")",
"This seems to be working not badly. Let's try the same with sherpa:",
"ui.load_data(\"26\", datadir+data_file)\nd = ui.get_data(\"26\")\narf_s = d.get_arf()\nrmf_s = d.get_rmf()",
"Do the ARF and RMF exist?",
"print(\"ARF: \" + str(arf_s))\nprint(\"RMF: \" + str(rmf_s))",
"There's no RMF, because the RMF for Athena does not seem to be a variable length field and is thus not read in. Oh well.\nLet's check whether at least the ARF is the same:",
"assert np.all(arf_s.specresp == arf.specresp), \"Clarsach ARF is different from Sherpa ARF\"",
"Looks like this worked. Let's do the deconvolution with sherpa and look at the results:",
"ui.set_source(\"26\", ui.polynom1d.truespec)\nc_deconv_s = ui.get_ratio_plot(\"26\").y\ne_deconv_s = ui.get_ratio_plot(\"26\").x\n\n\nplt.figure(figsize=(10,5))\nplt.plot(e_deconv_s, c_deconv, label=\"Clarsach Deconvolution\")\nplt.plot(e_deconv_s, c_deconv_s, label=\"Sherpa Deconvolution\")\nplt.legend()\nplt.yscale('log')\n\nnp.allclose(c_deconv, c_deconv_s)",
"Well, I didn't actually expect them to be the same, so all right. This might also be due to the fact that I don't understand everything about what ratio_plot is doing; at some point I need to talk to Victoria about that.\nModeling the Spectrum\nI can't actually model the full spectrum right now, because I don't have the XSPEC models, thus I don't have the galactic absorption model. This'll have to wait to later. Let's take the second half of the spectrum and fit a model to it. For now, let's ignore the lines and just fit a basic power law:",
"import astropy.modeling.models as models\nfrom astropy.modeling.fitting import _fitter_to_model_params\nfrom scipy.special import gammaln as scipy_gammaln\n\npl = models.PowerLaw1D()",
"We'll need to fix the x_0 parameter of the power law model to continue:",
"pl.x_0.fixed = True",
"Let's define a Poisson log-likelihood:",
"class PoissonLikelihood(object):\n \n def __init__(self, x, y, model, arf=None, rmf=None, bounds=None):\n self.x = x\n self.y = y\n self.model = model\n self.arf = arf\n self.rmf = rmf\n \n if bounds is None:\n bounds = [self.x[0], self.x[-1]]\n \n min_idx = self.x.searchsorted(bounds[0])\n max_idx = self.x.searchsorted(bounds[1])\n \n self.idx = [min_idx, max_idx]\n \n def evaluate(self, pars):\n # store the new parameters in the model\n _fitter_to_model_params(self.model, pars)\n\n # evaluate the model at the positions x\n mean_model = self.model(self.x)\n\n # run the ARF and RMF calculations\n if arf is not None and rmf is not None:\n m_arf = arf.apply_arf(mean_model)\n ymodel = rmf.apply_rmf(m_arf)\n else:\n ymodel = mean_model\n \n # cut out the part of the spectrum that's of interest\n y = self.y[self.idx[0]:self.idx[1]]\n ymodel = ymodel[self.idx[0]:self.idx[1]]\n \n # compute the log-likelihood\n loglike = np.sum(-ymodel + y*np.log(ymodel) \\\n - scipy_gammaln(y + 1.))\n\n if np.isfinite(loglike):\n return loglike\n else:\n return -1.e16\n\n def __call__(self, pars):\n l = -self.evaluate(pars)\n #print(l)\n return l\n",
"Ok, cool, let's make a PoissonLikelihood object to use:",
"loglike = PoissonLikelihood(e_deconv_s, counts, pl, arf=arf, rmf=rmf, bounds=[1.0, 6.0])\n\nloglike([1.0, 2.0])",
"Let's fit this with a minimization algorithm:",
"from scipy.optimize import minimize\n\nopt = minimize(loglike, [1.0, 1.0])\n\nopt",
"Looks like it has accurately found the photon index of 2. Let's make a best-fit example model and plot the raw spectra:",
"_fitter_to_model_params(pl, opt.x)\nmean_model = pl(loglike.x)\n\nm_arf = arf.apply_arf(mean_model)\nymodel = rmf.apply_rmf(m_arf)\n\nymodel_small = ymodel[loglike.idx[0]:loglike.idx[1]]\ny_small = loglike.y[loglike.idx[0]:loglike.idx[1]]\ne_deconv_small = e_deconv_s[loglike.idx[0]:loglike.idx[1]]\n\nprint(np.mean(y_small-ymodel_small))\n\nplt.figure()\nplt.plot(e_deconv_small, y_small, label=\"Data\")\nplt.plot(e_deconv_small, ymodel_small, label=\"Model\")\nplt.legend()",
"Let's also plot the deconvolved version for fun:",
"plt.figure(figsize=(10,5))\nplt.plot(e_deconv_small, c_deconv[loglike.idx[0]:loglike.idx[1]], label=\"Data\")\nplt.plot(e_deconv_small, mean_model[loglike.idx[0]:loglike.idx[1]], label=\"Best-fit model\")\nplt.xlim(1.0, 6.0)\n\nplt.legend()\n\n",
"It works! Next, I need to include XSPEC models, but that is a whole other can of worms ..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yuyu2172/image-labelling-tool
|
Image labeller notebook.ipynb
|
mit
|
[
"Image labeller with persistence (example)\nThis notebook will demonstrate the follwing:\n\nusing the image labelling tool as an IPython Notebook plugin\nrendering labels to create edge maps and label images\nextracting images of individual objects\n\nIt will also describe the label JSON file format.",
"%matplotlib inline\n\nimport math\n\nfrom matplotlib import pyplot as plt\n\nfrom IPython.display import display, Javascript\n\nfrom image_labelling_tool import labelling_tool, labelling_tool_jupyter",
"Client side widget\nHere is the client side widget implementation:",
"display(Javascript(labelling_tool_jupyter.LABELLING_TOOL_JUPYTER_JS))",
"Example tool\nThe kernel-side widget model is defined in the ImageLabellingTool class in the labelling_tool module.\nNote: the 'Loading...' screen may persist for a while initially; this is because the client-side tool is waiting for the notebook to finish executing all the cells below so it can get round to sending the label data to the client.",
"# Specify our 3 label classes.\n# `LabelClass` parameters are: symbolic name, human readable name for UI, and RGB colour as list\nlabel_classes = [labelling_tool.LabelClass('tree', 'Trees', [0, 255, 192]),\n labelling_tool.LabelClass('building', 'Buldings', [255, 128, 0]),\n labelling_tool.LabelClass('lake', 'Lake', [0, 128, 255]),\n ]\n\n# Define the tool dimensions\nTOOL_WIDTH, TOOL_HEIGHT = 980, 480\n\n# Load in .JPG images from the 'images' directory.\nlabelled_images = labelling_tool.PersistentLabelledImage.for_directory('images', image_filename_pattern='*.jpg')\nprint 'Loaded {0} images'.format(len(labelled_images))\n\nlabelling_tool_config = {\n 'tools': {\n 'imageSelector': True,\n 'labelClassSelector': True,\n 'drawPolyLabel': True,\n 'compositeLabel': True,\n 'deleteLabel': True\n }\n}\n\n# Create the labelling tool IPython widget and display it\nlabeller = labelling_tool_jupyter.ImageLabellingTool(labelled_images=labelled_images, label_classes=label_classes,\n tool_width=TOOL_WIDTH, tool_height=TOOL_HEIGHT,\n labelling_tool_config=labelling_tool_config)\n\ndisplay(labeller)",
"Instructions for use\nTo navigate between images:\n\nUsing the left and right arrows to navigate the images one by one\nEnter a number in the box at the top to navigate to a specific image\n\nTo move around images:\n\nLeft-click drag to move the image\nUse the scroll wheel (on a mouse) or a two-finger gesture (on a tablet) to zoom\n\nTo label regions of the image:\n\nDrawing regions onto the image:\nClick the Draw poly button\nWithin the image pane, left-click to draw polygonal corners of your region\nWhen you have finished the region, right-click to stop\nYou are still in draw poly mode, so you can start left-clicking again to draw the next region\nTo exit draw poly mode, right-click a second time.\nIf you make a mistake, delete the region and re-draw it; see below\nSelecting regions:\nSelected regions have a red outline, yellow otherwise\nIf only one region is selected, clicking the Draw poly button will allow you to modify it; you will go back to draw poly mode\nTo select a different region, click the Select button and choose a different region by clicking on it. Multiple regions can be selected by holding SHIFT while clicking.\nDeleting regions:\nSelect regions using the select tool (see above)\nClick the wastebin button to delete them; you will be asked for confirmation\nChanging the label of a region:\nSelect regions using the select tool (see above)\nUse the drop-down (normally reads UNCLASSIFIED) within the Labels section to change the label\nIf the coloured regions are obscuring parts of the image that you need to see:\nWithin the Labels section, click the Hide labels checkbox to hide the labels\nUncheck it to show them afterwards\nWhen you are done:\nWhen you are satisfied that you have marked out all of the regions of interest and that they are correctly labelled, click the Finished checkbox within the Current image section. This will mark the image as finished within the system.\n\nUsing the label data\nYou can either render the labels with the provided code or read the JSON label data directly.\nRender the labels\nFirst, get the second image that comes with some labels pre-defined:",
"labelled_img = labelled_images[1]",
"Rendering label masks by label class\nAs seen in the Example Tool cell above we have three classes; tree, building and lake.\nThe label_classes parameter describes the classes to render. For example, lets render all the labels trees:",
"labels_2d = labelled_img.render_labels(label_classes=['tree', 'building', 'lake'], pixels_as_vectors=False)\nplt.imshow(labels_2d, cmap='gray')\nplt.show()",
"Re-ordering changes the label values:",
"labels_2d = labelled_img.render_labels(label_classes=['lake', 'building', 'tree'], pixels_as_vectors=False)\nplt.imshow(labels_2d, cmap='gray')\nplt.show()",
"You can render a subset of the labels used in the image; just the trees:",
"labels_2d = labelled_img.render_labels(['tree'], pixels_as_vectors=False)\nplt.imshow(labels_2d, cmap='gray')\nplt.show()",
"Each item in label_classes can also be a list of classes. This will result in the same value being used for multiple classes in the image. Lets render the natural featues - lake and trees - as one value and the non-natural - buildings - as another:",
"labels_2d = labelled_img.render_labels(label_classes=[['tree', 'lake'], 'building'], pixels_as_vectors=False)\nplt.imshow(labels_2d, cmap='gray')\nplt.show()",
"Setting the fill parameter to False results in outlines being rendered:",
"labels_2d = labelled_img.render_labels(label_classes=['tree', 'building', 'lake'], pixels_as_vectors=False, fill=False)\nplt.figure(figsize=(8,6))\nplt.imshow(labels_2d)\nplt.show()",
"Setting the pixels_as_vectors parameter to True will result in a multi-channel image in the form of a 3D array. It will have one channel for each item in label_classes. Pixels will have either a value of 0 or 1 in a given channel indicating presence of a label in that class:",
"labels_2dn = labelled_img.render_labels(label_classes=['tree', 'building', 'lake'], pixels_as_vectors=True)\nplt.figure(figsize=(16,4))\n# trees\nplt.subplot(1,3,1)\nplt.imshow(labels_2dn[:,:,0], cmap='gray')\n# buildings\nplt.subplot(1,3,2)\nplt.imshow(labels_2dn[:,:,1], cmap='gray')\n# lake\nplt.subplot(1,3,3)\nplt.imshow(labels_2dn[:,:,2], cmap='gray')\nplt.show()",
"Rendering label masks by individual label\nThe render_individual_labels method assigns a different label value to each individual object. It returns a multi-channel image as a 3D array and a the gives the number of labels in each channel. The label_classes parameter functions as with the render_labels method.",
"labels_2dn, label_count = labelled_img.render_individual_labels(label_classes=['tree', 'building', 'lake'])\nprint 'Label count={0}'.format(label_count)\nplt.figure(figsize=(16,4))\n# trees\nplt.subplot(1,3,1)\nplt.imshow(labels_2dn[:,:,0])\n# buildings\nplt.subplot(1,3,2)\nplt.imshow(labels_2dn[:,:,1])\n# lake\nplt.subplot(1,3,3)\nplt.imshow(labels_2dn[:,:,2])\nplt.show()",
"Setting the fill parameter to False results in an outline image as before:",
"# Only render trees so that we can show one large image, otherwise the 1-pixel-wide outlines\n# will be difficult to see:\nlabels_2dn, label_count = labelled_img.render_individual_labels(label_classes=['tree'],\n fill=False)\nprint 'Label count={0}'.format(label_count)\nplt.figure(figsize=(12,9))\nplt.imshow(labels_2dn[:,:,0])\nplt.show()",
"Extracting images of labelled objects\nThe extract_label_images method extracts the pixels covered by each individual labelled object from the original image. The label_class_set parameter specifies the classes of objects that should be rendered; objects whoses classes are not listed are not rendered. It can also be None to render all objects of all classes. It returns a list of images:",
"# Render all objects:\nobject_images = labelled_img.extract_label_images(label_class_set=None)\nn_cols = 4\nn_rows = int(math.ceil(float(len(object_images)) / n_cols))\nplt.figure(figsize=(16,n_rows*3))\n \nfor i, img in enumerate(object_images):\n plt.subplot(n_rows, n_cols, i + 1)\n plt.imshow(img)\nplt.show()",
"Extract objects separately by class:",
"for cls in ['tree', 'lake', 'building']:\n print 'Extracted objects of class \\'{0}\\':'.format(cls)\n object_images = labelled_img.extract_label_images(label_class_set=[cls])\n n_cols = 4\n n_rows = int(math.ceil(float(len(object_images)) / n_cols))\n plt.figure(figsize=(16,n_rows*3))\n\n for i, img in enumerate(object_images):\n plt.subplot(n_rows, n_cols, i + 1)\n plt.imshow(img)\n plt.show()",
"JSON Label Format\nThe label data is in JSON form. An example file is included in the images directory. The format of the file will now be described.\n```\n<root>: \n{\n image_filename: <image filename as string>\n labels: [\n <label_object 0>,\n <label_object 1>,\n ...\n <label_object N>\n ]\n}\n<label_object -- where label_type=polygon>:\n{\n label_class: <label class as string; identifiers used above to identify label classes>\n label_type: 'polygon',\n vertices: [\n <vertex 0>,\n <vertex 1>,\n ...\n <vertex N>\n ]\n}\n<vertex>:\n{\n x: <x-co-ordinate as float>,\n y: <y-co-ordinate as float>\n}\n```"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
keras-team/keras-io
|
examples/nlp/ipynb/multi_label_classification.ipynb
|
apache-2.0
|
[
"Large-scale multi-label text classification\nAuthor: Sayak Paul, Soumik Rakshit<br>\nDate created: 2020/09/25<br>\nLast modified: 2020/12/23<br>\nDescription: Implementing a large-scale multi-label text classification model.\nIntroduction\nIn this example, we will build a multi-label text classifier to predict the subject areas\nof arXiv papers from their abstract bodies. This type of classifier can be useful for\nconference submission portals like OpenReview. Given a paper\nabstract, the portal could provide suggestions for which areas the paper would\nbest belong to.\nThe dataset was collected using the\narXiv Python library\nthat provides a wrapper around the\noriginal arXiv API.\nTo learn more about the data collection process, please refer to\nthis notebook.\nAdditionally, you can also find the dataset on\nKaggle.\nImports",
"from tensorflow.keras import layers\nfrom tensorflow import keras\nimport tensorflow as tf\n\nfrom sklearn.model_selection import train_test_split\nfrom ast import literal_eval\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np",
"Perform exploratory data analysis\nIn this section, we first load the dataset into a pandas dataframe and then perform\nsome basic exploratory data analysis (EDA).",
"arxiv_data = pd.read_csv(\n \"https://github.com/soumik12345/multi-label-text-classification/releases/download/v0.2/arxiv_data.csv\"\n)\narxiv_data.head()",
"Our text features are present in the summaries column and their corresponding labels\nare in terms. As you can notice, there are multiple categories associated with a\nparticular entry.",
"print(f\"There are {len(arxiv_data)} rows in the dataset.\")",
"Real-world data is noisy. One of the most commonly observed source of noise is data\nduplication. Here we notice that our initial dataset has got about 13k duplicate entries.",
"total_duplicate_titles = sum(arxiv_data[\"titles\"].duplicated())\nprint(f\"There are {total_duplicate_titles} duplicate titles.\")",
"Before proceeding further, we drop these entries.",
"arxiv_data = arxiv_data[~arxiv_data[\"titles\"].duplicated()]\nprint(f\"There are {len(arxiv_data)} rows in the deduplicated dataset.\")\n\n# There are some terms with occurrence as low as 1.\nprint(sum(arxiv_data[\"terms\"].value_counts() == 1))\n\n# How many unique terms?\nprint(arxiv_data[\"terms\"].nunique())",
"As observed above, out of 3,157 unique combinations of terms, 2,321 entries have the\nlowest occurrence. To prepare our train, validation, and test sets with\nstratification, we need to drop\nthese terms.",
"# Filtering the rare terms.\narxiv_data_filtered = arxiv_data.groupby(\"terms\").filter(lambda x: len(x) > 1)\narxiv_data_filtered.shape",
"Convert the string labels to lists of strings\nThe initial labels are represented as raw strings. Here we make them List[str] for a\nmore compact representation.",
"arxiv_data_filtered[\"terms\"] = arxiv_data_filtered[\"terms\"].apply(\n lambda x: literal_eval(x)\n)\narxiv_data_filtered[\"terms\"].values[:5]",
"Use stratified splits because of class imbalance\nThe dataset has a\nclass imbalance problem.\nSo, to have a fair evaluation result, we need to ensure the datasets are sampled with\nstratification. To know more about different strategies to deal with the class imbalance\nproblem, you can follow\nthis tutorial.\nFor an end-to-end demonstration of classification with imbablanced data, refer to\nImbalanced classification: credit card fraud detection.",
"test_split = 0.1\n\n# Initial train and test split.\ntrain_df, test_df = train_test_split(\n arxiv_data_filtered,\n test_size=test_split,\n stratify=arxiv_data_filtered[\"terms\"].values,\n)\n\n# Splitting the test set further into validation\n# and new test sets.\nval_df = test_df.sample(frac=0.5)\ntest_df.drop(val_df.index, inplace=True)\n\nprint(f\"Number of rows in training set: {len(train_df)}\")\nprint(f\"Number of rows in validation set: {len(val_df)}\")\nprint(f\"Number of rows in test set: {len(test_df)}\")",
"Multi-label binarization\nNow we preprocess our labels using the\nStringLookup\nlayer.",
"terms = tf.ragged.constant(train_df[\"terms\"].values)\nlookup = tf.keras.layers.StringLookup(output_mode=\"multi_hot\")\nlookup.adapt(terms)\nvocab = lookup.get_vocabulary()\n\n\ndef invert_multi_hot(encoded_labels):\n \"\"\"Reverse a single multi-hot encoded label to a tuple of vocab terms.\"\"\"\n hot_indices = np.argwhere(encoded_labels == 1.0)[..., 0]\n return np.take(vocab, hot_indices)\n\n\nprint(\"Vocabulary:\\n\")\nprint(vocab)\n",
"Here we are separating the individual unique classes available from the label\npool and then using this information to represent a given label set with 0's and 1's.\nBelow is an example.",
"sample_label = train_df[\"terms\"].iloc[0]\nprint(f\"Original label: {sample_label}\")\n\nlabel_binarized = lookup([sample_label])\nprint(f\"Label-binarized representation: {label_binarized}\")",
"Data preprocessing and tf.data.Dataset objects\nWe first get percentile estimates of the sequence lengths. The purpose will be clear in a\nmoment.",
"train_df[\"summaries\"].apply(lambda x: len(x.split(\" \"))).describe()",
"Notice that 50% of the abstracts have a length of 154 (you may get a different number\nbased on the split). So, any number close to that value is a good enough approximate for the\nmaximum sequence length.\nNow, we implement utilities to prepare our datasets.",
"max_seqlen = 150\nbatch_size = 128\npadding_token = \"<pad>\"\nauto = tf.data.AUTOTUNE\n\n\ndef make_dataset(dataframe, is_train=True):\n labels = tf.ragged.constant(dataframe[\"terms\"].values)\n label_binarized = lookup(labels).numpy()\n dataset = tf.data.Dataset.from_tensor_slices(\n (dataframe[\"summaries\"].values, label_binarized)\n )\n dataset = dataset.shuffle(batch_size * 10) if is_train else dataset\n return dataset.batch(batch_size)\n",
"Now we can prepare the tf.data.Dataset objects.",
"train_dataset = make_dataset(train_df, is_train=True)\nvalidation_dataset = make_dataset(val_df, is_train=False)\ntest_dataset = make_dataset(test_df, is_train=False)",
"Dataset preview",
"text_batch, label_batch = next(iter(train_dataset))\n\nfor i, text in enumerate(text_batch[:5]):\n label = label_batch[i].numpy()[None, ...]\n print(f\"Abstract: {text}\")\n print(f\"Label(s): {invert_multi_hot(label[0])}\")\n print(\" \")",
"Vectorization\nBefore we feed the data to our model, we need to vectorize it (represent it in a numerical form).\nFor that purpose, we will use the\nTextVectorization layer.\nIt can operate as a part of your main model so that the model is excluded from the core\npreprocessing logic. This greatly reduces the chances of training / serving skew during inference.\nWe first calculate the number of unique words present in the abstracts.",
"# Source: https://stackoverflow.com/a/18937309/7636462\nvocabulary = set()\ntrain_df[\"summaries\"].str.lower().str.split().apply(vocabulary.update)\nvocabulary_size = len(vocabulary)\nprint(vocabulary_size)\n",
"We now create our vectorization layer and map() to the tf.data.Datasets created\nearlier.",
"text_vectorizer = layers.TextVectorization(\n max_tokens=vocabulary_size, ngrams=2, output_mode=\"tf_idf\"\n)\n\n# `TextVectorization` layer needs to be adapted as per the vocabulary from our\n# training set.\nwith tf.device(\"/CPU:0\"):\n text_vectorizer.adapt(train_dataset.map(lambda text, label: text))\n\ntrain_dataset = train_dataset.map(\n lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto\n).prefetch(auto)\nvalidation_dataset = validation_dataset.map(\n lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto\n).prefetch(auto)\ntest_dataset = test_dataset.map(\n lambda text, label: (text_vectorizer(text), label), num_parallel_calls=auto\n).prefetch(auto)\n",
"A batch of raw text will first go through the TextVectorization layer and it will\ngenerate their integer representations. Internally, the TextVectorization layer will\nfirst create bi-grams out of the sequences and then represent them using\nTF-IDF. The output representations will then\nbe passed to the shallow model responsible for text classification.\nTo learn more about other possible configurations with TextVectorizer, please consult\nthe\nofficial documentation.\nNote: Setting the max_tokens argument to a pre-calculated vocabulary size is\nnot a requirement.\nCreate a text classification model\nWe will keep our model simple -- it will be a small stack of fully-connected layers with\nReLU as the non-linearity.",
"\ndef make_model():\n shallow_mlp_model = keras.Sequential(\n [\n layers.Dense(512, activation=\"relu\"),\n layers.Dense(256, activation=\"relu\"),\n layers.Dense(lookup.vocabulary_size(), activation=\"sigmoid\"),\n ] # More on why \"sigmoid\" has been used here in a moment.\n )\n return shallow_mlp_model\n",
"Train the model\nWe will train our model using the binary crossentropy loss. This is because the labels\nare not disjoint. For a given abstract, we may have multiple categories. So, we will\ndivide the prediction task into a series of multiple binary classification problems. This\nis also why we kept the activation function of the classification layer in our model to\nsigmoid. Researchers have used other combinations of loss function and activation\nfunction as well. For example, in\nExploring the Limits of Weakly Supervised Pretraining,\nMahajan et al. used the softmax activation function and cross-entropy loss to train\ntheir models.",
"epochs = 20\n\nshallow_mlp_model = make_model()\nshallow_mlp_model.compile(\n loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"categorical_accuracy\"]\n)\n\nhistory = shallow_mlp_model.fit(\n train_dataset, validation_data=validation_dataset, epochs=epochs\n)\n\n\ndef plot_result(item):\n plt.plot(history.history[item], label=item)\n plt.plot(history.history[\"val_\" + item], label=\"val_\" + item)\n plt.xlabel(\"Epochs\")\n plt.ylabel(item)\n plt.title(\"Train and Validation {} Over Epochs\".format(item), fontsize=14)\n plt.legend()\n plt.grid()\n plt.show()\n\n\nplot_result(\"loss\")\nplot_result(\"categorical_accuracy\")",
"While training, we notice an initial sharp fall in the loss followed by a gradual decay.\nEvaluate the model",
"_, categorical_acc = shallow_mlp_model.evaluate(test_dataset)\nprint(f\"Categorical accuracy on the test set: {round(categorical_acc * 100, 2)}%.\")",
"The trained model gives us an evaluation accuracy of ~87%.\nInference\nAn important feature of the\npreprocessing layers provided by Keras\nis that they can be included inside a tf.keras.Model. We will export an inference model\nby including the text_vectorization layer on top of shallow_mlp_model. This will\nallow our inference model to directly operate on raw strings.\nNote that during training it is always preferable to use these preprocessing\nlayers as a part of the data input pipeline rather than the model to avoid\nsurfacing bottlenecks for the hardware accelerators. This also allows for\nasynchronous data processing.",
"# Create a model for inference.\nmodel_for_inference = keras.Sequential([text_vectorizer, shallow_mlp_model])\n\n# Create a small dataset just for demoing inference.\ninference_dataset = make_dataset(test_df.sample(100), is_train=False)\ntext_batch, label_batch = next(iter(inference_dataset))\npredicted_probabilities = model_for_inference.predict(text_batch)\n\n# Perform inference.\nfor i, text in enumerate(text_batch[:5]):\n label = label_batch[i].numpy()[None, ...]\n print(f\"Abstract: {text}\")\n print(f\"Label(s): {invert_multi_hot(label[0])}\")\n predicted_proba = [proba for proba in predicted_probabilities[i]]\n top_3_labels = [\n x\n for _, x in sorted(\n zip(predicted_probabilities[i], lookup.get_vocabulary()),\n key=lambda pair: pair[0],\n reverse=True,\n )\n ][:3]\n print(f\"Predicted Label(s): ({', '.join([label for label in top_3_labels])})\")\n print(\" \")",
"The prediction results are not that great but not below the par for a simple model like\nours. We can improve this performance with models that consider word order like LSTM or\neven those that use Transformers (Vaswani et al.).\nAcknowledgements\nWe would like to thank Matt Watson for helping us\ntackle the multi-label binarization part and inverse-transforming the processed labels\nto the original form."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
arushanova/echidna
|
echidna/scripts/tutorials/getting_started.ipynb
|
mit
|
[
"First set up environment with convenience imports and inline plotting:\n<!--- The following cell should be commented out in the python script\nversion of this notebook --->",
"%pylab inline\npylab.rc(\"savefig\", dpi=120) # set resolution of inline figures",
"The %pylab magic imports matplotlib.pyplot as plt and numpy as\nnp. We'll also, change the working directory to echidna's base\ndirectory, so that all the relative imports work.\n<!--- The following cell should be commented out in the python script\nversion of this notebook --->",
"%cd ../../..\n\n%%bash\npwd",
"The %cd inline-magic emmulates the bash cd command, allowing us to\nchange directory and the %%bash magic lets you run any bash command in\nthe cell but remaining in the notebook!\n\n<div class=\"alert alert-info\">\n <strong>A quick note about the ipython notebook:</strong>\n <ul>\n <li> To see the keyboard shortcuts at any time simply press the\n `Esc` key and then the `H` key </li>\n <li> The notebook has two basic modes: **Command** and **Edit**.\n Command mode is enabled by the `Esc` key and Edit by the\n `Enter` key. </li>\n <li> The main comand you will need is `Shift`+`Enter` (make sure\n you are in command mode first by pressing `Esc`). This\n executes the current cell and then selects the cell below. Try\n pressing `Shift`+`Enter` on this cell and then again to run\n the cell below. </li>\n </ul>\n</div>",
"print \"Hello World!\"",
"<div class=\"alert alert-info\">\n <par>\n As you can see, for cells containing valid python, the code\n snippet is executed as it would be in a python terminal shell and\n the output is displayed below. Try selecting the cell above and\n editing it (`Enter` for edit mode) so that it prints out\n `Goodbye World!` when executed.\n </par>\n <par>\n These commands should get you through the tutorial, but there are\n more in-depth tutorials\n <a href=\"https://nbviewer.jupyter.org/github/ipython/ipython/blob/4.0.x/examples/IPython%20Kernel/Index.ipynb\">\n here</a> if you are interested - you can even download them and\n work through them in the Jupyter viewer.\n </par>\n</div>\n\n<!--- Main script starts below ------------------------------------------->\n\nTutorial 1: Getting started with echidna\nThis guide tutorial aims to get you started with some basic tasks you can\naccomplish using echidna.\nSpectra creation\nThe Spectra class is echidna's most fundamental class. It holds the core\ndata structure and provides much of the core functionality required.\nCoincidentally, this guide will be centred around this class, how to\ncreate it and then some manipulations of the class.\nWe'll begin with how to create an instance of the Spectra class. It is\npart of the echidna.core.spectra module, so we will import this and make\na Spectra instance.",
"import echidna.core.spectra as spectra",
"Now we need a config file to create the spectrum from. There is an example\nconfig file in echidna/config. If we look at the contents of this yaml\nfile, we see it tells the Spectra class to create a data structure to\nhold two parameters:\n\nenergy_mc, with lower limit 0, upper limit 10 and 1000 bins\nradial_mc, with lower limit 0, upper limit 15000 and 1500 bins\n\nThis config should be fine for us. We can load it using the\nload_from_file method of the SpectraConfig class:",
"import echidna\nfrom echidna.core.config import SpectraConfig\n\nconfig = SpectraConfig.load_from_file(\n echidna.__echidna_base__ + \"/echidna/config/spectra_example.yml\")\nprint config.get_pars()",
"Note we used the __echidna_base__ member of the echidna module here.\nThis module has two special members for denoting the base directory (the\noutermost directory of the git repository) and the home directory (the\nechidna directory inside the base directory. The following lines show\nthe current location of these directories:",
"print echidna.__echidna_base__\nprint echidna.__echidna_home__",
"Finally before creating the spectrum, we should define the number of\nevents it should represent:",
"num_decays = 1000\n\nspectrum = spectra.Spectra(\"spectrum\", num_decays, config)\nprint spectrum",
"And there you have it, we've created a Spectra object.\nFilling the spectrum\nOk, so we now have a spectrum, let's fill it with some events. We'll\ngenerate random energies from a Gaussian distribution and random positions\nfrom a Uniform distribution. Much of echidna is built using the numpy\nand SciPy packages and we will use them here to generate the random\nnumbers. We'll also generate a third random number to simulate some form\nrudimentary detector efficiency.",
"# Import numpy\nimport numpy\n\n# Generate random energies from a Gaussin with mean (mu) and sigma\n# (sigma)\nmu = 2.5 # MeV\nsigma = 0.15 # MeV\n\n# Generate random radial position from a Uniform distribution\nouter_radius = 5997 # Radius of SNO+ AV\n\n# Detector efficiency\nefficiency = 0.9 # 90%\n\nfor event in range(num_decays):\n energy = numpy.random.normal(mu, sigma)\n radius = numpy.random.uniform(high=outer_radius)\n event_detected = (numpy.random.uniform() < efficiency)\n if event_detected: # Fill spectrum with values\n spectrum.fill(energy_mc=energy, radial_mc=radius)",
"This will have filled our Spectra class with the events. Make sure to\nuse the exact parameter names that were printed out above, as kewyord\narguments. To check we can now use the sum method. This returns the\ntotal number of events stored in the spectrum at a given time - the\nintegral of the spectrum.",
"print spectrum.sum()",
"The value returned by sum, should roughly equal:",
"print num_decays * efficiency",
"We can also inspect the raw data structure. This is saved in the _data\nmember of the Spectra class:",
"print spectrum._data",
"<div class=\"alert alert-info\">\n <strong>Note:</strong> you probably won't see any entries in the\n above. For large arrays, numpy only prints the first three and last\n three entries. Since our energy range is in the middle, all our events\n are in the `...` part at the moment. But we will see entries printed\n out later when we apply some cuts.\n</div>\n\nPlotting\nAnother useful way to inspect the Spectra created is to plot it. Support\nis available within echidna to plot using either ROOT or matplotlib\nand there are some useful plotting functions available in the plot an\nplot_root modules.",
"import echidna.output.plot as plot\nimport echidna.output.plot_root as plot_root",
"To plot the projection of the spectrum on the energy_mc axis:",
"fig1 = plot.plot_projection(spectrum, \"energy_mc\",\n fig_num=1, show_plot=False)",
"and to plot the projection on the radial_mc axis, this time using root:",
"plot_root.plot_projection(spectrum, \"radial_mc\", fig_num=2)",
"We can also project onto two dimensions and plot a surface:",
"fig_3 = plot.plot_surface(spectrum, \"energy_mc\", \"radial_mc\",\n fig_num=3, show_plot=False)",
"Convolution and cuts\nThe ability to smear the event, along a parameter axis, is built into\nechidna in the smear module. There are three classes in the module that\nallow us to create a smearer for different scenarios. There are two\nsmearers for energy-based parameters, EnergySmearRes and\nEnergySmearLY, which allow smearing by energy resolution (e.g.\n$\\frac{5\\%}{\\sqrt{(E[MeV])}}$ and light yield (e.g. 200 NHit/Mev)\nrespectively. Then additionally the RadialSmear class handles smearing\nalong the axis of any radial based parameter.\nWe will go through an example of how to smear our spectrum by a fixed\nenergy resolution of 5%. There are two main smearing algorithms: \"weighted\nsmear\" and \"random smear\". The \"random smear\" algorithm takes each event\nin each bin and randomly assigns it a new energy from the Gaussian\ndistribution for that bin - it is fast but not very accurate for low\nstatistics. The \"weighted smear\" algorithm is slower but much more\naccurate, as re-weights each bin by taking into account all other nearby\nbins within a pre-defined range. We will use the \"weighted smear\" method\nin this example.\nFirst to speed the smearing process, we will apply some loose cuts.\nAlthough, fewer bins means faster smearing, you should be wary of cutting\nthe spectrum too tightly before smearing as you may end up cutting bins\nthat would have influenced the smearing. Cuts can be applied using the\nshrink method. (Confusingly there is also a cut method which is almost\nidentical to the shrink method, but updates the number of events the\nspectrum represents, after the cut is applied. Unless you are sure this is\nwhat you want to do, it is probably better to use the shrink method.) To\nshrink over multiple parameters, it is best to construct a dictionary of\n_low and _high values for each parameter and then pass this to the\nshrink method.",
"shrink_dict = {\"energy_mc_low\": mu - 5.*sigma,\n \"energy_mc_high\": mu + 5.*sigma,\n \"radial_mc_low\": 0.0,\n \"radial_mc_high\": 3500}\nspectrum.shrink(**shrink_dict)",
"Using the sum method, we can check to see how many events were cut.",
"print spectrum.sum()",
"Import the smear class:",
"import echidna.core.smear as smear",
"and create the smearer object.",
"smearer = smear.EnergySmearRes()",
"By default the \"weighted smear\" method considers all bins within a $\\pm\n5\\sigma$ range. For the sake of speed, we will reduce this to three here.\nAlso set the energy resolution - 0.05 for 5%.",
"smearer.set_num_sigma(3)\nsmearer.set_resolution(0.05)",
"To smear our original spectrum and create the new Spectra object\nsmeared_spectrum:",
"smeared_spectrum = smearer.weighted_smear(spectrum)",
"this should hopefully only take a couple of seconds.\nThe following code shows how to make a simple script, using matplotlib, to\noverlay the original and smeared spectra.",
"def overlay_spectra(original, smeared,\n dimension=\"energy_mc\", fig_num=1):\n \"\"\" Overlay original and smeared spectra.\n\n Args:\n original (echidna.core.spectra.Spectra): Original spectrum.\n smeared (echidna.core.spectra.Spectra): Smeared spectrum.\n dimension (string, optional): Dimension to project onto.\n Default is \"energy_mc\".\n fignum (int, optional): Figure number, if producing multiple\n figures. Default is 1.\n\n Returns:\n matplotlib.figure.Figure: Figure showing overlaid spectra.\n \"\"\"\n par = original.get_config().get_par(dimension)\n # Define array of bin boundarie\n bins = par.get_bin_boundaries()\n # Define array of bin centres\n x = par.get_bin_centres()\n # Save bin width\n width = par.get_width()\n\n # Create figure and axes\n fig, ax = plt.subplots(num=fig_num)\n\n # Overlay two spectra using projection as weight\n ax.hist(x, bins, weights=original.project(dimension),\n histtype=\"stepfilled\", color=\"RoyalBlue\",\n alpha=0.5, label=original._name)\n ax.hist(x, bins, weights=smeared.project(dimension),\n histtype=\"stepfilled\", color=\"Red\",\n alpha=0.5, label=smeared._name)\n\n # Add label/style\n plt.legend(loc=\"upper right\")\n plt.ylim(ymin=0.0)\n plt.xlabel(dimension + \" [\" + par.get_unit() + \"]\")\n plt.ylabel(\"Events per \" + str(width) +\n \" \" + par.get_unit() + \" bin\")\n return fig\n\nfig_4 = overlay_spectra(spectrum, smeared_spectrum, fig_num=4)",
"Other spectra manipulations\nWe now have a nice smeared version of our original spectrum. To prepare\nthe spectrum for a final analysis there are a few final manipulations we\nmay wish to do.\nRegion of Interest (ROI)\nThere is a special version of the shrink method called shrink_to_roi\nthat can be used for ROI cuts. It saves some useful information about the\nROI in the Spectra class instance, including the efficiency i.e.\nintegral of spectrum after cut divided by integral of spectrum before cut.",
"# To get nice shape for rebinning\nroi = (mu - 0.5*sigma, mu + 1.45*sigma)\nsmeared_spectrum.shrink_to_roi(roi[0], roi[1], \"energy_mc\")\nprint smeared_spectrum.get_roi(\"energy_mc\")",
"Rebin\nOur spectrum is still quite finely binned, perhaps we want to bin it in 50\nkeV bins instead of 10 keV bins. The rebin method can be used to acheive\nthis.\nThe rebin method requires us to specify the new shape (tuple) of the\ndata. With just two dimensions this is trivial, but with more dimensions,\nit may be better to use a construct such as:",
"dimension = smeared_spectrum.get_config().get_pars().index(\"energy_mc\")\nold_shape = smeared_spectrum._data.shape\nreduction_factor = 5 # how many bins to combine into a single bin\nnew_shape = tuple([j / reduction_factor if i == dimension else j\n for i, j in enumerate(old_shape)])\nprint old_shape\nprint new_shape\n\nsmeared_spectrum.rebin(new_shape)",
"Scaling\nFinally, we \"simulated\" 1000 events, but we most likely want to scale this\ndown for to represent the number of events expected in our analysis. The\nSpectra class has a scale method to accomplish this. Remember that the\nscale method should always be supplied with the number of events the\nfull spectrum (i.e. before any cuts using shrink or shrink_to_roi)\nshould represent. Lets assume that our spectrum should actually represent\n104.25 events:",
"smeared_spectrum.scale(104.25)\nprint smeared_spectrum.sum()",
"Putting it all together\nAfter creating, filling, convolving and various other manipulations what\ndoes our final spectrum look like?",
"print smeared_spectrum._data\n\nfig_5 = plot.plot_projection(smeared_spectrum, \"energy_mc\",\n fig_num=5, show_plot=False)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MLWave/kepler-mapper
|
examples/breast-cancer/breast-cancer.ipynb
|
mit
|
[
"Plotly plot of the kmapper graph associated to the breast cancer dataset",
"import sys\ntry:\n import pandas as pd\nexcept ImportError as e:\n print(\"pandas is required for this example. Please install with conda or pip and then try again.\")\n sys.exit()\n\nimport numpy as np\nimport sklearn\nfrom sklearn import ensemble\nimport kmapper as km\nfrom kmapper.plotlyviz import *\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport plotly.graph_objs as go\nfrom ipywidgets import (HBox, VBox)\n\n# Data - the Wisconsin Breast Cancer Dataset\n# https://www.kaggle.com/uciml/breast-cancer-wisconsin-data\ndf = pd.read_csv(\"data.csv\")\nfeature_names = [c for c in df.columns if c not in [\"id\", \"diagnosis\"]]\ndf[\"diagnosis\"] = df[\"diagnosis\"].apply(lambda x: 1 if x == \"M\" else 0)\nX = np.array(df[feature_names].fillna(0))\ny = np.array(df[\"diagnosis\"])\n\n# Create a custom 1-D lens with Isolation Forest\nmodel = ensemble.IsolationForest(random_state=1729)\nmodel.fit(X)\nlens1 = model.decision_function(X).reshape((X.shape[0], 1))\n\n# Create another 1-D lens with L2-norm\nmapper = km.KeplerMapper(verbose=0)\nlens2 = mapper.fit_transform(X, projection=\"l2norm\")\n\n# Combine both lenses to get a 2-D [Isolation Forest, L^2-Norm] lens\nlens = np.c_[lens1, lens2]\n\n# Define the simplicial complex\nscomplex = mapper.map(lens,\n X,\n nr_cubes=15,\n overlap_perc=0.7,\n clusterer=sklearn.cluster.KMeans(n_clusters=2,\n random_state=3471))\n",
"First we visualize the resulting graph via a color_function that associates to lens data their x-coordinate distance to min, and colormap these coordinates to a given Plotly colorscale. Here we use the brewer colorscale with hex color codes.",
"pl_brewer = [[0.0, '#006837'],\n [0.1, '#1a9850'],\n [0.2, '#66bd63'],\n [0.3, '#a6d96a'],\n [0.4, '#d9ef8b'],\n [0.5, '#ffffbf'],\n [0.6, '#fee08b'],\n [0.7, '#fdae61'],\n [0.8, '#f46d43'],\n [0.9, '#d73027'],\n [1.0, '#a50026']]\n\ncolor_function = lens [:,0] - lens[:,0].min()\nmy_colorscale = pl_brewer\nkmgraph, mapper_summary, colorf_distribution = get_mapper_graph(scomplex, \n color_function, \n color_function_name='Distance to x-min', \n colorscale=my_colorscale)\n\n# assign to node['custom_tooltips'] the node label (0 for benign, 1 for malignant)\nfor node in kmgraph['nodes']:\n node['custom_tooltips'] = y[scomplex['nodes'][node['name']]] ",
"Since the chosen colorscale leads to a few light colors when it it used for histogram bars,\nwe set a black background color to make the bars visible:",
"bgcolor = 'rgba(10,10,10, 0.9)'\ny_gridcolor = 'rgb(150,150,150)'# on a black background the gridlines are set on grey\n\nplotly_graph_data = plotly_graph(kmgraph, graph_layout='fr', colorscale=my_colorscale, \n factor_size=2.5, edge_linewidth=0.5)\nlayout = plot_layout(title='Topological network representing the<br> breast cancer dataset', \n width=620, height=570,\n annotation_text=get_kmgraph_meta(mapper_summary), \n bgcolor=bgcolor)\n\nfw_graph = go.FigureWidget(data=plotly_graph_data, layout=layout)\nfw_hist = node_hist_fig(colorf_distribution, bgcolor=bgcolor,\n y_gridcolor=y_gridcolor)\nfw_summary = summary_fig(mapper_summary, height=300)\ndashboard = hovering_widgets(kmgraph, \n fw_graph, \n ctooltips=True, # ctooltips = True, because we assigned a label to each \n #cluster member\n bgcolor=bgcolor,\n y_gridcolor=y_gridcolor, \n member_textbox_width=600)\n\n#Update the fw_graph colorbar, setting its title:\n \nfw_graph.data[1].marker.colorbar.title = 'dist to<br>x-min'\n\n\nVBox([fw_graph, HBox([fw_summary, fw_hist])])\n\ndashboard",
"In the following we illustrate how we can duplicate FigureWidget(s) and update them. This is just a pretext to\nillustrate how the kmapper graph figure can be manipulated for a more in-depth study or for \npublication.\nHere we duplicate the initial FigureWidget, fw_graph, and recolor its graph nodes according to the proportion of malignant members. The two FigureWidgets are then restyled and plotted alonside each other for comparison.",
"breastc_dict = {0: 'benign', 1: 'malignant'}\ntooltips = list(fw_graph.data[1].text) # we perform this conversion because fw.data[1].text is a tuple and we want to update\n # the tooltips\n\nnew_color = []\nfor j, node in enumerate(kmgraph['nodes']):\n member_label_ids = y[scomplex['nodes'][node['name']]]\n member_labels = [breastc_dict[id] for id in member_label_ids]\n label_type, label_counts = np.unique(member_labels, return_counts=True) \n \n n_members = label_counts.sum()\n if label_type.shape[0] == 1:\n if label_type[0] == 'benign':\n new_color.append(0)\n else:\n new_color.append(1)\n else: \n new_color.append(1.0*label_counts[1]/n_members)#multiply by 1 for python 2.7.+\n \n for m in range(len(label_counts)):\n tooltips[j] += '<br>' + str(label_type[m]) + ': ' + str(label_counts[m]) # append how many benign/malign \n # members exist in each node\n\nfwn_graph = go.FigureWidget(fw_graph) # copy the initial FigureWidget\n\nwith fwn_graph.batch_update(): # make updates for the new figure \n fwn_graph.data[1].text = tooltips # add the new tooltips\n fwn_graph.data[1].marker.colorbar.x = -0.14 # place toolbar at the figure left side\n fwn_graph.layout.width = 550 # change the figure size in order to plot two \"parallel\" copies\n fwn_graph.layout.height = 550\n fwn_graph.layout.margin.r = 45 # decrease the right margin from 60px (default val) to 45 pixels\n \nfw1 = go.FigureWidget(fwn_graph) # define a new figure from the fwn_graph that will be colored by the new color function \nwith fw1.batch_update():\n fw1.data[1].marker.color = new_color # update node colors\n fw1.data[0].line.color = 'rgb(125,125,125)' # update the graph edge color\n fw1.layout.plot_bgcolor = 'rgb(240,240,240)'\n fw1.layout.annotations = None # remove the mapper_summary from the second plot\n fw1.data[1].marker.showscale = False # remove the colorbar\n fw1.layout.title = \"Nodes are colored according to the proportion<br> of malignant members\"",
"Plot the dashboard consisting in the two figures:",
"HBox([fwn_graph, fw1])",
"To save any of the FigureWidgets generated above uncomment the two lines in the next cell, and replace fw_graph and the name of pdf file\nadequately:",
"#import plotly.io as pio\n#pio.write_image(fw_graph, 'breast-graph1.pdf') # or png, swg"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
roebius/deeplearning_keras2
|
nbs/imagenet_batchnorm.ipynb
|
apache-2.0
|
[
"This notebook explains how to add batch normalization to VGG. The code shown here is implemented in vgg_bn.py, and there is a version of vgg_ft (our fine tuning function) with batch norm called vgg_ft_bn in utils.py.",
"from __future__ import division, print_function\n%matplotlib inline\nfrom importlib import reload\nimport utils; reload(utils)\nfrom utils import *",
"The problem, and the solution\nThe problem\nThe problem that we faced in the lesson 3 is that when we wanted to add batch normalization, we initialized all the dense layers of the model to random weights, and then tried to train them with our cats v dogs dataset. But that's a lot of weights to initialize to random - out of 134m params, around 119m are in the dense layers! Take a moment to think about why this is, and convince yourself that dense layers are where most of the weights will be. Also, think about whether this implies that most of the time will be spent training these weights. What do you think?\nTrying to train 120m params using just 23k images is clearly an unreasonable expectation. The reason we haven't had this problem before is that the dense layers were not random, but were trained to recognize imagenet categories (other than the very last layer, which only has 8194 params).\nThe solution\nThe solution, obviously enough, is to add batch normalization to the VGG model! To do so, we have to be careful - we can't just insert batchnorm layers, since their parameters (gamma - which is used to multiply by each activation, and beta - which is used to add to each activation) will not be set correctly. Without setting these correctly, the new batchnorm layers will normalize the previous layer's activations, meaning that the next layer will receive totally different activations to what it would have without new batchnorm layer. And that means that all the pre-trained weights are no longer of any use!\nSo instead, we need to figure out what beta and gamma to choose when we insert the layers. The answer to this turns out to be pretty simple - we need to calculate what the mean and standard deviation of that activations for that layer are when calculated on all of imagenet, and then set beta and gamma to these values. That means that the new batchnorm layer will normalize the data with the mean and standard deviation, and then immediately un-normalize the data using the beta and gamma parameters we provide. So the output of the batchnorm layer will be identical to it's input - which means that all the pre-trained weights will continue to work just as well as before.\nThe benefit of this is that when we wish to fine-tune our own networks, we will have all the benefits of batch normalization (higher learning rates, more resiliant training, and less need for dropout) plus all the benefits of a pre-trained network.\nTo calculate the mean and standard deviation of the activations on imagenet, we need to download imagenet. You can download imagenet from http://www.image-net.org/download-images . The file you want is the one titled Download links to ILSVRC2013 image data. You'll need to request access from the imagenet admins for this, although it seems to be an automated system - I've always found that access is provided instantly. Once you're logged in and have gone to that page, look for the CLS-LOC dataset section. Both training and validation images are available, and you should download both. There's not much reason to download the test images, however.\nNote that this will not be the entire imagenet archive, but just the 1000 categories that are used in the annual competition. Since that's what VGG16 was originally trained on, that seems like a good choice - especially since the full dataset is 1.1 terabytes, whereas the 1000 category dataset is 138 gigabytes.\nAdding batchnorm to Imagenet\nSetup\nSample\nAs per usual, we create a sample so we can experiment more rapidly.",
"# %pushd data/imagenet\n%pushd data/imagenet\n%cd train\n\n%mkdir ../sample\n%mkdir ../sample/train\n%mkdir ../sample/valid\n\nfrom shutil import copyfile\n\ng = glob('*')\nfor d in g: \n os.mkdir('../sample/train/'+d)\n os.mkdir('../sample/valid/'+d)\n\ng = glob('*/*.JPEG')\nshuf = np.random.permutation(g)\nfor i in range(25000): copyfile(shuf[i], '../sample/train/' + shuf[i])\n\n%cd ../valid\n\ng = glob('*/*.JPEG')\nshuf = np.random.permutation(g)\nfor i in range(5000): copyfile(shuf[i], '../sample/valid/' + shuf[i])\n\n%cd ..\n\n%mkdir sample/results\n\n%popd",
"Data setup\nWe set up our paths, data, and labels in the usual way. Note that we don't try to read all of Imagenet into memory! We only load the sample into memory.",
"sample_path = \"data/imagenet/sample/\"\npath = \"data/imagenet/\"\n\n#sample_path = 'data/jhoward/imagenet/sample/'\n# This is the path to my fast SSD - I put datasets there when I can to get the speed benefit\n#fast_path = '/home/jhoward/ILSVRC2012_img_proc/'\n#path = '/data/jhoward/imagenet/sample/'\n#path = 'data/jhoward/imagenet/'\n\nbatch_size=64\n\nsamp_trn = get_data(sample_path+'train')\nsamp_val = get_data(sample_path+'valid')\n\nsave_array(sample_path+'results/trn.dat', samp_trn)\nsave_array(sample_path+'results/val.dat', samp_val)\n\nsamp_trn = load_array(sample_path+'results/trn.dat')\nsamp_val = load_array(sample_path+'results/val.dat')\n\n(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)\n\n(samp_val_classes, samp_trn_classes, samp_val_labels, samp_trn_labels, \n samp_val_filenames, samp_filenames, samp_test_filenames) = get_classes(sample_path)",
"Model setup\nSince we're just working with the dense layers, we should pre-compute the output of the convolutional layers.",
"vgg = Vgg16()\nmodel = vgg.model\n\nlayers = model.layers\nlast_conv_idx = [index for index,layer in enumerate(layers) \n if type(layer) is Conv2D][-1]\nconv_layers = layers[:last_conv_idx+1]\n\ndense_layers = layers[last_conv_idx+1:]\n\nconv_model = Sequential(conv_layers)\n\nsamp_conv_val_feat = conv_model.predict(samp_val, batch_size=batch_size*2)\nsamp_conv_feat = conv_model.predict(samp_trn, batch_size=batch_size*2)\n\nsave_array(sample_path+'results/conv_val_feat.dat', samp_conv_val_feat)\nsave_array(sample_path+'results/conv_feat.dat', samp_conv_feat)\n\nsamp_conv_feat = load_array(sample_path+'results/conv_feat.dat')\nsamp_conv_val_feat = load_array(sample_path+'results/conv_val_feat.dat')\n\nsamp_conv_val_feat.shape",
"This is our usual Vgg network just covering the dense layers:",
"def get_dense_layers():\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dense(4096, activation='relu'),\n Dropout(0.5),\n Dense(4096, activation='relu'),\n Dropout(0.5),\n # Dense(1000, activation='softmax')\n Dense(1000, activation='relu')\n ]\n\ndense_model = Sequential(get_dense_layers())\n\nfor l1, l2 in zip(dense_layers, dense_model.layers):\n l2.set_weights(l1.get_weights())\n\ndense_model.add(Dense(763, activation='softmax'))",
"Check model\nIt's a good idea to check that your models are giving reasonable answers, before using them.",
"dense_model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])\n\ndense_model.evaluate(samp_conv_val_feat, samp_val_labels)\n\nmodel.compile(Adam(), 'categorical_crossentropy', ['accuracy'])\n\n# should be identical to above\n# model.evaluate(val, val_labels)\n\n# should be a little better than above, since VGG authors overfit\n# dense_model.evaluate(conv_feat, trn_labels)",
"Adding our new layers\nCalculating batchnorm params\nTo calculate the output of a layer in a Keras sequential model, we have to create a function that defines the input layer and the output layer, like this:",
"k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], \n [dense_model.layers[2].output])",
"Then we can call the function to get our layer activations:",
"d0_out = k_layer_out([samp_conv_val_feat, 0])[0]\n\nk_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], \n [dense_model.layers[4].output])\n\nd2_out = k_layer_out([samp_conv_val_feat, 0])[0]",
"Now that we've got our activations, we can calculate the mean and standard deviation for each (note that due to a bug in keras, it's actually the variance that we'll need).",
"mu0,var0 = d0_out.mean(axis=0), d0_out.var(axis=0)\nmu2,var2 = d2_out.mean(axis=0), d2_out.var(axis=0)",
"Creating batchnorm model\nNow we're ready to create and insert our layers just after each dense layer.",
"nl1 = BatchNormalization()\nnl2 = BatchNormalization()\n\nbn_model = insert_layer(dense_model, nl2, 5)\nbn_model = insert_layer(bn_model, nl1, 3)\n\nbnl1 = bn_model.layers[3]\nbnl4 = bn_model.layers[6]",
"After inserting the layers, we can set their weights to the variance and mean we just calculated.",
"bnl1.set_weights([var0, mu0, mu0, var0])\nbnl4.set_weights([var2, mu2, mu2, var2])\n\nbn_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])",
"We should find that the new model gives identical results to those provided by the original VGG model.",
"bn_model.evaluate(samp_conv_val_feat, samp_val_labels)\n\nbn_model.evaluate(samp_conv_feat, samp_trn_labels)",
"Optional - additional fine-tuning\nNow that we have a VGG model with batchnorm, we might expect that the optimal weights would be a little different to what they were when originally created without batchnorm. So we fine tune the weights for one epoch.",
"feat_bc = bcolz.open(fast_path+'trn_features.dat')\n\nlabels = load_array(fast_path+'trn_labels.dat')\n\nval_feat_bc = bcolz.open(fast_path+'val_features.dat')\n\nval_labels = load_array(fast_path+'val_labels.dat')\n\nbn_model.fit(feat_bc, labels, nb_epoch=1, batch_size=batch_size,\n validation_data=(val_feat_bc, val_labels))",
"The results look quite encouraging! Note that these VGG weights are now specific to how keras handles image scaling - that is, it squashes and stretches images, rather than adding black borders. So this model is best used on images created in that way.",
"bn_model.save_weights(path+'models/bn_model2.h5')\n\nbn_model.load_weights(path+'models/bn_model2.h5')",
"Create combined model\nOur last step is simply to copy our new dense layers on to the end of the convolutional part of the network, and save the new complete set of weights, so we can use them in the future when using VGG. (Of course, we'll also need to update our VGG architecture to add the batchnorm layers).",
"new_layers = copy_layers(bn_model.layers)\nfor layer in new_layers:\n conv_model.add(layer)\n\ncopy_weights(bn_model.layers, new_layers)\n\nconv_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])\n\nconv_model.evaluate(samp_val, samp_val_labels)\n\nconv_model.save_weights(path+'models/inet_224squash_bn.h5')",
"The code shown here is implemented in vgg_bn.py, and there is a version of vgg_ft (our fine tuning function) with batch norm called vgg_ft_bn in utils.py."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
amueller/odscon-2015
|
executed/Part 05 - Preprocessing and Pipelines.ipynb
|
cc0-1.0
|
[
"Preprocessing and Pipelines",
"from sklearn.datasets import load_digits\nfrom sklearn.cross_validation import train_test_split\ndigits = load_digits()\nX_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)",
"Cross-validated pipelines including scaling, we need to estimate mean and standard deviation separately for each fold.\nTo do that, we build a pipeline.",
"from sklearn.pipeline import Pipeline\nfrom sklearn.svm import SVC\nfrom sklearn.preprocessing import StandardScaler\n\npipeline = Pipeline([(\"scaler\", StandardScaler()), (\"svm\", SVC())])\n# in new versions: make_pipeline(StandardScaler(), SVC())\n\npipeline.fit(X_train, y_train)\n\npipeline.predict(X_test)",
"Cross-validation with a pipeline",
"from sklearn.cross_validation import cross_val_score\ncross_val_score(pipeline, X_train, y_train)",
"Grid Search with a pipeline",
"import numpy as np\nfrom sklearn.grid_search import GridSearchCV\n\nparam_grid = {'svm__C': 10. ** np.arange(-3, 3), 'svm__gamma' : 10. ** np.arange(-3, 3)}\n\ngrid_pipeline = GridSearchCV(pipeline, param_grid=param_grid, n_jobs=-1)\n\ngrid_pipeline.fit(X_train, y_train)\n\ngrid_pipeline.score(X_test, y_test)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nguy/PyDisdrometer
|
Notebooks/PyDisdrometer Examples.ipynb
|
lgpl-2.1
|
[
"Quick usage examples for PyDisdrometer\nThis notebook shows some (very) brief examples of current PyDisdrometer functionality and how to interact with it.\nNASA GV Field Campaign Parsivel Format\nLet's start with a NASA Ground Validation Parsivel formatted file.",
"%pylab inline\n\nimport pydisdrometer\nfrom matplotlib.colors import LogNorm\n\nfilename = '/Users/hard505/Dropbox/Projects/NetworkRetrieval/data/Disdrometer/IFloodS_APU01_2013_0502_dsd.txt' #Parsivel 05, March 2nd\n\ndsd = pydisdrometer.read_parsivel_nasa_gv(filename, campaign='ifloods')",
"So at this point we have the drop size distribution read in. NASA strips out rainfall information though. Let's do some T-Matrix scattering though. This should take a little bit(up to a minute or so on my laptop depending on the file).",
"dsd.calculate_radar_parameters() ",
"This assumes BC shape relationship, X band, 10C. You can pass in a new wavelength to change that.\n Let's plot some of the parameters, and then try to do something with the data.",
"plt.figure(figsize=(12,12))\n\nplt.subplot(2,2,1)\nplt.plot(dsd.time/60.0, dsd.fields['Zh']['data'])\nplt.xlabel('Time(hrs)')\nplt.ylabel('Reflectivity(dBZ)')\nplt.xlim(5,24)\n\nplt.subplot(2,2,2)\nplt.plot(dsd.time/60.0, dsd.fields['Zdr']['data'])\nplt.xlabel('Time(hrs)')\nplt.ylabel('Differential Reflectivity(dB)')\nplt.xlim(5,24)\n\nplt.subplot(2,2,3)\nplt.plot(dsd.time/60.0, dsd.fields['Kdp']['data'])\nplt.xlabel('Time(hrs)')\nplt.ylabel('Specific Differential Phase(deg/km)')\nplt.xlim(5,24)\n\nplt.subplot(2,2,4)\nplt.pcolor(dsd.time/60.0, dsd.diameter, np.log10(dsd.Nd.T), vmin=0, vmax=6)\nplt.xlabel('Time(hrs)')\nplt.ylabel('Diameter')\nplt.colorbar()\nplt.ylim(0,5) #Zoom in some\nplt.xlim(5,24)\n\nplt.tight_layout()\n\nplt.show()",
"Next let's estimate some microphysical parameterizations.",
"dsd.calculate_dsd_parameterization()\n\nplt.figure(figsize=(12,12))\n\nplt.subplot(2,2,1)\nplt.plot(dsd.time/60.0, dsd.fields['D0']['data'])\nplt.xlabel('Time(hrs)')\nplt.ylabel('$D_0$')\nplt.xlim(5,24)\n\nplt.subplot(2,2,2)\nplt.plot(dsd.time/60.0, np.log10(dsd.fields['Nw']['data']))\nplt.xlabel('Time(hrs)')\nplt.ylabel('$log_{10}(N_w)$')\nplt.xlim(5,24)\n\nplt.subplot(2,2,3)\nplt.plot(dsd.time/60.0, dsd.fields['Dmax']['data'])\nplt.xlabel('Time(hrs)')\nplt.ylabel('Shape parameters')\nplt.xlim(5,24)\n\nplt.subplot(2,2,4)\nplt.pcolor(dsd.time/60.0, dsd.diameter, np.log10(dsd.Nd.T), vmin=0, vmax=6)\nplt.xlabel('Time(hrs)')\nplt.ylabel('Dmax')\nplt.colorbar()\nplt.ylim(0,5) #Zoom in some\nplt.xlim(5,24)\n\nplt.tight_layout()\n\nplt.show()",
"At hour twenty we see some drops that are probably instrument inaccuracies. Let's look at the overall behavior of the parameters with respect to each other.",
"plt.figure(figsize=(12,12))\n\nplt.subplot(2,2,1)\nplt.hist2d(dsd.fields['D0']['data'], np.log10(dsd.fields['Nw']['data']), \n range=((0,6),(0,6)), vmin=1, bins=50, norm=LogNorm())\nplt.ylabel('log10(Nw)')\nplt.xlabel('$D_0$')\n\nplt.subplot(2,2,2)\nplt.hist2d(dsd.fields['Zh']['data'], dsd.fields['Zdr']['data'], \n range=((-10,40),(-1,1)), vmin=1, bins=50, norm=LogNorm())\nplt.xlabel('$Z_h$')\nplt.ylabel('$Z_{dr}$')\n\nplt.subplot(2,2,3)\nplt.hist2d(dsd.fields['Dm']['data'], dsd.fields['Zdr']['data'],\n range=((0,2),(-1,1)), vmin=1, bins=50, norm=LogNorm())\nplt.xlabel('$D_m$')\nplt.ylabel('$Z_{dr}$')\n\nplt.subplot(2,2,4)\nplt.hist2d(dsd.fields['Dmax']['data'], dsd.fields['Zh']['data'],\n range=((0,8),(-10,40)), vmin=1, bins=50, norm=LogNorm())\nplt.ylabel('$Z_h$')\nplt.xlabel('$Dmax$')\n\nplt.tight_layout()\n\nplt.show()",
"Let's take what might be another common use case. Finding areas with certain size drops. In this case everything above 4mm. There are a few ways to do this but let's just create an indicator function for now.",
"plot(dsd.time/60.0 , 0<sum(dsd.Nd[:,dsd.diameter>4], axis=1))\nplt.xlim(5,24)",
"Or maybe we just want a listing of the time steps where there are drops greater than 4mm",
"time_hrs = dsd.time/60.0\nprint(time_hrs[0<sum(dsd.Nd[:,dsd.diameter>4], axis=1)])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tjctw/PythonNote
|
matrix/the_field_Coding_the_matrix_note_.ipynb
|
cc0-1.0
|
[
"• $R$, the field of real numbers,\n• $C$, the field of complex numbers, and\n• $GF (2)$, a field that consists of 0 and 1.\nProblem 1.4.5: Show that, for any two distinct points z1 and z2,\n• there is a translation that maps z1 to z2,\n• there is a translation that maps z2 to z1, and\n• there is no translation that both maps z1 to z2 and z2 to z1.\nSol:\n 一二從加減法就可得到,三從假設一二皆存在可推出悖論。",
"from plotting import plot\nS = {2 + 2j,3 + 2j,1.75 + 1j,2 + 1j,2.25 + 1j,2.5+1j,2.75+1j,3+1j,3.25+1j}\n#plot(S)\n#plot({1+2j+z for z in S}, 4)",
"Problem 1.4.6: Draw a diagram representing the complex number z0 = −3 + 3i using two arrows with their tails located at different points.",
"a = -3+3j\nb = 1+2j\nc = -4+1j\nplot([a,b,c+c])#看不太出效果...思考如何改進",
"Task 1.4.8 畫出旋轉90度並縮短1/2的所有向量的S,請用簡便的乘$/frac{1/2}$",
"from plotting import plot\nS = {2 + 2j,3 + 2j,1.75 + 1j,2 + 1j,2.25 + 1j,2.5+1j,2.75+1j,3+1j,3.25+1j}\nS1 = [x*0.5j for x in S]\nplot(S1)\n",
"同1.4.9 畫出再位移上1右二",
"from plotting import plot\nS = {2 + 2j,3 + 2j,1.75 + 1j,2 + 1j,2.25 + 1j,2.5+1j,2.75+1j,3+1j,3.25+1j}\nS1 = [x*0.5j for x in S]\nS2 = [x+2-1j for x in S1]\nplot(S2)",
"1.4.10",
"from image import file2image\nfrom image import image2display\nfrom plotting import plot\nimg = file2image(\"img01.png\")\n#image2display(img)\nwidth = len(img[0])\nheight = len(img)\nlt = []\n[[lt.append(x+(187*1j-y*1j)) for x in range(width) if sum(img[y][x])<360]for y in range(height)]\nplot(lt,188)",
"1.4.11",
"from plotting import plot\ndef f(pt_c):\n return pt_c.imag + pt_c.real*1j\nS = {2 + 2j,3 + 2j,1.75 + 1j,2 + 1j,2.25 + 1j,2.5+1j,2.75+1j,3+1j,3.25+1j}\nS1 = [f(x) for x in S]\nplot(S1)\n\n\nfrom image import file2image\nfrom image import image2display\nfrom plotting import plot\nimg = file2image(\"img01.png\")\n#image2display(img)\nwidth = len(img[0])\nheight = len(img)\nlt = []\n[[lt.append(x+(187*1j-y*1j)) for x in range(width) if sum(img[y][x])<360]for y in range(height)]\ndef f(pt_c):\n return pt_c.imag + pt_c.real*1j\nlt1 = [f(x) for x in lt]\nplot(lt1,188)\n\n#Euler's formular?\n#1.4.17\nfrom math import pi, e\nfrom plotting import plot\nn = 20\npts = [e**(x*2*pi*1j/n) for x in range(n)]\nplot(pts)\n\n#1.4.18\nfrom plotting import plot\nfrom math import pi, e\ndef f(pt_c):\n return pt_c*e**(pi/4*1j)\nS = {2 + 2j,3 + 2j,1.75 + 1j,2 + 1j,2.25 + 1j,2.5+1j,2.75+1j,3+1j,3.25+1j}\nS1 = [f(x) for x in S]\nplot(S1)\n\nfrom image import file2image\nfrom image import image2display\nfrom plotting import plot\nfrom math import pi, e\nimg = file2image(\"img01.png\")\n#image2display(img)\nwidth = len(img[0])\nheight = len(img)\nlt = []\n[[lt.append(x+(187*1j-y*1j)) for x in range(width) if sum(img[y][x])<360]for y in range(height)]\ndef f(pt_c):\n return pt_c*e**(pi/4*1j)\nlt1 = [f(x) for x in lt]\nplot(lt1,188)\n\n#1.4.20\nfrom image import file2image\nfrom image import image2display\nfrom plotting import plot\nfrom math import pi, e\nimg = file2image(\"img01.png\")\n#image2display(img)\nwidth = len(img[0])\nheight = len(img)\nlt = []\n[[lt.append((x+(187*1j-y*1j))) for x in range(width) if sum(img[y][x])<360]for y in range(height)]\ndef f(pt_c):\n return pt_c*e**(pi/4*1j)-99*1j\nlt1 = [f(x) for x in lt]\nplot(lt1,188)",
"The text about GF(2) is not so clear, so I need to check it out later.\nRef\nWhere to download the source code of thext book"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.22/_downloads/638c39682b0791ce4e430e4d2fcc4c45/plot_tf_dics.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Time-frequency beamforming using DICS\nCompute DICS source power :footcite:DalalEtAl2008 in a grid of time-frequency\nwindows.",
"# Author: Roman Goj <roman.goj@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.event import make_fixed_length_events\nfrom mne.datasets import sample\nfrom mne.time_frequency import csd_fourier\nfrom mne.beamformer import tf_dics\nfrom mne.viz import plot_source_spectrogram\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nnoise_fname = data_path + '/MEG/sample/ernoise_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nsubjects_dir = data_path + '/subjects'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name",
"Read raw data",
"raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\n\n# Pick a selection of magnetometer channels. A subset of all channels was used\n# to speed up the example. For a solution based on all MEG channels use\n# meg=True, selection=None and add mag=4e-12 to the reject dictionary.\nleft_temporal_channels = mne.read_selection('Left-temporal')\npicks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,\n stim=False, exclude='bads',\n selection=left_temporal_channels)\nraw.pick_channels([raw.ch_names[pick] for pick in picks])\nreject = dict(mag=4e-12)\n# Re-normalize our empty-room projectors, which should be fine after\n# subselection\nraw.info.normalize_proj()\n\n# Setting time windows. Note that tmin and tmax are set so that time-frequency\n# beamforming will be performed for a wider range of time points than will\n# later be displayed on the final spectrogram. This ensures that all time bins\n# displayed represent an average of an equal number of time windows.\ntmin, tmax, tstep = -0.5, 0.75, 0.05 # s\ntmin_plot, tmax_plot = -0.3, 0.5 # s\n\n# Read epochs\nevent_id = 1\nevents = mne.read_events(event_fname)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n baseline=None, preload=True, proj=True, reject=reject)\n\n# Read empty room noise raw data\nraw_noise = mne.io.read_raw_fif(noise_fname, preload=True)\nraw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\nraw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])\nraw_noise.info.normalize_proj()\n\n# Create noise epochs and make sure the number of noise epochs corresponds to\n# the number of data epochs\nevents_noise = make_fixed_length_events(raw_noise, event_id)\nepochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,\n tmax_plot, baseline=None, preload=True, proj=True,\n reject=reject)\nepochs_noise.info.normalize_proj()\nepochs_noise.apply_proj()\n# then make sure the number of epochs is the same\nepochs_noise = epochs_noise[:len(epochs.events)]\n\n# Read forward operator\nforward = mne.read_forward_solution(fname_fwd)\n\n# Read label\nlabel = mne.read_label(fname_label)",
"Time-frequency beamforming based on DICS",
"# Setting frequency bins as in Dalal et al. 2008\nfreq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz\nwin_lengths = [0.3, 0.2, 0.15, 0.1] # s\n# Then set FFTs length for each frequency range.\n# Should be a power of 2 to be faster.\nn_ffts = [256, 128, 128, 128]\n\n# Subtract evoked response prior to computation?\nsubtract_evoked = False\n\n# Calculating noise cross-spectral density from empty room noise for each\n# frequency bin and the corresponding time window length. To calculate noise\n# from the baseline period in the data, change epochs_noise to epochs\nnoise_csds = []\nfor freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):\n noise_csd = csd_fourier(epochs_noise, fmin=freq_bin[0], fmax=freq_bin[1],\n tmin=-win_length, tmax=0, n_fft=n_fft)\n noise_csds.append(noise_csd.sum())\n\n# Computing DICS solutions for time-frequency windows in a label in source\n# space for faster computation, use label=None for full solution\nstcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,\n freq_bins=freq_bins, subtract_evoked=subtract_evoked,\n n_ffts=n_ffts, reg=0.05, label=label, inversion='matrix')\n\n# Plotting source spectrogram for source with maximum activity\n# Note that tmin and tmax are set to display a time range that is smaller than\n# the one for which beamforming estimates were calculated. This ensures that\n# all time bins shown are a result of smoothing across an identical number of\n# time windows.\nplot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,\n source_index=None, colorbar=True)",
"References\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nicku33/Data-Science-45min-Intros
|
bandit-algorithms-101/bandit-algorithms-101.ipynb
|
unlicense
|
[
"Bandit Agorithms 101\n2015 August 28\n\nSee imports below for requirements\nIn the same directory you cloned Data-Science-45min-Intros, you should also clone the BanditBook repository from https://github.com/johnmyleswhite/BanditsBook\n\nAgenda\n\nA simulation approach to building intuition about A/B testing\nPractical requirements of testing and operation\nOptimizing outcomes with multiple options with different payoffs\n\n1. A simulation approach to building intuition about A/B testing\nA/B Test:\n\nTreatment (e.g. content presented)\nReward (e.g. impression, click, proceeds of sale)\nAudience-in-context",
"from IPython.display import Image \nImage(filename='img/treat_aud_reward.jpg')\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport pandas as pd\nfrom numpy.random import binomial\nfrom ggplot import *\nimport random\nimport sys\nplt.figure(figsize=(6,6),dpi=80);\n%matplotlib inline",
"Simulating A/B testing",
"Image(filename='img/ab.jpg')",
"Each test is like flipping a fair coin N times",
"Image(filename='img/a.jpg')\n\ndf = pd.DataFrame({\"coin_toss\":binomial(1,0.5,100)})\ndf.hist()\nplt.show()\n\ndf.head()\n\ndf = pd.DataFrame({\"coin_toss\":binomial(1,0.6,100)})\ndf.hist()\nplt.show()\n\ndf = pd.DataFrame({\"coin_%i\"%i:binomial(1,0.5,100) for i in range(20)})\ndf.hist()\nplt.show()\n\ndf = pd.DataFrame({\"coin_%i\"%i:binomial(1,0.52,100) for i in range(20)})\ndf.hist()\nplt.show()",
"Rewards for choosing A or B",
"# 1 arm\npayoff = [-0.1,0.5]\na = np.bincount(binomial(1,0.5,100))\nprint np.dot(a, payoff)",
"Rewards for choosing A or B or C",
"# 2-arm, equal reward\npayoff = [0,1,2]\na = np.bincount(binomial(2,0.5,100))\nprint np.dot(a, payoff)",
"Simulating multiple options with different payoffs",
"payoff=[1,1.05]\nreward1 = np.bincount(binomial(1,0.5,100))[1] * payoff[0]\nprint reward1\nreward2 = np.bincount(binomial(1,0.5,100))[1] * payoff[1]\nprint reward2\ntotal_reward = reward1 + reward2\nprint total_reward",
"Can we differentiat two arms with different payoffs?",
"def trial(payoff=[1, 1.01]):\n reward1 = np.bincount(binomial(1,0.5,100))[1] * payoff[0]\n reward2 = np.bincount(binomial(1,0.5,100))[1] * payoff[1]\n return reward1, reward2, reward1 + reward2, reward1-reward2\n\ntrials = 100\nsim = np.array([trial() for i in range(trials)])\ndf = pd.DataFrame(sim, columns=[\"t1\", \"t2\", \"tot\", \"diff\"])\nprint len(df[df[\"diff\"] <= 0.0])\ndf.hist()\nplt.show()",
"Can we differentiat two arms with different probabilities?",
"def trial(ps=[0.5, 0.51], payoff=[1, 1]):\n reward1 = np.bincount(binomial(1,ps[0],100))[1] * payoff[0]\n reward2 = np.bincount(binomial(1,ps[1],100))[1] * payoff[1]\n return reward1, reward2, reward1 + reward2, reward1-reward2\ntrials = 100\nsim = np.array([trial() for i in range(trials)])\ndf = pd.DataFrame(sim, columns=[\"t1\", \"t2\", \"tot\", \"diff\"])\nprint len(df[df[\"diff\"] <= 0.0])\ndf.hist()\nplt.show()",
"More Arms",
"Image(filename='img/abcd.jpg')\n\ndf = pd.DataFrame({\"tot_reward\":binomial(2,0.5,100)})\ndf.hist()\nplt.show()\n\ndf = pd.DataFrame({\"tot_reward\":binomial(3,0.5,100)})\ndf.hist()\nplt.show()\n\ntrials = 100\nmeans = [0.1, 0.1, 0.9]\nreward = np.zeros(trials)\nfor m in means:\n # equal rewards of 1 or 0\n reward += binomial(1,m,trials)\ndf = pd.DataFrame({\"reward\":reward, \"fair_reward\":binomial(3,0.5,trials)})\ndf.hist()\nplt.show()",
"2. Practical requirements of testing and operation\nExplore vs Exploit?\n\nMarketer is going to have a hard time waiting for rigorous testing as winners appear to emerge\nImplement online system vs. testing system\nFlavor: Chase successes vs. analyzing failures\nWant to automate balance of explore and exploit and run continuously\nSome danger in forgetting about significance and power\n\nBandit Book Utilities\nfrom \"Bandit Algorithms\" by John Myles White\nThree utilities:\n* arms\n* algorithms\n* monte carlo testing framework",
"sys.path.append('../../BanditsBook/python')\nfrom core import *",
"3. Optimizing outcomes with multiple options with different payoffs\nEpsilon Greedy\n\nSplit on audience + context + creative with fraction\nKeep track of past results\nSplit experiment and exploit offerings with probability epsilon\n\nNotes:\n* epsilon is the fraction of exploration\n* randomize everything all the time",
"random.seed(1)\n# Mean arm payoff (Bernoulli)\nmeans = [0.1, 0.1, 0.1, 0.1, 0.9]\n# Mulitple arms!\nn_arms = len(means)\nrandom.shuffle(means)\narms = map(lambda (mu): BernoulliArm(mu), means)\nprint(\"Best arm is \" + str(ind_max(means)))\n\nt_horizon = 250\nn_sims = 1000\n\ndata = []\nfor epsilon in [0.1, 0.2, 0.3, 0.4, 0.5]:\n algo = EpsilonGreedy(epsilon, [], [])\n algo.initialize(n_arms)\n # results are column oriented\n # simulation_num, time, chosen arm, reward, cumulative reward\n results = test_algorithm(algo, arms, n_sims, t_horizon)\n results.append([epsilon]*len(results[0]))\n data.extend(np.array(results).T)\n \ndf = pd.DataFrame(data, columns = [\"Sim\", \"T\", \"ChosenArm\", \"Reward\", \"CumulativeReward\", \"Epsilon\"])\ndf.head()\n\na=df.groupby([\"Epsilon\", \"T\"]).mean().reset_index()\na.head()\n\nggplot(aes(x=\"T\",y=\"Reward\", color=\"Epsilon\"), data=a) + geom_line()\n\nggplot(aes(x=\"T\",y=\"CumulativeReward\", color=\"Epsilon\"), data=a) + geom_line()",
"Anealing Softmax\nUpgrades to $\\epsilon$-Greedy:\n* Need to run more experiments if rewards appear to be nearly equal\n* Keep track of results for exploration as well as exploitation\nTempted choose each are in proportion to its current value, i.e.:\n$p(A) \\propto \\frac{rA}{rA + RB}$\n$p(B) \\propto \\frac{rB}{rA + RB}$\nRemember Boltzmann, and about adding a temperature, $\\tau$:\n$p(A) \\propto \\frac{-\\exp(rA/\\tau)}{(\\exp(rA/\\tau) + \\exp(rB/\\tau))}$\n$p(B) \\propto \\frac{-\\exp(rB/\\tau)}{(\\exp(rA/\\tau) + \\exp(rB/\\tau))}$\nAnd what is annealing?\n\n$\\tau = 0$ is deterministic case of winner takes all\n$\\tau = \\infty$ is all random, all time\nLet the temperature go to zero over time to settle into the state slowly (adiabatically)",
"t_horizon = 250\nn_sims = 1000\n\nalgo = AnnealingSoftmax([], [])\nalgo.initialize(n_arms)\ndata = np.array(test_algorithm(algo, arms, n_sims, t_horizon)).T\ndf = pd.DataFrame(data)\n#df.head()\ndf.columns = [\"Sim\", \"T\", \"ChosenArm\", \"Reward\", \"CumulativeReward\"]\ndf.head()\na=df.groupby([\"T\"]).mean().reset_index()\na.head()\n\nggplot(aes(x=\"T\",y=\"Reward\", color=\"Sim\"), data=a) + geom_line()\n\nggplot(aes(x=\"T\",y=\"CumulativeReward\", color=\"Sim\"), data=a) + geom_line()",
"UCB2\nAdd a confidnece measure to our estimates of averages!",
"t_horizon = 250\nn_sims = 1000\n\ndata = []\nfor alpha in [0.1, 0.3, 0.5, 0.7, 0.9]:\n algo = UCB2(alpha, [], [])\n algo.initialize(n_arms)\n results = test_algorithm(algo, arms, n_sims, t_horizon)\n results.append([alpha]*len(results[0]))\n data.extend(np.array(results).T)\n \ndf = pd.DataFrame(data, columns = [\"Sim\", \"T\", \"ChosenArm\", \"Reward\", \"CumulativeReward\", \"Alpha\"])\ndf.head()\n\na=df.groupby([\"Alpha\", \"T\"]).mean().reset_index()\na.head()\n\nggplot(aes(x=\"T\",y=\"Reward\", color=\"Alpha\"), data=a) + geom_line()\n\nggplot(aes(x=\"T\",y=\"CumulativeReward\", color=\"Alpha\"), data=a) + geom_line()",
"The end"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_decoding_spatio_temporal_source.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Decoding source space data\nDecoding, a.k.a MVPA or supervised machine learning applied to MEG\ndata in source space on the left cortical surface. Here f-test feature\nselection is employed to confine the classification to the potentially\nrelevant features. The classifier then is trained to selected features of\nepochs in source space.",
"# Author: Denis A. Engemann <denis.engemann@gmail.com>\n# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport mne\nimport os\nimport numpy as np\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.minimum_norm import apply_inverse_epochs, read_inverse_operator\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nfname_fwd = data_path + 'MEG/sample/sample_audvis-meg-oct-6-fwd.fif'\nfname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'\nsubjects_dir = data_path + '/subjects'\nsubject = os.environ['SUBJECT'] = subjects_dir + '/sample'\nos.environ['SUBJECTS_DIR'] = subjects_dir",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nfname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'\nlabel_names = 'Aud-rh', 'Vis-rh'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\n\ntmin, tmax = -0.2, 0.5\nevent_id = dict(aud_r=2, vis_r=4) # load contra-lateral conditions\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(2, None, method='iir') # replace baselining with high-pass\nevents = mne.read_events(event_fname)\n\n# Set up pick list: MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443'] # mark bads\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=None, preload=True,\n reject=dict(grad=4000e-13, eog=150e-6),\n decim=5) # decimate to save memory and increase speed\n\nepochs.equalize_event_counts(list(event_id.keys()), 'mintime', copy=False)\nepochs_list = [epochs[k] for k in event_id]\n\n# Compute inverse solution\nsnr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\nn_times = len(epochs.times)\nn_vertices = 3732\nn_epochs = len(epochs.events)\n\n# Load data and compute inverse solution and stcs for each epoch.\n\nnoise_cov = mne.read_cov(fname_cov)\ninverse_operator = read_inverse_operator(fname_inv)\nX = np.zeros([n_epochs, n_vertices, n_times])\n\n# to save memory, we'll load and transform our epochs step by step.\nfor condition_count, ep in zip([0, n_epochs / 2], epochs_list):\n stcs = apply_inverse_epochs(ep, inverse_operator, lambda2,\n method, pick_ori=\"normal\", # saves us memory\n return_generator=True)\n for jj, stc in enumerate(stcs):\n X[condition_count + jj] = stc.lh_data",
"Decoding in sensor space using a linear SVM",
"# Make arrays X and y such that :\n# X is 3d with X.shape[0] is the total number of epochs to classify\n# y is filled with integers coding for the class to predict\n# We must have X.shape[0] equal to y.shape[0]\n\n# we know the first half belongs to the first class, the second one\ny = np.repeat([0, 1], len(X) / 2) # belongs to the second class\nX = X.reshape(n_epochs, n_vertices * n_times)\n# we have to normalize the data before supplying them to our classifier\nX -= X.mean(axis=0)\nX /= X.std(axis=0)\n\n# prepare classifier\nfrom sklearn.svm import SVC # noqa\nfrom sklearn.cross_validation import ShuffleSplit # noqa\n\n# Define a monte-carlo cross-validation generator (reduce variance):\nn_splits = 10\nclf = SVC(C=1, kernel='linear')\ncv = ShuffleSplit(len(X), n_splits, test_size=0.2)\n\n# setup feature selection and classification pipeline\nfrom sklearn.feature_selection import SelectKBest, f_classif # noqa\nfrom sklearn.pipeline import Pipeline # noqa\n\n# we will use an ANOVA f-test to preselect relevant spatio-temporal units\nfeature_selection = SelectKBest(f_classif, k=500) # take the best 500\n# to make life easier we will create a pipeline object\nanova_svc = Pipeline([('anova', feature_selection), ('svc', clf)])\n\n# initialize score and feature weights result arrays\nscores = np.zeros(n_splits)\nfeature_weights = np.zeros([n_vertices, n_times])\n\n# hold on, this may take a moment\nfor ii, (train, test) in enumerate(cv):\n anova_svc.fit(X[train], y[train])\n y_pred = anova_svc.predict(X[test])\n y_test = y[test]\n scores[ii] = np.sum(y_pred == y_test) / float(len(y_test))\n feature_weights += feature_selection.inverse_transform(clf.coef_) \\\n .reshape(n_vertices, n_times)\n\nprint('Average prediction accuracy: %0.3f | standard deviation: %0.3f'\n % (scores.mean(), scores.std()))\n\n# prepare feature weights for visualization\nfeature_weights /= (ii + 1) # create average weights\n# create mask to avoid division error\nfeature_weights = np.ma.masked_array(feature_weights, feature_weights == 0)\n# normalize scores for visualization purposes\nfeature_weights /= feature_weights.std(axis=1)[:, None]\nfeature_weights -= feature_weights.mean(axis=1)[:, None]\n\n# unmask, take absolute values, emulate f-value scale\nfeature_weights = np.abs(feature_weights.data) * 10\n\nvertices = [stc.lh_vertno, np.array([], int)] # empty array for right hemi\nstc_feat = mne.SourceEstimate(feature_weights, vertices=vertices,\n tmin=stc.tmin, tstep=stc.tstep,\n subject='sample')\n\nbrain = stc_feat.plot()\nbrain.set_time(100)\nbrain.show_view('l') # take the medial view to further explore visual areas"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fastats/fastats
|
fastats/maths/_notebooks/gamma_beta_t_benchmark.ipynb
|
mit
|
[
"Summary\nOur log gamma function has negligible error for $0 < z < 100$ when compared to that of scipy's, but runs ~7X slower than scipy for large arrays ($10^7$ elements). The log gamma function is used as a coefficient in the Beta and Student-t PDF's, both of which have negligible Hellinger distance when compared to scipy. Our Beta and Student-t PDFs also run ~10X and ~20X faster respectively than that of scipy's for large arrays ($10^7$ elements).\nWe compute $\\log \\left(\\Gamma(x) \\right)$ instead of $\\Gamma(x)$ for all calculations as the latter can blow up and cause numerical overflow even for moderate $x$. As the Beta and Student-t PDF's include coefficients that are quotients of $\\Gamma(x)$, we can simplify this by a difference of logs and then take the exponent of the result. This is much more numerically stable.\nBenchmarking\nAll comparisons are made against scipy.\n\nLog gamma absolute and relative errors are computed for $10,000$ points in the range $0<z<100$\nPDF comparisons are made using the Hellinger distance\nHellinger distances are computed $10,000$ times (n_trials) as follows\nBeta PDF: $10,000$ points (n_points) in the range $0<x<1$ with $\\alpha$ and $\\beta$ sampled uniformaly at random in the range $0.1<\\alpha,\\beta<50$ each trial\nStudent-t PDF: $10,000$ points (n_points) in the range $-40<x<40$ with $\\nu$ sampled uniformly at random in the range $1<\\nu<50$ each trial\n\n\n\nFinally, we compare PDF runtimes for varying array sizes from $10^0$ to $10^7$ while fixing input parameters.",
"import numpy as np\nfrom numpy import log, exp, power, sqrt, pi, absolute\nfrom numba import njit\nfrom scipy.special import gammaln\nfrom scipy import stats\nimport time\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style(\"darkgrid\")\n\n# define error measures\n\n@njit\ndef rel_err(x_true, x_est):\n \"\"\"\n Relative error between a\n true value `x_true` and estimated\n value `x_est`\n \"\"\"\n d = absolute(x_true - x_est)\n out = d / x_true\n return out\n\n@njit\ndef abs_err(x_true, x_est):\n \"\"\"\n Absolute error between a\n true value `x_true` and estaimated\n value `x_est`\n \"\"\"\n out = absolute(x_true - x_est)\n return out\n\n@njit(fastmath=True)\ndef hellinger(p, q):\n \"\"\"\n Hellinger distance between two\n probability distributions `p` and `q`\n \"\"\"\n out = 0\n n = p.shape[0]\n for i in range(n):\n u = sqrt(p[i]) - sqrt(q[i])\n u = power(u, 2)\n out += u\n out = sqrt(0.5 * out)\n return out\n\nGAMMA_COEFS = np.array([\n 57.1562356658629235, -59.5979603554754912,\n 14.1360979747417471, -0.491913816097620199,\n .339946499848118887e-4, .465236289270485756e-4,\n -.983744753048795646e-4, .158088703224912494e-3,\n -.210264441724104883e-3, .217439618115212643e-3,\n -.164318106536763890e-3, .844182239838527433e-4,\n -.261908384015814087e-4, .368991826595316234e-5\n])\n\n@njit(fastmath=True)\ndef gammaln_nr(z):\n \"\"\"\n Log Gamma function.\n \n Returns Log of the Gamma function for\n all `z` > 0; gammaln_nr(z) = (z-1)!\n \n Given in Numerical Recipes 6.1\n \"\"\"\n y = z\n tmp = z + 5.24218750000000000\n tmp = (z + 0.5) * log(tmp) - tmp\n ser = np.ones_like(y) * 0.999999999999997092\n \n n = GAMMA_COEFS.shape[0]\n for j in range(n):\n y = y + 1\n ser = ser + GAMMA_COEFS[j] / y\n \n out = tmp + log(2.5066282746310005 * ser / z)\n return out\n\n@njit(fastmath=True)\ndef beta_pdf(x, alpha, beta):\n \"\"\"\n Beta Probability Density Function.\n \n Beta PDF across `x` for parameters\n `a` and `b`\n \"\"\"\n u = gammaln_nr(alpha + beta) - gammaln_nr(alpha) - gammaln_nr(beta)\n u = exp(u)\n v = power(x, alpha - 1) * power(1 - x, beta - 1)\n out = u * v\n return out\n\n@njit(fastmath=True)\ndef t_pdf(x, nu):\n \"\"\"\n Student-t Probability\n Density Function.\n \n t PDF across `x` for\n degrees of freedom `nu`\n \"\"\"\n u = gammaln_nr(0.5 * (nu + 1)) - gammaln_nr(0.5 * nu)\n u = exp(u)\n v = sqrt(nu * pi) * power(1 + power(x, 2) / nu, 0.5 * (nu + 1))\n out = u / v\n return out",
"Log Gamma Error",
"zs = np.linspace(0.001, 100, 10000)\nerrs = np.zeros((len(zs),2))\n\nxnm = gammaln_nr(zs) # fastats\nxsc = gammaln(zs) # scipy\nerrs[:,0] = rel_err(xsc, xnm)\nerrs[:,1] = abs_err(xsc, xnm)\n\nfig, ax = plt.subplots(figsize=(12,8));\nsns.lineplot(zs[::20], errs[:,0][::20], label=\"Relative Error\", color='orange');\nax.axhline(y=errs[:,0].mean(), label=\"Average Relative Error\", color=\"orange\", linestyle=\"--\");\nsns.lineplot(zs[::20], errs[:,1][::20], label=\"Absolute Error\", color=\"blue\");\nax.axhline(y=errs[:,1].mean(), label=\"Average Absolute Error\", color=\"blue\", linestyle=\"--\");\nax.set_title(\"Log Gamma Error; Fastats vs. Scipy\")\nax.set_ylabel(\"Error\", fontsize=15)\nax.set_xlabel(\"z!\", fontsize=20)\nax.legend();",
"Log Gamma Execution Time",
"n_trials = 8\nscipy_times = np.zeros(n_trials)\nfastats_times = np.zeros(n_trials)\n\nfor i in range(n_trials):\n zs = np.linspace(0.001, 100, 10**i) # evaluate gammaln over this range\n\n # dont take first timing - this is just compi\n start = time.time()\n gammaln_nr(zs)\n end = time.time()\n\n start = time.time()\n gammaln_nr(zs)\n end = time.time()\n fastats_times[i] = end - start\n\n start = time.time()\n gammaln(zs)\n end = time.time()\n scipy_times[i] = end - start\n\nfig, ax = plt.subplots(figsize=(12,8))\nsns.lineplot(np.logspace(0, n_trials-1, n_trials), fastats_times, label=\"fastats\");\nsns.lineplot(np.logspace(0, n_trials-1, n_trials), scipy_times, label=\"scipy\");\nax.set(xscale=\"log\");\nax.set_xlabel(\"Array Size\", fontsize=15);\nax.set_ylabel(\"Execution Time (s)\", fontsize=15);\nax.set_title(\"Execution Time of Log Gamma; Fastats vs. Scipy\");\n\n# our log gamma function is still 2X faster for single values ;)\nz = np.random.uniform(1,100)\nprint(z)\n%timeit gammaln_nr(z)\n%timeit gammaln(z)",
"Beta PDF\nHellinger Distance\nChoose $\\alpha$ and $\\beta$ randomly for each trial",
"n_points = 10000\nzs = np.linspace(0.001, 0.999, n_points) # beta only defined on interval 0<x<1\nn_trials = 10000\nhellinger_dists = np.zeros(n_trials)\nalphas = np.random.uniform(0.1, 50, n_trials)\nbetas = np.random.uniform(0.1, 50, n_trials)\n\nfor i in range(n_trials):\n xfs = beta_pdf(zs, alphas[i], betas[i])\n xsc = stats.beta(a=alphas[i], b=betas[i]).pdf(zs)\n hellinger_dists[i] = hellinger(xfs, xsc)\n\nfig, ax = plt.subplots(figsize=(12,8));\nsns.lineplot(np.arange(n_trials)[::20], hellinger_dists[::20]); # plot every 20th point\nax.axhline(y=hellinger_dists.mean(), label=\"Average\", linestyle=\"--\");\n\nax.set_title(\"Hellinger Distance of Beta PDF; Fastats vs. Scipy\")\nax.set_ylabel(\"Hellinger Distance\", fontsize=15)\nax.set_xlabel(\"Trial\", fontsize=15)\nax.legend();",
"Execution Time\nFix $\\alpha$ and $\\beta$ for all array sizes",
"n_trials = 8\nscipy_times = np.zeros(n_trials)\nfastats_times = np.zeros(n_trials)\nalpha = 4\nbeta = 7\n\nfor i in range(n_trials):\n zs = np.linspace(0.001, 0.999, 10**i) # beta only defined on interval 0<x<1\n\n start = time.time()\n beta_pdf(zs, alpha, beta)\n end = time.time()\n\n start = time.time()\n beta_pdf(zs, alpha, beta)\n end = time.time()\n fastats_times[i] = end - start\n\n start = time.time()\n stats.beta(a=alpha, b=beta).pdf(zs)\n end = time.time()\n scipy_times[i] = end - start\n\nfig, ax = plt.subplots(figsize=(12,8))\nsns.lineplot(np.logspace(0, n_trials-1, n_trials), fastats_times, label=\"fastats\");\nsns.lineplot(np.logspace(0, n_trials-1, n_trials), scipy_times, label=\"scipy\");\nax.set(xscale=\"log\");\nax.set_xlabel(\"Array Size\", fontsize=15);\nax.set_ylabel(\"Execution Time (s)\", fontsize=15);\nax.set_title(\"Execution Time of Beta PDF; Fastats vs. Scipy\");",
"Student-t PDF\nHellinger Distance\nChoose $\\nu$ randomly for each trial",
"n_points = 10000\nzs = np.linspace(-40, 40, n_points) # t defined on whole real line\nn_trials = 10000\nhellinger_dists = np.zeros(n_trials)\nnus = np.random.uniform(1, 50, n_trials)\n\nfor i in range(n_trials):\n xfs = t_pdf(zs, nus[i])\n xsc = stats.t(df=nus[i]).pdf(zs)\n hellinger_dists[i] = hellinger(xfs, xsc)\n\nfig, ax = plt.subplots(figsize=(12,8));\nsns.lineplot(np.arange(n_trials)[::20], hellinger_dists[::20]); # plot every 20th point\nax.axhline(y=hellinger_dists.mean(), label=\"Average\", linestyle=\"--\");\n\nax.set_title(\"Hellinger Distance of Student-t PDF; Fastats vs. Scipy\")\nax.set_ylabel(\"Hellinger Distance\", fontsize=15)\nax.set_xlabel(\"Trial\", fontsize=15)\nax.legend();",
"Execution Time\nFix $\\nu$ for all array sizes",
"n_trials = 8\nscipy_times = np.zeros(n_trials)\nfastats_times = np.zeros(n_trials)\nnu = 5\n\nfor i in range(n_trials):\n zs = np.linspace(-40, 40, 10**i)\n\n start = time.time()\n t_pdf(zs, nu)\n end = time.time()\n\n start = time.time()\n t_pdf(zs, nu)\n end = time.time()\n fastats_times[i] = end - start\n\n start = time.time()\n stats.t(df=nu).pdf(zs)\n end = time.time()\n scipy_times[i] = end - start\n\nfig, ax = plt.subplots(figsize=(12,8))\nsns.lineplot(np.logspace(0, n_trials-1, n_trials), fastats_times, label=\"fastats\");\nsns.lineplot(np.logspace(0, n_trials-1, n_trials), scipy_times, label=\"scipy\");\nax.set(xscale=\"log\");\nax.set_xlabel(\"Array Size\", fontsize=15);\nax.set_ylabel(\"Execution Time (s)\", fontsize=15);\nax.set_title(\"Execution Time of Student-t PDF; Fastats vs. Scipy\");",
"Conclusion\nOur Log Gamma function is extremely accurate for $0 < z < 100$, with both relative error and absolute error almost always $<10^{-13}$. The Log Gamma run time is ~7X slower than scipy's for large arrays.\nOur Beta and Student-t PDF's have negligible Hellinger distance; $<10^{-12}$ and $<10^{-13}$ on average respectively. Our PDF execution times are always less than that of scipy's. For large arrays, with $10^7$ elements, this can be ~10X for Beta PDF and ~20X for Student-t faster than scipy."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
joommf/tutorial
|
workshops/Durham/.ipynb_checkpoints/tutorial1_geometry-checkpoint.ipynb
|
bsd-3-clause
|
[
"Tutorial 1 - Geometry\nIn this tutorial we explore how simulated geometries can be defined and initial magnetisation states specified. The package we use to define finite difference meshes and fields is discretisedfield.",
"import discretisedfield as df\n%matplotlib inline",
"Defining the geometry\nLet us say that we need to define a nanocube mesh with edge length $L=100\\,\\text{nm}$ and discretisation cell $(d, d, d)$, with $d=10 \\,\\text{nm}$. For that we need to define two points $p_{1}$ and $p_{2}$ between which the mesh spans and provide them (together with the discretisation cell) to the Mesh class:",
"L = 100e-9 # edge length (m)\nd = 10e-9 # cell size (m)\n\np1 = (0, 0, 0) # first point of cuboid containing simulation geometry\np2 = (L, L, L) # second point\ncell = (d, d, d) # discretisation cell\n\nmesh = df.Mesh(p1=p1, p2=p2, cell=cell) # mesh definition",
"We can then inspect some basic parameters of the mesh:\n\nEdge length:",
"mesh.l # edge length",
"Number of discretisation cells in all three directions:",
"mesh.n # number of cells ",
"Minimum mesh domain coordinate:",
"mesh.pmin # minimum mesh domain coordinate",
"Maximum mesh domain coordinate:",
"mesh.pmax # maximum mesh domain coordinate",
"Or we can visualise the mesh domain and a discretisation cell:",
"mesh",
"Defining a field on a geometry\nAfter we defined a mesh, we can define different finite difference fields. For that, we use Field class. We need to provide the mesh, dimension of the data values, and the value of the field. Let us define a 3d-vector field (dim=3) that is uniform in the $(1, 0, 0)$ direction.",
"m = df.Field(mesh, dim=3, value=(1, 0, 0))",
"A simple slice visualisation of the mesh in the $z$ direction at $L/2$ is:",
"m.plot_plane(\"z\");",
"Spatially varying field\nWhen we defined a uniform vector field, we used a tuple (1, 0, 0) to define its value. However, we can also provide a Python function if we want to define a non-uniform field. This function takes the position in the mesh as input, and returns a value that the field should have at that point:",
"def m_value(pos):\n x, y, z = pos # unpack position into individual components\n if x > L/4:\n return (1, 1, 0)\n else:\n return (-1, 0, 0)\n \nm = df.Field(mesh, dim=3, value=m_value)\n\nm.plot_plane(\"z\");",
"The field object can be treated as a mathematical function - if we pass a position tuple to the function, it will return the vector value of the field at that location:",
"point = (0, 0, 0)\nm(point)\n\nm([90e-9, 0, 0])",
"In micromagnetics, the saturation magnetisation $M_\\mathrm{s}$ is typically constant (at least for each position). The Field constructor accepts an additional parameter norm which we can use for that:",
"Ms = 8e6 # saturation magnetisation (A/m)\nm = df.Field(mesh, dim=3, value=m_value, norm=Ms)\n\nm([0, 0, 0])\n\nm([90e-9,0,0])",
"Spatially varying norm $M_\\mathrm{s}$\nBy defining different norms, we can specify different geometries, so that $M_\\text{s}=0$ outside the mesh. For instance, let us assume we want to define a sphere of radius $L/2$ and magnetise it in the negative $y$ direction.",
"mesh = df.Mesh(p1=(-L/2, -L/2, -L/2), p2=(L/2, L/2, L/2), cell=(d, d, d))\n\ndef Ms_value(pos):\n x, y, z = pos\n if (x**2 + y**2 + z**2)**0.5 < L/2:\n return Ms\n else:\n return 0\n\nm = df.Field(mesh, dim=3, value=(0, -1, 0), norm=Ms_value)\n\nm.plot_plane(\"z\");",
"Exercise 1a\nThe code below defines as thin film (thickness $t$) in the x-y plane. Extend the code in the following cell so that the magnetisation $M_\\mathrm{s}$ is $10^7\\mathrm{A/m}$ in a disk of thickness $t = 10 \\,\\text{nm}$ and diameter $d = 120 \\,\\text{nm}$. The disk is centred around the origin (0, 0, 0). The magnetisation should be $\\mathbf{m} = (1, 0, 0)$.",
"t = 10e-9 # thickness (m)\nd = 120e-9 # diameter (m)\ncell = (5e-9, 5e-9, 5e-9) # discretisation cell size (m)\nMs = 1e7 # saturation magnetisation (A/m)\n\nmesh = df.Mesh(p1=(-d/2, -d/2, 0), p2=(d/2, d/2, t), cell=cell)\n\ndef Ms_value(pos):\n x, y, z = pos\n # insert code here to define disk\n return Ms\n \nm = df.Field(mesh, value=(1, 0, 0), norm=Ms_value)\n\nm.plot_plane(\"z\");",
"Exercise 1b\nExtend the previous example in the next cell so that the magnetisation is:\n$$\\mathbf{m} = \\begin{cases} (-1, 0, 0) & \\text{for } y \\le 0 \\ (1, 0, 0) & \\text{for } y > 0 \\end{cases}$$\nwith saturation magnetisation $10^{7} \\,\\text{A}\\,\\text{m}^{-1}$.",
"t = 10e-9 # thickness (m)\nd = 120e-9 # diameter (m)\ncell = (5e-9, 5e-9, 5e-9) # discretisation cell size (m)\nMs = 1e7 # saturation magnetisation (A/m)\n\nmesh = df.Mesh(p1=(-d/2, -d/2, 0), p2=(d/2, d/2, t), cell=cell)\n\ndef Ms_value(pos):\n x, y, z = pos\n # copy code from exercise 1a\n return Ms\n \ndef m_value(pos):\n x, y, z = pos\n # insert code here for defining the two sections\n return (1, 0, 0)\n \nm = df.Field(mesh, value=m_value, norm=Ms_value)\n\nm.plot_plane(\"z\");",
"Exercise 2\nExtend the code sample provided below to define the following geometry with $10\\,\\text{nm}$ thickness:\n<img src=\"geometry_exercise2.png\",width=400>\nThe magnetisation saturation is $8 \\times 10^{6} \\,\\text{A}\\,\\text{m}^{-1}$ and the magnetisation direction is as shown in the figure.",
"cell = (5e-9, 5e-9, 5e-9) # discretisation cell size (m)\nMs = 8e6 # saturation magnetisation (A/m)\n\nmesh = df.Mesh(p1=(0, 0, 0), p2=(100e-9, 50e-9, 10e-9), cell=cell)\n\ndef Ms_value(pos):\n x, y, z = pos\n # Insert missing code here to get the right shape of geometry.\n return Ms\n \ndef m_value(pos):\n x, y, z = pos\n # Insert missing code here.\n return (1, 1, 1)\n \nm = df.Field(mesh, value=m_value, norm=Ms_value)\n\nm.plot_plane(\"z\");"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tritemio/multispot_paper
|
usALEX - Corrections - Leakage fit.ipynb
|
mit
|
[
"Leakage coefficient fit\n\nThis notebook estracts the leakage coefficient from the set of 5 us-ALEX smFRET measurements.\n\nWhat it does?\nFor each measurement, we fit the donor-only peak position of the uncorrected proximity ratio histogram. These values are saved in a .txt file. This notebook just performs a weighted mean where the weights are the number of bursts in each measurement.\nThis notebook read data from the file:",
"#bsearch_ph_sel = 'all-ph'\n#bsearch_ph_sel = 'Dex'\nbsearch_ph_sel = 'DexDem'\n\ndata_file = 'results/usALEX-5samples-PR-raw-%s.csv' % bsearch_ph_sel",
"To recompute the PR data used by this notebook run the \n8-spots paper analysis notebook.\nComputation",
"from __future__ import division\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n%config InlineBackend.figure_format='retina' # for hi-dpi displays\n\nsns.set_style('whitegrid')\npalette = ('Paired', 10)\nsns.palplot(sns.color_palette(*palette))\nsns.set_palette(*palette)\n\ndata = pd.read_csv(data_file).set_index('sample')\ndata\n\ndisplay(data[['E_pr_do_gauss', 'E_pr_do_kde', 'E_pr_do_hsm', 'n_bursts_do']])\nprint('KDE Mean (%): ', data.E_pr_do_kde.mean()*100)\nprint('KDE Std. Dev. (%):', data.E_pr_do_kde.std()*100)\n\nd = data[['E_pr_do_gauss', 'E_pr_do_kde', 'E_pr_do_hsm']]#, 'n_bursts_do']]\nd.plot(lw=3);",
"Create Leakage Table",
"E_table = data[['E_pr_do_gauss', 'E_pr_do_kde']]\nE_table\n\nlk_table = E_table / (1 - E_table)\nlk_table.columns = [c.replace('E_pr_do', 'lk') for c in E_table.columns]\nlk_table['num_bursts'] = data['n_bursts_do']\nlk_table",
"Average leakage coefficient",
"data.E_pr_do_kde\n\nlk_table.lk_kde\n\nE_m = np.average(data.E_pr_do_kde, weights=data.n_bursts_do)\nE_m\n\nk_E_m = E_m / (1 - E_m)\nk_E_m\n\nk_m = np.average(lk_table.lk_kde, weights=data.n_bursts_do)\nk_m",
"Conclusions\nEither averaging $E_{PR}$ or the corresponding $k = n_d/n_a$ the result for the leakage coefficient is ~10 % (D-only peak fitted finding the maximum of the KDE).\nSave data\nFull table",
"stats = pd.concat([lk_table.mean(), lk_table.std()], axis=1, keys=['mean', 'std']).T\nstats\n\ntable_to_save = lk_table.append(stats)\ntable_to_save = table_to_save.round({'lk_gauss': 5, 'lk_kde': 5, 'num_bursts': 2})\ntable_to_save\n\ntable_to_save.to_csv('results/table_usalex_5samples_leakage_coeff.csv')",
"Average coefficient",
"'%.5f' % k_m\n\nwith open('results/usALEX - leakage coefficient %s.csv' % bsearch_ph_sel, 'w') as f:\n f.write('%.5f' % k_m)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adrn/thejoker
|
docs/examples/Thompson-black-hole.ipynb
|
mit
|
[
"%run notebook_setup",
"Reproducing the black hole discovery in Thompson et al. 2019\nIn this science demo tutorial, we will reproduce the results in Thompson et al. 2019, who found and followed-up a candidate stellar-mass black hole companion to a giant star in the Milky Way. We will first use The Joker to constrain the orbit of the system using the TRES follow-up radial velocity data released in their paper and show that we get consistent period and companion mass constraints from modeling these data. We will then do a joint analysis of the TRES and APOGEE data for this source by simultaneously fitting for and marginalizing over an unknown constant velocity offset between the two surveys.\nA bunch of imports we will need later:",
"from astropy.io import ascii\nfrom astropy.time import Time\nimport astropy.units as u\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\n\nimport pymc3 as pm\nimport pymc3_ext as pmx\nimport exoplanet.units as xu\nimport exoplanet as xo\nimport corner\nimport arviz as az\n\nimport thejoker as tj\nfrom twobody.transforms import get_m2_min\n\n# set up a random number generator to ensure reproducibility\nseed = 42\nrnd = np.random.default_rng(seed=seed)",
"Load the data\nWe will start by loading data, copy-pasted from Table S2 in Thompson et al. 2019):",
"tres_tbl = ascii.read(\n \"\"\"8006.97517 0.000 0.075\n 8023.98151 -43.313 0.075\n 8039.89955 -27.963 0.045\n 8051.98423 10.928 0.118\n 8070.99556 43.782 0.075\n 8099.80651 -30.033 0.054\n 8106.91698 -42.872 0.135\n 8112.81800 -44.863 0.088\n 8123.79627 -25.810 0.115\n 8136.59960 15.691 0.146\n 8143.78352 34.281 0.087\"\"\", \n names=['HJD', 'rv', 'rv_err'])\ntres_tbl['rv'].unit = u.km/u.s\ntres_tbl['rv_err'].unit = u.km/u.s\n\napogee_tbl = ascii.read(\n \"\"\"6204.95544 -37.417 0.011\n 6229.92499 34.846 0.010\n 6233.87715 42.567 0.010\"\"\", \n names=['HJD', 'rv', 'rv_err'])\napogee_tbl['rv'].unit = u.km/u.s\napogee_tbl['rv_err'].unit = u.km/u.s\n\ntres_data = tj.RVData(\n t=Time(tres_tbl['HJD'] + 2450000, format='jd', scale='tcb'),\n rv=u.Quantity(tres_tbl['rv']), \n rv_err=u.Quantity(tres_tbl['rv_err']))\n\napogee_data = tj.RVData(\n t=Time(apogee_tbl['HJD'] + 2450000, format='jd', scale='tcb'),\n rv=u.Quantity(apogee_tbl['rv']), \n rv_err=u.Quantity(apogee_tbl['rv_err']))",
"Let's now plot the data from these two instruments:",
"for d, name in zip([tres_data, apogee_data], ['TRES', 'APOGEE']):\n d.plot(color=None, label=name)\nplt.legend(fontsize=18)",
"Run The Joker with just the TRES data\nThe two data sets are separated by a large gap in observations between the end of APOGEE and the start of the RV follow-up with TRES. Since there are more observations with TRES, we will start by running The Joker with just data from TRES before using all of the data. Let's plot the TRES data alone:",
"_ = tres_data.plot()",
"It is pretty clear that there is a periodic signal in the data, with a period between 10s to ~100 days (from eyeballing the plot above), so this limits the range of periods we need to sample over with The Joker below. The reported uncertainties on the individual RV measurements (plotted above, I swear) are all very small (typically smaller than the markers). So, we may want to allow for the fact that these could be under-estimated. With The Joker, we support this by accepting an additional nonlinear parameter, s, that specifies a global, extra uncertainty that is added in quadrature to the data uncertainties while running the sampler. That is, the uncertainties used for computing the likelihood in The Joker are computed as:\n$$\n\\sigma_n = \\sqrt{\\sigma_{n,0}^2 + s^2}\n$$\nwhere $\\sigma_{n,0}$ are the values reported for each $n$ data point in the tables above. We'll use a log-normal prior on this extra error term, but will otherwise use the default prior form for The Joker:",
"with pm.Model() as model:\n # Allow extra error to account for under-estimated error bars\n s = xu.with_unit(pm.Lognormal('s', -2, 1),\n u.km/u.s)\n \n prior = tj.JokerPrior.default(\n P_min=16*u.day, P_max=128*u.day, # Range of periods to consider\n sigma_K0=30*u.km/u.s, P0=1*u.year, # scale of the prior on semiamplitude, K\n sigma_v=25*u.km/u.s, # std dev of the prior on the systemic velocity, v0\n s=s\n )",
"With the prior set up, we can now generate prior samples, and run the rejection sampling step of The Joker:",
"# Generate a large number of prior samples:\nprior_samples = prior.sample(size=1_000_000,\n random_state=rnd)\n\n# Run rejection sampling with The Joker:\njoker = tj.TheJoker(prior, random_state=rnd)\nsamples = joker.rejection_sample(tres_data, prior_samples,\n max_posterior_samples=256)\nsamples",
"Only 1 sample is returned from the rejection sampling step - let's see how well it matches the data:",
"_ = tj.plot_rv_curves(samples, data=tres_data)",
"Let's look at the values of the sample that was returned, and compare that to the values reported in Thompson et al. 2019, included below for convenience:\n$$\nP = 83.205 \\pm 0.064\\\ne = 0.00476 \\pm 0.00255\\\nK = 44.615 \\pm 0.123\n$$",
"samples.tbl['P', 'e', 'K']",
"Already these look very consistent with the values inferred in the paper! \nLet's now also plot the data phase-folded on the period returned in the one sample we got from The Joker:",
"_ = tres_data.plot(phase_fold=samples[0]['P'])",
"At this point, since the data are very constraining, we could use this one Joker sample to initialize standard MCMC to generate posterior samplings in the orbital parameters for this system. We will do that below, but first let's see how things look if we include both TRES and APOGEE data in our modeling.\nRun The Joker with TRES+APOGEE data\nOne of the challenges with incorporating data from the two surveys is that they were taken with two different spectrographs, and there could be instrumental offsets that manifest as shifts in the absolute radial velocities measured between the two instruments. The Joker now supports simultaneously sampling over additional parameters that represent instrumental or calibratrion offsets, so let's take a look at how to run The Joker in this mode. \nTo start, we can pack the two datasets into a single list that contains data from both surveys:",
"data = [apogee_data, tres_data]",
"Before we run anything, let's try phase-folding both datasets on the period value we got from running on the TRES data alone:",
"tres_data.plot(color=None, phase_fold=np.mean(samples['P']))\napogee_data.plot(color=None, phase_fold=np.mean(samples['P']))",
"That looks pretty good, but the period is clearly slightly off and there seems to be a constant velocity offset between the two surveys, given that the APOGEE RV points don't seem to lie in the RV curve. So, let's now try running The Joker on the joined dataset!\nTo allow for an unknown constant velocity offset between TRES and APOGEE, we have to define a new parameter for this offset and specify a prior. We'll put a Gaussian prior on this offset parameter (named dv0_1 below), with a mean of 0 and a standard deviation of 10 km/s, because it doesn't look like the surveys have a huge offset.",
"with pm.Model() as model:\n # The parameter that represents the constant velocity offset between\n # APOGEE and TRES:\n dv0_1 = xu.with_unit(pm.Normal('dv0_1', 0, 5.),\n u.km/u.s)\n \n # The same extra uncertainty parameter as previously defined\n s = xu.with_unit(pm.Lognormal('s', -2, 1),\n u.km/u.s)\n \n # We can restrict the prior on prior now, using the above\n prior_joint = tj.JokerPrior.default(\n # P_min=16*u.day, P_max=128*u.day,\n P_min=75*u.day, P_max=90*u.day,\n sigma_K0=30*u.km/u.s, P0=1*u.year,\n sigma_v=25*u.km/u.s,\n v0_offsets=[dv0_1],\n s=s\n )\n \nprior_samples_joint = prior_joint.sample(size=10_000_000, \n random_state=rnd)\n\n# Run rejection sampling with The Joker:\njoker_joint = tj.TheJoker(prior_joint, random_state=rnd)\nsamples_joint = joker_joint.rejection_sample(data, \n prior_samples_joint,\n max_posterior_samples=256)\nsamples_joint",
"Here we again only get one sample back from The Joker, because these ata are so constraining:",
"_ = tj.plot_rv_curves(samples_joint, data=data)",
"Now, let's fire up standard MCMC, using the one Joker sample to initialize. We will use the NUTS sampler in pymc3 to run here. When running MCMC to model radial velocities with Keplerian orbits, it is typically important to think about the parametrization. There are several angle parameters in the two-body problem (e.g., argument of pericenter, phase, inclination, etc.) that can be especially hard to sample over naïvely. Here, for running MCMC, we will instead sample over $M_0 - \\omega, \\omega$ instead of $M_0, \\omega$, and we will define these angles as pymc3_ext.distributions.Angle distributions, which internally transform and sample in $\\cos{x}, \\sin{x}$ instead:",
"from pymc3_ext.distributions import Angle\n\nwith pm.Model():\n \n # See note above: when running MCMC, we will sample in the parameters\n # (M0 - omega, omega) instead of (M0, omega)\n M0_m_omega = xu.with_unit(Angle('M0_m_omega'), u.radian)\n omega = xu.with_unit(Angle('omega'), u.radian)\n # M0 = xu.with_unit(Angle('M0'), u.radian)\n M0 = xu.with_unit(pm.Deterministic('M0', M0_m_omega + omega),\n u.radian)\n \n # The same offset and extra uncertainty parameters as above:\n dv0_1 = xu.with_unit(pm.Normal('dv0_1', 0, 5.), u.km/u.s)\n s = xu.with_unit(pm.Lognormal('s', -2, 0.5),\n u.km/u.s)\n \n prior_mcmc = tj.JokerPrior.default(\n P_min=16*u.day, P_max=128*u.day,\n sigma_K0=30*u.km/u.s, P0=1*u.year,\n sigma_v=25*u.km/u.s,\n v0_offsets=[dv0_1],\n s=s,\n pars={'M0': M0, 'omega': omega}\n )\n \n joker_mcmc = tj.TheJoker(prior_mcmc, random_state=rnd)\n mcmc_init = joker_mcmc.setup_mcmc(data, samples_joint)\n \n trace = pmx.sample(\n tune=500, draws=1000,\n start=mcmc_init,\n random_seed=seed,\n cores=1, chains=2)",
"We can now use pymc3 to look at some statistics of the MC chains to assess convergence:",
"az.summary(trace, var_names=prior_mcmc.par_names)",
"We can then transform the MCMC samples back into a JokerSamples instance so we can manipulate and visualize the samples:",
"mcmc_samples = joker_mcmc.trace_to_samples(trace, data=data)\nmcmc_samples.wrap_K()",
"For example, we can make a corner plot of the orbital parameters (note the strong degenceracy between M0 and omega! But also note that we don't sample in these parameters explicitly, so this shouldn't affect convergence):",
"df = mcmc_samples.tbl.to_pandas()\n_ = corner.corner(df)",
"We can also use the median MCMC sample to fold the data and plot residuals relative to our inferred RV model:",
"fig, axes = plt.subplots(2, 1, figsize=(6, 8), sharex=True)\n\n_ = tj.plot_phase_fold(mcmc_samples.median(), data, ax=axes[0], add_labels=False)\n_ = tj.plot_phase_fold(mcmc_samples.median(), data, ax=axes[1], residual=True)\n\nfor ax in axes:\n ax.set_ylabel(f'RV [{apogee_data.rv.unit:latex_inline}]')\n \naxes[1].axhline(0, zorder=-10, color='tab:green', alpha=0.5)\naxes[1].set_ylim(-1, 1)",
"Finally, let's convert our orbit samples into binary mass function, $f(M)$, values to compare with one of the main conclusions of the Thompson et al. paper. We can do this by first converting the samples to KeplerOrbit objects, and then using the .m_f attribute to get the binary mass function values:",
"mfs = u.Quantity([mcmc_samples.get_orbit(i).m_f\n for i in np.random.choice(len(mcmc_samples), 1024)])\nplt.hist(mfs.to_value(u.Msun), bins=32);\nplt.xlabel(rf'$f(M)$ [{u.Msun:latex_inline}]');\n\n# Values from Thompson et al., showing 1-sigma region\nplt.axvline(0.766, zorder=100, color='tab:orange')\nplt.axvspan(0.766 - 0.00637,\n 0.766 + 0.00637,\n zorder=10, color='tab:orange', \n alpha=0.4, lw=0)",
"In the end, using both the APOGEE and TRES data, we confirm the results from the paper, and find that the binary mass function value suggests a large mass companion. A success for reproducible science!"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/fio-ronm/cmip6/models/sandbox-1/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: FIO-RONM\nSource ID: SANDBOX-1\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:01\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection schemes name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Reprenstation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
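"For illustration, completing one of these BOOLEAN properties just means replacing the placeholder with a Python literal. The value below is hypothetical, not a statement about any particular model:",
"# Hypothetical example of a completed BOOLEAN property (illustrative only)\n# DOC.set_value(True)",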
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nimagh/MachineLearning
|
BayesianOptimization/BayesianOptimization.ipynb
|
gpl-2.0
|
[
"Bayesian Optimization\n\nEver thought about an automatic way to tune hyperparameters of your beloved machine learning algorithm? for example learning rate, weight decay, and drop out probability in a neural network? here we will look through a proposed way to achive a set of good hyperparameters by bayesian means.\nIn Bayesian Optimization (BO) a machine learning algorithem can be looked as a blackbox which gives out some measure of performance, e.g accuracy, accuracy per second, or any other score value that change relative to a set of parameters. \nBy the end of this jupyter notebook we will utilize BO to get optimized parameters for a minimalistic network to learn digit handwriting recognition from MNIST dataset. \nIt is recommended to go through my notebook on gaussian process regression before progressing with material presented here.\nI will use TensorFlow for neural network implementation and it is good if you go through this tutorial if you have no previous experience with this framework. \nFor an all-in-one Docker image containig major deep learning frameworks consider this repository. \nEnvironment Setup",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import minimize\n\n\nfrom scipy.linalg import det\nfrom scipy.linalg import pinv2 as inv #pinv uses linalg.lstsq algorithm while pinv2 uses SVD\nfrom scipy.stats import norm\n\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n%autosave 0",
"We will begin by a simple 1D optimization problem for a function that we dont know the closed form of it (for presentation purposes we will know thee analytic optimum point of our function). Then we will try with a 2D function and finally we will apply our be then prepared tools in a real neural network case. Let's begin with our fun! but first we need our building parts.\nGP Tools\nHere we will include code for all the GP tools that we will use further in the notebook. For closer explanation you can see my notebook on gaussian process regression.",
"# %load GPR.py\n# GP squared exponential with its own hyperparameters\ndef get_kernel(X1,X2,sigmaf,l,sigman):\n k = lambda x1,x2,sigmaf,l,sigman:(sigmaf**2)*np.exp(-(1/float(2*(l**2)))*np.dot((x1-x2),(x1-x2).T)) + (sigman**2);\n K = np.zeros((X1.shape[0],X2.shape[0]))\n for i in range(0,X1.shape[0]):\n for j in range(0,X2.shape[0]):\n if i==j:\n K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,sigman);\n else:\n K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,0);\n return K\n\n# finding optimized parameters for GP by maximizing closed-form log p(y|x,theta)\ndef fit_GP(x,y, gp_params):\n # log p(y|x,theta) = 0.5y'K^-1y-0.5log|K|-0.5nlog2pi\n # sigman > measurement noise\n # sigmaf > maximum covariance value\n # l > lenght-scale of our GP with squared exponential kernel\n \n bounds = [(0.001,2),(0.001,1),(0.001,1)]\n logpxtheta = lambda p: 0.5*(np.dot(y.T,np.dot(inv(get_kernel(x,x,p[0],p[1],p[2])),y)) + np.log(det(get_kernel(x,x,p[0],p[1],p[2]))) + x.shape[0]*np.log(2*np.pi)).reshape(-1,1)[0];\n \n res = minimize(fun=logpxtheta,\n x0=gp_params,\n bounds=bounds,\n method='L-BFGS-B')\n new_gp_params = res['x']\n #print 'new GP parameters', new_gp_params\n return new_gp_params\n\n#putting everything about GP to use here\ndef GPR(x_predict,x,y,gp_params):\n\n pdim = x_predict.shape[0]\n\n sigmaf, l, sigman = gp_params\n\n K = get_kernel(x, x, sigmaf, l, sigman)\n K_s = get_kernel(x_predict, x, sigmaf, l, 0)\n K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)\n\n #print K.shape,K_s.shape, K_ss.shape, y.shape\n y_predict_mean = np.dot(np.dot(K_s,inv(K)),y).reshape(pdim,-1)\n y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(inv(K),K_s.T))))#.reshape(-1,1)\n #print y_predict_mean.shape\n return (y_predict_mean, y_predict_var)",
"Acquisition Function\nWith acquisition function we determine where to sample next from our GP prior to best achieve our optimization objective. This functions yields an automatic optimized choice between exploration (where GP posterior variance is high) and exploitation (where the mean of GP is high). We will choose new set of parameters where exploitation is high (high GP mean) and also exploration is high (high GP uncertainty).\nDifferent options exist for the choice of acquisition fucntion:\n- Expected improvement: $a_{EI}(x,{x_n,y_n},\\theta) = \\sigma_{GP}(x,{x_n,y_n},\\theta)\\left[\\gamma(x)\\Phi\\left(\\gamma(x)\\right) + \\mathcal{N}(\\gamma(x); 0, 1)\\right]$\nwhere\n$\\gamma(x)=\\frac{f(x_{best}) - \\mu_{GP}(x,{x_n,y_n},\\theta)}{\\sigma_{GP}(x,{x_n,y_n},\\theta)}$ \n$\\Phi$ is the normal cumulative distribution and $x{best}$ is the location of the lowest posterior mean._\nIn conlusion, each time we get new observed point the question is where to look next to get the optimum (here assume maximum) point in the underlying unseen function; and by finding the optimum point (min or max) of an acquizition function we will choose between exploitation/exploration dillema. The good thing about EI acquisition function is that, it has no parameter of itself and the choice is automatic for optimum point which will be the next point of sampling from the undelying function. Further on we will use LBFGS algorithem to find the min of the acquisition function (or negative of ac function for its maximum) but this can be also part of the OB itself.",
"def expected_improvement(x, xp, yp, gp_params, kappa=0.0, n_params = 1):\n \n xpredict = np.asarray(x).reshape(-1, n_params)\n\n GP_mu, GP_sigma = GPR(xpredict, xp, yp, gp_params)\n GP_sigma = GP_sigma.reshape(-1,1)\n f_best = np.max(yp)\n\n #print n_params, xp.shape,yp.shape,xpredict.shape\n with np.errstate(divide='ignore'):\n gamma_x = (GP_mu - f_best) / GP_sigma \n EIx = GP_sigma * (gamma_x * norm.cdf(gamma_x) + norm.pdf(gamma_x))\n EIx[GP_sigma == 0.0] == 0\n return -1*EIx # negative because we will use a minimizer to find the maximum\n\ndef upper_confidence_bound(x, xp, yp, gp_params, kappa=0.0, n_params = 1):\n xpredict = np.asarray(x).reshape(-1, n_params)\n GP_mu, GP_sigma = GPR(xpredict, xp, yp, gp_params)\n GP_sigma = GP_sigma.reshape(-1,n_params)\n return -1*(GP_mu + kappa * GP_sigma)\n\ndef sample_next_hyperparameter(acquisition_func, xp, yp, gp_params, bounds, kappa=0.0, n_restarts=10):\n '''n_restarts: integer.\n Number of times to run the minimizer with different starting points.'''\n best_x = None\n best_acquisition_value = 1\n n_params = bounds.shape[0]\n for starting_point in np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_restarts, n_params)):\n res = minimize(fun=acquisition_func,\n x0=starting_point.reshape(-1, n_params),\n bounds=bounds,\n method='L-BFGS-B',\n args=(xp,yp, gp_params,kappa, n_params))\n\n if res.fun < best_acquisition_value:\n best_acquisition_value = res.fun\n best_x = res.x\n\n return best_x",
"Shall Optimize!",
"def BayesianOptimization(f, bounds,max_iter=10,n_pre_samples=2,\n gp_params=(0.5, 1., 0.001),\n fit_GP_every=0,dis_every=1,plot_res = 0, kappa = 0.0):\n x_list = []\n y_list = []\n \n n_param = bounds.shape[0]\n \n #randomly sample the function to be optimized\n for j, params in enumerate(np.random.uniform(bounds[:, 0], bounds[:, 1], (n_pre_samples, bounds.shape[0]))):\n x_list.append(params)\n y_list.append(f(params))\n #if dis_every and i%dis_every==0: print 'Generating sample %d ...'%i\n\n xp = np.array(x_list)\n yp = np.array(y_list)\n\n for i in range(max_iter):\n \n if fit_GP_every and i%fit_GP_every==0: gp_params = fit_GP(xp,yp, gp_params) # sigmaf, l, sigman\n\n next_sample = sample_next_hyperparameter(expected_improvement, xp, yp,gp_params, bounds=bounds,n_restarts=20)\n #next_sample = sample_next_hyperparameter(upper_confidence_bound, xp, yp,gp_params, bounds=bounds,kappa=kappa)\n\n # avoid very close points\n while np.any(np.abs(next_sample - xp) <= np.finfo(float).eps):\n next_sample = np.random.uniform(bounds[:, 0], bounds[:, 1], bounds.shape[0])\n\n cv_score = f(next_sample)\n\n # save previous values for plotting\n prev_xp = xp\n prev_yp = yp\n \n # Updates\n x_list.append(next_sample)\n y_list.append(cv_score)\n\n xp = np.array(x_list)\n yp = np.array(y_list)\n\n if dis_every and i%dis_every==0: \n print 'Iter. # %d - Best Results: '%(i+1),\n for j,val in enumerate(next_sample):\n #print 'parameter_%d = %2.2f,'%(j,val),\n print 'parameter_%d = %2.2f,'%(j+1,xp[np.argmax(yp),j]),\n print ' val=%2.2f'%max(yp)\n if plot_res:\n if xp.shape[1] == 1: # 2D plots\n fig = plt.figure(figsize=(8,8))\n param_choices = np.arange(bounds[0,0],bounds[0,1],0.1).reshape(-1,xp.shape[1])\n y_mean, y_std = GPR(param_choices, prev_xp, prev_yp, gp_params)\n\n EIx = -expected_improvement(param_choices, prev_xp, prev_yp, gp_params, n_params = 1)\n #EIx = -upper_confidence_bound(param_choices, prev_xp, prev_yp, gp_params,kappa, n_params = 1)\n EIxnorm=np.linalg.norm(EIx)\n if EIxnorm!=0: EIx = EIx/EIxnorm # normalizing for beter visualiation\n plt.plot(param_choices[:,0], 2*EIx[:,0],'m')\n\n plt.plot(next_sample,f(next_sample),'ro')\n\n plt.plot(param_choices[:,0],f(param_choices[:,0]),'k--')\n plt.plot(prev_xp,prev_yp,'b*')\n\n plt.plot(param_choices[:,0], y_mean[:,0],'b')\n plt.fill_between(param_choices[:,0], y_mean[:,0]-y_std,y_mean[:,0]+y_std,alpha=0.5, edgecolor='#CC4F1B', facecolor='#FF9848')\n plt.title('Iter. #%2d best val so far is %2.2f'%(i+1,max(yp)))\n plt.ylim([-1.5,2])\n fig.savefig('tmp_BO_%d.png'%i)\n plt.show()\n else: print 'High dimensional plots not yet implemented!'\n best_params = []\n print 'Iterations Done... Best Result: ',\n for j in range(xp.shape[1]):\n print 'parameter_%d = %2.2f,'%(j+1,xp[np.argmax(yp),j]),\n best_params.append(xp[np.argmax(yp),j])\n print 'which yiels %2.5f !!'%max(yp)\n return tuple(best_params),max(yp)",
"1D Bayesian Optimization\nWe run BO on our toy 1D function and see that the found max is close to the true maximum of our function",
"#sample function definiton\nf = lambda x: np.exp(-abs(x))*np.cos(0.5*np.pi*x)+2*np.exp(-0.5*abs(x))*np.sin(0.7*np.pi*x)\nx = np.linspace(-5,5,1000)\ny = f(x)\nplt.plot(x,y)\nplt.axis([-5, 5, -1.5, 2])\nplt.show()\nprint 'real func max is: %2.3f'%np.max(y)",
"Let's see how our BO solves this maximization problem.\nYou might need to run it couple of times to get the best result, this strategy is not deterministic",
"x,y = BayesianOptimization(f, bounds=np.array([[-5,5]]),\n max_iter=10, n_pre_samples=2, \n gp_params=(0.6, .8, 0.001),fit_GP_every=0,\n dis_every=1,plot_res = 0,kappa = 0)",
"2D Bayesian Optimization\nBO on a 2D space with an imaginary function.",
"# we define a function here and visualize it\nfrom mpl_toolkits.mplot3d import axes3d #it has to be imported\n\nsize = 500\nsigma_x = 2.5\nsigma_y = 2.5\n\nf2 = lambda p: 10000*(1/(2*np.pi*sigma_x*sigma_y) * np.exp(-(p[0]**2/(2*sigma_x**2) + p[1]**2/(2*sigma_y**2)))*np.sin(0.01*p[0]*p[1]))\nfig = plt.figure(figsize = (7,7))\nax = fig.add_subplot(111, projection='3d')\n\nx = np.linspace(-10, 10, size)\ny = np.linspace(-10, 10, size)\n\nx, y = np.meshgrid(x, y)\nz = f2((x,y))\n\nax.plot_surface(x, y, z, cmap=plt.cm.hot)\nplt.show()\n\nprint 'real func max is: %2.3f'%np.max(z)",
"Let's see how close BO can get to the real maximum",
"x,y = BayesianOptimization(f2, bounds=np.array([[-5,5],[-5,5]]),\n max_iter=10, n_pre_samples=3, \n gp_params=(0.6, .8, 0.001),fit_GP_every=0,\n dis_every=1,kappa = 0)",
"Optimizing a Neural Network's Hyperparameters with Bayesian Optimization\nChoosing the best configuration of a neural network (it's hidden layer depth, learning rate, batch size, ...) can be seen as an optimization which can be also targeted with bayesian optimization using gaussian processes regressors. Through out this section we will test BO's power in this problem.\nFirst we define the initial network from TensorFlow MNIST tutorial:",
"# Getting the data\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\n\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\n\n# Create a 2 layer fully connected network\ndef multilayer_FCN(x, weights, biases):\n # Hidden layer with RELU activation\n layer_1 = tf.add(tf.matmul(x, weights['w1']), biases['b1'])\n layer_1 = tf.nn.relu(layer_1)\n # output layer with RELU activation\n layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])\n #layer_2 = tf.nn.relu(layer_2)\n return layer_2\n\n# define a function which will construct and train a network with our hyperparameter set\ndef train_network(learning_rate=0.5,batch_size=100,training_epochs=1,hidden_n=500):\n\n x = tf.placeholder(tf.float32, [None, 784])\n weights = {\n 'w1': tf.Variable(tf.random_normal([784, hidden_n])),\n 'w2': tf.Variable(tf.random_normal([hidden_n, 10])),\n }\n biases = {\n 'b1': tf.Variable(tf.random_normal([hidden_n])),\n 'b2': tf.Variable(tf.random_normal([10])),\n }\n\n y = tf.placeholder(tf.float32, [None, 10])\n \n # Construct model\n pred = multilayer_FCN(x, weights, biases)\n\n # Define loss and optimizer\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))\n optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\n\n # Initializing the variables\n init = tf.global_variables_initializer()\n\n # Launch the graph\n with tf.Session() as sess:\n sess.run(init)\n\n # Training cycle\n for epoch in range(training_epochs):\n avg_cost = 0.\n total_batch = int(mnist.train.num_examples/batch_size/2)\n # Loop over all batches\n for i in range(total_batch):\n batch_x, batch_y = mnist.train.next_batch(batch_size)\n # Run optimization op (backprop) and cost op (to get loss value)\n _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,\n y: batch_y})\n # Compute average loss\n avg_cost += c / total_batch\n\n # Test model\n correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))\n # Calculate accuracy\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n accuracy = accuracy.eval({x: mnist.test.images, y: mnist.test.labels})\n return accuracy\n\nfneural = lambda p: train_network(learning_rate=p[0],batch_size=int(p[1]))\n\n#########make it a function after this point\nx,y = BayesianOptimization(fneural, bounds=np.array([[0.0,1],[30,400]]),\n max_iter=100, n_pre_samples=3, \n gp_params=(0.6, .8, 0.001),fit_GP_every=0,\n dis_every=10)",
"Practical Bayesian Optimization of Machine Learning Algorithms, Snoek et al 2012, arXiv:1206.2944v2"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
|
apache-2.0
|
[
"Kubeflow pipelines\nLearning Objectives:\n 1. Learn how to deploy a Kubeflow cluster on GCP\n 1. Learn how to use the notebook server on Kubeflow\n 1. Learn how to create a experiment in Kubeflow\n 1. Learn how to package you code into a Kubeflow pipeline\n 1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way\nIntroduction\nIn this notebook, we will first setup a Kubeflow cluster on GCP, and then launch a Kubeflow Notebook Server from where we will run this notebook. This will allow us to pilote the Kubeflow cluster from the notebook. Then, we will create a Kubeflow experiment and a Kubflow pipeline from our taxifare machine learning code. At last, we will run the pipeline on the Kubeflow cluster, providing us with a reproducible and traceable way to execute machine learning code.",
"from os import path\n\nimport kfp\nimport kfp.compiler as compiler\nimport kfp.components as comp\nimport kfp.dsl as dsl\nimport kfp.gcp as gcp\nimport kfp.notebook",
"Setup a Kubeflow cluster on GCP\nTODO 1\nTo deploy a Kubeflow cluster\nin your GCP project, use the Kubeflow cluster deployer.\nThere is a setup video that will\ntake you over all the steps in details, and explains how to access to the Kubeflow Dashboard UI, once it is \nrunning. \nYou'll need to create an OAuth client for authentication purposes: Follow the \ninstructions here.\nLaunch a Jupyter notebook server on the Kubeflow cluster\nTODO 2\nA Kubeflow cluster allows you not only to run Kubeflow pipelines, but it also allows you to launch a Jupyter notebook server from which you can pilote the Kubeflow cluster. In particular, you can create experiments, define and run pipelines from whithin notebooks running on that Jupiter notebook server. This is exactly what we are going to do in this notebook.\nFirst of all, click on the \"Notebook Sever\" tab in the Kubeflow Dashboard UI, and create a Notebook Server. Once it's ready connect to it.\nSince the goal is to run this notebook on that Kubeflow Notebook Server, first create new notebook and clone the training-data-analysis repo by running the following command in a cell and then naviguating to this notebook:\n```bash\n$ git clone -b ml_on_gcp-kubeflow_pipelines --single-branch https://github.com/GoogleCloudPlatform/training-data-analyst.git\n```\nCreate an experiment\nTODO 3\nFrom now on, you should be running this notebook from the Notebook Server from the Kubeflow cluster you created on your GCP project.\nWe will start by creating a Kubeflow client to pilote the Kubeflow cluster:",
"client = kfp.Client()",
"Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single \"Default\" experiment:",
"client.list_experiments()",
"Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline:",
"exp = client.create_experiment(name='taxifare')",
"Let's make sure the experiment has been created correctly:",
"client.list_experiments()",
"Packaging you code into Kubeflow components\nWe have packaged our taxifare ml pipeline into three components:\n* ./components/bq2gcs that creates the training and evaluation data from BigQuery and exports it to GCS\n* ./components/trainjob that launches the training container on AI-platform and exports the model\n* ./components/deploymodel that deploys the trained model to AI-platform as a REST API\nEach of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab.\nIf you inspect the code in these folders, you'll notice that the main.py or main.sh files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI-platform, etc.). The last line in the Dockerfile tells you that these files are executed when the container is run. \nSo we just packaged our ml code into light container images for reproducibility. \nWe have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project: just type make in the pipelines directory! However, you can't do that from a Kubeflow notebook because Docker is not installed there. So you'll have to do that from Cloud Shell.\nFor that, open Cloud Shell, and clone this repo there. Then cd to the pipelines subfolder:\n```bash\n$ git clone -b ml_on_gcp-kubeflow_pipelines --single-branch https://github.com/GoogleCloudPlatform/training-data-analyst.git\n$ cd training-data-analyst/courses/machine_learning/production_ml_systems/pipelines/\n```\nThen run make to build and push the images. \nNow that the container images are pushed to the regsitry in your project, we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially \n* describing what arguments Kubeflow needs to pass to the containers when it runs them\n* telling Kubeflow where to fetch the corresponding Docker images\nIn the cells below, we have three of these \"Kubeflow component description files\", one for each of our components.\nFor each of these, correct the image URI to reflect that you pushed the images into the gcr.io associated with your project:\nTODO 4",
"%%writefile bq2gcs.yaml\n\nname: bq2gcs\n \ndescription: |\n This component creates the training and\n validation datasets as BiqQuery tables and export\n them into a Google Cloud Storage bucket at\n gs://<BUCKET>/taxifare/data.\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: gcr.io/PROJECT/taxifare-bq2gcs\n args: [\"--bucket\", {inputValue: Input Bucket}]\n\n%%writefile trainjob.yaml\n\nname: trainjob\n \ndescription: |\n This component trains a model to predict that taxi fare in NY.\n It takes as argument a GCS bucket and expects its training and\n eval data to be at gs://<BUCKET>/taxifare/data/ and will export\n the trained model at gs://<BUCKET>/taxifare/model/.\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: gcr.io/PROJECT/taxifare-trainjob\n args: [{inputValue: Input Bucket}]\n\n%%writefile deploymodel.yaml\n\nname: deploymodel\n \ndescription: |\n This component deploys a trained taxifare model on GCP as taxifare:dnn.\n It takes as argument a GCS bucket and expects the model to deploy \n to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/\n \ninputs:\n - {name: Input Bucket , type: String, description: 'GCS directory path.'}\n\nimplementation:\n container:\n image: gcr.io/PROJECT/taxifare-deploymodel\n args: [{inputValue: Input Bucket}]",
"Create a Kubeflow pipeline\nThe code below creates a kubeflow pipeline by decorating a regular fuction with the\n@dsl.pipeline decorator. Now the arguments of this decorated function will be\nthe input parameters of the Kubeflow pipeline.\nInside the function, we describe the pipeline by\n* loading the yaml component files we created above into a Kubeflow op\n* specifying the order into which the Kubeflow ops should be run",
"# TODO 4\nPIPELINE_TAR = 'taxifare.tar.gz'\nBQ2GCS_YAML = './bq2gcs.yaml'\nTRAINJOB_YAML = './trainjob.yaml'\nDEPLOYMODEL_YAML = './deploymodel.yaml'\n\n\n@dsl.pipeline(\n name='Taxifare',\n description='Train a ml model to predict the taxi fare in NY')\ndef pipeline(gcs_bucket_name='<bucket where data and model will be exported>'):\n\n\n bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)\n bq2gcs = bq2gcs_op(\n input_bucket=gcs_bucket_name,\n ).apply(gcp.use_gcp_secret('user-gcp-sa'))\n\n\n trainjob_op = comp.load_component_from_file(TRAINJOB_YAML)\n trainjob = trainjob_op(\n input_bucket=gcs_bucket_name,\n ).apply(gcp.use_gcp_secret('user-gcp-sa'))\n\n\n deploymodel_op = comp.load_component_from_file(DEPLOYMODEL_YAML)\n deploymodel = deploymodel_op(\n input_bucket=gcs_bucket_name,\n ).apply(gcp.use_gcp_secret('user-gcp-sa'))\n\n\n trainjob.after(bq2gcs)\n deploymodel.after(trainjob)",
"The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or programatically, as we will do below:",
"compiler.Compiler().compile(pipeline, PIPELINE_TAR)\n\nls $PIPELINE_TAR",
"If you untar and uzip this pipeline artifact, you'll see that the compiler has transformed the\nPython description of the pipeline into yaml description!\nNow let's feed Kubeflow with our pipeline and run it using our client:",
"# TODO 5\nrun = client.run_pipeline(\n experiment_id=exp.id, \n job_name='taxifare', \n pipeline_package_path='taxifare.tar.gz', \n params={\n 'gcs-bucket-name': \"dherin-sandbox\",\n },\n)",
"Have a look at the link to monitor the run. \nNow all the runs are nicely organized under the experiment in the UI, and new runs can be either manually launched or scheduled through the UI in a completely repeatable and traceable way!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
chemiskyy/simmit
|
Examples/Continuum_Mechanics/constitutive_props.ipynb
|
gpl-3.0
|
[
"constitutive : The Constitutive Library",
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom simmit import smartplus as sim\nimport os",
"L_iso\nProvides the elastic stiffness tensor for an isotropic material.\nThe two first arguments are a couple of elastic properties. The third argument specifies which couple has been provided and the nature and order of coefficients.\nExhaustive list of possible third argument :\n‘Enu’,’nuE,’Kmu’,’muK’, ‘KG’, ‘GK’, ‘lambdamu’, ‘mulambda’, ‘lambdaG’, ‘Glambda’.\nReturn a numpy ndarray.\nExample :",
"E = 70000.0\nnu = 0.3\nL = sim.L_iso(E,nu,\"Enu\")\nprint np.array_str(L, precision=4, suppress_small=True)\n\nd = sim.check_symetries(L)\nprint(d['umat_type'])\nprint(d['props'])\n\nx = sim.L_iso_props(L)\nprint(x)",
"M_iso\nProvides the elastic compliance tensor for an isotropic material.\nThe two first arguments are a couple of elastic properties. The third argument specify which couple has been provided and the nature and order of coefficients.\nExhaustive list of possible third argument :\n‘Enu’,’nuE,’Kmu’,’muK’, ‘KG’, ‘GK’, ‘lambdamu’, ‘mulambda’, ‘lambdaG’, ‘Glambda’.",
"E = 70000.0\nnu = 0.3\nM = sim.M_iso(E,nu,\"Enu\")\nprint np.array_str(M, precision=2)\n\nL_inv = np.linalg.inv(M)\nd = sim.check_symetries(L_inv)\nprint(d['umat_type'])\nprint(d['props'])\n\nx = sim.M_iso_props(M)\nprint(x)\n",
"L_cubic\nProvides the elastic stiffness tensor for a cubic material. Arguments are the stiffness coefficients C11, C12 and C44, or the elastic constants E, nu, G\nExhaustive list of possible third argument : ‘Cii’,’EnuG, the by-default argument is 'Cii'",
"E = 70000.0\nnu = 0.3\nG = 23000.0\nL = sim.L_cubic(E,nu,G,\"EnuG\")\nprint np.array_str(L, precision=2)\n\nd = sim.check_symetries(L)\nprint(d['umat_type'])\nprint(d['props'])\n\nx = sim.L_cubic_props(L)\nprint(x)\n",
"M_cubic\nProvides the elastic compliance tensor for a cubic material. Arguments are the stiffness coefficients C11, C12 and C44, or the elastic constants E, nu, G\nExhaustive list of possible third argument : ‘Cii’,’EnuG, the by-default argument is 'Cii'",
"E = 70000.0\nnu = 0.3\nG = 23000.0\nM = sim.M_cubic(E,nu,G,\"EnuG\")\nprint np.array_str(M, precision=2)\n\nL = np.linalg.inv(M)\nd = sim.check_symetries(L)\nprint(d['umat_type'])\nprint(d['props'])\n\nx = sim.L_cubic_props(L)\nprint(x)",
"L_isotrans\nProvides the elastic stiffness tensor for an isotropic transverse material.\nArguments are longitudinal Young modulus EL, transverse young modulus, Poisson’s ratio for loading along the longitudinal axis nuTL, Poisson’s ratio for loading along the transverse axis nuTT, shear modulus GLT and the axis of symmetry.",
"EL = 70000.0\nET = 20000.0\nnuTL = 0.08\nnuTT = 0.3\nGLT = 12000.0\naxis = 3\nL = sim.L_isotrans(EL,ET,nuTL,nuTT,GLT,axis)\nprint np.array_str(L, precision=2)\n\nd = sim.check_symetries(L)\nprint(d['umat_type'])\nprint(d['axis'])\nprint np.array_str(d['props'], precision=2)\n\nx = sim.L_isotrans_props(L,axis)\nprint np.array_str(x, precision=2)",
"bp::def(\"L_iso\", L_iso);\nbp::def(\"M_iso\", M_iso);\nbp::def(\"L_cubic\", L_cubic);\nbp::def(\"M_cubic\", M_cubic);\nbp::def(\"L_ortho\", L_ortho);\nbp::def(\"M_ortho\", M_ortho);\nbp::def(\"L_isotrans\", L_isotrans);\nbp::def(\"M_isotrans\", M_isotrans);\n\nbp::def(\"check_symetries\", check_symetries);\nbp::def(\"L_iso_props\", L_iso_props);\nbp::def(\"M_iso_props\", M_iso_props);\nbp::def(\"L_isotrans_props\", L_isotrans_props);\nbp::def(\"M_isotrans_props\", M_isotrans_props);\nbp::def(\"L_cubic_props\", L_cubic_props);\nbp::def(\"M_cubic_props\", M_cubic_props);\nbp::def(\"L_ortho_props\", L_ortho_props);\nbp::def(\"M_ortho_props\", M_ortho_props);\nbp::def(\"M_aniso_props\", M_aniso_props);",
"v = sim.Ith()\nprint v",
"Ir2()\nProvides the vector $I_{r2} = \\left( \\begin{array}{ccc} 1 \\ 1 \\ 1 \\ 2 \\ 2 \\ 2 \\end{array} \\right)$. Return a vec.\nThis vector is usefull when transferring from \"stress\"-type Voigt conventions to \"strain\"-type.\nExample:",
"v = sim.Ir2()\nprint v",
"Ir05()\n</h3> Provides the vector $I_{r05} = \\left( \\begin{array}{ccc} 1 \\ 1 \\ 1 \\ 0.5 \\ 0.5 \\ 0.5 \\end{array} \\right)$. Return a vec. Example:",
"v = sim.Ir05()\nprint v"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arongdari/sparse-graph-prior
|
notebooks/PosteriorInferenceGGPgraph.ipynb
|
mit
|
[
"Posterior inference for GGP graph model\nIn this notebook, we'll infer the posterior distribution of yeast dataset using generalised gamma process graph model.\nOriginal source of the dataset with detailed description: http://www.cise.ufl.edu/research/sparse/matrices/Pajek/yeast.html",
"import os\nimport pickle\nimport time\nfrom collections import defaultdict\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\n\nfrom sgp import GGPgraphmcmc\n\n%matplotlib inline",
"Loading yeast dataset",
"mat = loadmat('../data/yeast/yeast.mat')\ngraph = mat['Problem'][0][0][2]",
"Run MCMC sampler",
"modelparam = dict()\nmcmcparam = dict()\n\nmodelparam['alpha'] = (0, 0)\nmodelparam['sigma'] = (0, 0)\nmodelparam['tau'] = (0, 0)\n\nmcmcparam['niter'] = 500\nmcmcparam['nburn'] = 1\nmcmcparam['thin'] = 1\nmcmcparam['leapfrog.L'] = 5\nmcmcparam['leapfrog.epsilon'] = 0.1\nmcmcparam['leapfrog.nadapt'] = 1\nmcmcparam['latent.MH_nb'] = 1\nmcmcparam['hyper.MH_nb'] = 2\nmcmcparam['hyper.rw_std'] = [0.02, 0.02]\nmcmcparam['store_w'] = True\n\ntypegraph='undirected' # or simple\n\nsamples, stats = GGPgraphmcmc(graph, modelparam, mcmcparam, typegraph, verbose=True)",
"The invalid values are carefully handled in the inference codes. It is safe to ignore the warning messages.\nTrace plots of some variables of interest",
"plt.plot(samples['sigma'])\nplt.title('Trace plot of $\\sigma$ variable')",
"When the sigma is less than 0, the inferred graph is dense.",
"plt.plot(stats['w_rate'])\nplt.title('MH acceptance rate for weight w')\n\nplt.plot(stats['hyper_rate'])\nplt.title('MH acceptance rate for hyper-params')",
"checking the acceptance ratio"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
TariqAHassan/BioVida
|
tutorials/3_domain_unification_and_data_management.ipynb
|
bsd-3-clause
|
[
"BioVida: Domain Unification and Data Management\n\nThis tutorial will cover the facilities BioVida offers to:\n\n\nintegrate images data against other kinds of biomedical data\n\n\nmanage cached resources.\n\n\n\nDomain Unification\nWhile primarily focused on image data, BioVida also contains interfaces to allow you to easily gain access to other biomedical data types. Namely, medical diagnostics and genomics data. This section will show how one, or several, image interfaces can be unified into a single DataFrame, complete with data from these additional sources.\nWe can start by collecting some data.",
"from biovida.images import OpeniInterface\nopi = OpeniInterface()\nopi.search(query='lung cancer')\npull_df1 = opi.pull()",
"Let's also get some data from the Cancer Imaging Archive.",
"from biovida.images import CancerImageInterface\ncii = CancerImageInterface(api_key=YOUR_API_KEY_HERE)\ncii.search(cancer_type='lung')\npull_df2 = cii.pull(collections_limit=1) # only download the first collection/study",
"Next, we can import the tool we will be using to unify the data",
"from biovida.unification import unify_against_images\n\nunified_df = unify_against_images(instances=[opi, cii])\n\nimport numpy as np\ndef simplify_df(df):\n \"\"\"This function simplifies dataframes\n for the purposes of this tutorial.\"\"\"\n data_frame = df.copy()\n for c in ('source_images_path', 'cached_images_path'):\n data_frame[c] = 'path_to_image'\n return data_frame.replace({np.NaN: ''})",
"To close out this section, we can take a quick look at the resultant DataFrame.",
"simplify_df(unified_df)[85:90]",
"Note: the 'mentioned_symptoms' column provides a list of symptoms known to be associated with the disease which were mentioned in the article.\n\nData Management\nThis section is intended to provide a brief overview of the ways in which data downloaded with BioVida can be removed from your computer. \n1. The simplest way to delete BioVida data is to manually delete the biovida_cache folder, or some portion of files (e.g., images) contained within in. Both OpeniInterfaces and CancerImageInterface check for deleted files each time they are instantiated. <br>\n2. While the first approach is straightforward, it is neither elegant nor precise. For situations that require more finesse, we can employ the image_delete tool.",
"from biovida.images import image_delete",
"Next, we simply define a which will inform image_delete of which rows to delete.",
"def my_delete_rule(row):\n if isinstance(row['abstract'], str) and 'proteins' in row['abstract'].lower():\n return True",
"In this example, we'll use the instance of OpeniInterface created above.",
"deleted_rows = image_delete(opi, delete_rule=my_delete_rule, only_recent=True)",
"This will not only delete the row, but any images associated with it. Therefore, as a precaution, you will be asked to confirm this action before it is performed.\nWarning: <br>\nThe default behavior of image_delete is to delete any rows for which your 'delete_rule' returns True, including those in cache_records_db which were not downloaded in the most recent pull(). The only_recent parameter can be used to limit deletion to data obtained in the most recent pull, as shown above.\n\nConclusion\nIn this tutorial we have reviewed how to unify images obtained with BioVida both with eachother as well as against external biomedial databases. Additionally, we have explored methods for deleting downloaded data."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aburrell/davitpy
|
docs/notebook/models.ipynb
|
gpl-3.0
|
[
"<a name=\"top\"></a>\nDaViTpy - models\n\nThis notebook introduces useful space science models included in davitpy. \nCurrently we have ported/wrapped the following models to python: \n\n<a href=\"#igrf\">IGRF-11</a>\n<a href=\"#iri\">IRI</a>\n<a href=\"#tsyg\">TSYGANENKO (T96)</a>\n<a href=\"#msis\">MSIS (NRLMSISE00)</a>\n<a href=\"#hwm\">HWM-07</a>\n<a href=\"#hwm\">AACGM</a>",
"%pylab inline\nfrom datetime import datetime as dt\nfrom davitpy.models import *\nfrom davitpy import utils",
"<a name=\"igrf\"/>IGRF - International Geomagnetic Reference Field\n<a href=\"#top\">[top]</a>",
"# INPUTS\nitype = 1 # Geodetic coordinates\npyDate = dt(2006,2,23)\ndate = utils.dateToDecYear(pyDate) # decimal year\nalt = 300. # altitude\nstp = 5.\nxlti, xltf, xltd = -90.,90.,stp # latitude start, stop, step\nxlni, xlnf, xlnd = -180.,180.,stp # longitude start, stop, step\nifl = 0 # Main field\n# Call fortran subroutine\nlat,lon,d,s,h,x,y,z,f = igrf.igrf11(itype,date,alt,ifl,xlti,xltf,xltd,xlni,xlnf,xlnd)\n\n# Check that it worked by plotting magnetic dip angle contours on a map\nfrom mpl_toolkits.basemap import Basemap\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom numpy import meshgrid\n\n# Set figure\nfig = figure(figsize=(10,5))\nax = fig.add_subplot(111)\nrcParams.update({'font.size': 14})\n\n# Set-up the map background\nmap = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\\\n llcrnrlon=-180,urcrnrlon=180,resolution='c')\nmap.drawmapboundary()\nmap.drawcoastlines(color='0.5')\n# draw parallels and meridians.\nmap.drawparallels(np.arange(-80.,81.,20.))\nmap.drawmeridians(np.arange(-180.,181.,20.))\n\n# The igrf output needs to be reshaped to be plotted\ndip = s.reshape((180./stp+1,360./stp+1))\ndec = d.reshape((180./stp+1,360./stp+1))\nlo = lon[0:(360./stp+1)]\nla = lat[0::(360./stp+1)]\nx,y = meshgrid(lo,la)\nv = arange(0,90,20)\n\n# Plot dip angle contours and labels\ncs = map.contour(x, y, abs(dip), v, latlon=True, linewidths=1.5, colors='k')\nlabs = plt.clabel(cs, inline=1, fontsize=10)\n\n# Plot declination and colorbar\nim = map.pcolormesh(x, y, dec, vmin=-40, vmax=40, cmap='coolwarm')\ndivider = make_axes_locatable(ax)\ncax = divider.append_axes(\"right\", \"3%\", pad=\"3%\")\ncolorbar(im, cax=cax)\ncax.set_ylabel('Magnetic field declination')\ncticks = cax.get_yticklabels()\ncticks = [t.__dict__['_text'] for t in cticks]\ncticks[0], cticks[-1] = 'W', 'E'\n_ = cax.set_yticklabels(cticks)\nsavefig('dipdec.png')\n\n# Check that it worked by plotting magnetic dip angle contours on a map\nfrom mpl_toolkits.basemap import Basemap\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom numpy import meshgrid\n\n# Set figure\nfig = figure(figsize=(10,5))\nax = fig.add_subplot(111)\nrcParams.update({'font.size': 14})\n\n# Set-up the map background\nmap = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\\\n llcrnrlon=-180,urcrnrlon=180,resolution='c')\nmap.drawmapboundary()\nmap.drawcoastlines(color='0.5')\n# draw parallels and meridians.\nmap.drawparallels(np.arange(-80.,81.,20.))\nmap.drawmeridians(np.arange(-180.,181.,20.))\n\n# The igrf output needs to be reshaped to be plotted\nbabs = f.reshape((180./stp+1,360./stp+1))\nlo = lon[0:(360./stp+1)]\nla = lat[0::(360./stp+1)]\nx,y = meshgrid(lo,la)\nv = arange(0,90,20)\n\n# Plot declination and colorbar\nim = map.pcolormesh(x, y, babs, cmap='jet')\ndivider = make_axes_locatable(ax)\ncax = divider.append_axes(\"right\", \"3%\", pad=\"3%\")\ncolorbar(im, cax=cax)\ncax.set_ylabel('Magnetic field intensity [nT]')\n\nsavefig('babs.png')",
"<a name=\"iri\"/>IRI - International Reference Ionosphere\n<a href=\"#top\">[top]</a>\n\n\nJF switches to turn off/on (True/False) several options\n\n\n[0] : True\n\nNe computed \nNe not computed\n\n\n[1] : True\nTe, Ti computed \nTe, Ti not computed\n\n\n[2] : True\nNe & Ni computed \nNi not computed\n\n\n[3] : False\nB0 - Table option \nB0 - other models jf[30]\n\n\n[4] : False\nfoF2 - CCIR \nfoF2 - URSI\n\n\n[5] : False\nNi - DS-95 & DY-85 \nNi - RBV-10 & TTS-03\n\n\n[6] : True\nNe - Tops: f10.7<188 \nf10.7 unlimited \n\n\n[7] : True\nfoF2 from model \nfoF2 or NmF2 - user input\n\n\n[8] : True\nhmF2 from model \nhmF2 or M3000F2 - user input\n\n\n[9] : True\nTe - Standard \nTe - Using Te/Ne correlation\n\n\n[10] : True\nNe - Standard Profile \nNe - Lay-function formalism\n\n\n[11] : True\nMessages to unit 6 \nto meesages.text on unit 11\n\n\n[12] : True\nfoF1 from model \nfoF1 or NmF1 - user input\n\n\n[13] : True\nhmF1 from model \nhmF1 - user input (only Lay version)\n\n\n[14] : True\nfoE from model \nfoE or NmE - user input\n\n\n[15] : True\nhmE from model \nhmE - user input\n\n\n[16] : True\nRz12 from file \nRz12 - user input\n\n\n[17] : True\nIGRF dip, magbr, modip \nold FIELDG using POGO68/10 for 1973\n\n\n[18] : True\nF1 probability model \ncritical solar zenith angle (old)\n\n\n[19] : True\nstandard F1 \nstandard F1 plus L condition\n\n\n[20] : False\nion drift computed \nion drift not computed\n\n\n[21] : True\nion densities in % \nion densities in m-3\n\n\n[22] : False\nTe_tops (Aeros,ISIS) \nTe_topside (TBT-2011)\n\n\n[23] : True\nD-region: IRI-95 \nSpecial: 3 D-region models\n\n\n[24] : True\nF107D from APF107.DAT \nF107D user input (oarr[41])\n\n\n[25] : True\nfoF2 storm model \nno storm updating\n\n\n[26] : True\nIG12 from file \nIG12 - user\n\n\n[27] : False\nspread-F probability \nnot computed\n\n\n[28] : False\nIRI01-topside \nnew options as def. by JF[30]\n\n\n[29] : False\nIRI01-topside corr. \nNeQuick topside model\n\n\n[28,29]:\n[t,t] IRIold, \n[f,t] IRIcor, \n[f,f] NeQuick, \n[t,f] Gulyaeva\n\n\n[30] : True\nB0,B1 ABT-2009 \nB0 Gulyaeva h0.5\n\n\n[31] : True\nF10.7_81 from file \nPF10.7_81 - user input (oarr[45])\n\n\n[32] : False\nAuroral boundary model on\nAuroral boundary model off\n\n\n[33] : True\nMessages on \nMessages off\n\n\n[34] : False\nfoE storm model \nno foE storm updating\n\n\n[..] : ....\n[50] : ....",
"# Inputs\njf = [True]*50\njf[2:6] = [False]*4\njf[20] = False\njf[22] = False\njf[27:30] = [False]*3\njf[32] = False\njf[34] = False\njmag = 0.\nalati = 40. \nalong = -80.\niyyyy = 2012\nmmdd = 806 \ndhour = 12. \nheibeg, heiend, heistp = 80., 500., 10. \noarr = np.zeros(100)\n# Call fortran subroutine\noutf,oarr = iri.iri_sub(jf,jmag,alati,along,iyyyy,mmdd,dhour,heibeg,heiend,heistp,oarr)\n\n# Check that it worked by plotting vertical electron density profile\nfigure(figsize=(5,8))\n\nalt = np.arange(heibeg,heiend,heistp)\nax = plot(outf[0,0:len(alt)],alt)\n\nxlabel(r'Electron density [m$^{-3}$]')\nylabel('Altitude [km]')\ngrid(True)\nrcParams.update({'font.size': 12})",
"<a name=\"tsyg\"/>Tsyganenko (Geopack and T96)\n<a href=\"#top\">[top]</a>\n\nThe \"Porcelain\" way (recommended)",
"lats = range(10, 90, 10)\nlons = zeros(len(lats))\nrhos = 6372.*ones(len(lats))\ntrace = tsyganenko.tsygTrace(lats, lons, rhos)\nprint trace\nax = trace.plot()",
"The \"Plumbing\" way",
"# Inputs\n# Date and time\nyear = 2000\ndoy = 1\nhr = 1\nmn = 0\nsc = 0\n# Solar wind speed\nvxgse = -400.\nvygse = 0.\nvzgse = 0.\n# Execution parameters\nlmax = 5000\nrlim = 60. \nr0 = 1. \ndsmax = .01\nerr = .000001\n# Direction of the tracing\nmapto = 1\n# Magnetic activity [SW pressure (nPa), Dst, ByIMF, BzIMF]\nparmod = np.zeros(10)\nparmod[0:4] = [2, -8, -2, -5]\n# Start point (rh in Re)\nlat = 50.\nlon = 0.\nrh = 0.\n\n# This has to be called first\ntsyganenko.tsygFort.recalc_08(year,doy,hr,mn,sc,vxgse,vygse,vzgse)\n\n# Convert lat,lon to geographic cartesian and then gsw\nr,theta,phi, xgeo, ygeo, zgeo = tsyganenko.tsygFort.sphcar_08(1., np.radians(90.-lat), np.radians(lon), 0., 0., 0., 1)\nxgeo,ygeo,zgeo,xgsw,ygsw,zgsw = tsyganenko.tsygFort.geogsw_08(xgeo, ygeo, zgeo,0,0,0,1)\n\n# Trace field line\nxfgsw,yfgsw,zfgsw,xarr,yarr,zarr,l = tsyganenko.tsygFort.trace_08(xgsw,ygsw,zgsw,mapto,dsmax,err, \n rlim,r0,0,parmod,'T96_01','IGRF_GSW_08',lmax) \n\n# Convert back to spherical geographic coords\nxfgeo,yfgeo,zfgeo,xfgsw,yfgsw,zfgsw = tsyganenko.tsygFort.geogsw_08(0,0,0,xfgsw,yfgsw,zfgsw,-1)\ngcR, gdcolat, gdlon, xgeo, ygeo, zgeo = tsyganenko.tsygFort.sphcar_08(0., 0., 0., xfgeo, yfgeo, zfgeo, -1)\n\n\nprint '** START: {:6.3f}, {:6.3f}, {:6.3f}'.format(lat, lon, 1.)\nprint '** STOP: {:6.3f}, {:6.3f}, {:6.3f}'.format(90.-np.degrees(gdcolat), np.degrees(gdlon), gcR)\n\n# A quick checking plot\nfrom mpl_toolkits.mplot3d import proj3d\nimport numpy as np\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111, projection='3d')\n# Plot coordinate system\nax.plot3D([0,1],[0,0],[0,0],'b')\nax.plot3D([0,0],[0,1],[0,0],'g')\nax.plot3D([0,0],[0,0],[0,1],'r')\n\n# First plot a nice sphere for the Earth\nu = np.linspace(0, 2 * np.pi, 179)\nv = np.linspace(0, np.pi, 179)\ntx = np.outer(np.cos(u), np.sin(v))\nty = np.outer(np.sin(u), np.sin(v))\ntz = np.outer(np.ones(np.size(u)), np.cos(v))\nax.plot_surface(tx,ty,tz,rstride=10, cstride=10, color='grey', alpha=.5, zorder=2, linewidth=0.5)\n\n# Then plot the traced field line\nlatarr = [10.,20.,30.,40.,50.,60.,70.,80.]\nlonarr = [0., 180.]\nrh = 0.\nfor lon in lonarr:\n for lat in latarr:\n r,theta,phi, xgeo, ygeo, zgeo = tsyganenko.tsygFort.sphcar_08(1., np.radians(90.-lat), np.radians(lon), 0., 0., 0., 1)\n xgeo,ygeo,zgeo,xgsw,ygsw,zgsw = tsyganenko.tsygFort.geogsw_08(xgeo, ygeo, zgeo,0,0,0,1)\n xfgsw,yfgsw,zfgsw,xarr,yarr,zarr,l = tsyganenko.tsygFort.trace_08(xgsw,ygsw,zgsw,mapto,dsmax,err, \n rlim,r0,0,parmod,'T96_01','IGRF_GSW_08',lmax) \n for i in xrange(l):\n xgeo,ygeo,zgeo,dum,dum,dum = tsyganenko.tsygFort.geogsw_08(0,0,0,xarr[i],yarr[i],zarr[i],-1)\n xarr[i],yarr[i],zarr[i] = xgeo,ygeo,zgeo\n ax.plot3D(xarr[0:l],yarr[0:l],zarr[0:l], zorder=3, linewidth=2, color='y')\n\n# Set plot limits\nlim=4\nax.set_xlim3d([-lim,lim])\nax.set_ylim3d([-lim,lim])\nax.set_zlim3d([-lim,lim])",
"<a name=\"msis\"/>MSIS - Mass Spectrometer and Incoherent Scatter Radar\n<a href=\"#top\">[top]</a>\nThe fortran subroutine needed is gtd7:\n\nINPUTS:\nIYD - year and day as YYDDD (day of year from 1 to 365 (or 366)) (Year ignored in current model)\nSEC - UT (SEC)\nALT - altitude (KM)\nGLAT - geodetic latitude (DEG)\nGLONG - geodetic longitude (DEG)\nSTL - local aparent solar time (HRS; see Note below)\nF107A - 81 day average of F10.7 flux (centered on day DDD)\nF107 - daily F10.7 flux for previous day\nAP - magnetic index (daily) OR when SW(9)=-1., array containing:\n(1) daily AP\n(2) 3 HR AP index FOR current time\n(3) 3 HR AP index FOR 3 hrs before current time\n(4) 3 HR AP index FOR 6 hrs before current time\n(5) 3 HR AP index FOR 9 hrs before current time\n(6) average of height 3 HR AP indices from 12 TO 33 HRS prior to current time\n(7) average of height 3 HR AP indices from 36 TO 57 HRS prior to current time\n\n\nMASS - mass number (only density for selected gass is calculated. MASS 0 is temperature.\n MASS 48 for ALL. MASS 17 is Anomalous O ONLY.)\nOUTPUTS:\nD(1) - HE number density(CM-3)\nD(2) - O number density(CM-3)\nD(3) - N2 number density(CM-3)\nD(4) - O2 number density(CM-3)\nD(5) - AR number density(CM-3) \nD(6) - total mass density(GM/CM3)\nD(7) - H number density(CM-3)\nD(8) - N number density(CM-3)\nD(9) - Anomalous oxygen number density(CM-3)\nT(1) - exospheric temperature\nT(2) - temperature at ALT",
"# Inputs\nimport datetime as dt\nmyDate = dt.datetime(2012, 7, 5, 12, 35)\nglat = 40.\nglon = -80.\nmass = 48\n\n# First, MSIS needs a bunch of input which can be obtained from tabulated values\n# This function was written to access these values (not provided with MSIS by default)\nsolInput = msis.getF107Ap(myDate)\n\n# Also, to switch to SI units:\nmsis.meters(True)\n\n# Other input conversion\niyd = (myDate.year - myDate.year/100*100)*100 + myDate.timetuple().tm_yday\nsec = myDate.hour*24. + myDate.minute*60.\nstl = sec/3600. + glon/15.\n\naltitude = linspace(0., 500., 100)\ntemp = zeros(shape(altitude))\ndens = zeros(shape(altitude))\nN2dens = zeros(shape(altitude))\nO2dens = zeros(shape(altitude))\nOdens = zeros(shape(altitude))\nNdens = zeros(shape(altitude))\nArdens = zeros(shape(altitude))\nHdens = zeros(shape(altitude))\nHedens = zeros(shape(altitude))\nfor ia,alt in enumerate(altitude):\n d,t = msis.gtd7(iyd, sec, alt, glat, glon, stl, solInput['f107a'], solInput['f107'], solInput['ap'], mass)\n temp[ia] = t[1]\n dens[ia] = d[5]\n N2dens[ia] = d[2]\n O2dens[ia] = d[3]\n Ndens[ia] = d[7]\n Odens[ia] = d[1]\n Hdens[ia] = d[6]\n Hedens[ia] = d[0]\n Ardens[ia] = d[4]\n\nfigure(figsize=(16,8))\n#rcParams.update({'font.size': 12})\n\nsubplot(131)\nplot(temp, altitude)\ngca().set_xscale('log')\nxlabel('Temp. [K]')\nylabel('Altitude [km]')\n\nsubplot(132)\nplot(dens, altitude)\ngca().set_xscale('log')\ngca().set_yticklabels([])\nxlabel(r'Mass dens. [kg/m$^3$]')\n\nsubplot(133)\nplot(Odens, altitude, 'r-', \n O2dens, altitude, 'r--',\n Ndens, altitude, 'g-',\n N2dens, altitude, 'g--',\n Hdens, altitude, 'b-',\n Hedens, altitude, 'y-',\n Ardens, altitude, 'm-')\ngca().set_xscale('log')\ngca().set_yticklabels([])\nxlabel(r'Density [m$^3$]')\nleg = legend( (r'O', \n r'O$_2$', \n r'N',\n r'N$_2$',\n r'H',\n r'He',\n r'Ar',),\n 'upper right')\n\ntight_layout()",
"<a name=\"hwm\"/>HWM07: Horizontal Wind Model\n<a href=\"#top\">[top]</a>\n\n\nInput arguments:\n\niyd - year and day as yyddd\nsec - ut(sec)\nalt - altitude(km)\nglat - geodetic latitude(deg)\nglon - geodetic longitude(deg)\nstl - not used\nf107a - not used\nf107 - not used\nap - two element array with \nap(1) = not used \nap(2) = current 3hr ap index\n\n\n\n\n\nOutput argument:\n\nw(1) = meridional wind (m/sec + northward)\nw(2) = zonal wind (m/sec + eastward)",
"w = hwm.hwm07(11001, 0., 200., 40., -80., 0, 0, 0, [0, 0])\n\nprint w",
"<a name=\"hwm\"/>AACGM--Altitude Adjusted Corrected Geomagnetic Coordinates</a>\n<a href=\"http://superdarn.jhuapl.edu/software/analysis/aacgm/\">AACGM Homepage</a><br>\n<a href=\"#top\">[top]</a>\nmodels.aacgm.aacgmConv(lat,lon,alt,flg)\nconvert between geographic coords and aacgm\n\n\nInput arguments:\n\nlat - latitude\nlon - longitude\nalt - altitude(km)\nflg - flag to indicate geo to AACGM (0) or AACGM to geo (1)\n\n\n\nOutputs:\n\nolat = output latitude\nolon = output longitude\nr = the accuracy of the transform",
"#geo to aacgm\nglat,glon,r = aacgm.aacgmConv(42.0,-71.4,300.,2000,0)\nprint glat, glon, r\n\n#aacgm to geo\nglat,glon,r = aacgm.aacgmConv(52.7,6.6,300.,2000,1)\nprint glat, glon, r",
"models.aacgm.aacgmConvArr(lat,lon,alt,flg)\nconvert between geographic coords and aacgm (array form)\n\n\nInput arguments:\n\nlat - latitude list\nlon - longitude list\nalt - altitude(km) list\nflg - flag to indicate geo to AACGM (0) or AACGM to geo (1)\n\n\n\nOutputs:\n\nolat = output latitude list\nolon = output longitude list\nr = the accuracy of the transform",
"#geo to aacgm\nolat,olon,r = aacgm.aacgmConvArr([10.,20.,30.,40.],[80.,90.,100.,110.],[100.,150.,200.,250.],2000,0)\nprint olat\nprint olon\nprint r",
"models.aacgm.mltFromEpoch(epoch,mlon)\ncalculate magnetic local time from epoch time and mag lon\n\n\nInput arguments:\n\nepoch - the target time in epoch format\nmlon - the input magnetic longitude\n\n\n\nOutputs:\n\nmlt = the magnetic local time",
"import datetime as dt\nmyDate = dt.datetime(2012,7,10)\nepoch = utils.timeUtils.datetimeToEpoch(myDate)\nmlt = aacgm.mltFromEpoch(epoch,52.7)\nprint mlt",
"models.aacgm.mltFromYmdhms(yr,mo,dy,hr,mt,sc,mlon)\ncalculate magnetic local time from year, month, day, hour, minute, second and mag lon\n\n\nInput arguments:\n\nyr - the year\nmo - the month\ndy - the day\nhr - the hour\nmt - the minute\nsc - the second\nmlon - the input magnetic longitude\n\n\n\nOutputs:\n\nmlt = the magnetic local time",
"mlt = aacgm.mltFromYmdhms(2012,7,10,0,0,0,52.7)\nprint mlt",
"models.aacgm.mltFromYrsec(yr,yrsec,mlon)\ncalculate magnetic local time from seconds elapsed in the year and mag lon\n\n\nInput arguments:\n\nyr - the year\nyrsec - the year seconds\nmlon - the input magnetic longitude\n\n\n\nOutputs:\n\nmlt = the magnetic local time",
"yrsec = int(utils.timeUtils.datetimeToEpoch(dt.datetime(2012,7,10)) - utils.timeUtils.datetimeToEpoch(dt.datetime(2012,1,1)))\nprint yrsec\nmlt = aacgm.mltFromYrsec(2013,yrsec,52.7)\nprint mlt"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arcyfelix/Courses
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/01-Linear-Autoencoder-for-PCA-Exercise.ipynb
|
apache-2.0
|
[
"Linear Autoencoder for PCA - EXERCISE\n Follow the bold instructions below to reduce a 30 dimensional data set for classification into a 2-dimensional dataset! Then use the color classes to see if you still kept the same level of class separation in the dimensionality reduction\nThe Data\n Import numpy, matplotlib, and pandas",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Use pandas to read in the csv file called anonymized_data.csv . It contains 500 rows and 30 columns of anonymized data along with 1 last column with a classification label, where the columns have been renamed to 4 letter codes.",
"data = pd.read_csv('./data/anonymized_data.csv')\n\ndata.head()\n\ndata.info()\n\ndata.describe()",
"Scale the Data\n Use scikit learn to scale the data with a MinMaxScaler. Remember not to scale the Label column, just the data. Save this scaled data as a new variable called scaled_data.",
"from sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\n\nX_data = scaler.fit_transform(data.drop('Label', axis = 1))\n\npd.DataFrame(X_data, columns = data.columns[:-1]).describe()",
"The Linear Autoencoder\n Import tensorflow and import fully_connected layers from tensorflow.contrib.layers.",
"import tensorflow as tf\nfrom tensorflow.contrib.layers import fully_connected",
"Fill out the number of inputs to fit the dimensions of the data set and set the hidden number of units to be 2. Also set the number of outputs to match the number of inputs. Also choose a learning_rate value.",
"num_inputs = 30 # FILL ME IN\nnum_hidden = 2 # FILL ME IN \nnum_outputs = num_inputs # Must be true for an autoencoder!\n\nlearning_rate = 0.01 #FILL ME IN",
"Placeholder\n Create a placeholder fot the data called X.",
"X = tf.placeholder(tf.float32, shape = [None, num_inputs])",
"Layers\n Create the hidden layer and the output layers using the fully_connected function. Remember that to perform PCA there is no activation function.",
"hidden_layer = fully_connected(inputs = X, \n num_outputs = num_hidden, \n activation_fn = None)\noutputs = fully_connected(inputs = hidden_layer, \n num_outputs = num_outputs, \n activation_fn = None)",
"Loss Function\n Create a Mean Squared Error loss function.",
"loss = tf.reduce_mean(tf.square(outputs - X))",
"Optimizer\n Create an AdamOptimizer designed to minimize the previous loss function.",
"optimizer = tf.train.AdamOptimizer(learning_rate)\ntrain = optimizer.minimize(loss)",
"Init\n Create an instance of a global variable intializer.",
"init = tf.global_variables_initializer()",
"Running the Session\n Now create a Tensorflow session that runs the optimizer for at least 1000 steps. (You can also use epochs if you prefer, where 1 epoch is defined by one single run through the entire dataset.",
"num_steps = 1000\n\nwith tf.Session() as sess:\n sess.run(init)\n for iteration in range(num_steps):\n sess.run(train,\n feed_dict = {X: X_data})\n\n # Now ask for the hidden layer output (the 2 dimensional output)\n output_2d = hidden_layer.eval(feed_dict = {X: X_data})",
"Confirm that your output is now 2 dimensional along the previous axis of 30 features.",
"output_2d.shape",
"Now plot out the reduced dimensional representation of the data. Do you still have clear separation of classes even with the reduction in dimensions? Hint: You definitely should, the classes should still be clearly seperable, even when reduced to 2 dimensions.",
"plt.scatter(output_2d[:, 0],\n output_2d[:, 1],\n c = data['Label'])",
"Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/ukesm1-0-ll/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: UKESM1-0-LL\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:26\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-ll', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
|
doc/notebooks/Transducers.ipynb
|
gpl-3.0
|
[
"Transducers\nTransducers, also called k-tape automata, are finite state machines where transitions are labeled on several tapes. The labelset of a transducer is a cartesian product of the labelsets of each tape: $L = L_1 \\times \\dots \\times L_k$.\nUsually, it is common to manipulate 2-tape transducers, and to consider one as the input tape, and the other as the output tape. For example, we can define a 2-tape transducer with the first tape accepting letters in [a-c], and the same for the second tape:",
"import vcsn\nctx = vcsn.context(\"lat<lal_char(abc), lal_char(abc)>, b\")\nctx",
"Now we can define a transducer that will transform every a into b, and keep the rest of the letters. When writing the expression, to delimit the labels (a letter for each tape), we have to use simple quotes.",
"r = ctx.expression(\"(a|b+b|b+c|c)*\")\nr\n\nr.automaton()",
"Similarly, it is possible to define weighted transducers, as for weighted automata:",
"import vcsn\nctxw = vcsn.context(\"lat<lan_char(ab), lan_char(xy)>, z\")\nctxw\n\nr = ctxw.expression(\"(a|x)*((a|y)(b|x))*(b|y)*\")\nr\n\nr.thompson()",
"This transducer transforms the as at the beginning into xs, then ab into yx, then bs into ys. As you can see, it's possible to have $\\varepsilon$-transitions in a transducer.\nKeep in mind that while it is the common use-case, transducers are not limited to 2 tapes, but can have an arbitrary number of tapes. The notion of input tape and output tape becomes fuzzy, and the problem will have to be addressed in the algorithms' interface."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google-aai/sc17
|
cats/step_1_to_3.ipynb
|
apache-2.0
|
[
"Performance Metric and Requirements\nAuthor(s): kozyr@google.com\nBefore we get started on data, we have to choose our project performance metric and decide the statistical testing criteria. We'll make use of the metric code we write here when we get to Step 6 (Training) and we'll use the criteria in Step 9 (Testing).",
"# Required libraries:\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns",
"Performance Metric: Accuracy\nWe've picked accuracy as our performance metric.\nAccuracy $ = \\frac{\\text{correct predictions}}{\\text{total predictions}}$",
"# Accuracy metric:\ndef get_accuracy(truth, predictions, threshold=0.5, roundoff=2):\n \"\"\"\n Args:\n truth: can be Boolean (False, True), int (0, 1), or float (0, 1)\n predictions: number between 0 and 1, inclusive\n threshold: we convert predictions to 1s if they're above this value\n roundoff: report accuracy to how many decimal places?\n\n Returns:\n accuracy: number correct divided by total predictions\n \"\"\"\n\n truth = np.array(truth) == (1|True)\n predicted = np.array(predictions) >= threshold\n matches = sum(predicted == truth)\n accuracy = float(matches) / len(truth)\n return round(accuracy, roundoff)\n\n# Try it out:\nacc = get_accuracy(truth=[0, False, 1], predictions=[0.2, 0.7, 0.6])\nprint 'Accuracy is ' + str(acc) + '.'",
"Compare Loss Function with Performance Metric",
"def get_loss(predictions, truth):\n # Our methods will be using cross-entropy loss.\n return -np.mean(truth * np.log(predictions) + (1 - truth) * np.log(1 - predictions))\n\n# Simulate some situations:\nloss = []\nacc = []\nfor i in range(1000):\n for n in [10, 100, 1000]:\n p = np.random.uniform(0.01, 0.99, (1, 1))\n y = np.random.binomial(1, p, (n, 1))\n x = np.random.uniform(0.01, 0.99, (n, 1))\n acc = np.append(acc, get_accuracy(truth=y, predictions=x, roundoff=6))\n loss = np.append(loss, get_loss(predictions=x, truth=y))\n\ndf = pd.DataFrame({'accuracy': acc, 'cross-entropy': loss})\n\n# Visualize with Seaborn\nimport seaborn as sns\n%matplotlib inline\nsns.regplot(x=\"accuracy\", y=\"cross-entropy\", data=df)",
"Hypothesis Testing Setup",
"# Testing setup:\nSIGNIFICANCE_LEVEL = 0.05\nTARGET_ACCURACY = 0.80\n\n# Hypothesis test we'll use:\nfrom statsmodels.stats.proportion import proportions_ztest\n\n# Using standard notation for a one-sided test of one population proportion:\nn = 100 # Example number of predictions\nx = 95 # Example number of correct predictions\np_value = proportions_ztest(count=x, nobs=n, value=TARGET_ACCURACY, alternative='larger')[1]\nif p_value < SIGNIFICANCE_LEVEL:\n print 'Congratulations! Your model is good enough to build. It passes testing. Awesome!'\nelse:\n print 'Too bad. Better luck next project. To try again, you need a pristine test dataset.'",
"Step 2 - Get Data\nThis part is done outside Jupyter and run in your VM using the shell script provided.\nStep 3 - Split Data\nThis part is done outside Jupyter and run in your VM using the shell script provided."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
numenta/htmresearch
|
projects/localization_rnn/1d.ipynb
|
agpl-3.0
|
[
"1D localization using landmarks\nMaxwell Pollack, following Kanitscheider + Fiete 2017\nAn agent accelerates randomly in a periodic 1D environment of length $L$ with 2 point-like, randomly-placed, indistinguishable landmarks.\n$$\\begin{split}\na^t & \\sim \\mathcal{U}(-a_\\text{max}, a_\\text{max}) \\\nv^t & = v_\\text{max} - \\bigg| \\big(v_\\text{max} + \\sum\\limits_{t'=0}^{t} a^{t'}\\big) \\% 4v_\\text{max} - 2v_\\text{max} \\bigg| \\\nx^t & = \\bigg(\\sum\\limits_{t'=0}^t v^{t'}\\bigg) \\% L \\\n\\lambda^n & \\sim \\mathcal{U}(0,L)\n\\end{split}\n$$\nSuperscripts here denote tensor indices, like time $t={0,1,...,T-1}$ and landmark index $n={0,1}$.",
"import torch\n\ndef simulate(T, L, nLM=2, vMax=1, aMax=0.25):\n \n a = aMax * (2*torch.rand(T) - 1)\n v = vMax - abs((a.cumsum(0) + vMax) % (4*vMax) - 2*vMax)\n x = (L*torch.rand(1) + v.cumsum(0)) % L\n landmarks = torch.rand(nLM) * L\n \n lmSense = sum(torch.relu(1 - L/2 + abs(L/2 - abs(x-landmarks.unsqueeze(1)))))\n lmMap = sum(torch.relu(1 - L/2 + abs(L/2 - abs(landmarks.unsqueeze(1) - torch.arange(float(L))))))\n input = torch.cat((lmMap.expand(T,L), lmSense.view(T,1), v.view(T,1)/vMax), 1)\n \n target = torch.relu(1 - L/2 + abs(L/2 - abs(x.unsqueeze(1)-torch.arange(float(L)))))\n \n return (input, target, landmarks, x)",
"Network\nA simple RNN is given as input a landmark map, a short-range landmark proximity sense, and the agent's velocity.\n$$z_\\text{input}^{t,i} = \\begin{cases}\n\\sum\\limits_n max\\big(0,1 - \\frac{L}{2} + \\big|\\frac{L}{2}-\\big|\\lambda^n - \\frac{2\\pi i}{L}\\big|\\big|\\big) & 0 \\leq i \\lt L \\\n\\sum\\limits_n max\\big( 0, 1 - \\frac{L}{2} + \\big|\\frac{L}{2}-\\big|x^{t} - \\lambda^n\\big|\\big|\\big) & i=L \\\nv^t/v_\\text{max} & i=L+1\n\\end{cases}$$\nInput neurons are indexed by $i={0,1,...,L+1}$, hidden neurons by $j = {0,1,...,2L-1}$, and output neurons by $k = {0,1,...,L-1}$.\nThe network is trained with Adam to minimize the mean squared error between its output and the agent's location represented as the set of target activations\n$$z_\\text{target}^{t,k} = max\\bigg(0,1-\\big|x^t - \\frac{2\\pi k}{N}\\big|\\bigg)$$",
"class Network(torch.nn.Module):\n def __init__(self, nInput, nHidden, nOutput):\n super(Network, self).__init__()\n self.rnn = torch.nn.RNN(nInput, nHidden, batch_first=True, nonlinearity='relu')\n self.linear = torch.nn.Linear(nHidden, nOutput)\n\n def forward(self, input):\n hidden = self.rnn(input)[0]\n return (torch.nn.functional.leaky_relu(self.linear(hidden)), hidden)\n \nT = 256\nL = 16\n\nnetwork = Network(L+2, 2*L, L)\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.Adam(network.parameters(), lr=1e-4)\n\nfor trial in range(5*10**5):\n \n input, target, landmarks, location = simulate(T, L)\n output, hidden = network(input.unsqueeze(0))\n\n loss = criterion(output, target)\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()",
"Visualization",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfig,ax = plt.subplots(6,\n 2,\n figsize=(17,12),\n gridspec_kw = {'height_ratios':[L,L,L,2*L,L,L],\n 'width_ratios':[2*L,T]}\n )\ninput, target, landmarks, x = simulate(10**6, L)\noutput, hidden = network(input.unsqueeze(0))\nw_ih, w_hh, b_ih, b_hh, w_ho, b_ho = [p for p in network.parameters()]\n\n# Sort hidden cells by their velocity input weight\nhid = torch.argsort(w_ih[:,-1])\n\nax[0,0].imshow(w_ih.data.numpy()[hid,:L].T, cmap='bwr', vmin=-1, vmax=1)\nax[0,0].set_title('LM Map - Hidden')\nax[0,0].set_yticks([0,L-1])\nax[0,0].set_ylabel('i')\nax[0,1].imshow(input[:T,:L].data.numpy().T)\nax[0,1].set_yticks([0,L-1])\nax[0,1].set_title('Landmark Map')\nax[1,0].imshow([w_ih.data.numpy()[hid,-2]], aspect=L, extent=(0,2*L,L-0.5,L+0.5), cmap='bwr', vmin=-1, vmax=1)\nax[1,0].set_yticks([L])\nax[1,0].set_ylabel('i')\nax[1,0].set_title('LM Proximity - Hidden')\nax[1,1].imshow([input[:T,-2].data.numpy()], aspect=L, extent=(0,T,L-0.5,L+0.5))\nax[1,1].set_yticks([L])\nax[1,1].set_title('Landmark Proximity')\nax[2,0].imshow([w_ih.data.numpy()[hid,-1]], aspect=L, extent=(0,2*L,L+0.5,L+1.5), cmap='bwr', vmin=-1, vmax=1)\nax[2,0].set_yticks([L+1])\nax[2,0].set_ylabel('i')\nax[2,0].set_title('Velocity - Hidden')\nax[2,1].imshow([input[:T,-1].data.numpy()], aspect=L, extent=(0,T,L+0.5,L+1.5))\nax[2,1].set_yticks([L+1])\nax[2,1].set_title('Velocity')\nax[3,0].imshow(w_hh.data.numpy()[hid][:,hid], cmap='bwr', vmin=-1, vmax=1)\nax[3,0].set_title('Hidden - Hidden')\nax[3,0].set_ylabel('j')\nax[3,1].imshow(hidden[0,:T,hid].data.numpy().T)\nax[3,1].set_title('Hidden')\nim1=ax[4,0].imshow(w_ho.data.numpy()[:,hid], cmap='bwr', vmin=-1, vmax=1)\nax[4,0].set_title('Hidden - Output')\nax[4,0].set_ylabel('k')\nax[4,0].set_xlabel('j')\nim2=ax[4,1].imshow(output[0,:T].data.numpy().T)\nax[4,1].set_title('Output')\nax[5,0].axis('off')\ncb1=plt.colorbar(im2, ax=ax[5,0], orientation='horizontal', aspect=8, fraction=0.15, ticks=[])\ncb1.ax.set_xlabel('normalized')\ncb1.ax.set_title('activation')\ncb2=plt.colorbar(im1, ax=ax[5,0], orientation='horizontal', fraction=0.7, aspect=8)\ncb2.ax.set_title('weight')\nax[5,1].imshow(target[:T].data.numpy().T)\nax[5,1].set_title('Target')\nax[5,1].set_ylabel('k')\nax[5,1].set_xlabel('t')\nplt.show()",
"Position + velocity responses of the hidden cells",
"from math import ceil\n\nfig,ax = plt.subplots(ceil(L/4),8,figsize=(19,8))\n\nfor cell in range(2*L):\n ax[cell//8,cell%8].set_title(str(cell))\n ax[cell//8,cell%8].axis('off')\n im = ax[cell//8,cell%8].hexbin(input[:,-1].data.numpy(),\n x.data.numpy(),\n C=hidden[0,:,hid[cell]].data.numpy(),\n extent=(-1,1,0,L),\n gridsize=256)\n ax[cell//8,cell%8].hlines(landmarks.data.numpy(),\n xmin=-1,\n xmax=1,\n color='w',\n linestyles='dotted')\n ax[cell//8,cell%8].invert_yaxis()\n\nax[-1,0].axis('on')\nax[-1,0].set_ylabel('position')\nax[-1,0].set_xlabel('velocity')\nplt.colorbar(im, ax=ax, pad=0.018, ticks=[], aspect=40).ax.set_ylabel(r'average activation (normalized)$\\rightarrow$')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
agile-geoscience/notebooks
|
Jerk_jounce_etc.ipynb
|
apache-2.0
|
[
"Jerk, jounce, etc.\nThis notebook accompanies a blog post on Agile.\nFirst, the usual preliminaries...",
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport seaborn as sns\nsns.set()",
"Load the data\nThis dataset is from this (slightly weird) blog post https://www.duckware.com/blog/tesla-elon-musk-nytimes-john-broder-feud/index.html. It was the only decent bit of telemetry data I could find. I doubt it's properly licensed. If you have access to any open data — maybe from a Formula 1 car, or maybe your own vehicle, I'd love to know about it!",
"data = np.loadtxt('data/tesla_speed.csv', delimiter=',')",
"Convert x to m and v to m/s, per the instructions in the blog post about the dataset (modified for metric units).",
"x = (data[:, 0] + 3) * 2.05404\nx = x - np.min(x)\nv_x = np.mean(data[:, 1:], axis=1) * 0.0380610\n\nplt.plot(x, v_x)\nplt.xlabel('Displacement [m]')\nplt.ylabel('Velocity [m/s]')\nplt.show()",
"Note that the sampling was done per unit of displacement; we'd really prefer time. Let's convert it!\nTime conversion\nConvert to the time domain, since we want derivatives with respect to time, not distance.",
"elapsed_time = np.cumsum(1 / v_x)",
"Adjust the last entry, to avoid a very long interval.",
"elapsed_time[-1] = 2 * elapsed_time[-2] - elapsed_time[-3]\n\nt = np.linspace(0, elapsed_time[-1], 1000)\n\nv_t = np.interp(t, elapsed_time, v_x)\n\nplt.plot(t, v_t)\nplt.show()",
"Compute integrals\nUse trapezoidal integral approximation, https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.cumtrapz.html",
"import scipy.integrate\n\n# Displacement, d\nd = scipy.integrate.cumtrapz(v_t, t, initial=0)\n\nplt.plot(t, d)\nplt.show()\n\n# Absement\nabt = scipy.integrate.cumtrapz(d, t, initial=0)\n\n# Absity\naby = scipy.integrate.cumtrapz(abt, t, initial=0)\n\n# Abseleration\nabn = scipy.integrate.cumtrapz(aby, t, initial=0)\n\nplt.plot(abn)\nplt.show()",
"That's a boring graph!\nCheck that derivative of displacement gives back velocity\nUse Savitsky-Golay filter for differentiation with some smoothing: https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter",
"import scipy.signal\n\ndt = t[1] - t[0]\n\n# Check that Savitsky-Golay filter gives velocity from d/dt displacement.\nv_ = scipy.signal.savgol_filter(d, delta=dt, window_length=3, polyorder=2, deriv=1)\n\nplt.figure(figsize=(15, 3))\nplt.plot(t, v_, lw=3)\nplt.plot(t, v_t, '--', lw=3)",
"It does: we seem to be computing integrals properly.\nCompute derivatives",
"# Acceleration\na = scipy.signal.savgol_filter(v_t, delta=dt, window_length=11, polyorder=2, deriv=1)\n\nplt.figure(figsize=(15,3))\nplt.plot(a, lw=3, color='green')\nplt.axhline(c='k', lw=0.5, zorder=0)\nplt.show()\n\nplt.figure(figsize=(15,3))\nplt.imshow([a], cmap='RdBu_r', vmin=-1.6, vmax=1.6, alpha=0.8,\n aspect='auto', extent=[t.min(), t.max(), v_t.min(), v_t.max()])\nplt.colorbar(label=\"Acceleration [m/s²]\")\nplt.plot(t, v_t, 'white', lw=4)\nplt.plot(t, v_t, 'green')\nplt.title(\"Velocity (green) and acceleration (red-blue)\")\nplt.xlabel('Time [s]')\nplt.ylabel('Velocity [m/s]')\nplt.grid('off')\nplt.show()",
"Jerk, jounce, and so on",
"j = scipy.signal.savgol_filter(v_t, delta=dt, window_length=11, polyorder=2, deriv=2)\ns = scipy.signal.savgol_filter(v_t, delta=dt, window_length=15, polyorder=3, deriv=3)\nc = scipy.signal.savgol_filter(v_t, delta=dt, window_length=19, polyorder=4, deriv=4)\np = scipy.signal.savgol_filter(v_t, delta=dt, window_length=23, polyorder=5, deriv=5)\n\nplt.figure(figsize=(15,3))\nplt.imshow([j], cmap='RdBu_r', vmin=-3, vmax=3, alpha=0.8,\n aspect='auto', extent=[t.min(), t.max(), v_t.min(), v_t.max()])\nplt.colorbar(label=\"Jerk [m/s³]\")\nplt.plot(t, v_t, 'white', lw=4)\nplt.plot(t, v_t, 'green')\nplt.title(\"Velocity (green) and jerk (red-blue)\")\nplt.xlabel('Time [s]')\nplt.ylabel('Velocity [m/s]')\nplt.grid('off')\nplt.show()",
"Plot everything!",
"plots = {\n 'Abseleration': abn,\n 'Absity': aby,\n 'Absement': abt,\n 'Displacement': d,\n 'Velocity': v_t,\n 'Acceleration': a,\n 'Jerk': j,\n 'Jounce': s,\n# 'Crackle': c,\n# 'Pop': p, \n}\n\ncolors = ['C0', 'C0', 'C0', 'C1', 'C2', 'C2', 'C2', 'C2']\n\nfig, axs = plt.subplots(figsize=(15,15), nrows=len(plots))\npos = 0.01, 0.8\nparams = dict(fontsize=13)\n\nfor i, (k, v) in enumerate(plots.items()):\n ax = axs[i]\n ax.plot(t, v, lw=2, color=colors[i])\n ax.text(*pos, k, transform=ax.transAxes, **params)\n# if np.min(v) < 0:\n# ax.axhline(color='k', lw=0.5, zorder=0)\n if i < len(plots)-1:\n ax.set_xticklabels([])\n\nplt.show()",
"<hr>\n\n© 2018 Agile Scientific — licensed CC-BY"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cod3licious/simec
|
03_embed_classlabels.ipynb
|
mit
|
[
"Experiments with Similarity Encoders\n...to show that SimEc can create similarity preserving embeddings based on human ratings\nIn this iPython Notebook are some examples to illustrate the potential of Similarity Encoders (SimEc) for creating similarity preserving embeddings. For further details and theoretical background on this new neural network architecture, please refer to the corresponding paper.",
"from __future__ import unicode_literals, division, print_function, absolute_import\nfrom builtins import range\nimport numpy as np\nnp.random.seed(28)\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import Ridge\nfrom sklearn.decomposition import PCA, KernelPCA\nfrom sklearn.datasets import load_digits, fetch_mldata, fetch_20newsgroups\nfrom sklearn.neighbors import KNeighborsClassifier as KNN\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import GridSearchCV\nimport tensorflow as tf\ntf.set_random_seed(28)\nimport keras\n\n# find nlputils at https://github.com/cod3licious/nlputils\nfrom nlputils.features import FeatureTransform, features2mat\n\nfrom simec import SimilarityEncoder\nfrom utils import center_K, check_similarity_match\nfrom utils_plotting import get_colors, plot_digits, plot_mnist, plot_20news\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n# set this to True if you want to save the figures from the paper\nsavefigs = False",
"Handwritten Digits (8x8 px)\nSee http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html",
"# load digits dataset\ndigits = load_digits()\nX = digits.data\nX /= float(X.max())\nss = StandardScaler(with_std=False)\nX = ss.fit_transform(X)\ny = digits.target\nn_samples, n_features = X.shape",
"SimEc based on class labels\nWe've seen that SimEcs can reach the same solutions as traditional spectral methods such as kPCA and isomap. However, these methods have the limitation that you can only embed new data points if you can compute their kernel map, i.e. the similarity to the training examples. But what if the similarity matrix used as targets during training was generated by an unknown process such as human similarity judgments?\nTo show how we can use SimEc in such a scenario, we construct the similarity matrix from the class labels assigned by human annotators (1=same class, 0=different class).",
"Y = np.tile(y, (len(y), 1))\nS = center_K(np.array(Y==Y.T, dtype=int))\n# take only some of the samples as targets to speed it all up\nn_targets = 1000\n\n# knn accuracy using all original feature dimensions\nclf = KNN(n_neighbors=10)\nclf.fit(X[:n_targets], y[:n_targets])\nprint(\"knn accuracy: %f\" % clf.score(X[n_targets:], y[n_targets:]))\n\n# PCA\npca = PCA(n_components=2)\nX_embedp = pca.fit_transform(X)\nplot_digits(X_embedp, digits, title='Digits embedded with PCA')\nclf = KNN(n_neighbors=10)\nclf.fit(X_embedp[:n_targets], y[:n_targets])\nprint(\"knn accuracy: %f\" % clf.score(X_embedp[n_targets:], y[n_targets:]))\n\n# check how many relevant dimensions there are\neigenvals = np.linalg.eigvalsh(S)[::-1]\nplt.figure();\nplt.plot(list(range(1, S.shape[0]+1)), eigenvals, '-o', markersize=3);\nplt.plot([1, S.shape[0]],[0,0], 'k--', linewidth=0.5);\nplt.xlim(1, X.shape[1]+1);\nplt.title('Eigenvalue spectrum of S (based on class labels)');\n\nD, V = np.linalg.eig(S)\n# regular kpca embedding: take largest EV\nD1, V1 = D[np.argsort(D)[::-1]], V[:,np.argsort(D)[::-1]]\nX_embed = np.dot(V1.real, np.diag(np.sqrt(np.abs(D1.real))))\nplot_digits(X_embed[:,:2], digits, title='Digits embedded based on first 2 components', plot_box=False)\nclf = KNN(n_neighbors=10)\nclf.fit(X_embed[:n_targets,:2], y[:n_targets])\nprint(\"knn accuracy: %f\" % clf.score(X_embed[n_targets:,:2], y[n_targets:]))\nprint(\"similarity approximation - mse: %f\" % check_similarity_match(X_embed[:,:2], S)[0])",
"Lets first try a simple linear SimEc.",
"# similarity encoder with similarities relying on class information - linear\nsimec = SimilarityEncoder(X.shape[1], 2, n_targets, l2_reg_emb=0.00001, l2_reg_out=0.0000001, \n s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets], opt=keras.optimizers.Adamax(lr=0.005))\nsimec.fit(X, S[:,:n_targets])\nX_embed = simec.transform(X)\nplot_digits(X_embed, digits, title='Digits - SimEc (class sim, linear)')\n# of course we're overfitting here quite a bit since we used all samples for training\n# even if we didn't use the corresponding similarities...but this is only a toy example anyways\nclf = KNN(n_neighbors=10)\nclf.fit(X_embed[:n_targets], y[:n_targets])\nprint(\"knn accuracy: %f\" % clf.score(X_embed[n_targets:], y[n_targets:]))\nprint(\"similarity approximation - mse: %f\" % check_similarity_match(X_embed, S)[0])",
"Great, we already see some clusters separating from the rest! What if we add more layers?\nWe can examine how the embedding changes during training: first some clusters separate, then it starts to look like the eigenvalue based embedding with the clusters of several numbers pulled together.",
"# similarity encoder with similarities relying on class information - 1 hidden layer\nn_targets = 1000\nsimec = SimilarityEncoder(X.shape[1], 2, n_targets, hidden_layers=[(100, 'tanh')], \n l2_reg=0.00000001, l2_reg_emb=0.00001, l2_reg_out=0.0000001, \n s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets], opt=keras.optimizers.Adamax(lr=0.01))\ne_total = 0\nfor e in [5, 10, 10, 10, 15, 25, 25]:\n e_total += e\n print(e_total)\n simec.fit(X, S[:,:n_targets], epochs=e)\n X_embed = simec.transform(X)\n clf = KNN(n_neighbors=10)\n clf.fit(X_embed[:1000], y[:1000])\n acc = clf.score(X_embed[1000:], y[1000:])\n print(\"knn accuracy: %f\" % acc)\n print(\"similarity approximation - mse: %f\" % check_similarity_match(X_embed, S)[0])\n plot_digits(X_embed, digits, title='SimEc after %i epochs; accuracy: %.1f' % (e_total, 100*acc) , plot_box=False)",
"MNIST Dataset\nEmbedding the regular 28x28 pixel MNIST digits",
"# load digits\nmnist = fetch_mldata('MNIST original', data_home='data')\nX = mnist.data/255. # normalize to 0-1\ny = np.array(mnist.target, dtype=int)\n# subsample 10000 random data points\nnp.random.seed(42)\nn_samples = 10000\nn_test = 2000\nn_targets = 1000\nrnd_idx = np.random.permutation(X.shape[0])[:n_samples]\nX_test, y_test = X[rnd_idx[:n_test],:], y[rnd_idx[:n_test]]\nX, y = X[rnd_idx[n_test:],:], y[rnd_idx[n_test:]]\n# scale\nss = StandardScaler(with_std=False)\nX = ss.fit_transform(X)\nX_test = ss.transform(X_test)\nn_train, n_features = X.shape\n\n# compute similarity matrix based on class labels\nY = np.tile(y, (len(y), 1))\nS = center_K(np.array(Y==Y.T, dtype=int))\nY = np.tile(y_test, (len(y_test), 1))\nS_test = center_K(np.array(Y==Y.T, dtype=int))",
"\"Kernel PCA\" and Ridge Regression vs. SimEc\nTo get an idea of how a perfect similarity preserving embedding would look like when computing similarities from class labels, we can embed the data by performing an eigendecomposition of the similarity matrix (i.e. performing kernel PCA). However, since in a real setting we would be unable to compute the similarities of the test samples to the training samples (since we don't know their class labels), to map the test samples into the embedding space we additionally need to train a (ridge) regression model to map from the original input space to the embedding space.\nA SimEc with multiple hidden layers starts to get close to the eigendecomposition solution.",
"D, V = np.linalg.eig(S)\n# as a comparison: regular kpca embedding: take largest EV\nD1, V1 = D[np.argsort(D)[::-1]], V[:,np.argsort(D)[::-1]]\nX_embed = np.dot(V1.real, np.diag(np.sqrt(np.abs(D1.real))))\nplot_mnist(X_embed[:,:2], y, title='MNIST (train) - largest 2 EV')\nprint(\"similarity approximation 2D - mse: %f\" % check_similarity_match(X_embed[:,:2], S)[0])\nprint(\"similarity approximation 5D - mse: %f\" % check_similarity_match(X_embed[:,:5], S)[0])\nprint(\"similarity approximation 7D - mse: %f\" % check_similarity_match(X_embed[:,:7], S)[0])\nprint(\"similarity approximation 10D - mse: %f\" % check_similarity_match(X_embed[:,:10], S)[0])\nprint(\"similarity approximation 25D - mse: %f\" % check_similarity_match(X_embed[:,:25], S)[0])\n\nn_targets = 2000\n# get good alpha for RR model\nm = Ridge()\nrrm = GridSearchCV(m, {'alpha': [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.25, 0.5, 0.75, 1., 2.5, 5., 7.5, 10., 25., 50., 75., 100., 250., 500., 750., 1000.]})\nrrm.fit(X, X_embed[:,:8])\nalpha = rrm.best_params_[\"alpha\"]\nprint(\"Ridge Regression with alpha: %r\" % alpha)\nmse_ev, mse_rr, mse_rr_test = [], [], []\nmse_simec, mse_simec_test = [], []\nmse_simec_hl, mse_simec_hl_test = [], []\ne_dims = [2, 3, 4, 5, 6, 7, 8, 9, 10, 15]\nfor e_dim in e_dims:\n print(e_dim)\n # eigenvalue based embedding\n mse = check_similarity_match(X_embed[:,:e_dim], S)[0]\n mse_ev.append(mse)\n # train a linear ridge regression model to learn the mapping from X to Y\n model = Ridge(alpha=alpha)\n model.fit(X, X_embed[:,:e_dim])\n X_embed_r = model.predict(X)\n X_embed_test_r = model.predict(X_test)\n mse = check_similarity_match(X_embed_r, S)[0]\n mse_rr.append(mse)\n mse = check_similarity_match(X_embed_test_r, S_test)[0]\n mse_rr_test.append(mse)\n # simec - linear\n simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets],\n orth_reg=0.001 if e_dim > 8 else 0., l2_reg_emb=0.00001, \n l2_reg_out=0.0000001, opt=keras.optimizers.Adamax(lr=0.001))\n simec.fit(X, S[:,:n_targets])\n X_embeds = simec.transform(X)\n X_embed_tests = simec.transform(X_test)\n mse = check_similarity_match(X_embeds, S)[0]\n mse_simec.append(mse)\n mse_t = check_similarity_match(X_embed_tests, S_test)[0]\n mse_simec_test.append(mse_t)\n # simec - 2hl\n simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, hidden_layers=[(25, 'tanh'), (25, 'tanh')],\n s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets], orth_reg=0.001 if e_dim > 7 else 0., \n l2_reg=0., l2_reg_emb=0.00001, l2_reg_out=0.0000001, opt=keras.optimizers.Adamax(lr=0.001))\n simec.fit(X, S[:,:n_targets])\n X_embeds = simec.transform(X)\n X_embed_tests = simec.transform(X_test)\n mse = check_similarity_match(X_embeds, S)[0]\n mse_simec_hl.append(mse)\n mse_t = check_similarity_match(X_embed_tests, S_test)[0]\n mse_simec_hl_test.append(mse_t)\n print(\"mse ev: %f; mse rr: %f (%f); mse simec (0hl): %f (%f); mse simec (2hl): %f (%f)\" % (mse_ev[-1], mse_rr[-1], mse_rr_test[-1], mse_simec[-1], mse_simec_test[-1], mse, mse_t))\nkeras.backend.clear_session()\ncolors = get_colors(15)\nplt.figure();\nplt.plot(e_dims, mse_ev, '-o', markersize=3, c=colors[14], label='Eigendecomposition');\nplt.plot(e_dims, mse_rr, '-o', markersize=3, c=colors[12], label='ED + Regression');\nplt.plot(e_dims, mse_rr_test, '--o', markersize=3, c=colors[12], label='ED + Regression (test)');\nplt.plot(e_dims, mse_simec, '-o', markersize=3, c=colors[8], label='SimEc 0hl');\nplt.plot(e_dims, mse_simec_test, '--o', markersize=3, c=colors[8], label='SimEc 0hl 
(test)');\nplt.plot(e_dims, mse_simec_hl, '-o', markersize=3, c=colors[4], label='SimEc 2hl');\nplt.plot(e_dims, mse_simec_hl_test, '--o', markersize=3, c=colors[4], label='SimEc 2hl (test)');\nplt.legend(loc=0);\nplt.title('MNIST (class based similarities)');\nplt.plot([0, e_dims[-1]], [0,0], 'k--', linewidth=0.5);\nplt.xticks(e_dims, e_dims);\nplt.xlabel('Number of Embedding Dimensions ($d$)')\nplt.ylabel('Mean Squared Error of $\\hat{S}$')\nprint(\"e_dims=\", e_dims)\nprint(\"mse_ev=\", mse_ev)\nprint(\"mse_rr=\", mse_rr)\nprint(\"mse_rr_test=\", mse_rr_test)\nprint(\"mse_simec=\", mse_simec)\nprint(\"mse_simec_test=\", mse_simec_test)\nprint(\"mse_simec_hl=\", mse_simec_hl)\nprint(\"mse_simec_hl_test=\", mse_simec_hl_test)\nif savefigs: plt.savefig('fig_class_mse_edim.pdf', dpi=300)",
"20 Newsgroups\nTo show that SimEc embeddings can also be computed for other types of data, we do some further experiments with the 20 newsgroups dataset. We subsample 7 of the 20 categories and remove meta information such as headers to avoid overfitting (see also http://scikit-learn.org/stable/datasets/twenty_newsgroups.html). The posts are transformed into very high dimensional tf-idf vectors used as input to the SimEc and to compute the linear kernel matrix.",
"## load the data and transform it into a tf-idf representation\ncategories = [\n \"comp.graphics\",\n \"rec.autos\",\n \"rec.sport.baseball\",\n \"sci.med\",\n \"sci.space\",\n \"soc.religion.christian\",\n \"talk.politics.guns\"\n]\nnewsgroups_train = fetch_20newsgroups(subset='train', remove=(\n 'headers', 'footers', 'quotes'), data_home='data', categories=categories, random_state=42)\nnewsgroups_test = fetch_20newsgroups(subset='test', remove=(\n 'headers', 'footers', 'quotes'), data_home='data', categories=categories, random_state=42)\n# store in dicts (if the text contains more than 3 words)\ntextdict = {i: t for i, t in enumerate(newsgroups_train.data) if len(t.split()) > 3}\ntextdict.update({i: t for i, t in enumerate(newsgroups_test.data, len(newsgroups_train.data)) if len(t.split()) > 3})\ntrain_ids = [i for i in range(len(newsgroups_train.data)) if i in textdict]\ntest_ids = [i for i in range(len(newsgroups_train.data), len(textdict)) if i in textdict]\nprint(\"%i training and %i test samples\" % (len(train_ids), len(test_ids)))\n# transform into tf-idf features\nft = FeatureTransform(norm='max', weight=True, renorm='max')\ndocfeats = ft.texts2features(textdict, fit_ids=train_ids)\n# organize in feature matrix\nX, featurenames = features2mat(docfeats, train_ids)\nX_test, _ = features2mat(docfeats, test_ids, featurenames)\nprint(\"%i features\" % len(featurenames))\ntargets = np.hstack([newsgroups_train.target,newsgroups_test.target])\ny = targets[train_ids]\ny_test = targets[test_ids]\nn_targets = 1000\ntarget_names = newsgroups_train.target_names\n\n# compute label based simmat\nY = np.tile(y, (len(y), 1))\nS = center_K(np.array(Y==Y.T, dtype=int))\nY = np.tile(y_test, (len(y_test), 1))\nS_test = center_K(np.array(Y==Y.T, dtype=int))\nD, V = np.linalg.eig(S)\n# as a comparison: regular kpca embedding: take largest EV\nD1, V1 = D[np.argsort(D)[::-1]], V[:,np.argsort(D)[::-1]]\nX_embed = np.dot(V1.real, np.diag(np.sqrt(np.abs(D1.real))))\nplot_20news(X_embed[:, :2], y, target_names, title='20 newsgroups - 2 largest EV', legend=True)\nprint(\"similarity approximation 2D - mse: %f\" % check_similarity_match(X_embed[:,:2], S)[0])\nprint(\"similarity approximation 5D - mse: %f\" % check_similarity_match(X_embed[:,:5], S)[0])\nprint(\"similarity approximation 7D - mse: %f\" % check_similarity_match(X_embed[:,:7], S)[0])\nprint(\"similarity approximation 10D - mse: %f\" % check_similarity_match(X_embed[:,:10], S)[0])\nprint(\"similarity approximation 25D - mse: %f\" % check_similarity_match(X_embed[:,:25], S)[0])\n\nn_targets = 2000\n# get good alpha for RR model\nm = Ridge()\nrrm = GridSearchCV(m, {'alpha': [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.25, 0.5, 0.75, 1., 2.5, 5., 7.5, 10., 25., 50., 75., 100., 250., 500., 750., 1000.]})\nrrm.fit(X, X_embed[:,:8])\nalpha = rrm.best_params_[\"alpha\"]\nprint(\"Ridge Regression with alpha: %r\" % alpha)\nmse_ev, mse_rr, mse_rr_test = [], [], []\nmse_simec, mse_simec_test = [], []\nmse_simec_hl, mse_simec_hl_test = [], []\ne_dims = [2, 3, 4, 5, 6, 7, 8, 9, 10]\nfor e_dim in e_dims:\n print(e_dim)\n # eigenvalue based embedding\n mse = check_similarity_match(X_embed[:,:e_dim], S)[0]\n mse_ev.append(mse)\n # train a linear ridge regression model to learn the mapping from X to Y\n model = Ridge(alpha=alpha)\n model.fit(X, X_embed[:,:e_dim])\n X_embed_r = model.predict(X)\n X_embed_test_r = model.predict(X_test)\n mse = check_similarity_match(X_embed_r, S)[0]\n mse_rr.append(mse)\n mse = check_similarity_match(X_embed_test_r, S_test)[0]\n 
mse_rr_test.append(mse)\n # simec - linear\n simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, s_ll_reg=0.5, S_ll=S[:n_targets,:n_targets],\n sparse_inputs=True, orth_reg=0.1 if e_dim > 6 else 0., l2_reg_emb=0.0001, \n l2_reg_out=0.00001, opt=keras.optimizers.Adamax(lr=0.01))\n simec.fit(X, S[:,:n_targets])\n X_embeds = simec.transform(X)\n X_embed_tests = simec.transform(X_test)\n mse = check_similarity_match(X_embeds, S)[0]\n mse_simec.append(mse)\n mse_t = check_similarity_match(X_embed_tests, S_test)[0]\n mse_simec_test.append(mse_t)\n # simec - 2hl\n simec = SimilarityEncoder(X.shape[1], e_dim, n_targets, hidden_layers=[(25, 'tanh'), (25, 'tanh')], sparse_inputs=True,\n s_ll_reg=1., S_ll=S[:n_targets,:n_targets], orth_reg=0.1 if e_dim > 7 else 0., \n l2_reg=0., l2_reg_emb=0.01, l2_reg_out=0.00001, opt=keras.optimizers.Adamax(lr=0.01))\n simec.fit(X, S[:,:n_targets])\n X_embeds = simec.transform(X)\n X_embed_tests = simec.transform(X_test)\n mse = check_similarity_match(X_embeds, S)[0]\n mse_simec_hl.append(mse)\n mse_t = check_similarity_match(X_embed_tests, S_test)[0]\n mse_simec_hl_test.append(mse_t)\n print(\"mse ev: %f; mse rr: %f (%f); mse simec (0hl): %f (%f); mse simec (2hl): %f (%f)\" % (mse_ev[-1], mse_rr[-1], mse_rr_test[-1], mse_simec[-1], mse_simec_test[-1], mse, mse_t))\nkeras.backend.clear_session()\ncolors = get_colors(15)\nplt.figure();\nplt.plot(e_dims, mse_ev, '-o', markersize=3, c=colors[14], label='Eigendecomposition');\nplt.plot(e_dims, mse_rr, '-o', markersize=3, c=colors[12], label='ED + Regression');\nplt.plot(e_dims, mse_rr_test, '--o', markersize=3, c=colors[12], label='ED + Regression (test)');\nplt.plot(e_dims, mse_simec, '-o', markersize=3, c=colors[8], label='SimEc 0hl');\nplt.plot(e_dims, mse_simec_test, '--o', markersize=3, c=colors[8], label='SimEc 0hl (test)');\nplt.plot(e_dims, mse_simec_hl, '-o', markersize=3, c=colors[4], label='SimEc 2hl');\nplt.plot(e_dims, mse_simec_hl_test, '--o', markersize=3, c=colors[4], label='SimEc 2hl (test)');\nplt.legend(bbox_to_anchor=(1.02, 1), loc=2, borderaxespad=0.);\nplt.title('20 newsgroups (class based similarities)');\nplt.plot([0, e_dims[-1]], [0,0], 'k--', linewidth=0.5);\nplt.xticks(e_dims, e_dims);\nplt.xlabel('Number of Embedding Dimensions ($d$)')\nplt.ylabel('Mean Squared Error of $\\hat{S}$')\nprint(\"e_dims=\", e_dims)\nprint(\"mse_ev=\", mse_ev)\nprint(\"mse_rr=\", mse_rr)\nprint(\"mse_rr_test=\", mse_rr_test)\nprint(\"mse_simec=\", mse_simec)\nprint(\"mse_simec_test=\", mse_simec_test)\nprint(\"mse_simec_hl=\", mse_simec_hl)\nprint(\"mse_simec_hl_test=\", mse_simec_hl_test)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maojrs/riemann_book
|
Acoustics.ipynb
|
bsd-3-clause
|
[
"Acoustics",
"%matplotlib inline\n\n%config InlineBackend.figure_format = 'svg'\nimport numpy as np\nfrom exact_solvers import acoustics, acoustics_demos\nfrom IPython.display import IFrame, HTML, Image",
"In this chapter we consider our first system of hyperbolic conservation laws. We study the acoustics equations that were introduced briefly in Introduction. We first describe the physical context of this system and then investigate its characteristic structure and the solution to the Riemann problem. This system is described in more detail in Chapter 3 of <cite data-cite=\"fvmhp\"><a href=\"riemann.html#fvmhp\">(LeVeque 2002)</a></cite>.\nIf you wish to examine the Python code for this chapter, please see:\n\nexact_solvers/acoustics.py ...\n on github,\nexact_solvers/acoustics_demos.py ...\n on github.\n\nPhysical setting\nThe linear acoustic equations describe the propagation of small perturbations in a fluid. In Advection we derived the one-dimensional continuity equation, which describes mass conservation:\n\\begin{align} \\label{Ac:continuity}\n \\rho_t + (\\rho u)_x & = 0.\n\\end{align}\nFor more realistic fluid models, we need another equation that determines the velocity $u$. This typically takes the form of a conservation law for the momentum $\\rho u$. Momentum, like density, is transported by fluid motion with corresponding flux $\\rho u^2$. Additionally, any difference in pressure will also lead to a flux of momentum that is proportional to the pressure difference. Thus the momentum equation takes the form\n\\begin{align} \\label{Ac:mom_cons}\n(\\rho u)_t + (\\rho u^2 + P(\\rho))_x & = 0,\n\\end{align}\nwhere the pressure $P$ is is given by the equation of state $P(\\rho)$; here we have assumed the pressure depends only on the density. A more general equation of state will be considered, along with fully nonlinear fluid motions, in Euler. The linear acoustics equations focus on the behavior of small perturbations in the system above.\nIn order to derive the equations of linear acoustics, observe that equations (\\ref{Ac:continuity})-(\\ref{Ac:mom_cons}) form a hyperbolic system $q_t+f(q)_x=0$ with\n\\begin{align}\nq & = \\begin{bmatrix} \\rho \\ \\rho u \\end{bmatrix} & \nf(q) & = \\begin{bmatrix} \\rho u \\ \\rho u^2 + P(\\rho) \\end{bmatrix}\n\\end{align}\nWe will make use of the quasilinear form of a hyperbolic system:\n$$q_t + f'(q) q_x = 0.$$\nHere $f'(q)$ denotes the Jacobian of the flux $f$ with respect to the conserved variables $q$. In the present system, as is often the case, $f$ is most naturally written in terms of so-called primitive variables (in this case $\\rho$ and $u$) rather than in terms of the conserved variables $q$. 
In order to find the flux Jacobian (and thus the quasilinear form), we first write $f$ in terms of the conserved variables $(q_1,q_2) = (\\rho, \\rho u)$:\n\\begin{align}\nf(q) & = \\begin{bmatrix} q_2 \\ q_2^2/q_1 + P(q_1) \\end{bmatrix}.\n\\end{align} \nNow we can differentiate to find the flux Jacobian:\n\\begin{align}\nf'(q) & = \\begin{bmatrix} \\partial f_1/\\partial q_1 & \\partial f_1/\\partial q_2 \\\n \\partial f_2/\\partial q_1 & \\partial f_2/\\partial q_2 \\end{bmatrix} \\\n & = \\begin{bmatrix} 0 & 1 \\ -q_2^2/q_1^2 + P'(q_1) & 2q_2/q_1 \\end{bmatrix} \\\n & = \\begin{bmatrix} 0 & 1 \\ P'(\\rho)-u^2 & 2u \\end{bmatrix}.\n\\end{align}\nThus small perturbations to an ambient fluid state $\\rho_0, u_0$ evolve according to the linearized equations $q_t + f'(q_0) q_x = 0$, or more explicitly\n\\begin{align}\n\\rho_t + (\\rho u)_x & = 0 \\\n(\\rho u)_t + (P'(\\rho_0)-u_0^2)\\rho_x + 2u_0(\\rho u)_x & = 0.\n\\end{align}\nAs we are only interested in small perturbations of equation (\\ref{Ac:mom_cons}), we expand the perturbations $\\rho-\\rho_0$ and $\\rho u - \\rho_0 u_0$ as functions of a small parameter $\\epsilon$, and then we discard terms of order $\\epsilon^2$ and higher. This results in the linear hyperbolic system\n\\begin{align}\np_t + u_0 p_x + P'(\\rho_0) u_x & = 0 \\\nu_t + \\frac{1}{\\rho_0} p_x + u_0 u_x & = 0,\n\\end{align}\nwhere $p(x,t)$ is the pressure as a function of $x$ and $t$. If the ambient fluid is at rest (i.e. $u_0=0$) and the pressure is directly proportional to the density, then this simplifies to\n\\begin{align} \\label{Ac:main}\n \\left[ \\begin{array}{c}\np \\\nu \n\\end{array} \\right]t + \\underbrace{\\left[ \\begin{array}{cc}\n0 & K_0 \\\n1/\\rho_0 & 0 \\\n\\end{array} \\right]}{\\mathbf{A}}\n\\left[ \\begin{array}{c}\np \\\nu \\end{array} \\right]_x = 0,\n\\end{align}\nwhere $K_0=P'(\\rho_0)$ is referred to as the bulk modulus of compressibility. The system of equations (\\ref{Ac:main}) is called the linear acoustics equations.\nFor the rest of this chapter we work with (\\ref{Ac:main}) and let $q=[p,u]^T$. Then we can write (\\ref{Ac:main}) as $q_t + A q_x = 0$. For simplicity, we also drop the subscripts on $K, \\rho$. Direct calculation reveals that the eigenvectors of $A$ are\n\\begin{align}\n\\lambda_1 = -c, \\qquad \\lambda_2 = c\n\\end{align}\nwhere $c=\\sqrt{{K}/{\\rho}}$ is the speed of sound in a medium with a given density and bulk modulus. The right eigenvectors of $A$ are given by\n\\begin{align}\nr_1 = \\begin{bmatrix}\\begin{array}{c}-Z\\1\\end{array}\\end{bmatrix}, \\qquad r_2 = \\begin{bmatrix}\\begin{array}{c}Z\\1\\end{array}\\end{bmatrix},\n\\end{align}\nwhere $Z=\\rho c$ is called the acoustic impedance. Defining $R = [r_1 r_2]$ and $\\Lambda = diag(\\lambda_1, \\lambda_2)$, we have $AR = R\\Lambda$, or $A = R \\Lambda R^{-1}$. Substituting this into (\\ref{Ac:main}) yields\n\\begin{align}\nq_t + A q_x & = 0 \\\nq_t + R \\Lambda R^{-1} q_x & = 0 \\\nR^{-1}q_t + \\Lambda R^{-1} q_x & = 0 \\\nw_t + \\Lambda w_x & = 0,\n\\end{align}\nwhere we have introduced the characteristic variables $w=R^{-1}q$. The last system above is simply a pair of decoupled advection equations for $w_1$ and $w_2$, with velocities $\\lambda_1$ and $\\lambda_2$; a system we already know how to solve. 
Thus we see that the eigenvalues of $A$ are the velocities at which information propagates in the solution.\nSolution by characteristics\nThe discussion above suggests a strategy for solving the Cauchy problem:\n\nDecompose the initial data $(p(x,0), u(x,0))$ into characteristic variables $w(x,0)=(w_1^0(x),w_2^0(x,0))$ using the relation $w = R^{-1}q$.\nEvolve the characteristic variables: $w_p(x,t) = w_p^0(x-\\lambda_p t)$.\nTransform back to the physical variables: $q = Rw$.\n\nThe first step in this process amounts to expressing the vector $q$ in the basis given by $r_1, r_2$. Solving the system $Rw=q$ yields \n\\begin{align}\nq = w_1 r_1 + w_2 r_2,\n\\end{align} \nwhere\n\\begin{align}\nw_1 = \\frac{- p + Z u}{2Z}, \\ \\ \\ \\ \\ \\\nw_2 = \\frac{ p + Z u}{2Z}.\n\\end{align}\nWe visualize this below, where the first plot shows the two eigenvectors, and the second plot shows how $q$ can be expressed as a linear combination of the two eigenvectors, $r_1$ and $r_2$. In the live notebook you can adjust the left and right states or the material parameters to see how this affects the construction of the Riemann solution.",
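"The eigenstructure derived above is easy to check numerically. The following quick sketch (with arbitrary illustrative values of $K$ and $\\rho$) verifies that the eigenvalues are $\\pm c$ and the eigenvectors are proportional to $[\\pm Z, 1]^T$:",
"import numpy as np\n\nK, rho = 2., 0.5 # illustrative bulk modulus and density\nc, Z = np.sqrt(K/rho), np.sqrt(K*rho)\nA = np.array([[0., K], [1./rho, 0.]])\nevals, evecs = np.linalg.eig(A)\nprint(evals) # +c and -c (possibly reordered)\nprint(evecs / evecs[1, :]) # columns proportional to [Z, 1] and [-Z, 1]",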
"%matplotlib inline\n\n%config InlineBackend.figure_format = 'svg'\nimport numpy as np\nfrom exact_solvers import acoustics, acoustics_demos\nfrom IPython.display import IFrame\n\nacoustics_demos.decompose_q_interactive()",
"In the second and third steps, we evolve the characteristic variables $w$ and then transform back to the original variables. We take as initial pressure a Gaussian, with zero initial velocity. We visualize this below, where the time evolution in the characteristic variables is shown in the first plot, and the time evolution of the velocity is shown in the second plot.",
"acoustics_demos.char_solution_interactive()",
"In the live notebook, you can advance the above solutions in time and select which of the two characteristic variables to display.\nNotice how in the characteristic variables $w$ (plotted on the left), each part of the solution simply advects (translates) since each of the characteristics variables simply obeys an uncoupled advection equation.\nThe Riemann problem\nNow that we know how to solve the Cauchy problem, solution of the Riemann problem is merely a special case. We have the special initial data\n\\begin{align}\nq(x,0) = \\begin{cases}\nq_\\ell & \\text{if } x \\le 0, \\\nq_r & \\text{if } x > 0.\n\\end{cases}\n\\end{align}\nWe can proceed as before, by decomposing into characteristic components, advecting, and then transforming back. But since we know the solution will be constant almost everywhere, it's even simpler to just decompose the jump $\\Delta q = q_r - q_\\ell$ in terms of the characteristic variables, and advect the two resulting jumps $\\Delta w_1$ and $\\Delta w_2$:\n\\begin{align}\n\\Delta q = \\Delta w_1 r_1 + \\Delta w_2 r_2,\n\\end{align}\nSince $R\\Delta w = \\Delta q$, we have\n\\begin{align}\n\\Delta w_1 = \\frac{-\\Delta p + Z\\Delta u}{2Z}, \\ \\ \\ \\ \\ \\\n\\Delta w_2 = \\frac{\\Delta p + Z\\Delta u}{2Z}.\n\\end{align}\nThus the solution has the structure depicted below.",
"Image('figures/acoustics_xt_plane.png', width=350)",
"The three constant states are related by the jumps: \n\\begin{align}\nq_m = q_\\ell + \\Delta w_1 r_1 = q_r - \\Delta w_2 r_2.\n\\label{eq:acussol}\n\\end{align}\nThe jumps in pressure and velocity for each propagating discontinuity are related in a particular way, since each jump is a multiple of one of the eigenvectors of $A$. More generally, the eigenvectors of the coefficient matrix of a linear hyperbolic system reveal the relation between jumps in the conserved variables across a wave propagating with speed given by the corresponding eigenvalue. For acoustics, the impedance is the physical parameter that determines this relation.\nA simple solution\nHere we provide some very simple initial data, and determine the Riemann solution, which consists of three states $q_\\ell$, $q_m$ and $q_r$, and the speeds of the two waves.",
"# Initial data for Riemann problem\nrho = 0.5 # density\nbulk = 2. # bulk modulus\nql = np.array([3,2]) # Left state\nqr = np.array([3,-2]) # Right state\n# Calculated parameters\nc = np.sqrt(bulk/rho) # calculate sound speed\nZ = np.sqrt(bulk*rho) # calculate impedance\nprint(\"With density rho = %g, bulk modulus K = %g\" \\\n % (rho,bulk))\nprint(\"We compute: sound speed c = %g, impedance Z = %g \\n\" \\\n % (c,Z))\n\n# Call and print Riemann solution\nstates, speeds, reval = \\\n acoustics.exact_riemann_solution(ql ,qr, [rho, bulk])\n \nprint(\"The states ql, qm and qr are: \")\nprint(states, \"\\n\")\nprint(\"The left and right wave speeds are:\")\nprint(speeds)",
"One way to visualize the Riemann solution for a system of two equations is by looking at the $p-u$ phase plane. In the figure below, we show the two initial conditions of the Riemann problem $q_\\ell$ and $q_r$ as points in the phase space; the lines passing through these points correspond to the eigenvectors, $r_1$ and $r_2$. \nThe middle state $q_m$ is simply the intersection of the line in the direction $r_1$ passing through $q_\\ell$ and the line in the direction $r_2$ passing through $q_r$. The structure of this solution becomes evident from equation (\\ref{eq:acussol}). The dashed lines correspond to a line in the direction $r_2$ passing through $q_\\ell$ and a line in the direction $r_1$ passing through $q_r$; these also intersect, but cannot represent a Riemann solution since they would involve a wave going to the right but connected to $q_\\ell$ and a wave going to the left but connected to $q_r$.\nIn the live notebook, the cell below allows you to interactively adjust the initial conditions the material parameters as well as the plot range, so that you can explore how the structure of the solution in the phase plane is affected by these quantities.",
"acoustics_demos.interactive_phase_plane(ql,qr,rho,bulk)",
"Note that the eigenvectors are given in terms of the impedance $Z$, which depends on the density $\\rho$\nand the bulk modulus $K$. Therefore, when $\\rho$ and $K$ are modified the eigenvectors change and consequently the slope of the lines changes as well.\nExamples\nWe will use the exact solver in exact_solvers/acoustics.py and the functions in exact_solvers/acoustics_demos.py to plot interactive solutions for a few examples.\nShock tube\nIf there is a jump in pressure and the velocity is zero in both initial states (the shock tube problem) then the resulting Riemann solution consists of pressure jumps of equal magnitude propagating in both directions, with equal and opposite jumps in velocity. This is the linearized version of what is known in fluid dynamics as a shock tube problem, since it emulates what would happen inside a shock tube, where the air is initially stationary and a separate chamber at the end of the tube is pressurized and then released.",
"ql = np.array([5,0])\nqr = np.array([1,0])\nrho = 1.0\nbulk = 4.0\nacoustics_demos.riemann_plot_pplane(ql,qr,rho,bulk)",
"We can also observe the structure of the solution in the phase plane. In the second plot, we show the structure of the solution in the phase plane.\nReflection from a wall\nAs another example, suppose the pressure is initially the same in the left and right states, while the velocities are non-zero with $u_r = -u_\\ell > 0$. The flow is converging from both sides and because of the symmetry of the initial states, the result is a middle state $q_m$ in which the velocity is 0 (and the pressure is higher than on either side).",
"ql = np.array([2,1]) \nqr = np.array([2,-1]) \nrho = 1.0\nbulk = 1.5\nacoustics_demos.riemann_plot_pplane(ql,qr,rho,bulk)",
"We again show the Riemann solution in space and in the phase plane, where the symmetry is also evident.\nDisregarding the left half of the domain ($x<0$), one can view this as a solution to the problem of an acoustic wave impacting a solid wall. The result is a reflected wave that moves away from the wall; notice that the velocity vanishes at the wall, as it must.\nThis type of Riemann solution is important when simulating waves in a domain with reflecting boundaries. The reflecting condition can be imposed by the use of fictitious ghost cells that lie just outside the domain and whose state is set by reflecting the interior solution with the symmetry just described (equal pressure, negated velocity).\nIn reality, at a material boundary only part of a wave is reflected while the rest is transmitted. This can be accounted for by including the spatial variation in $\\rho, K$ and solving a variable-coefficient Riemann problem.\nInteractive phase plane with solution at fixed time\nFor a more general exploration of the solution to the acoustics equation, we now show an interactive solution of the acoustics equations. The initial states $q_\\ell$ and $q_r$ can be modified by dragging and dropping the points in the phase plane plot (in the notebook version, or on this webpage).",
"IFrame(src='phase_plane/acoustics_small_notitle.html', \n width=980, height=340)",
"Gaussian initial condition\nIn this example, we use the first example described near the beginning of this chapter. The initial condition is a Gaussian pressure perturbation, while the initial velocity is zero. Reflecting boundary conditions are imposed at $x=-2$ and $x=2$, so the wave is fully reflected back, and we can see how it interacts with itself. This animation is produced using a numerical method from PyClaw, and can be viewed in the interactive notebook or on this webpage.",
"anim = acoustics_demos.bump_animation(numframes = 50)\nHTML(anim)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
erikdrysdale/erikdrysdale.github.io
|
_rmd/extra_power/winners_curse.ipynb
|
mit
|
[
"A winner's curse adjustment for a single test statistic\nBackground\nThe naive application of applied statistics for conducting inference in scientific research is one of the primary culprits in the reproducability crisis. Even excluding cases of scientific misconduct, cited research findings are likely to innaccurate due to 1) the file drawer problem, 2) researchers' degrees of freedom and 3) underpowered statistical designs. To address the problem of publication bias some journals are now accepting the findings regardless of their statistical signifance. More than 200 journals now use the Open Science Foundation's pre-registration framework to help improve reproducability and reduce the garden of forking paths problem.\nOn ongoing challenge in many disciplines is to improve the power of research designs. For example in the empirical economics literature, the median power is estimated to be only 18%. In biomedical research there is at least more attention paid to power, but due to financial incentives (the NIH requires power > 80% for successful grants) the estimates of power are likely to be exagerated. Most researchers believe that statistical power is important because it ensures that the resources used to carry out research are not wasted. But in addition to pecuniary considerations, the power of a test is intimately linked to the reproducability crisis because studies with low power have inflated effect sizes. \nOne consequence of using frequentist statistical tools to conduct scientific inference is that all statistically significant findings are biased even if the test itself is unbiased. This is because statistically significant findings have to be a certain number of standard deviations away from zero, and concomitantly certain values of the test are never observed (in the statistically significant space). The power-bias relationship helps to explain the Proteus phenomenon whereby follow-up studies tend to have a smaller effect size. The magnitude of this bias is known as the Winner's Curse, and several adjustment procedures have been proposed in the context of multiple tests.[[^1]] Is is especially relevant in genomics for the polygenic risk scores developed with genome-wide association studies. \nIn this post I will review briefly review the frequentist paradigm that is used to conduct scientic inference and demonstrate how the probability of type-1 and type-2 errors are related to biased effect sizes. In the final section of the post I propose a Winner's Curse adjustment (WCA) procedure for a single test statistic. I am not aware of such a method being proposed before, but if it has been please contact me so that I can properly credit alternative methods.[[^2]] \nIn summary this post will provide to explicit formulas for: \n\nThe relationship between power and effect size bias (equation X).\nAn effect size adjuster for single test statistic results (equation Y).\n\nIn the sections below the examples and math will be kept as simple as possible. All null/alternative hypothesis will be assumed to come from a Gaussian distribution. Variances will be fixed and known. All hypothesis will be one-sided hypothesis. Each of these assumptions can be relaxed without any change to the implications of the examples below, but do require a bit more math. Also note that $\\Phi$ refers to the standard normal CDF and its quantile function $\\Phi^{-1}$.",
"# modules used in the rest of the post\nfrom scipy.stats import norm, truncnorm\nimport numpy as np\nfrom numpy.random import randn\nfrom scipy.optimize import minimize_scalar\nimport plotnine\nfrom plotnine import *\nimport pandas as pd",
"(1) Review of Type-I and Type-II errors\nImagine a simple hypothesis test: to determine whether one gaussian distribution, with a known variance, has a larger mean than another: $y_{i1} \\sim N(\\mu_A, \\sigma^2/2)$ and $y_{i2} \\sim N(\\mu_B, \\sigma^2/2)$, then $\\bar y_i \\sim N(\\mu_i, \\sigma^2/n)$ and $\\bar d = \\bar y_1 - \\bar y_2 \\sim N(\\mu_A, \\sigma^2/n)$. The sample mean (difference) will have a variance of $\\sigma^2/n$.[[^3]]\n$$\n\\begin{align}\n\\bar d &\\sim N(d, \\sigma^2/n) \\\n\\bar z = \\frac{\\bar d - d}{\\sigma / \\sqrt{n}} &\\sim N(0, 1)\n\\end{align}\n$$\nThe null hypothesis is that: $H_0: \\mu_A \\leq \\mu_B$, with the alternative hypothoesis that $\\mu_A > \\mu_B$ (equavilent to $d \\leq 0$ and $d >0$, respectively). Recall that in frequentist statistical paradigm, the goal is find a rejection region of the test statistic ($z$) that bounds the type-I error rate and maximizes power. When the null is true ($d\\leq 0$) then setting $c_\\alpha = \\Phi_{1-\\alpha}^{-1}$, and rejecting the null when $\\bar z > c_\\alpha$ will obtain a type-I error rate of exactly $\\alpha$.[[^4]]\n$$\n\\begin{align}\nP(\\bar z > c) &\\leq \\alpha \\\n1-\\Phi ( c ) &\\leq \\alpha \\\nc &\\geq \\Phi^{-1}(1-\\alpha)\n\\end{align}\n$$\n[^1]: Note that the Winner's Curse in economics is a different but related phenomenon.\n[^2]: There is an approach with uses a simple MLE to invert the observed mean of a truncated Gaussian, but as I discuss below this approach has signficant drawbacks when the true effect size is zero or small.\n[^3]: If the variances were unknown, then the difference in means would have a student-t distribution with slightly fatter tails. \n[^4]: If $c > c_\\alpha$, then the type-I error rate would be lower (which is good), but, the power would also be lower in the event that the null were false. It is therefore desirable the rejection region obtain the exactly desired type-I error rate, and then the statistician can decide what type-I level to choose.",
"# EXAMPLE OF TYPE-I ERROR RATE\nalpha, n, sig2, seed, nsim = 0.05, 18, 2, 1234, 50000\nc_alpha = norm.ppf(1-alpha)\nnp.random.seed(1234)\nerr1 = np.mean([ (np.mean(randn(n)) - np.mean(randn(n)))/np.sqrt(2/n) > c_alpha for i in range(nsim)])\nprint('Empirical type-I error rate: %0.3f\\nExpected type-I error rate: %0.3f' % (err1, alpha))",
"In the event that the null is not true $(d > 0)$ then power of the test will depend four things:\n\nThe magnitute of the effect (the bigger the value of $d$ the better)\nThe number of samples (the more the better)\nThe type-I error rate (the larger the better)\nThe magnitute of the variance (the smaller the better)\n\nDefining the empirical test statistic as $\\bar z$, the type-II error rate is: \n$$\n\\begin{align}\n1 - \\beta &= P( \\bar z > c_\\alpha) \\\n1 - \\beta &= P\\Bigg( \\frac{\\bar d - 0}{\\sigma/\\sqrt{n}} > c_\\alpha \\Bigg) \\\n1 - \\beta &= P( z > c_\\alpha - \\sqrt{n} \\cdot d / \\sigma ) \\\n\\beta &= \\Phi\\Bigg(c_\\alpha - \\frac{\\sqrt{n} \\cdot d}{\\sigma} \\Bigg)\n\\end{align}\n$$",
"# EXAMPLE OF TYPE-II ERROR RATE\nd = 0.75\nbeta = norm.cdf(c_alpha - d / np.sqrt(2/n))\nerr2 = np.mean([ (np.mean(d + randn(n)) - np.mean(randn(n)))/np.sqrt(sig2 / n) < c_alpha for i in range(nsim)])\nprint('Empirical type-II error rate: %0.3f\\nExpected type-II error rate: %0.3f' % (err2, beta))",
"(2) Relationship between power and effect size bias\nMost practioneers of applied statistics will be familiar with type-I and type-II error rates and will use these to interpret the results of studies and design trials. In most disciplines it is common that only statistically significant results (i.e. those ones that that reject the null) are analyzed. In research domains where there are many hypothesis tests under consideration (such as genomics), it is required that multiple testing adjustments be made so that the number of aggregate false discoveries is bounded. Note that such adjustments are equivalent to increasing the value of $c_\\alpha$ and will lower the power of eaah test.\nUnfortunately few researchers in my experience understand the relationship between power and effect size bias. Even though rigourously pre-specified research designs will likely have an accurate number of \"true discoveries\", the distribution of significanct effect sizes will almost certaintly be overstated. An example will help to illustrate. Returning to the difference in Gaussian sample means, the distribution of statistically significant means will follow the following conditional distribution:\n$$\n\\begin{align}\n\\bar d^ &= \\bar d | \\bar z > c_\\alpha \\\n&= \\bar d | \\bar d > \\sigma \\cdot c_\\alpha / \\sqrt{n}\n\\end{align*}\n$$\nNotice that the smallest observable and statistically significant mean difference will be at least $c_\\alpha$ root-$n$ normalized standard deviations above zero. Because $\\bar d$ has a Gaussian distribution, $\\bar d^*$ has a truncated Gaussian distribution:\n$$\n\\begin{align}\n\\bar d^ &\\sim TN(\\mu, \\sigma^2, l, u) \\\n&\\sim TN(d, \\sigma^2 / n, \\sigma \\cdot c_\\alpha / \\sqrt{n}, \\infty) \\\na &= \\frac{l - \\mu}{\\sigma} = c_\\alpha - \\sqrt{n}\\cdot d / \\sigma = \\Phi^{-1}(\\beta) \\\nE[\\bar d^] &= d + \\frac{\\phi(a)}{1 - \\Phi(a)} \\cdot (\\sigma/\\sqrt{n}) \\\n&= d + \\underbrace{\\frac{\\sigma \\cdot \\phi(\\Phi_\\beta^{-1})}{\\sqrt{n}(1 - \\beta)}}_{\\text{bias}}\n\\end{align}\n$$\nThe bias of the truncated Gaussian is shown to be related to a handful of statistical parameters including the power of the test! The bias can also be expressed as a ratio of the mean of the statistically significant effect size to the true one, what I will call the bias ratio,\n$$\n\\begin{align}\n\\text{R}(\\beta;n,d,\\sigma) &= \\frac{E[\\bar d^]}{d} = 1 + \\frac{\\sigma \\cdot \\phi(\\Phi_\\beta^{-1})}{d\\cdot\\sqrt{n}\\cdot(1 - \\beta)}\n\\end{align*}\n$$\nwhere $\\beta = f(n,d,\\sigma)$, and $R$ is ultimately a function of the sample size, true effect size, and measurement error. The simulations below show the relationship between the bias ratio and power for different effect and sample sample sizes.",
"def power_fun(alpha, n, mu, sig2):\n thresh = norm.ppf(1-alpha)\n t2_err = norm.cdf(thresh - mu/np.sqrt(sig2/n))\n return 1 - t2_err\n return \n\ndef bias_ratio(alpha, n, mu, sig2):\n power = power_fun(alpha=alpha, n=n, mu=d, sig2=sig2)\n num = np.sqrt(sig2) * norm.pdf(norm.ppf(1-power))\n den = mu * np.sqrt(n) * power\n return 1 + num / den\n\n# SIMULATE ONE EXAMPLE #\nnp.random.seed(seed)\nnsim = 125000\nn, d = 16, 0.5\nholder = np.zeros([nsim, 2])\nfor ii in range(nsim):\n y1, y2 = randn(n) + d, randn(n)\n dbar = y1.mean() - y2.mean()\n zbar = dbar / np.sqrt(sig2 / n)\n holder[ii] = [dbar, zbar]\n\nemp_power = np.mean(holder[:,1] > c_alpha)\ntheory_power = power_fun(alpha=alpha, n=n, mu=0.5, sig2=sig2)\nemp_ratio = holder[:,0][holder[:,1] > c_alpha].mean() / d\ntheory_ratio = bias_ratio(alpha, n, d, sig2)\nprint('Empirical power: %0.2f, theoretical power: %0.2f' % (emp_power, theory_power))\nprint('Empirical bias-ratio: %0.2f, theoretical power: %0.2f' % (emp_ratio, theory_ratio))\n\n# CALCULATE CLOSED-FORM RATIO #\nn_seq = np.arange(1,11,1)**2\nd_seq = np.linspace(0.01,1,len(n_seq))\n\ndf_ratio = pd.DataFrame(np.array(np.meshgrid(n_seq, d_seq)).reshape([2,len(n_seq)*len(d_seq)]).T, columns=['n','d'])\ndf_ratio.n = df_ratio.n.astype(int)\ndf_ratio = df_ratio.assign(ratio = lambda x: bias_ratio(alpha, x.n, x.d, sig2),\n power = lambda x: power_fun(alpha, x.n, x.d, sig2))\n\ngg_ratio = (ggplot(df_ratio, aes(x='power',y='np.log(ratio)',color='n')) + \n geom_point() + theme_bw() + \n ggtitle('Figure 1: Relationship between power and effect size bias') + \n labs(x='Power',y='log(Bias Ratio)') + \n scale_color_gradient2(low='blue',mid='yellow',high='red',midpoint=50,\n name='Sample Size'))\nplotnine.options.figure_size = (5,3.5)\ngg_ratio",
"Figure 1 shows that while there is not a one-to-one relationship between the power and the bias ratio, generally speaking the higher the power the lower the ratio. The variation in low powered tests is driven by the sample sizes. Tests that have low power with a large sample sizes but small effect sizes will have a much smaller bias than equivalently powered tests with large effect sizes and small sample sizes. The tables below highlight this fact by showing the range in power for tests with similar bias ratios, the range in bias ratios for similarly powered tests.",
"np.round(df_ratio[(df_ratio.ratio > 1.3) & (df_ratio.ratio < 1.4)].sort_values('power').head(),2)\n\nnp.round(df_ratio[(df_ratio.power > 0.45) & (df_ratio.power < 0.52)].sort_values('ratio').head(),2)",
"(3) Why estimating the bias of statistically significant effects is hard!\nIf the true effect size were known, then it would be possible to explicitely calculate the bias term. Unfortunately this parameter is never known in the real world. If there happened to be multiple draws from the same hypothesis then an estimate of the true mean could be found. With multiple draws, there will be an observed distribution of $\\bar d^$ so that the empirical mean $\\hat{\\bar d}^$ could be used by optimization methods to estimate $d$ using the formula for the mean of a truncated Gaussian.\n$$\n\\begin{align}\nd^ &= \\arg\\min_d \\hspace{2mm} \\Bigg[ \\hat{\\bar d}^ - \\Bigg( d + \\frac{\\phi(c_\\alpha-\\sqrt{n}\\cdot d/\\sigma)}{1 - \\Phi(c_\\alpha-\\sqrt{n}\\cdot d/\\sigma)} \\cdot (\\sigma/\\sqrt{n}) \\Bigg) \\Bigg]^2\n\\end{align}\n$$\nThe simulations below show that with enough hypothesis rejections, the true value of $d$ could be determined. However if the null could be sampled multiple times then the exact value of $d$ could be determined by just looking at $\\bar d$! The code is merely to highlight the principle.",
"def mu_trunc(mu_true, alpha, n, sig2):\n sig = np.sqrt(sig2 / n)\n a = norm.ppf(1-alpha) - mu_true / sig\n return mu_true + norm.pdf(a)/(1-norm.cdf(a)) * sig\n\ndef mu_diff(mu_true, mu_star, alpha, n, sig2):\n diff = mu_star - mu_trunc(mu_true, alpha, n, sig2)\n return diff**2\n\ndef mu_find(mu_star, alpha, n, sig2):\n hat = minimize_scalar(fun=mu_diff,args=(mu_star, alpha, n, sig2),method='brent').x\n return hat\n\nn = 16\nnsim = 100000\n\nnp.random.seed(seed)\nd_seq = np.round(np.linspace(-1,2,7),1)\nres = np.zeros(len(d_seq))\nfor jj, d in enumerate(d_seq):\n holder = np.zeros([nsim,2])\n # Generate from truncated normal\n dbar_samp = truncnorm.rvs(a=c_alpha-d/np.sqrt(sig2/n),b=np.infty,loc=d,scale=np.sqrt(sig2/n),size=nsim,random_state=seed)\n z_samp = dbar_samp / np.sqrt(sig2/n)\n res[jj] = mu_find(dbar_samp.mean(), alpha, n, sig2)\n\ndf_res = pd.DataFrame({'estimate':res, 'actual':d_seq})\n\nplotnine.options.figure_size = (5.5, 3.5)\ngg_res = (ggplot(df_res, aes(y='estimate',x='actual')) + geom_point() + \n theme_bw() + labs(y='Estimate',x='Actual') + \n geom_abline(intercept=0,slope=1,color='blue') + \n ggtitle('Figure 2: Unbiased estimate of true mean possible for repeated samples'))\ngg_res",
"But if multiple samples are unavailable to estimate $\\hat{\\bar d}^$, then can the value of $d$ ever be estimated? A naive reproach using only a single value to find $d^$ from the equation above yields negative estimates when $\\mu \\approx 0$ because many values below the median of the truncated normal with a small mean have will match a large and negative mean for another truncated normal. Figures 3A and 3B show this asymmetric phenomenon.",
"n = 25\nsig, rn = np.sqrt(sig2), np.sqrt(n)\nd_seq = np.linspace(-10,2,201)\ndf1 = pd.DataFrame({'dstar':d_seq,'d':[d + norm.pdf(c_alpha - rn*d/sig) / norm.cdf(rn*d/sig - c_alpha) * (sig/rn) for d in d_seq]})\nsample = truncnorm.rvs(c_alpha, np.infty, loc=0, scale=sig/rn, size=1000, random_state=seed)\ndf2 = pd.DataFrame({'d':sample,'tt':'dist'})\nplotnine.options.figure_size = (5,3.5)\nplt1 = (ggplot(df1,aes(y='dstar',x='d')) + geom_point() + theme_bw() + \n geom_vline(xintercept=c_alpha*sig/rn,color='blue') + \n labs(x='Observed mean',y='Estimate of d') + \n ggtitle('Figure 3A: Negative bias in ML estimate'))\nplt1\n\nfig2 = (ggplot(df1,aes(x='d')) + theme_bw() +\n geom_histogram(fill='grey',color='blue',bins=30) +\n labs(x='Observed value',y='Frequency') + \n ggtitle('Figure 3B: Distribution under d=0'))\nfig2",
"(4) Approaches to de-biasing single-test statistic results\nA conversative method to ensure that $E[ \\bar d^* -d ] \\leq 0$ when $d\\geq 0$ is to subtract off the bias when the null is zero: $(\\sigma \\cdot \\phi(c_\\alpha)) / (\\sqrt{n}\\cdot\\Phi(-c_\\alpha))$. The problem with this approach is that for true effect ($d>0$), the bias estimate will be too large and the estimate of the true effect will actually be too small as Figure 4 shows.",
"d_seq = np.linspace(-1,2,31)\nbias_d0 = norm.pdf(c_alpha)/norm.cdf(-c_alpha)*np.sqrt(sig2/n)\ndf_bias = pd.DataFrame({'d':d_seq,'deflated':[mu_trunc(dd,alpha,n,sig2)-bias_d0 for dd in d_seq]})\n\nplotnine.options.figure_size = (5,3.5)\ngg_bias1 = (ggplot(df_bias,aes(x='d',y='deflated')) + theme_bw() + \n geom_point() + labs(x='True effect',y='Deflated effect') + \n geom_abline(intercept=0,slope=1,color='blue') + \n scale_x_continuous(limits=[min(d_seq),max(d_seq)]) + \n scale_y_continuous(limits=[min(d_seq),max(d_seq)]) + \n ggtitle('Figure 4: Naive deflation leads to large bias'))\ngg_bias1",
"A better approach I have devised is to weight the statistically significant observation by where it falls in the cdf of truncated Gaussian for $d=0$. When $d>0$ most $\\bar d^*$ will be above this range and receive little penatly, whereas for values of $d \\approx 0$ they will tend to receive a stronger deflation.\n$$\n\\begin{align}\nb_0 &= \\frac{\\sigma}{\\sqrt{n}} \\frac{\\phi(c_\\alpha)}{\\Phi(-c_\\alpha)} = \\text{bias}(\\bar d^ | d=0) \\\nd^ &= \\bar d^ - 2\\cdot[1 - F_{\\bar d>c_\\alpha\\sigma/\\sqrt{n}}(\\bar d^|d=0)] \\cdot b_0\n\\end{align}\n$$\nThe simulations below implement the deflation procedure suggested by equation Y for a single point estimate for different sample and effect sizes.",
"nsim = 10000\n\ndi_cn = {'index':'tt','25%':'lb','75%':'ub','mean':'mu'}\nd_seq = np.linspace(-0.5,1.5,21)\nn_seq = [16, 25, 100, 250]\n\nholder = []\nfor n in n_seq:\n bias_d0 = norm.pdf(c_alpha)/norm.cdf(-c_alpha)*np.sqrt(sig2/n)\n dist_d0 = truncnorm(a=c_alpha,b=np.infty,loc=0,scale=np.sqrt(sig2/n))\n sample_d0 = dist_d0.rvs(size=nsim, random_state=seed)\n w0 = dist_d0.pdf(sample_d0).mean()\n sim = []\n for ii, d in enumerate(d_seq):\n dist = truncnorm(a=c_alpha-d/np.sqrt(sig2/n),b=np.infty,loc=d,scale=np.sqrt(sig2/n))\n sample = dist.rvs(size=nsim, random_state=seed)\n deflator = 2*(1-dist_d0.cdf(sample))*bias_d0\n # deflator = dist_d0.pdf(sample)*bias_d0 / w0\n d_adj = sample - deflator\n mat = pd.DataFrame({'adj':d_adj,'raw':sample}).describe()[1:].T.reset_index().rename(columns=di_cn)[list(di_cn.values())].assign(d=d,n=n)\n sim.append(mat)\n holder.append(pd.concat(sim))\ndf_defl = pd.concat(holder)\n\nplotnine.options.figure_size = (9,6)\ngg_bias2 = (ggplot(df_defl,aes(x='d',y='mu',color='tt')) + theme_bw() + \n geom_linerange(aes(ymin='lb',ymax='ub',color='tt')) + \n geom_point() + labs(x='True effect',y='Observed statistically significant effect') + \n geom_abline(intercept=0,slope=1,color='black') + \n geom_vline(xintercept=0, linetype='--') + \n scale_x_continuous(limits=[-0.75,1.8]) + \n scale_y_continuous(limits=[-0.75,1.8]) + \n facet_wrap('~n', labeller=label_both) + \n scale_color_discrete(name='Type',labels=['Deflated','Observed']) + \n ggtitle('Figure 5: Deflating by the cdf of d=0 achieves better results\\nVertical lines show IQR'))\ngg_bias2",
"Figure 5 shows that the bias for values of $d \\geq 0$ is now conservative and limited. Especially for larger samples, a large and otherwise highly significant effect will be brough much closer to its true value. The primary drawback to using the WCA from equation Y is that it adds further noise to the point estimate. While this is statistically problematic, from an epistemological viewpoint it could be useful to reduce the confidence of reserachers in their \"significant\" findings that are unlikely to replicate anyways. \n(5) Summary\nWCAs for single test results are much more challenging that those for repeated test measurements due to a lack of measured information. I have proposed a simple formula (Y) that can be used on all statistically significant results requiring only the observed effect size, type-I error rate, sample size, and noise estimate. For small to medium sample sizes this deflator leads to additional noise in the point estimate, but may have a humbling effect of researcher confidence. While it has no doubt being expressed before, I also derive the analytical relationship between power and effect size bias (X). \nAs a final motivating example consider the well-regarded paper Labor Market Returns to Early Childhood Stimulation by Gertler et. al (2013) that even included a Nobel-prize winning economist in its author list. They claim to show that an educational intervention using randomized control trial improved long-run income earnings by 42%. This is a huge increase. \n\nThese findings show that psychosocial stimulation early in childhood in disadvantaged settings can have substantial effects on labormarket outcomes and reduce later life inequality.\n\nNotice that the authors state: \"The results ... show that the impact on earnings remains large and statistically significant\". As this post has discussed, it is quite likely that they should have said these results are statistically significant because they were large. \nTable 3 in the paper shows a p-value for 0.01 a sample size of 105, implying that $1 - \\Phi(0.42/(1.9/\\sqrt{105})) \\approx 0.01$, with a z-score of around 2.7. The code below shows that if there were no effect, then the average statistically significant effect that would be observed would be 0.372. However because the result (42%) is in the 80th percentile of such a distribution, the adjustment procedure suggests removing 15% off of the point estimate. Using a WCA adjustment for this paper reduces the findings to 27%, which is still quite high and respectable. I hope this post will help to spread the word about the importance of understanding and addressing the winner's curse in applied statistics research.",
"alpha = 0.05\nc_alpha = norm.ppf(1-alpha)\ndstar = 0.42\nsig2 = 1.85**2\nn = 105\nbias_d0 = norm.pdf(c_alpha)/norm.cdf(-c_alpha)*np.sqrt(sig2/n)\ndist_d0 = truncnorm(a=c_alpha,b=np.infty,loc=0,scale=np.sqrt(sig2/n))\nadj = 2*(1-dist_d0.cdf(dstar))*bias_d0\n\nprint('Baseline effect: %0.3f, P-value: %0.3f\\nBias when d=0: %0.3f\\nDeflator: %0.3f\\nAdjusted effect: %0.3f' % \n (dstar, 1-norm.cdf(dstar/np.sqrt(sig2/n)),bias_d0, adj, dstar - adj))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ModSimPy
|
notebooks/chap03.ipynb
|
mit
|
[
"Modeling and Simulation in Python\nChapter 3\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International",
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim library\nfrom modsim import *\n\n# set the random number generator\nnp.random.seed(7)",
"More than one State object\nHere's the code from the previous chapter, with two changes:\n\n\nI've added DocStrings that explain what each function does, and what parameters it takes.\n\n\nI've added a parameter named state to the functions so they work with whatever State object we give them, instead of always using bikeshare. That makes it possible to work with more than one State object.",
"def step(state, p1, p2):\n \"\"\"Simulate one minute of time.\n \n state: bikeshare State object\n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n \"\"\"\n if flip(p1):\n bike_to_wellesley(state)\n \n if flip(p2):\n bike_to_olin(state)\n \ndef bike_to_wellesley(state):\n \"\"\"Move one bike from Olin to Wellesley.\n \n state: bikeshare State object\n \"\"\"\n state.olin -= 1\n state.wellesley += 1\n \ndef bike_to_olin(state):\n \"\"\"Move one bike from Wellesley to Olin.\n \n state: bikeshare State object\n \"\"\"\n state.wellesley -= 1\n state.olin += 1\n \ndef decorate_bikeshare():\n \"\"\"Add a title and label the axes.\"\"\"\n decorate(title='Olin-Wellesley Bikeshare',\n xlabel='Time step (min)', \n ylabel='Number of bikes')",
"And here's run_simulation, which is a solution to the exercise at the end of the previous notebook.",
"def run_simulation(state, p1, p2, num_steps):\n \"\"\"Simulate the given number of time steps.\n \n state: State object\n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n num_steps: number of time steps\n \"\"\"\n results = TimeSeries() \n for i in range(num_steps):\n step(state, p1, p2)\n results[i] = state.olin\n \n plot(results, label='Olin')",
"Now we can create more than one State object:",
"bikeshare1 = State(olin=10, wellesley=2)\n\nbikeshare2 = State(olin=2, wellesley=10)",
"Whenever we call a function, we indicate which State object to work with:",
"bike_to_olin(bikeshare1)\n\nbike_to_wellesley(bikeshare2)",
"And you can confirm that the different objects are getting updated independently:",
"bikeshare1\n\nbikeshare2",
"Negative bikes\nIn the code we have so far, the number of bikes at one of the locations can go negative, and the number of bikes at the other location can exceed the actual number of bikes in the system.\nIf you run this simulation a few times, it happens often.",
"bikeshare = State(olin=10, wellesley=2)\nrun_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"We can fix this problem using the return statement to exit the function early if an update would cause negative bikes.",
"def bike_to_wellesley(state):\n \"\"\"Move one bike from Olin to Wellesley.\n \n state: bikeshare State object\n \"\"\"\n if state.olin == 0:\n return\n state.olin -= 1\n state.wellesley += 1\n \ndef bike_to_olin(state):\n \"\"\"Move one bike from Wellesley to Olin.\n \n state: bikeshare State object\n \"\"\"\n if state.wellesley == 0:\n return\n state.wellesley -= 1\n state.olin += 1",
"Now if you run the simulation again, it should behave.",
"bikeshare = State(olin=10, wellesley=2)\nrun_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"Comparison operators\nThe if statements in the previous section used the comparison operator ==. The other comparison operators are listed in the book.\nIt is easy to confuse the comparison operator == with the assignment operator =.\nRemember that = creates a variable or gives an existing variable a new value.",
"x = 5",
"Whereas == compares two values and returns True if they are equal.",
"x == 5",
"You can use == in an if statement.",
"if x == 5:\n print('yes, x is 5')",
"But if you use = in an if statement, you get an error.",
"# If you remove the # from the if statement and run it, you'll get\n# SyntaxError: invalid syntax\n\n#if x = 5:\n# print('yes, x is 5')",
"Exercise: Add an else clause to the if statement above, and print an appropriate message.\nReplace the == operator with one or two of the other comparison operators, and confirm they do what you expect.\nMetrics\nNow that we have a working simulation, we'll use it to evaluate alternative designs and see how good or bad they are. The metric we'll use is the number of customers who arrive and find no bikes available, which might indicate a design problem.\nFirst we'll make a new State object that creates and initializes additional state variables to keep track of the metrics.",
"bikeshare = State(olin=10, wellesley=2, \n olin_empty=0, wellesley_empty=0)",
"Next we need versions of bike_to_wellesley and bike_to_olin that update the metrics.",
"def bike_to_wellesley(state):\n \"\"\"Move one bike from Olin to Wellesley.\n \n state: bikeshare State object\n \"\"\"\n if state.olin == 0:\n state.olin_empty += 1\n return\n state.olin -= 1\n state.wellesley += 1\n \ndef bike_to_olin(state):\n \"\"\"Move one bike from Wellesley to Olin.\n \n state: bikeshare State object\n \"\"\"\n if state.wellesley == 0:\n state.wellesley_empty += 1\n return\n state.wellesley -= 1\n state.olin += 1",
"Now when we run a simulation, it keeps track of unhappy customers.",
"run_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"After the simulation, we can print the number of unhappy customers at each location.",
"bikeshare.olin_empty\n\nbikeshare.wellesley_empty",
"Exercises\nExercise: As another metric, we might be interested in the time until the first customer arrives and doesn't find a bike. To make that work, we have to add a \"clock\" to keep track of how many time steps have elapsed:\n\n\nCreate a new State object with an additional state variable, clock, initialized to 0. \n\n\nWrite a modified version of step that adds one to the clock each time it is invoked.\n\n\nTest your code by running the simulation and check the value of clock at the end.",
"bikeshare = State(olin=10, wellesley=2, \n olin_empty=0, wellesley_empty=0,\n clock=0)\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here",
"Exercise: Continuing the previous exercise, let's record the time when the first customer arrives and doesn't find a bike.\n\n\nCreate a new State object with an additional state variable, t_first_empty, initialized to -1 as a special value to indicate that it has not been set. \n\n\nWrite a modified version of step that checks whetherolin_empty and wellesley_empty are 0. If not, it should set t_first_empty to clock (but only if t_first_empty has not already been set).\n\n\nTest your code by running the simulation and printing the values of olin_empty, wellesley_empty, and t_first_empty at the end.",
"# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
dev/_downloads/8fc13fc21872d78f6b3678c81e192a76/decoding_xdawn_eeg.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"XDAWN Decoding From EEG data\nERP decoding with Xdawn :footcite:RivetEtAl2009,RivetEtAl2011. For each event\ntype, a set of spatial Xdawn filters are trained and applied on the signal.\nChannels are concatenated and rescaled to create features vectors that will be\nfed into a logistic regression.",
"# Authors: Alexandre Barachant <alexandre.barachant@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import classification_report, confusion_matrix\nfrom sklearn.preprocessing import MinMaxScaler\n\nfrom mne import io, pick_types, read_events, Epochs, EvokedArray, create_info\nfrom mne.datasets import sample\nfrom mne.preprocessing import Xdawn\nfrom mne.decoding import Vectorizer\n\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters and read data",
"meg_path = data_path / 'MEG' / 'sample'\nraw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'\nevent_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.1, 0.3\nevent_id = {'Auditory/Left': 1, 'Auditory/Right': 2,\n 'Visual/Left': 3, 'Visual/Right': 4}\nn_filter = 3\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 20, fir_design='firwin')\nevents = read_events(event_fname)\n\npicks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,\n exclude='bads')\n\nepochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,\n picks=picks, baseline=None, preload=True,\n verbose=False)\n\n# Create classification pipeline\nclf = make_pipeline(Xdawn(n_components=n_filter),\n Vectorizer(),\n MinMaxScaler(),\n LogisticRegression(penalty='l1', solver='liblinear',\n multi_class='auto'))\n\n# Get the labels\nlabels = epochs.events[:, -1]\n\n# Cross validator\ncv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)\n\n# Do cross-validation\npreds = np.empty(len(labels))\nfor train, test in cv.split(epochs, labels):\n clf.fit(epochs[train], labels[train])\n preds[test] = clf.predict(epochs[test])\n\n# Classification report\ntarget_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r']\nreport = classification_report(labels, preds, target_names=target_names)\nprint(report)\n\n# Normalized confusion matrix\ncm = confusion_matrix(labels, preds)\ncm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]\n\n# Plot confusion matrix\nfig, ax = plt.subplots(1)\nim = ax.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)\nax.set(title='Normalized Confusion matrix')\nfig.colorbar(im)\ntick_marks = np.arange(len(target_names))\nplt.xticks(tick_marks, target_names, rotation=45)\nplt.yticks(tick_marks, target_names)\nfig.tight_layout()\nax.set(ylabel='True label', xlabel='Predicted label')",
"The patterns_ attribute of a fitted Xdawn instance (here from the last\ncross-validation fold) can be used for visualization.",
"fig, axes = plt.subplots(nrows=len(event_id), ncols=n_filter,\n figsize=(n_filter, len(event_id) * 2))\nfitted_xdawn = clf.steps[0][1]\ninfo = create_info(epochs.ch_names, 1, epochs.get_channel_types())\ninfo.set_montage(epochs.get_montage())\nfor ii, cur_class in enumerate(sorted(event_id)):\n cur_patterns = fitted_xdawn.patterns_[cur_class]\n pattern_evoked = EvokedArray(cur_patterns[:n_filter].T, info, tmin=0)\n pattern_evoked.plot_topomap(\n times=np.arange(n_filter),\n time_format='Component %d' if ii == 0 else '', colorbar=False,\n show_names=False, axes=axes[ii], show=False)\n axes[ii, 0].set(ylabel=cur_class)\nfig.tight_layout(h_pad=1.0, w_pad=1.0, pad=0.1)",
"References\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mayankjohri/LetsExplorePython
|
Section 2 - Advance Python/Chapter S2.01 - Functional Programming/02_04_Comprehensions.ipynb
|
gpl-3.0
|
[
"Comprehensions\nUsing comprehensions is often a way both to make code more compact and to shift our focus from the \"how\" to the \"what\". It is an expression that uses the same keywords as loop and conditional blocks, but inverts their order to focus on the data rather than on the procedure. \nSimply changing the form of expression can often make a surprisingly large difference in how we reason about code and how easy it is to understand. The ternary operator also performs a similar restructuring of our focus, using the same keywords in a different order.\nList Comprehensions\nA way to create a new list from existing list based on defined logic\nUnconditional Compreshensions",
"# Original\ndoubled_numbers = []\nfor n in range(1,12,2):\n doubled_numbers.append(n*2)\n\nprint(doubled_numbers)\n\n#list compreshensions\ndoubled_numbers = [n * 2 for n in range(1,12,2)] # 1 ,3, 5, 7, 9, 11\nprint(doubled_numbers)",
"Conditional Compreshensions",
"doubled_odds = []\n\nfor n in range(1,12):\n if n % 2 == 1:\n doubled_odds.append(n * 2)\nprint(doubled_odds)\n\ndoubled_odds = [n * 2 for n in range(1,12) if n% 2 == 1]\n\nprint(doubled_odds)",
"!!!! Tip !!!! \n\nCopy the variable assignment for our new empty list (line 3)\nCopy the expression that we’ve been append-ing into this new list (line 6)\nCopy the for loop line, excluding the final : (line 4)\nCopy the if statement line, also without the : (line 5)",
"# FROM\nnumbers = range(2,10)\n\ndoubled_odds = []\nfor n in numbers:\n if n % 2 == 1:\n doubled_odds.append(n * 2)\nprint(doubled_odds)\n\n# TO\nnumbers = range(2,10)\n\ndoubled_odds = [n * 2 for n in numbers if n % 2 == 1]\n",
"Nested if statements in for loop",
"l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0]\n\nlst = []\nfor v in l:\n if v == 0 :\n lst.append('Zero')\n else:\n if v % 2 == 0:\n lst.append('even')\n else:\n lst.append('odd')\n \nprint(lst)\n\nlst = [\"zero\" if v == 0 else \"even\" if v%2 == 0 else \"odd\" for v in l]\nprint(lst)\n\nprint(['yes' if v == 1 else 'no' if v == 2 else 'idle' for v in l])\n\ndef flatten_list_new(lst, result=None):\n \"\"\"Flattens a nested list\n >>> flatten_list([ [1, 2, [3, 4] ], [5, 6], 7])\n [1, 2, 3, 4, 5, 6, 7]\n \"\"\"\n if result is None:\n result = []\n# else:\n# result = [x if not isinstance(x, list) else flatten_list_new(x, list) for x in lst]\n# result = [ x if not isinstance(x, list) else isinstance(x, list) for x in lst ]\n# result = [flatten_list_new(x, result) if isinstance(x, list) else x for x in lst ]\n for x in lst:\n if isinstance(x, list):\n flatten_list_new(x, result)\n else:\n result.append(x)\n\n return result\nlst = [[1, 2, [3, [4]] ], [5, 6], 7]\n \nprint(flatten_list_new(lst))\n\nnewlist = []\ninput_list = [1,2, [2,[3]],3,[3,[[4],5]]]\n\ndef convertHetrogenousList(hetroList):\n newlist = []\n if type(hetroList) is int:\n newlist.append(hetroList)\n elif type(hetroList) is list:\n for items in hetroList:\n newlist.extend(convertHetrogenousList(items))\n return newlist\n\nnewlist = convertHetrogenousList(input_list)\nprint(newlist)\n\n### TODO Can we redirect the stdio.out to a list\n\nlst = []\nfor a in range(10):\n if a % 2==0:\n for x in range(a, 10):\n lst.append(x)\n\nprint(lst)\n\nn = 10\nlsts = [x for a in range(n) if a % 2 == 0 for x in range(a, 10)]\n\nprint(lsts)\n\nimport os\n\nhelp(os.walk)\n\n# %%time\nimport os\n\nfile_list = []\n\nfor path, _, files in os.walk(\"/home/mayank/lep\"):\n for f in files:\n if f.endswith(\".py\"):\n file_list.append(os.path.join(path, f))\n\nprint(len(file_list))\n# print(file_list)\n\nfile_list = [os.path.join(path, f) for path, _, files in os.walk(\"/home/mayank/lep\") for f in files if f.endswith(\".py\") ]\nprint(len(file_list))\n\n%%time\n\nimport os\n\nfolder = \"/home/mayank/lep\"\nfile_list = [os.path.join(path, f) \n for path, _, files in os.walk(folder) \n for f in files if f.endswith(\".py\") ]\nprint(len(file_list))\n\n%%time\nrestFiles = []\n\nfor d in os.walk(r\"C:\\apps\"):\n if \"etc\" in d[0]:\n for f in d[2]:\n if f.endswith(\".txt\"):\n restFiles.append(os.path.join(d[0], f))\nprint(len(restFiles))\n\n%%time\nrestFiles = [os.path.join(d[0], f) \n for d in os.walk(r\"C:\\apps\") \n if \"etc\" in d[0]\n for f in d[2] \n if f.endswith(\".txt\")]\nprint(len(restFiles))\n\nlst = [1, 2, [2, 3], 3, [3, [[4], 5]]]\n\nlst = [1, 2, 2, 3, 3, 3, 4, 5]\n\n%%time\n\nmatrix = []\nfor row_idx in range(0, 3):\n itmList = []\n for item_idx in range(0, 3):\n if item_idx == row_idx:\n itmList.append(1)\n else:\n itmList.append(0)\n matrix.append(itmList)\nprint(matrix)\n\nmatrix = [[1 if item_idx == row_idx \n else 0 for item_idx in range(0, 3)] \n for row_idx in range(0, 3) ]\nprint(matrix)\n\nlst = [1,2,34,4,5]\nprint(lst)\nlst.append(2)\nprint(lst)\n\n\nlst.append(2)\nprint(lst)\nl = set(lst)\nprint(l)",
"Set Comprehensions\nSet comprehensions allow sets to be constructed using the same principles as list comprehensions, the only difference is that resulting sequence is a set and \"{}\" are used instead of \"[]\".",
"names = [ 'aaLok', 'Manish', 'AalOK', 'Manish', 'Gupta', 'Johri', 'Mayank' ]\n\nnew_names1 = [name[0].upper() + name[1:].lower() for name in names if len(name) > 1 ]\nnew_names = sorted({name[0].upper() + name[1:].lower() for name in names if len(name) > 1 })\nprint(new_names1)\nprint(new_names)",
"Dictionary Comprehensions",
"original = {'a':10, 'b': 34, 'A': 7, 'Z':3, \"z\": 199}",
"Now, lets consolidate the above dictionary in such a way that resultant dictionary will have only lower case keys and if both lower and upper case keys are found in the original dictionary than values of both the keys should be added.",
"mcase_freq = {}\nfor k in original.keys():\n mcase_freq[k.lower()] = original.get(k.lower(),0) + original.get(k.upper, 0)\nprint(mcase_freq)\n\nmcase_frequency = { k.lower() : original.get(k.lower(), 0) + original.get(k.upper(), 0) for k in original.keys() }\nprint(mcase_frequency)\n\noriginal = {'a':10, 'b': 34, 'A': 7, 'Z':3, \"z\": 199, 'c': 10}\nflipped = {value: key for key, value in original.items()}\nprint(flipped)\n\noriginal = {'a':10, 'b': 34, 'A': 7, 'Z':3, \"z\": 199, 'c': 10}\nnewdict = {}\nfor key, value in original.items():\n# print(ori)\n if (value not in newdict):\n newdict[value] = key\nprint(newdict)\n\nnewdict = {value: key for key, value in original.items() if (value not in newdict)}\nprint(newdict)\n\nx = {\"a\": 10, \"b\": 20, \"c\": 20}\n\nprint(x)\nx[\"a\"] = 100\n\nprint(x)",
"This map doesn’t take a named function. It takes an anonymous, inlined function defined with lambda. The parameters of the lambda are defined to the left of the colon. The function body is defined to the right of the colon. The result of running the function body is (implicitly) returned.\nThe unfunctional code below takes a list of real names and appends them with randomly assigned code names.",
"import random\n\nnames_dict = {}\nnames = [\"Mayank\", \"Manish\", \"Aalok\", \"Roshan Musheer\"]\ncode_names = ['Mr. Normal', 'Mr. God', 'Mr. Cool', 'The Big Boss']\n\nrandom.shuffle(code_names)\n\nfor i in range(len(names)):\n names_dict[names[i]] = code_names[i] \n \nprint(names_dict)\n\n# Better implementation\nimport random\n\nnames_dict = {}\nnames = [\"Mayank\", \"Manish\", \"Aalok\", \"Roshan Musheer\"]\ncode_names = ['Mr. Normal', 'Mr. God', 'Mr. Cool', 'The Big Boss']\n\nrandom.shuffle(code_names)\n\nfor i, _ in enumerate(names):\n names_dict[names[i]] = code_names[i] \n \nprint(names_dict)\n\n# best implementation\nimport random\n\nnames_dict = {}\nnames = [\"Mayank\", \"Manish\", \"Aalok\", \"Roshan Musheer\"]\ncode_names = ['Mr. Normal', 'Mr. God', 'Mr. Cool', 'The Big Boss']\n\nrandom.shuffle(code_names)\n\nnames_dict = dict(zip(names, code_names))\n\nprint(names_dict)\n\nld = [{'a': 10, 'b': 20}, {'p': 10, 'u': 100}]\ndict([kv for d in ld for kv in d.items()])",
"Generator Comprehension\nThey are simply a generator expression with a parenthesis \"()\" around it. Otherwise, the syntax and the way of working is like list comprehension, but a generator comprehension returns a generator instead of a list.",
"%%timeit\nx = (x**2 for x in range(20000))\n# for a in x:\n# pass\n\n%%timeit\nx = [x**2 for x in range(20000)]\n# for a in x:\n# pass\n\nitm = 10\nprint(itm / 2)",
"Summary\nWhen struggling to write a comprehension, don’t panic. Start with a for loop first and copy-paste your way into a comprehension.\nAny for loop that looks like this:",
"def condition_based_on(itm):\n return itm % 2 == 0\n\nold_things = range(2,20, 3)\nnew_things = []\nfor ITEM in old_things:\n if condition_based_on(ITEM):\n new_things.append(ITEM)\nprint(new_things)",
"Can be rewritten into a list comprehension like this:",
"new_things = [ITEM for ITEM in old_things if condition_based_on(ITEM)]\nprint(new_things)",
"NOTE\nIf you can nudge a for loop until it looks like the ones above, you can rewrite it as a list comprehension."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/ModSim
|
python/soln/examples/yoyo_soln.ipynb
|
gpl-2.0
|
[
"Simulating a Yo-Yo\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International",
"# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *",
"Yo-yo\nSuppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.\nThe following diagram shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.\n\nIn this system, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations:\n$\\sum F = m a $\n$\\sum \\tau = I \\alpha$\nwhere the summations indicate that we are adding up forces and torques.\nAs in the previous examples, linear and angular velocity are related because of the way the string unrolls:\n$\\frac{dy}{dt} = -r \\frac{d \\theta}{dt} $\nIn this example, the linear and angular accelerations have opposite sign. As the yo-yo rotates counter-clockwise, $\\theta$ increases and $y$, which is the length of the rolled part of the string, decreases.\nTaking the derivative of both sides yields a similar relationship between linear and angular acceleration:\n$\\frac{d^2 y}{dt^2} = -r \\frac{d^2 \\theta}{dt^2} $\nWhich we can write more concisely:\n$ a = -r \\alpha $\nThis relationship is not a general law of nature; it is specific to scenarios like this where there is rolling without stretching or slipping.\nBecause of the way we've set up the problem, $y$ actually has two meanings: it represents the length of the rolled string and the height of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$ represents acceleration in the length of the rolled string and the height of the yo-yo.\nWe can compute the acceleration of the yo-yo by adding up the linear forces:\n$\\sum F = T - mg = ma $\nWhere $T$ is positive because the tension force points up, and $mg$ is negative because gravity points down.\nBecause gravity acts on the center of mass, it creates no torque, so the only torque is due to tension:\n$\\sum \\tau = T r = I \\alpha $\nPositive (upward) tension yields positive (counter-clockwise) angular acceleration.\nNow we have three equations in three unknowns, $T$, $a$, and $\\alpha$, with $I$, $m$, $g$, and $r$ as known parameters. We could solve these equations by hand, but we can also get SymPy to do it for us.",
"from sympy import symbols, Eq, solve\n\nT, a, alpha, I, m, g, r = symbols('T a alpha I m g r')\n\neq1 = Eq(a, -r * alpha)\neq1\n\neq2 = Eq(T - m * g, m * a)\neq2\n\neq3 = Eq(T * r, I * alpha)\neq3\n\nsoln = solve([eq1, eq2, eq3], [T, a, alpha])\n\nsoln[T]\n\nsoln[a]\n\nsoln[alpha]",
"The results are\n$T = m g I / I^* $\n$a = -m g r^2 / I^* $\n$\\alpha = m g r / I^* $\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\nYou can also see the derivation of these equations in this video.\nWe can use these equations for $a$ and $\\alpha$ to write a slope function and simulate this system.\nExercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string?\nHere are the system parameters:",
"Rmin = 8e-3 # m\nRmax = 16e-3 # m\nRout = 35e-3 # m\nmass = 50e-3 # kg\nL = 1 # m\ng = 9.8 # m / s**2",
"Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.\n\n\nRout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string. \n\n\nL is the length of the string.\n\n\ng is the acceleration of gravity.",
"1 / (Rmax)",
"Based on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density (see here).\nIn reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.",
"I = mass * Rout**2 / 2\nI",
"And we can compute k, which is the constant that determines how the radius of the spooled string decreases as it unwinds.",
"k = (Rmax**2 - Rmin**2) / 2 / L \nk",
"The state variables we'll use are angle, theta, angular velocity, omega, the length of the spooled string, y, and the linear velocity of the yo-yo, v.\nHere is a State object with the the initial conditions.",
"init = State(theta=0, omega=0, y=L, v=0)",
"And here's a System object with init and t_end (chosen to be longer than I expect for the yo-yo to drop 1 m).",
"system = System(init=init, t_end=2)",
"Write a slope function for this system, using these results from the book:\n$ r = \\sqrt{2 k y + R_{min}^2} $ \n$ T = m g I / I^* $\n$ a = -m g r^2 / I^* $\n$ \\alpha = m g r / I^* $\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.",
"# Solution\n\ndef slope_func(t, state, system):\n theta, omega, y, v = state\n \n r = np.sqrt(2*k*y + Rmin**2)\n alpha = mass * g * r / (I + mass * r**2)\n a = -r * alpha\n \n return omega, alpha, v, a ",
"Test your slope function with the initial conditions.\nThe results should be approximately\n0, 180.5, 0, -2.9",
"# Solution\n\nslope_func(0, system.init, system)",
"Notice that the initial acceleration is substantially smaller than g because the yo-yo has to start spinning before it can fall.\nWrite an event function that will stop the simulation when y is 0.",
"# Solution\n\ndef event_func(t, state, system):\n theta, omega, y, v = state\n return y",
"Test your event function:",
"# Solution\n\nevent_func(0, system.init, system)",
"Then run the simulation.",
"# Solution\n\nresults, details = run_solve_ivp(system, slope_func, \n events=event_func, max_step=0.05)\ndetails.message",
"Check the final state. If things have gone according to plan, the final value of y should be close to 0.",
"# Solution\n\nresults.tail()",
"How long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable?\nThe following cells plot the results.\ntheta should increase and accelerate.",
"results.theta.plot(color='C0', label='theta')\ndecorate(xlabel='Time (s)',\n ylabel='Angle (rad)')",
"y should decrease and accelerate down.",
"results.y.plot(color='C1', label='y')\n\ndecorate(xlabel='Time (s)',\n ylabel='Length (m)')\n ",
"Plot velocity as a function of time; is the acceleration constant?",
"results.v.plot(label='velocity', color='C3')\n\ndecorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')",
"We can use gradient to estimate the derivative of v. How does the acceleration of the yo-yo compare to g?",
"a = gradient(results.v)\na.plot(label='acceleration', color='C4')\ndecorate(xlabel='Time (s)',\n ylabel='Acceleration (m/$s^2$)')",
"And we can use the formula for r to plot the radius of the spooled thread over time.",
"r = np.sqrt(2*k*results.y + Rmin**2)\nr.plot(label='radius')\n\ndecorate(xlabel='Time (s)',\n ylabel='Radius of spooled thread (m)')\n\nimport pandas as pd\ns = pd.date_range('2020-1', '2020-12', freq='M').to_series()\nlist(s.dt.month_name())\n\npd.interval_range(start=pd.Timestamp('2017-01-01'),\n periods=3, freq='MS').dt"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
RaRe-Technologies/gensim
|
docs/src/auto_examples/tutorials/run_word2vec.ipynb
|
lgpl-2.1
|
[
"%matplotlib inline",
"Word2Vec Model\nIntroduces Gensim's Word2Vec model and demonstrates its use on the Lee Evaluation Corpus\n<https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>_.",
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)",
"In case you missed the buzz, Word2Vec is a widely used algorithm based on neural\nnetworks, commonly referred to as \"deep learning\" (though word2vec itself is rather shallow).\nUsing large amounts of unannotated plain text, word2vec learns relationships\nbetween words automatically. The output are vectors, one vector per word,\nwith remarkable linear relationships that allow us to do things like:\n\nvec(\"king\") - vec(\"man\") + vec(\"woman\") =~ vec(\"queen\")\nvec(\"Montreal Canadiens\") – vec(\"Montreal\") + vec(\"Toronto\") =~ vec(\"Toronto Maple Leafs\").\n\nWord2vec is very useful in automatic text tagging\n<https://github.com/RaRe-Technologies/movie-plots-by-genre>_\\ , recommender\nsystems and machine translation.\nThis tutorial:\n. Introduces Word2Vec as an improvement over traditional bag-of-words\n. Shows off a demo of Word2Vec using a pre-trained model\n. Demonstrates training a new model from your own data\n. Demonstrates loading and saving models\n. Introduces several training parameters and demonstrates their effect\n. Discusses memory requirements\n. Visualizes Word2Vec embeddings by applying dimensionality reduction\nReview: Bag-of-words\n.. Note:: Feel free to skip these review sections if you're already familiar with the models.\nYou may be familiar with the bag-of-words model\n<https://en.wikipedia.org/wiki/Bag-of-words_model>_ from the\ncore_concepts_vector section.\nThis model transforms each document to a fixed-length vector of integers.\nFor example, given the sentences:\n\nJohn likes to watch movies. Mary likes movies too.\nJohn also likes to watch football games. Mary hates football.\n\nThe model outputs the vectors:\n\n[1, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0]\n[1, 1, 1, 1, 0, 1, 0, 1, 2, 1, 1]\n\nEach vector has 10 elements, where each element counts the number of times a\nparticular word occurred in the document.\nThe order of elements is arbitrary.\nIn the example above, the order of the elements corresponds to the words:\n[\"John\", \"likes\", \"to\", \"watch\", \"movies\", \"Mary\", \"too\", \"also\", \"football\", \"games\", \"hates\"].\nBag-of-words models are surprisingly effective, but have several weaknesses.\nFirst, they lose all information about word order: \"John likes Mary\" and\n\"Mary likes John\" correspond to identical vectors. There is a solution: bag\nof n-grams <https://en.wikipedia.org/wiki/N-gram>__\nmodels consider word phrases of length n to represent documents as\nfixed-length vectors to capture local word order but suffer from data\nsparsity and high dimensionality.\nSecond, the model does not attempt to learn the meaning of the underlying\nwords, and as a consequence, the distance between vectors doesn't always\nreflect the difference in meaning. The Word2Vec model addresses this\nsecond problem.\nIntroducing: the Word2Vec Model\nWord2Vec is a more recent model that embeds words in a lower-dimensional\nvector space using a shallow neural network. The result is a set of\nword-vectors where vectors close together in vector space have similar\nmeanings based on context, and word-vectors distant to each other have\ndiffering meanings. For example, strong and powerful would be close\ntogether and strong and Paris would be relatively far.\nThe are two versions of this model and :py:class:~gensim.models.word2vec.Word2Vec\nclass implements them both:\n\nSkip-grams (SG)\nContinuous-bag-of-words (CBOW)\n\n.. 
Important::\n Don't let the implementation details below scare you.\n They're advanced material: if it's too much, then move on to the next section.\nThe Word2Vec Skip-gram <http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model>\nmodel, for example, takes in pairs (word1, word2) generated by moving a\nwindow across text data, and trains a 1-hidden-layer neural network based on\nthe synthetic task of given an input word, giving us a predicted probability\ndistribution of nearby words to the input. A virtual one-hot\n<https://en.wikipedia.org/wiki/One-hot> encoding of words\ngoes through a 'projection layer' to the hidden layer; these projection\nweights are later interpreted as the word embeddings. So if the hidden layer\nhas 300 neurons, this network will give us 300-dimensional word embeddings.\nContinuous-bag-of-words Word2vec is very similar to the skip-gram model. It\nis also a 1-hidden-layer neural network. The synthetic training task now uses\nthe average of multiple input context words, rather than a single word as in\nskip-gram, to predict the center word. Again, the projection weights that\nturn one-hot words into averageable vectors, of the same width as the hidden\nlayer, are interpreted as the word embeddings.\nWord2Vec Demo\nTo see what Word2Vec can do, let's download a pre-trained model and play\naround with it. We will fetch the Word2Vec model trained on part of the\nGoogle News dataset, covering approximately 3 million words and phrases. Such\na model can take hours to train, but since it's already available,\ndownloading and loading it with Gensim takes minutes.\n.. Important::\n The model is approximately 2GB, so you'll need a decent network connection\n to proceed. Otherwise, skip ahead to the \"Training Your Own Model\" section\n below.\nYou may also check out an online word2vec demo\n<http://radimrehurek.com/2014/02/word2vec-tutorial/#app>_ where you can try\nthis vector algebra for yourself. That demo runs word2vec on the\nentire Google News dataset, of about 100 billion words.",
"import gensim.downloader as api\nwv = api.load('word2vec-google-news-300')",
"A common operation is to retrieve the vocabulary of a model. That is trivial:",
"for index, word in enumerate(wv.index_to_key):\n if index == 10:\n break\n print(f\"word #{index}/{len(wv.index_to_key)} is {word}\")",
"We can easily obtain vectors for terms the model is familiar with:",
"vec_king = wv['king']",
"Unfortunately, the model is unable to infer vectors for unfamiliar words.\nThis is one limitation of Word2Vec: if this limitation matters to you, check\nout the FastText model.",
"try:\n vec_cameroon = wv['cameroon']\nexcept KeyError:\n print(\"The word 'cameroon' does not appear in this model\")",
"Moving on, Word2Vec supports several word similarity tasks out of the\nbox. You can see how the similarity intuitively decreases as the words get\nless and less similar.",
"pairs = [\n ('car', 'minivan'), # a minivan is a kind of car\n ('car', 'bicycle'), # still a wheeled vehicle\n ('car', 'airplane'), # ok, no wheels, but still a vehicle\n ('car', 'cereal'), # ... and so on\n ('car', 'communism'),\n]\nfor w1, w2 in pairs:\n print('%r\\t%r\\t%.2f' % (w1, w2, wv.similarity(w1, w2)))",
"Print the 5 most similar words to \"car\" or \"minivan\"",
"print(wv.most_similar(positive=['car', 'minivan'], topn=5))",
"Which of the below does not belong in the sequence?",
"print(wv.doesnt_match(['fire', 'water', 'land', 'sea', 'air', 'car']))",
"Training Your Own Model\nTo start, you'll need some data for training the model. For the following\nexamples, we'll use the Lee Evaluation Corpus\n<https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>\n(which you already have\n<https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/test/test_data/lee_background.cor>\nif you've installed Gensim).\nThis corpus is small enough to fit entirely in memory, but we'll implement a\nmemory-friendly iterator that reads it line-by-line to demonstrate how you\nwould handle a larger corpus.",
"from gensim.test.utils import datapath\nfrom gensim import utils\n\nclass MyCorpus:\n \"\"\"An iterator that yields sentences (lists of str).\"\"\"\n\n def __iter__(self):\n corpus_path = datapath('lee_background.cor')\n for line in open(corpus_path):\n # assume there's one document per line, tokens separated by whitespace\n yield utils.simple_preprocess(line)",
"If we wanted to do any custom preprocessing, e.g. decode a non-standard\nencoding, lowercase, remove numbers, extract named entities... All of this can\nbe done inside the MyCorpus iterator and word2vec doesn’t need to\nknow. All that is required is that the input yields one sentence (list of\nutf8 words) after another.\nLet's go ahead and train a model on our corpus. Don't worry about the\ntraining parameters much for now, we'll revisit them later.",
"import gensim.models\n\nsentences = MyCorpus()\nmodel = gensim.models.Word2Vec(sentences=sentences)",
"Once we have our model, we can use it in the same way as in the demo above.\nThe main part of the model is model.wv\\ , where \"wv\" stands for \"word vectors\".",
"vec_king = model.wv['king']",
"Retrieving the vocabulary works the same way:",
"for index, word in enumerate(wv.index_to_key):\n if index == 10:\n break\n print(f\"word #{index}/{len(wv.index_to_key)} is {word}\")",
"Storing and loading models\nYou'll notice that training non-trivial models can take time. Once you've\ntrained your model and it works as expected, you can save it to disk. That\nway, you don't have to spend time training it all over again later.\nYou can store/load models using the standard gensim methods:",
"import tempfile\n\nwith tempfile.NamedTemporaryFile(prefix='gensim-model-', delete=False) as tmp:\n temporary_filepath = tmp.name\n model.save(temporary_filepath)\n #\n # The model is now safely stored in the filepath.\n # You can copy it to other machines, share it with others, etc.\n #\n # To load a saved model:\n #\n new_model = gensim.models.Word2Vec.load(temporary_filepath)",
"which uses pickle internally, optionally mmap\\ ‘ing the model’s internal\nlarge NumPy matrices into virtual memory directly from disk files, for\ninter-process memory sharing.\nIn addition, you can load models created by the original C tool, both using\nits text and binary formats::\nmodel = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)\n # using gzipped/bz2 input works too, no need to unzip\n model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)\nTraining Parameters\nWord2Vec accepts several parameters that affect both training speed and quality.\nmin_count\nmin_count is for pruning the internal dictionary. Words that appear only\nonce or twice in a billion-word corpus are probably uninteresting typos and\ngarbage. In addition, there’s not enough data to make any meaningful training\non those words, so it’s best to ignore them:\ndefault value of min_count=5",
"model = gensim.models.Word2Vec(sentences, min_count=10)",
"vector_size\nvector_size is the number of dimensions (N) of the N-dimensional space that\ngensim Word2Vec maps the words onto.\nBigger size values require more training data, but can lead to better (more\naccurate) models. Reasonable values are in the tens to hundreds.",
"# The default value of vector_size is 100.\nmodel = gensim.models.Word2Vec(sentences, vector_size=200)",
"workers\nworkers , the last of the major parameters (full list here\n<http://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec>_)\nis for training parallelization, to speed up training:",
"# default value of workers=3 (tutorial says 1...)\nmodel = gensim.models.Word2Vec(sentences, workers=4)",
"The workers parameter only has an effect if you have Cython\n<http://cython.org/> installed. Without Cython, you’ll only be able to use\none core because of the GIL\n<https://wiki.python.org/moin/GlobalInterpreterLock> (and word2vec\ntraining will be miserably slow\n<http://rare-technologies.com/word2vec-in-python-part-two-optimizing/>_\\ ).\nMemory\nAt its core, word2vec model parameters are stored as matrices (NumPy\narrays). Each array is #vocabulary (controlled by the min_count parameter)\ntimes vector size (the vector_size parameter) of floats (single precision aka 4 bytes).\nThree such matrices are held in RAM (work is underway to reduce that number\nto two, or even one). So if your input contains 100,000 unique words, and you\nasked for layer vector_size=200\\ , the model will require approx.\n100,000*200*4*3 bytes = ~229MB.\nThere’s a little extra memory needed for storing the vocabulary tree (100,000 words would\ntake a few megabytes), but unless your words are extremely loooong strings, memory\nfootprint will be dominated by the three matrices above.\nEvaluating\nWord2Vec training is an unsupervised task, there’s no good way to\nobjectively evaluate the result. Evaluation depends on your end application.\nGoogle has released their testing set of about 20,000 syntactic and semantic\ntest examples, following the “A is to B as C is to D” task. It is provided in\nthe 'datasets' folder.\nFor example a syntactic analogy of comparative type is bad:worse;good:?.\nThere are total of 9 types of syntactic comparisons in the dataset like\nplural nouns and nouns of opposite meaning.\nThe semantic questions contain five types of semantic analogies, such as\ncapital cities (Paris:France;Tokyo:?) or family members\n(brother:sister;dad:?).\nGensim supports the same evaluation set, in exactly the same format:",
"model.wv.evaluate_word_analogies(datapath('questions-words.txt'))",
"This evaluate_word_analogies method takes an optional parameter\n<http://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.evaluate_word_analogies>_\nrestrict_vocab which limits which test examples are to be considered.\nIn the December 2016 release of Gensim we added a better way to evaluate semantic similarity.\nBy default it uses an academic dataset WS-353 but one can create a dataset\nspecific to your business based on it. It contains word pairs together with\nhuman-assigned similarity judgments. It measures the relatedness or\nco-occurrence of two words. For example, 'coast' and 'shore' are very similar\nas they appear in the same context. At the same time 'clothes' and 'closet'\nare less similar because they are related but not interchangeable.",
"model.wv.evaluate_word_pairs(datapath('wordsim353.tsv'))",
".. Important::\n Good performance on Google's or WS-353 test set doesn’t mean word2vec will\n work well in your application, or vice versa. It’s always best to evaluate\n directly on your intended task. For an example of how to use word2vec in a\n classifier pipeline, see this tutorial\n <https://github.com/RaRe-Technologies/movie-plots-by-genre>_.\nOnline training / Resuming training\nAdvanced users can load a model and continue training it with more sentences\nand new vocabulary words <online_w2v_tutorial.ipynb>_:",
"model = gensim.models.Word2Vec.load(temporary_filepath)\nmore_sentences = [\n ['Advanced', 'users', 'can', 'load', 'a', 'model',\n 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences'],\n]\nmodel.build_vocab(more_sentences, update=True)\nmodel.train(more_sentences, total_examples=model.corpus_count, epochs=model.epochs)\n\n# cleaning up temporary file\nimport os\nos.remove(temporary_filepath)",
"You may need to tweak the total_words parameter to train(),\ndepending on what learning rate decay you want to simulate.\nNote that it’s not possible to resume training with models generated by the C\ntool, KeyedVectors.load_word2vec_format(). You can still use them for\nquerying/similarity, but information vital for training (the vocab tree) is\nmissing there.\nTraining Loss Computation\nThe parameter compute_loss can be used to toggle computation of loss\nwhile training the Word2Vec model. The computed loss is stored in the model\nattribute running_training_loss and can be retrieved using the function\nget_latest_training_loss as follows :",
"# instantiating and training the Word2Vec model\nmodel_with_loss = gensim.models.Word2Vec(\n sentences,\n min_count=1,\n compute_loss=True,\n hs=0,\n sg=1,\n seed=42,\n)\n\n# getting the training loss value\ntraining_loss = model_with_loss.get_latest_training_loss()\nprint(training_loss)",
"Benchmarks\nLet's run some benchmarks to see effect of the training loss computation code\non training time.\nWe'll use the following data for the benchmarks:\n. Lee Background corpus: included in gensim's test data\n. Text8 corpus. To demonstrate the effect of corpus size, we'll look at the\nfirst 1MB, 10MB, 50MB of the corpus, as well as the entire thing.",
"import io\nimport os\n\nimport gensim.models.word2vec\nimport gensim.downloader as api\nimport smart_open\n\n\ndef head(path, size):\n with smart_open.open(path) as fin:\n return io.StringIO(fin.read(size))\n\n\ndef generate_input_data():\n lee_path = datapath('lee_background.cor')\n ls = gensim.models.word2vec.LineSentence(lee_path)\n ls.name = '25kB'\n yield ls\n\n text8_path = api.load('text8').fn\n labels = ('1MB', '10MB', '50MB', '100MB')\n sizes = (1024 ** 2, 10 * 1024 ** 2, 50 * 1024 ** 2, 100 * 1024 ** 2)\n for l, s in zip(labels, sizes):\n ls = gensim.models.word2vec.LineSentence(head(text8_path, s))\n ls.name = l\n yield ls\n\n\ninput_data = list(generate_input_data())",
"We now compare the training time taken for different combinations of input\ndata and model training parameters like hs and sg.\nFor each combination, we repeat the test several times to obtain the mean and\nstandard deviation of the test duration.",
"# Temporarily reduce logging verbosity\nlogging.root.level = logging.ERROR\n\nimport time\nimport numpy as np\nimport pandas as pd\n\ntrain_time_values = []\nseed_val = 42\nsg_values = [0, 1]\nhs_values = [0, 1]\n\nfast = True\nif fast:\n input_data_subset = input_data[:3]\nelse:\n input_data_subset = input_data\n\n\nfor data in input_data_subset:\n for sg_val in sg_values:\n for hs_val in hs_values:\n for loss_flag in [True, False]:\n time_taken_list = []\n for i in range(3):\n start_time = time.time()\n w2v_model = gensim.models.Word2Vec(\n data,\n compute_loss=loss_flag,\n sg=sg_val,\n hs=hs_val,\n seed=seed_val,\n )\n time_taken_list.append(time.time() - start_time)\n\n time_taken_list = np.array(time_taken_list)\n time_mean = np.mean(time_taken_list)\n time_std = np.std(time_taken_list)\n\n model_result = {\n 'train_data': data.name,\n 'compute_loss': loss_flag,\n 'sg': sg_val,\n 'hs': hs_val,\n 'train_time_mean': time_mean,\n 'train_time_std': time_std,\n }\n print(\"Word2vec model #%i: %s\" % (len(train_time_values), model_result))\n train_time_values.append(model_result)\n\ntrain_times_table = pd.DataFrame(train_time_values)\ntrain_times_table = train_times_table.sort_values(\n by=['train_data', 'sg', 'hs', 'compute_loss'],\n ascending=[False, False, True, False],\n)\nprint(train_times_table)",
"Visualising Word Embeddings\nThe word embeddings made by the model can be visualised by reducing\ndimensionality of the words to 2 dimensions using tSNE.\nVisualisations can be used to notice semantic and syntactic trends in the data.\nExample:\n\nSemantic: words like cat, dog, cow, etc. have a tendency to lie close by\nSyntactic: words like run, running or cut, cutting lie close together.\n\nVector relations like vKing - vMan = vQueen - vWoman can also be noticed.\n.. Important::\n The model used for the visualisation is trained on a small corpus. Thus\n some of the relations might not be so clear.",
"from sklearn.decomposition import IncrementalPCA # inital reduction\nfrom sklearn.manifold import TSNE # final reduction\nimport numpy as np # array handling\n\n\ndef reduce_dimensions(model):\n num_dimensions = 2 # final num dimensions (2D, 3D, etc)\n\n # extract the words & their vectors, as numpy arrays\n vectors = np.asarray(model.wv.vectors)\n labels = np.asarray(model.wv.index_to_key) # fixed-width numpy strings\n\n # reduce using t-SNE\n tsne = TSNE(n_components=num_dimensions, random_state=0)\n vectors = tsne.fit_transform(vectors)\n\n x_vals = [v[0] for v in vectors]\n y_vals = [v[1] for v in vectors]\n return x_vals, y_vals, labels\n\n\nx_vals, y_vals, labels = reduce_dimensions(model)\n\ndef plot_with_plotly(x_vals, y_vals, labels, plot_in_notebook=True):\n from plotly.offline import init_notebook_mode, iplot, plot\n import plotly.graph_objs as go\n\n trace = go.Scatter(x=x_vals, y=y_vals, mode='text', text=labels)\n data = [trace]\n\n if plot_in_notebook:\n init_notebook_mode(connected=True)\n iplot(data, filename='word-embedding-plot')\n else:\n plot(data, filename='word-embedding-plot.html')\n\n\ndef plot_with_matplotlib(x_vals, y_vals, labels):\n import matplotlib.pyplot as plt\n import random\n\n random.seed(0)\n\n plt.figure(figsize=(12, 12))\n plt.scatter(x_vals, y_vals)\n\n #\n # Label randomly subsampled 25 data points\n #\n indices = list(range(len(labels)))\n selected_indices = random.sample(indices, 25)\n for i in selected_indices:\n plt.annotate(labels[i], (x_vals[i], y_vals[i]))\n\ntry:\n get_ipython()\nexcept Exception:\n plot_function = plot_with_matplotlib\nelse:\n plot_function = plot_with_plotly\n\nplot_function(x_vals, y_vals, labels)",
"Conclusion\nIn this tutorial we learned how to train word2vec models on your custom data\nand also how to evaluate it. Hope that you too will find this popular tool\nuseful in your Machine Learning tasks!\nLinks\n\nAPI docs: :py:mod:gensim.models.word2vec\nOriginal C toolkit and word2vec papers by Google <https://code.google.com/archive/p/word2vec/>_."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Diyago/Machine-Learning-scripts
|
general studies/Graphs/Intro-networkx-python-graph-tutorial.ipynb
|
apache-2.0
|
[
"#initial code from here https://www.datacamp.com/community/tutorials/networkx-python-graph-tutorial\n\nimport itertools\nimport copy\nimport networkx as nx\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Grab edge list data hosted on Gist\nedgelist = pd.read_csv('https://gist.githubusercontent.com/brooksandrew/e570c38bcc72a8d102422f2af836513b/raw/89c76b2563dbc0e88384719a35cba0dfc04cd522/edgelist_sleeping_giant.csv')\n\n# Preview edgelist\nedgelist.head(10)\n\nnodelist = pd.read_csv('https://gist.githubusercontent.com/brooksandrew/f989e10af17fb4c85b11409fea47895b/raw/a3a8da0fa5b094f1ca9d82e1642b384889ae16e8/nodelist_sleeping_giant.csv')\nnodelist.head(5)\n\n# Create empty graph\ng = nx.Graph()\n# Add edges and edge attributes\nfor i, elrow in edgelist.iterrows():\n g.add_edge(elrow[0], elrow[1], attr_dict=elrow[2:].to_dict())\n\n# Edge list example\nprint(elrow[0]) # node1\nprint(elrow[1]) # node2\nprint(elrow[2:].to_dict()) # edge attribute dict\n\n# Add node attributes\nfor i, nlrow in nodelist.iterrows():\n g.node[nlrow['id']].update(nlrow[1:].to_dict())\n\n# Node list example\nprint(nlrow)",
"Inspect Graph\nEdges\nYour graph edges are represented by a list of tuples of length 3. The first two elements are the node names linked by the edge. The third is the dictionary of edge attributes.",
"# Preview first 5 edges\nlist(g.edges(data=True))[0:5]",
"Nodes\nSimilarly, your nodes are represented by a list of tuples of length 2. The first element is the node ID, followed by the dictionary of node attributes.",
"# Preview first 10 nodes\nlist(g.nodes(data=True))[0:10]\n\n## Summary Stats\nprint('# of edges: {}'.format(g.number_of_edges()))\nprint('# of nodes: {}'.format(g.number_of_nodes()))",
"Visualize\nManipulate Colors and Layout\nPositions: First you need to manipulate the node positions from the graph into a dictionary. This will allow you to recreate the graph using the same layout as the actual trail map. Y is negated to transform the Y-axis origin from the topleft to the bottomleft.",
"# Define node positions data structure (dict) for plotting\nnode_positions = {node[0]: (node[1]['X'], -node[1]['Y']) for node in g.nodes(data=True)}\n\n# Preview of node_positions with a bit of hack (there is no head/slice method for dictionaries).\ndict(list(node_positions.items())[0:5])",
"Colors: Now you manipulate the edge colors from the graph into a simple list so that you can visualize the trails by their color.",
"# Define data structure (list) of edge colors for plotting\nedge_colors = [e[2]['attr_dict']['color'] for e in g.edges(data=True)]\n\n# Preview first 10\nedge_colors[0:10]\n\nplt.figure(figsize=(8, 6))\nnx.draw(g, pos=node_positions, edge_color=edge_colors, node_size=10, node_color='black')\nplt.title('Graph Representation of Sleeping Giant Trail Map', size=15)\nplt.show()",
"Solving the Chinese Postman Problem is quite simple conceptually:\n\n\nFind all nodes with odd degree (very easy).\n(Find all trail intersections where the number of trails touching that intersection is an odd number)\n\n\nAdd edges to the graph such that all nodes of odd degree are made even. These added edges must be duplicates from the original graph (we'll assume no bushwhacking for this problem). The set of edges added should sum to the minimum distance possible (hard...np-hard to be precise).\n(In simpler terms, minimize the amount of double backing on a route that hits every trail)\n\n\nGiven a starting point, find the Eulerian tour over the augmented dataset (moderately easy).\n(Once we know which trails we'll be double backing on, actually calculate the route from beginning to end)\n\n\nCPP Step 1: Find Nodes of Odd Degree\nThis is a pretty straightforward counting computation. You see that 36 of the 76 nodes have odd degree. These are mostly the dead-end trails (degree 1) and intersections of 3 trails. There are a handful of degree 5 nodes.",
"list(g.nodes(data=True))\n\n# Calculate list of nodes with odd degree\nnodes_odd_degree = [v for v, d in g.degree() if d % 2 ==1]\n\n# Preview\n(nodes_odd_degree[0:5])\n\nprint('Number of nodes of odd degree: {}'.format(len(nodes_odd_degree)))\nprint('Number of total nodes: {}'.format(len(g.nodes())))",
"CPP Step 2: Find Min Distance Pairs\nThis is really the meat of the problem. You'll break it down into 5 parts:\n\nCompute all possible pairs of odd degree nodes.\nCompute the shortest path between each node pair calculated in 1.\nCreate a complete graph connecting every node pair in 1. with shortest path distance attributes calculated in 2.\nCompute a minimum weight matching of the graph calculated in 3. \n(This boils down to determining how to pair the odd nodes such that the sum of the distance between the pairs is as small as possible).\nAugment the original graph with the shortest paths between the node pairs calculated in 4.\n\nStep 2.1: Compute Node Pairs\nYou use the itertools combination function to compute all possible pairs of the odd degree nodes. Your graph is undirected, so we don't care about order: For example, (a,b) == (b,a).",
"# Compute all pairs of odd nodes. in a list of tuples\nodd_node_pairs = list(itertools.combinations(nodes_odd_degree, 2))\n\n# Preview pairs of odd degree nodes\nodd_node_pairs[0:10]\n\nprint('Number of pairs: {}'.format(len(odd_node_pairs)))\n\ndef get_shortest_paths_distances(graph, pairs, edge_weight_name):\n \"\"\"Compute shortest distance between each pair of nodes in a graph. Return a dictionary keyed on node pairs (tuples).\"\"\"\n distances = {}\n for pair in pairs:\n distances[pair] = nx.dijkstra_path_length(graph, pair[0], pair[1], weight=edge_weight_name)\n return distances\n\n# Compute shortest paths. Return a dictionary with node pairs keys and a single value equal to shortest path distance.\nodd_node_pairs_shortest_paths = get_shortest_paths_distances(g, odd_node_pairs, 'distance')\n\n# Preview with a bit of hack (there is no head/slice method for dictionaries).\ndict(list(odd_node_pairs_shortest_paths.items())[0:10])",
"Step 2.3: Create Complete Graph\nA complete graph is simply a graph where every node is connected to every other node by a unique edge.\ncreate_complete_graph is defined to calculate it. The flip_weights parameter is used to transform the distance to the weight attribute where smaller numbers reflect large distances and high numbers reflect short distances. This sounds a little counter intuitive, but is necessary for Step 2.4 where you calculate the minimum weight matching on the complete graph.\nIdeally you'd calculate the minimum weight matching directly, but NetworkX only implements a max_weight_matching function which maximizes, rather than minimizes edge weight. We hack this a bit by negating (multiplying by -1) the distance attribute to get weight. This ensures that order and scale by distance are preserved, but reversed.",
"def create_complete_graph(pair_weights, flip_weights=True):\n \"\"\"\n Create a completely connected graph using a list of vertex pairs and the shortest path distances between them\n Parameters:\n pair_weights: list[tuple] from the output of get_shortest_paths_distances\n flip_weights: Boolean. Should we negate the edge attribute in pair_weights?\n \"\"\"\n g = nx.Graph()\n for k, v in pair_weights.items():\n wt_i = - v if flip_weights else v\n g.add_edge(k[0], k[1], attr_dict={'distance': v, 'weight': wt_i})\n return g\n\n# Generate the complete graph\ng_odd_complete = create_complete_graph(odd_node_pairs_shortest_paths, flip_weights=True)\n\n# Counts\nprint('Number of nodes: {}'.format(len(g_odd_complete.nodes())))\nprint('Number of edges: {}'.format(len(g_odd_complete.edges())))",
"For a visual prop, the fully connected graph of odd degree node pairs is plotted below. Note that you preserve the X, Y coordinates of each node, but the edges do not necessarily represent actual trails. For example, two nodes could be connected by a single edge in this graph, but the shortest path between them could be 5 hops through even degree nodes (not shown here).",
"# Plot the complete graph of odd-degree nodes\nplt.figure(figsize=(8, 6))\npos_random = nx.random_layout(g_odd_complete)\nnx.draw_networkx_nodes(g_odd_complete, node_positions, node_size=20, node_color=\"red\")\nnx.draw_networkx_edges(g_odd_complete, node_positions, alpha=0.1)\nplt.axis('off')\nplt.title('Complete Graph of Odd-degree Nodes')\nplt.show()",
"Step 2.4: Compute Minimum Weight Matching\nThis is the most complex step in the CPP. You need to find the odd degree node pairs whose combined sum (of distance between them) is as small as possible. So for your problem, this boils down to selecting the optimal 18 edges (36 odd degree nodes / 2) from the hairball of a graph generated in 2.3.\nBoth the implementation and intuition of this optimization are beyond the scope of this tutorial... like 800+ lines of code and a body of academic literature beyond this scope.\nThe code implemented in the NetworkX function max_weight_matching is based on Galil, Zvi (1986) [2] which employs an O(n3) time algorithm.",
"# Compute min weight matching.\n# Note: max_weight_matching uses the 'weight' attribute by default as the attribute to maximize.\nodd_matching_dupes = nx.algorithms.max_weight_matching(g_odd_complete, True)\n\nprint('Number of edges in matching: {}'.format(len(odd_matching_dupes)))",
"The matching output (odd_matching_dupes) is a dictionary. Although there are 36 edges in this matching, you only want 18. Each edge-pair occurs twice (once with node 1 as the key and a second time with node 2 as the key of the dictionary).",
"odd_matching_dupes\n\nlist(odd_matching_dupes)\n\n# Convert matching to list of deduped tuples\nodd_matching = list(odd_matching_dupes)\n\n# Counts\nprint('Number of edges in matching (deduped): {}'.format(len(odd_matching)))\n\nplt.figure(figsize=(8, 6))\n\n# Plot the complete graph of odd-degree nodes\nnx.draw(g_odd_complete, pos=node_positions, node_size=20, alpha=0.05)\n\n# Create a new graph to overlay on g_odd_complete with just the edges from the min weight matching\ng_odd_complete_min_edges = nx.Graph(odd_matching)\nnx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, edge_color='blue', node_color='red')\n\nplt.title('Min Weight Matching on Complete Graph')\nplt.show()",
"To illustrate how this fits in with the original graph, you plot the same min weight pairs (blue lines), but over the trail map (faded) instead of the complete graph. Again, note that the blue lines are the bushwhacking route (as the crow flies edges, not actual trails). You still have a little bit of work to do to find the edges that comprise the shortest route between each pair in Step 3.",
"plt.figure(figsize=(8, 6))\n\n# Plot the original trail map graph\nnx.draw(g, pos=node_positions, node_size=20, alpha=0.1, node_color='black')\n\n# Plot graph to overlay with just the edges from the min weight matching\nnx.draw(g_odd_complete_min_edges, pos=node_positions, node_size=20, alpha=1, node_color='red', edge_color='blue')\n\nplt.title('Min Weight Matching on Orginal Graph')\nplt.show()",
"Step 2.5: Augment the Original Graph\nNow you augment the original graph with the edges from the matching calculated in 2.4. A simple function to do this is defined below which also notes that these new edges came from the augmented graph. You'll need to know this in 3. when you actually create the Eulerian circuit through the graph.",
"def add_augmenting_path_to_graph(graph, min_weight_pairs):\n \"\"\"\n Add the min weight matching edges to the original graph\n Parameters:\n graph: NetworkX graph (original graph from trailmap)\n min_weight_pairs: list[tuples] of node pairs from min weight matching\n Returns:\n augmented NetworkX graph\n \"\"\"\n\n # We need to make the augmented graph a MultiGraph so we can add parallel edges\n graph_aug = nx.MultiGraph(graph.copy())\n for pair in min_weight_pairs:\n graph_aug.add_edge(pair[0],\n pair[1],\n attr_dict={'distance': nx.dijkstra_path_length(graph, pair[0], pair[1]),\n 'trail': 'augmented'}\n )\n return graph_aug\n\n# Create augmented graph: add the min weight matching edges to g\ng_aug = add_augmenting_path_to_graph(g, odd_matching)\n\n# Counts\nprint('Number of edges in original graph: {}'.format(len(g.edges())))\nprint('Number of edges in augmented graph: {}'.format(len(g_aug.edges())))",
"CPP Step 3: Compute Eulerian Circuit\now that you have a graph with even degree the hard optimization work is over. As Euler famously postulated in 1736 with the Seven Bridges of Königsberg problem, there exists a path which visits each edge exactly once if all nodes have even degree. Carl Hierholzer fomally proved this result later in the 1870s.\nThere are many Eulerian circuits with the same distance that can be constructed. You can get 90% of the way there with the NetworkX eulerian_circuit function. However there are some limitations.\nNaive Circuit\nNonetheless, let's start with the simple yet incomplete solution:",
"naive_euler_circuit = list(nx.eulerian_circuit(g_aug, source='b_end_east'))\nprint('Length of eulerian circuit: {}'.format(len(naive_euler_circuit)))\n\nnaive_euler_circuit[0:10]\n",
"Correct Circuit\nNow let's define a function that utilizes the original graph to tell you which trails to use to get from node A to node B. Although verbose in code, this logic is actually quite simple. You simply transform the naive circuit which included edges that did not exist in the original graph to a Eulerian circuit using only edges that exist in the original graph.\nYou loop through each edge in the naive Eulerian circuit (naive_euler_circuit). Wherever you encounter an edge that does not exist in the original graph, you replace it with the sequence of edges comprising the shortest path between its nodes using the original graph",
"def create_eulerian_circuit(graph_augmented, graph_original, starting_node=None):\n \"\"\"Create the eulerian path using only edges from the original graph.\"\"\"\n euler_circuit = []\n naive_circuit = list(nx.eulerian_circuit(graph_augmented, source=starting_node))\n\n for edge in naive_circuit:\n edge_data = graph_augmented.get_edge_data(edge[0], edge[1]) \n #print(edge_data[0])\n if edge_data[0]['attr_dict']['trail'] != 'augmented':\n # If `edge` exists in original graph, grab the edge attributes and add to eulerian circuit.\n edge_att = graph_original[edge[0]][edge[1]]\n euler_circuit.append((edge[0], edge[1], edge_att))\n else:\n aug_path = nx.shortest_path(graph_original, edge[0], edge[1], weight='distance')\n aug_path_pairs = list(zip(aug_path[:-1], aug_path[1:]))\n\n print('Filling in edges for augmented edge: {}'.format(edge))\n print('Augmenting path: {}'.format(' => '.join(aug_path)))\n print('Augmenting path pairs: {}\\n'.format(aug_path_pairs))\n\n # If `edge` does not exist in original graph, find the shortest path between its nodes and\n # add the edge attributes for each link in the shortest path.\n for edge_aug in aug_path_pairs:\n edge_aug_att = graph_original[edge_aug[0]][edge_aug[1]]\n euler_circuit.append((edge_aug[0], edge_aug[1], edge_aug_att))\n\n return euler_circuit\n\n# Create the Eulerian circuit\neuler_circuit = create_eulerian_circuit(g_aug, g, 'b_end_east')\n\nprint('Length of Eulerian circuit: {}'.format(len(euler_circuit)))\n\n## CPP Solution\n\n# Preview first 20 directions of CPP solution\nfor i, edge in enumerate(euler_circuit[0:20]):\n print(i, edge)",
"Stats",
"# Computing some stats\ntotal_mileage_of_circuit = sum([edge[2]['attr_dict']['distance'] for edge in euler_circuit])\ntotal_mileage_on_orig_trail_map = sum(nx.get_edge_attributes(g, 'distance').values())\n_vcn = pd.value_counts(pd.value_counts([(e[0]) for e in euler_circuit]), sort=False)\nnode_visits = pd.DataFrame({'n_visits': _vcn.index, 'n_nodes': _vcn.values})\n_vce = pd.value_counts(pd.value_counts([sorted(e)[0] + sorted(e)[1] for e in nx.MultiDiGraph(euler_circuit).edges()]))\nedge_visits = pd.DataFrame({'n_visits': _vce.index, 'n_edges': _vce.values})\n\n# Printing stats\nprint('Mileage of circuit: {0:.2f}'.format(total_mileage_of_circuit))\nprint('Mileage on original trail map: {0:.2f}'.format(total_mileage_on_orig_trail_map))\nprint('Mileage retracing edges: {0:.2f}'.format(total_mileage_of_circuit-total_mileage_on_orig_trail_map))\n#print('Percent of mileage retraced: {0:.2f}%\\n'.format((1-total_mileage_of_circuit/total_mileage_on_orig_trail_map)*-100))\n\nprint('Number of edges in circuit: {}'.format(len(euler_circuit)))\nprint('Number of edges in original graph: {}'.format(len(g.edges())))\nprint('Number of nodes in original graph: {}\\n'.format(len(g.nodes())))\n\nprint('Number of edges traversed more than once: {}\\n'.format(len(euler_circuit)-len(g.edges()))) \n\nprint('Number of times visiting each node:')\nprint(node_visits.to_string(index=False))\n\nprint('\\nNumber of times visiting each edge:')\nprint(edge_visits.to_string(index=False))",
"Create CPP Graph\nYour first step is to convert the list of edges to walk in the Euler circuit into an edge list with plot-friendly attributes.",
"def create_cpp_edgelist(euler_circuit):\n \"\"\"\n Create the edgelist without parallel edge for the visualization\n Combine duplicate edges and keep track of their sequence and # of walks\n Parameters:\n euler_circuit: list[tuple] from create_eulerian_circuit\n \"\"\"\n cpp_edgelist = {}\n\n for i, e in enumerate(euler_circuit):\n edge = frozenset([e[0], e[1]])\n\n if edge in cpp_edgelist:\n cpp_edgelist[edge][2]['sequence'] += ', ' + str(i)\n cpp_edgelist[edge][2]['visits'] += 1\n\n else:\n cpp_edgelist[edge] = e\n cpp_edgelist[edge][2]['sequence'] = str(i)\n cpp_edgelist[edge][2]['visits'] = 1\n\n return list(cpp_edgelist.values())\n\ncpp_edgelist = create_cpp_edgelist(euler_circuit)\nprint('Number of edges in CPP edge list: {}'.format(len(cpp_edgelist)))\n\n\ncpp_edgelist[0:3]\n\n\ng_cpp = nx.Graph(cpp_edgelist)\n\nplt.figure(figsize=(14, 10))\n\nvisit_colors = {1:'lightgray', 2:'blue', 3: 'red', 4 : 'black', 5 : 'green'}\nedge_colors = [visit_colors[e[2]['visits']] for e in g_cpp.edges(data=True)]\nnode_colors = ['red' if node in nodes_odd_degree else 'lightgray' for node in g_cpp.nodes()]\n\nnx.draw_networkx(g_cpp, pos=node_positions, node_size=20, node_color=node_colors, edge_color=edge_colors, with_labels=False)\nplt.axis('off')\nplt.show()\n\nplt.figure(figsize=(14, 10))\n\nedge_colors = [e[2]['attr_dict']['color'] for e in g_cpp.edges(data=True)]\nnx.draw_networkx(g_cpp, pos=node_positions, node_size=10, node_color='black', edge_color=edge_colors, with_labels=False, alpha=0.5)\n\nbbox = {'ec':[1,1,1,0], 'fc':[1,1,1,0]} # hack to label edges over line (rather than breaking up line)\nedge_labels = nx.get_edge_attributes(g_cpp, 'sequence')\nnx.draw_networkx_edge_labels(g_cpp, pos=node_positions, edge_labels=edge_labels, bbox=bbox, font_size=6)\n\nplt.axis('off')\nplt.show()\n\nvisit_colors = {1:'lightgray', 2:'blue', 3: 'red', 4 : 'black', 5 : 'green'}\nedge_cnter = {}\ng_i_edge_colors = []\nfor i, e in enumerate(euler_circuit, start=1):\n\n edge = frozenset([e[0], e[1]])\n if edge in edge_cnter:\n edge_cnter[edge] += 1\n else:\n edge_cnter[edge] = 1\n\n # Full graph (faded in background)\n nx.draw_networkx(g_cpp, pos=node_positions, node_size=6, node_color='gray', with_labels=False, alpha=0.07)\n\n # Edges walked as of iteration i\n euler_circuit_i = copy.deepcopy(euler_circuit[0:i])\n for i in range(len(euler_circuit_i)):\n edge_i = frozenset([euler_circuit_i[i][0], euler_circuit_i[i][1]])\n euler_circuit_i[i][2]['visits_i'] = edge_cnter[edge_i]\n g_i = nx.Graph(euler_circuit_i)\n g_i_edge_colors = [visit_colors[e[2]['visits_i']] for e in g_i.edges(data=True)]\n\n nx.draw_networkx_nodes(g_i, pos=node_positions, node_size=6, alpha=0.6, node_color='lightgray', with_labels=False, linewidths=0.1)\n nx.draw_networkx_edges(g_i, pos=node_positions, edge_color=g_i_edge_colors, alpha=0.8)\n\n plt.axis('off')\n plt.savefig('img{}.png'.format(i), dpi=120, bbox_inches='tight')\n plt.close()\n\nimport glob\nimport numpy as np\nimport imageio\nimport os\n\ndef make_circuit_video(image_path, movie_filename, fps=7):\n # sorting filenames in order\n filenames = glob.glob(image_path + 'img*.png')\n filenames_sort_indices = np.argsort([int(os.path.basename(filename).split('.')[0][3:]) for filename in filenames])\n filenames = [filenames[i] for i in filenames_sort_indices]\n\n # make movie\n with imageio.get_writer(movie_filename, mode='I', fps=fps) as writer:\n for filename in filenames:\n image = imageio.imread(filename)\n writer.append_data(image)\n\nmake_circuit_video('', 
'cpp_route_animation.gif', fps=3)"
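As a quick sanity check on the edge list just built, you can compare the number of steps in the circuit with the number of unique edges and count how many edges are walked more than once. This is a minimal sketch that assumes the `euler_circuit` and `cpp_edgelist` variables from the cells above are still in scope.

```python
# Sanity check on the CPP edge list: each entry is a (u, v, data) tuple
# whose data dict carries the 'visits' counter set by create_cpp_edgelist.
total_steps = len(euler_circuit)                        # every step of the walk
unique_edges = len(cpp_edgelist)                        # parallel walks merged
revisited = sum(1 for e in cpp_edgelist if e[2]['visits'] > 1)

print('Steps in circuit: {}'.format(total_steps))
print('Unique edges: {}'.format(unique_edges))
print('Edges walked more than once: {}'.format(revisited))
```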
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
neuro-ml/reskit
|
tutorials/2. Pipeliner Class Usage.ipynb
|
bsd-3-clause
|
[
"The task is simple: find the best combination of pre-processing steps and predictive models with respect to an objective criterion. Logistically this can be problematic: a small example might involve three classification models, and two data preprocessing steps with two possible variations for each — overall 12 combinations. For each of these combinations we would like to perform a grid search of predefined hyperparameters on a fixed cross-validation dataset, computing performance metrics for each option (for example ROC AUC). Clearly this can become complicated quickly. On the other hand, many of these combinations share substeps, and re-running such shared steps amounts to a loss of compute time.\n1. Defining Pipelines Steps and Grid Search Parameters\nThe researcher specifies the possible processing steps and the scikit objects involved, then Reskit expands these steps to each possible pipeline. Reskit represents these pipelines in a convenient pandas dataframe, so the researcher can directly visualize and manipulate the experiments.",
"from sklearn.preprocessing import StandardScaler\nfrom sklearn.preprocessing import MinMaxScaler\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.svm import SVC\n\nfrom sklearn.feature_selection import VarianceThreshold\nfrom sklearn.decomposition import PCA\n\nfrom sklearn.model_selection import StratifiedKFold\n\nfrom reskit.core import Pipeliner\n\n\n# Feature selection and feature extraction step variants (1st step)\nfeature_engineering = [('VT', VarianceThreshold()),\n ('PCA', PCA())]\n\n# Preprocessing step variants (2nd step)\nscalers = [('standard', StandardScaler()),\n ('minmax', MinMaxScaler())]\n\n# Models (3rd step)\nclassifiers = [('LR', LogisticRegression()),\n ('SVC', SVC()),\n ('SGD', SGDClassifier())]\n\n# Reskit needs to define steps in this manner\nsteps = [('feature_engineering', feature_engineering),\n ('scaler', scalers),\n ('classifier', classifiers)]\n\n# Grid search parameters for our models\nparam_grid = {'LR': {'penalty': ['l1', 'l2']},\n 'SVC': {'kernel': ['linear', 'poly', 'rbf', 'sigmoid']},\n 'SGD': {'penalty': ['elasticnet'],\n 'l1_ratio': [0.1, 0.2, 0.3]}}\n\n# Quality metric that we want to optimize\nscoring='roc_auc'\n\n# Setting cross-validations\ngrid_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)\neval_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)\n\npipe = Pipeliner(steps, grid_cv=grid_cv, eval_cv=eval_cv, param_grid=param_grid)\npipe.plan_table",
"2. Forbidden combinations\nIn case you don't want to use minmax scaler with SVC, you can define banned combo:",
"banned_combos = [('minmax', 'SVC')]\npipe = Pipeliner(steps, grid_cv=grid_cv, eval_cv=eval_cv, param_grid=param_grid, banned_combos=banned_combos)\npipe.plan_table",
"3. Launching Experiment\nReskit then runs each experiment and presents results which are provided to the user through a pandas dataframe. For each pipeline’s classifier, Reskit grid search on cross-validation to find the best classifier’s parameters and report metric mean and standard deviation for each tested pipeline (ROC AUC in this case).",
"from sklearn.datasets import make_classification\n\n\nX, y = make_classification()\npipe.get_results(X, y, scoring=['roc_auc'])",
"4. Caching intermediate steps\nReskit also allows you to cache interim calculations to avoid unnecessary recalculations.",
"from sklearn.preprocessing import Binarizer\n\n# Simple binarization step that we want ot cache\nbinarizer = [('binarizer', Binarizer())]\n\n# Reskit needs to define steps in this manner\nsteps = [('binarizer', binarizer),\n ('classifier', classifiers)]\n\npipe = Pipeliner(steps, grid_cv=grid_cv, eval_cv=eval_cv, param_grid=param_grid)\npipe.plan_table\n\npipe.get_results(X, y, caching_steps=['binarizer'])",
"Last cached calculations stored in _cached_X",
"pipe._cached_X"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/automl
|
efficientnetv2/tutorial.ipynb
|
apache-2.0
|
[
"EfficientNetV2 Tutorial: inference, eval, and training\n<table align=\"left\"><td>\n <a target=\"_blank\" href=\"https://github.com/google/automl/blob/master/efficientnetv2/tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on github\n </a>\n</td><td>\n <a target=\"_blank\" href=\"https://colab.sandbox.google.com/github/google/automl/blob/master/efficientnetv2/tutorial.ipynb\">\n <img width=32px src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n</td></table>\n\n0. Install and view graph.\n0.1 Install package and download source code/image.",
"%%capture\n#@title\n!pip install tensorflow_addons\n\nimport os\nimport sys\nimport tensorflow.compat.v1 as tf\n\n# Download source code.\nif \"efficientnetv2\" not in os.getcwd():\n !git clone --depth 1 https://github.com/google/automl\n os.chdir('automl/efficientnetv2')\n sys.path.append('.')\nelse:\n !git pull\n\ndef download(m):\n if m not in os.listdir():\n !wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/v2/{m}.tgz\n !tar zxf {m}.tgz\n ckpt_path = os.path.join(os.getcwd(), m)\n return ckpt_path",
"0.2 View graph in TensorBoard",
"MODEL = 'efficientnetv2-b0' #@param\nimport effnetv2_model\n\nwith tf.compat.v1.Graph().as_default():\n model = effnetv2_model.EffNetV2Model(model_name=MODEL)\n _ = model(tf.ones([1, 224, 224, 3]), training=False)\n tf.io.gfile.mkdir('tb')\n train_writer = tf.summary.FileWriter('tb')\n train_writer.add_graph(tf.get_default_graph())\n train_writer.flush()\n\n%load_ext tensorboard\n%tensorboard --logdir tb",
"1. inference",
"MODEL = 'efficientnetv2-b0' #@param\n\n# Download checkpoint.\nckpt_path = download(MODEL)\nif tf.io.gfile.isdir(ckpt_path):\n ckpt_path = tf.train.latest_checkpoint(ckpt_path)\n\n# Download label map file\n!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/eval_data/labels_map.txt -O labels_map.txt\nlabels_map = 'labels_map.txt'\n\n# Download images\nimage_file = 'panda.jpg'\n!wget https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG -O {image_file}\n\n# Build model\ntf.keras.backend.clear_session()\nmodel = effnetv2_model.EffNetV2Model(model_name=MODEL)\n_ = model(tf.ones([1, 224, 224, 3]), training=False)\nmodel.load_weights(ckpt_path)\ncfg = model.cfg\n\n# Run inference for a given image\nimport preprocessing\nimage = tf.io.read_file(image_file)\nimage = preprocessing.preprocess_image(\n image, cfg.eval.isize, is_training=False, augname=cfg.data.augname)\nlogits = model(tf.expand_dims(image, 0), False)\n\n# Output classes and probability\npred = tf.keras.layers.Softmax()(logits)\nidx = tf.argsort(logits[0])[::-1][:5].numpy()\nimport ast\nclasses = ast.literal_eval(open(labels_map, \"r\").read())\nfor i, id in enumerate(idx):\n print(f'top {i+1} ({pred[0][id]*100:.1f}%): {classes[id]} ')\nfrom IPython import display\ndisplay.display(display.Image(image_file))",
"2. Finetune EfficientNetV2 on CIFAR10.",
"!python main_tf2.py --mode=traineval --model_name=efficientnetv2-b0 --dataset_cfg=cifar10Ft --model_dir={MODEL}_finetune --hparam_str=\"train.ft_init_ckpt={MODEL},runtime.strategy=gpus,train.batch_size=64\"\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arokem/seaborn
|
doc/docstrings/FacetGrid.ipynb
|
bsd-3-clause
|
[
"import seaborn as sns\nsns.set_theme(style=\"ticks\")",
"Calling the constructor requires a long-form data object. This initializes the grid, but doesn't plot anything on it:",
"tips = sns.load_dataset(\"tips\")\nsns.FacetGrid(tips)\n\nsns.FacetGrid(tips, col=\"time\", row=\"sex\")\n\ng = sns.FacetGrid(tips, col=\"time\", row=\"sex\")\ng.map(sns.scatterplot, \"total_bill\", \"tip\")\n\ng = sns.FacetGrid(tips, col=\"time\", row=\"sex\")\ng.map_dataframe(sns.histplot, x=\"total_bill\")\n\ng = sns.FacetGrid(tips, col=\"time\", row=\"sex\")\ng.map_dataframe(sns.histplot, x=\"total_bill\", binwidth=2, binrange=(0, 60))\n\ng = sns.FacetGrid(tips, col=\"time\", hue=\"sex\")\ng.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\ng.add_legend()\n\ng = sns.FacetGrid(tips, col=\"time\")\ng.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\", hue=\"sex\")\ng.add_legend()\n\ng = sns.FacetGrid(tips, col=\"day\", height=3.5, aspect=.65)\ng.map(sns.histplot, \"total_bill\")\n\ng = sns.FacetGrid(tips, col=\"size\", height=2.5, col_wrap=3)\ng.map(sns.histplot, \"total_bill\")",
"You can pass custom functions to plot with, or to annotate each facet. Your custom function must use the matplotlib state-machine interface to plot on the \"current\" axes, and it should catch additional keyword arguments:",
"import matplotlib.pyplot as plt\ndef annotate(data, **kws):\n n = len(data)\n ax = plt.gca()\n ax.text(.1, .6, f\"N = {n}\", transform=ax.transAxes)\n\ng = sns.FacetGrid(tips, col=\"time\")\ng.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\ng.map_dataframe(annotate)\n\ng = sns.FacetGrid(tips, col=\"sex\", row=\"time\", margin_titles=True)\ng.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\ng.set_axis_labels(\"Total bill ($)\", \"Tip ($)\")\ng.set_titles(col_template=\"{col_name} patrons\", row_template=\"{row_name}\")\ng.set(xlim=(0, 60), ylim=(0, 12), xticks=[10, 30, 50], yticks=[2, 6, 10])\ng.tight_layout()\ng.savefig(\"facet_plot.png\")\n\nimport os\nif os.path.exists(\"facet_plot.png\"):\n os.remove(\"facet_plot.png\")\n\ng = sns.FacetGrid(tips, col=\"sex\", row=\"time\", margin_titles=True, despine=False)\ng.map_dataframe(sns.scatterplot, x=\"total_bill\", y=\"tip\")\ng.fig.subplots_adjust(wspace=0, hspace=0)\nfor (row_val, col_val), ax in g.axes_dict.items():\n if row_val == \"Lunch\" and col_val == \"Female\":\n ax.set_facecolor(\".95\")\n else:\n ax.set_facecolor((0, 0, 0, 0))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
kayzhou22/DSBiz_Project_LendingClub
|
Data_Preprocessing/Collaboration-appLoan_DataProcessing.ipynb
|
mit
|
[
"Lending Club Default Rate Analysis",
"import pandas as pd\nimport numpy as np\n\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.model_selection import cross_val_score\n\nfrom sklearn.feature_selection import RFE\n\nfrom sklearn.svm import SVR\nfrom sklearn.svm import LinearSVC\nfrom sklearn.svm import LinearSVR\n\nimport seaborn as sns\nimport matplotlib.pylab as pl\n%matplotlib inline",
"Columns Interested\nloan_status -- Current status of the loan<br/>\nloan_amnt -- The listed amount of the loan applied for by the borrower. If at some point in time, the credit department reduces the loan amount, then it will be reflected in this value.<br/>\nint_rate -- interest rate of the loan <br/>\nsub_grade -- LC assigned sub loan grade -- dummie (grade -- LC assigned loan grade<br/>-- dummie)<br/> \npurpose -- A category provided by the borrower for the loan request. -- dummie<br/> \nannual_inc -- The self-reported annual income provided by the borrower during registration.<br/>\nemp_length -- Employment length in years. Possible values are between 0 and 10 where 0 means less than one year and 10 means ten or more years. -- dummie<br/> \nfico_range_low<br/>\nfico_range_high\nhome_ownership -- The home ownership status provided by the borrower during registration or obtained from the credit report. Our values are: RENT, OWN, MORTGAGE, OTHER -- dummie<br/>\ntot_cur_bal -- Total current balance of all accounts \nnum_actv_bc_tl -- number of active bank accounts (avg_cur_bal -- average current balance of all accounts )<br/>\nmort_acc -- number of mortgage accounts<br/>\nnum_actv_rev_tl -- Number of currently active revolving trades<br/>\ndti -- A ratio calculated using the borrower’s total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower’s self-reported monthly income. \npub_rec_bankruptcies - Number of public record bankruptcies<br/>\n2015 Lending Club Data\n1. Approved Loans",
"df_app_2015 = pd.read_csv('LoanStats3d_securev1.csv.zip', compression='zip',header=1, skiprows=[-2,-1],low_memory=False)\n\ndf_app_2015.head(3)\n\n# Pre-select columns\ndf = df_app_2015.ix[:, ['loan_status','loan_amnt', 'int_rate', 'sub_grade',\\\n 'purpose',\\\n 'annual_inc', 'emp_length', 'home_ownership',\\\n 'fico_range_low','fico_range_high',\\\n 'num_actv_bc_tl', 'tot_cur_bal', 'mort_acc','num_actv_rev_tl',\\\n 'pub_rec_bankruptcies','dti' ]]",
"1. Data Understanding -- Selected Decriptive Analysis",
"## in Nehal and Kay's notebooks",
"2. Data Munging\nFunctions that performs data mining tasks\n1a. Create column “default” using “loan_status”\nValentin (edited by Kay)",
"df_app_2015.tail(3)\n\ndf.head(3)\n\ndf.loan_status.unique()\n\ndf = df.dropna()\n\nlen(df)\n\n#df.loan_status.fillna('none', inplace=True) ## there is no nan\n\ndf.loan_status.unique()\n\ndefaulters=['Default','Charged Off', 'Late (31-120 days)']\nnon_defaulters=['Fully Paid']\nuncertain = ['Current','Late (16-30 days)','In Grace Period', 'none']\n\nlen(df[df.loan_status.isin(uncertain)].loan_status)\n\ndf.info()\n\n## select instances of defaulters and non_defulters\ndf2 = df.copy()\n\ndf2['Target']= 2 ## uncertain\ndf2.loc[df2.loan_status.isin(defaulters),'Target'] = 0 ## defaulters\ndf2.loc[df2.loan_status.isin(non_defaulters),'Target'] = 1 ## paid -- (and to whom to issue the loan)\n\nprint('Value in Target value for non defaulters')\nprint(df2.loc[df2.loan_status.isin(non_defaulters)].Target.unique())\nprint(len(df2[df2['Target'] == 1]))\n\nprint('Value in Target value for defaulters')\nprint(df2.loc[df2.loan_status.isin(defaulters)].Target.unique())\nprint(len(df2[df2['Target'] == 0]))\n\nprint('Value in Target value for uncertained-- unlabeled ones to predict')\nprint(df2.loc[df2.loan_status.isin(uncertain)].Target.unique())\nprint(len(df2[df2['Target'] == 2]))\n\n42302/94968",
"2a. Convert data type on certain columns and create dummies\nNehal",
"# function to create dummies\ndef create_dummies(column_name,df):\n temp=pd.get_dummies(df[column_name],prefix=column_name)\n df=pd.concat([df,temp],axis=1)\n return df\n\ndummy_list=['emp_length','home_ownership','purpose','sub_grade']\nfor col in dummy_list:\n df2=create_dummies(col,df2)\nfor col in dummy_list:\n df2=df2.drop(col,1)\n\ntemp=df2['int_rate'].astype(str).str.replace('%', '').replace(' ','').astype(float)\ndf2=df2.drop('int_rate',1)\ndf2=pd.concat([df2,temp],axis=1)\ndf2=df2.drop('loan_status',1)\n\nfor col in df2.columns:\n print((df2[col].dtype))",
"3a. Check and remove outliers (methods: MAD)",
"df2.shape\n\ndf2['loan_amnt'][sorted(np.random.randint(0, high=10, size=5))]\n\n# Reference: \n# http://stackoverflow.com/questions/22354094/pythonic-way-of-detecting-outliers-in-one-dimensional-observation-data\n\ndef main(df, col, thres):\n outliers_all = []\n ind = sorted(np.random.randint(0, high=len(df), size=5000)) # randomly pick instances from the dataframe\n #select data from our dataframe\n x = df[col][ind]\n num = len(ind)\n outliers = plot(x, col, num, thres) # append all the outliers in the list\n pl.show()\n return outliers\n\ndef mad_based_outlier(points, thresh):\n if len(points.shape) == 1:\n points = points[:,None]\n median = np.median(points, axis=0)\n diff = np.sum((points - median)**2, axis=-1)\n diff = np.sqrt(diff)\n med_abs_deviation = np.median(diff)\n\n modified_z_score = 0.6745 * diff / med_abs_deviation\n\n return modified_z_score > thresh\n\ndef plot(x, col, num, thres):\n fig, ax = pl.subplots(nrows=1, figsize=(10, 3))\n sns.distplot(x, ax=ax, rug=True, hist=False)\n outliers = np.asarray(x[mad_based_outlier(x, thres)])\n ax.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)\n\n fig.suptitle('MAD-based Outlier Tests with selected {} values'.format(col, num, size=20))\n return outliers\n\n### Find outliers \n## \nboundries = []\noutliers_loan = main(df2, 'loan_amnt', thres=2.2)\n\nboundries.append(outliers_loan.min())\n\n## annual income\noutliers_inc = main(df2, 'annual_inc', 8)\n\nboundries.append(outliers_inc.min())\n\n## For total current balance of bank accounts\noutliers_bal = main(df2, 'tot_cur_bal', 8)\n\nboundries.append(outliers_bal.min())\n\ncolumns = ['loan_amnt', 'annual_inc', 'tot_cur_bal']\n\nfor col, bound in zip(columns, boundries):\n print ('Lower bound of detected Outliers for {}: {}'.format(col, bound))\n # Use the outlier boundry to \"regularize\" the dataframe\n df2_r = df2[df2[col] <= bound]",
"4a. Remove or replace missing values of certain columns",
"# df2_r.info()\n\ndf2_r.shape\n\n#### Fill NaN with \"none\"??? ####\n#df_filled = df2.fillna(value='none')\n#df_filled.head(3)\ndf2_r = df2_r.dropna()\nprint (len(df2_r))",
"6. Save the cleaned data",
"# df2_r.to_csv('approved_loan_2015_clean.csv')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
juanshishido/okcupid
|
main.ipynb
|
mit
|
[
"This notebook sets up the workflow for the various functions we have implemented. It shows an example of how we clustered using Nonnegative Matrix Factorization. We manually inspect the output of NMF to determine the best number of clusters for each group",
"import pickle\nimport warnings\n\nfrom utils.hash import make\nfrom utils.calculate_pmi_features import *\nfrom utils.clean_up import *\nfrom utils.categorize_demographics import *\nfrom utils.reduce_dimensions import run_kmeans\nfrom utils.nonnegative_matrix_factorization import nmf_inspect, nmf_labels\nwarnings.filterwarnings('ignore')",
"Getting the data, cleaning it, and categorizing demographic data",
"df = get_data()\n\nessay_list = ['essay0','essay4','essay5']\ndf_clean = clean_up(df, essay_list)\n\n\ndf_clean.fillna('', inplace=True)\n\ndf.columns.values\n\ndf_clean['religion'] = df_clean['religion'].apply(religion_categories)\ndf_clean['job'] = df_clean['job'].apply(job_categories)\ndf_clean['drugs'] = df_clean['drugs'].apply(drug_categories)\ndf_clean['diet'] = df_clean['diet'].apply(diet_categories)\ndf_clean['body_type'] = df_clean['body_type'].apply(body_categories)\ndf_clean['drinks'] = df_clean['drinks'].apply(drink_categories)\ndf_clean['sign'] = df_clean['sign'].apply(sign_categories)\ndf_clean['ethnicity'] = df_clean['ethnicity'].apply(ethnicity_categories)\ndf_clean['pets'] = df_clean['pets'].apply(pets_categories)\ndf_clean['speaks'] = df_clean['speaks'].apply(language_categories)",
"Splitting the dataframe by gender, running clustering separately on each.",
"df_male = df_clean[df_clean['sex'] == 'm']\n\ndf_female = df_clean[df_clean['sex'] == 'f']\n\ncount_matrix_m, tfidf_matrix_m, vocab_m = col_to_data_matrix(df_male, 'essay0') #save out\n\ncount_matrix_f, tfidf_matrix_f, vocab_f = col_to_data_matrix(df_female, 'essay0')\n\nvocab_m\n\nnmf_inspect(tfidf_matrix_m, vocab_m)\n\nnmf_inspect(tfidf_matrix_f, vocab_f)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ja/tutorials/generative/cvae.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"畳み込み変分オートエンコーダ\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/generative/cvae\"> <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"> TensorFlow.org で表示</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/cvae.ipynb\"> <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"> Google Colab で実行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/cvae.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/generative/cvae.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>\n\nこのノートブックでは、MNIST データセットで変分オートエンコーダ(VAE)(1、2)のトレーニング方法を実演します。VAE はオートエンコードの確率論的見解で、高次元入力データをより小さな表現に圧縮するモデルです。入力を潜在ベクトルにマッピングする従来のオートエンコーダとは異なり、VAE は入力をガウスの平均や分散といった確率分布のパラメータにマッピングします。このアプローチによって、画像生成に役立つ構造化された連続的な潜在空間が生成されます。\n\nMNIST モデルをビルドする",
"!pip install tensorflow-probability\n\n# to generate gifs\n!pip install imageio\n!pip install git+https://github.com/tensorflow/docs\n\nfrom IPython import display\n\nimport glob\nimport imageio\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport PIL\nimport tensorflow as tf\nimport tensorflow_probability as tfp\nimport time",
"MNIST データセットを読み込む\nそれぞれの MNIST 画像は、もともと 784 個の整数値から成るベクトルで、各整数値は、ピクセルの強度を表す 0~255 の値です。各ピクセルをベルヌーイ分布を使ってモデル化し、データセットを統計的にバイナリ化します。",
"(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()\n\ndef preprocess_images(images):\n images = images.reshape((images.shape[0], 28, 28, 1)) / 255.\n return np.where(images > .5, 1.0, 0.0).astype('float32')\n\ntrain_images = preprocess_images(train_images)\ntest_images = preprocess_images(test_images)\n\ntrain_size = 60000\nbatch_size = 32\ntest_size = 10000",
"tf.data を使用して、データをバッチ化・シャッフルする",
"train_dataset = (tf.data.Dataset.from_tensor_slices(train_images)\n .shuffle(train_size).batch(batch_size))\ntest_dataset = (tf.data.Dataset.from_tensor_slices(test_images)\n .shuffle(test_size).batch(batch_size))",
"tf.keras.Sequential を使ってエンコーダとデコーダネットワークを定義する\nこの VAE の例では、エンコーダとデコーダのネットワークに 2 つの小さな ConvNets を使用しています。文献では、これらのネットワークはそれぞれ、推論/認識モデルおよび生成モデルとも呼ばれています。実装を簡略化するために tf.keras.Sequential を使用しています。以降の説明では、観測と潜在変数をそれぞれ $x$ と $z$ で表記しています。\nエンコーダネットワーク\nこれは、おおよその事後分布 $q(z|x)$ を定義します。これは、観測を入力として受け取り、潜在表現 $z$ の条件付き分布を指定するための一連のパラメータを出力します。この例では、単純に対角ガウスとして分布をモデル化するため、ネットワークは、素因数分解されたガウスの平均と対数分散を出力します。数値的な安定性を得るために、直接分散を出力する代わりに対数分散を出力します。\nデコーダネットワーク\nこれは、観測 $p(x|z)$ の条件付き分布を定義します。これは入力として潜在サンプル $z$ を取り、観測の条件付き分布のパラメータを出力します。$p(z)$ 前の潜在分布を単位ガウスとしてモデル化します。\nパラメータ再設定のコツ\nトレーニング中にデコーダのサンプル $z$ を生成するには、エンコーダが出力したパラメータによって定義される潜在分布から入力観測 $x$ を指定してサンプリングできます。ただし、このサンプリング演算では、バックプロパゲーションがランダムなノードを通過できないため、ボトルネックが発生します。\nこれを解消するために、パラメータ再設定のコツを使用します。この例では、次のように、デコーダパラメータともう 1 つの $\\epsilon$ を使用して $z$ を近似化します。\n$$z = \\mu + \\sigma \\odot \\epsilon$$\n上記の $\\mu$ と $\\sigma$ は、それぞれガウス分布の平均と標準偏差を表します。これらは、デコーダ出力から得ることができます。$\\epsilon$ は、$z$ の偶然性を維持するためのランダムノイズとして考えることができます。$\\epsilon$ は標準正規分布から生成します。\n潜在変数 $z$ は、$\\mu$、$\\sigma$、および $\\epsilon$ の関数によって生成されるようになりました。これらによって、モデルがそれぞれ $\\mu$ と $\\sigma$ を介してエンコーダの勾配をバックプロパゲートしながら、$\\epsilon$ を介して偶然性を維持できるようになります。\nネットワークアーキテクチャ\nエンコーダネットワークの場合、2 つの畳み込みレイヤーを使用し、その後に全結合レイヤーが続きます。デコーダネットワークでは、3 つの畳み込み転置レイヤー(一部の文脈ではデコンボリューションレイヤーとも呼ばれる)が続く全結合レイヤーを使用することで、このアーキテクチャをミラーリングします。VAE をトレーニングする場合にはバッチの正規化を回避するのが一般的であることに注意してください。これは、ミニバッチを使用することで追加される偶然性によって、サンプリングの偶然性にさらに不安定性を加える可能性があるためです。",
"class CVAE(tf.keras.Model):\n \"\"\"Convolutional variational autoencoder.\"\"\"\n\n def __init__(self, latent_dim):\n super(CVAE, self).__init__()\n self.latent_dim = latent_dim\n self.encoder = tf.keras.Sequential(\n [\n tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),\n tf.keras.layers.Conv2D(\n filters=32, kernel_size=3, strides=(2, 2), activation='relu'),\n tf.keras.layers.Conv2D(\n filters=64, kernel_size=3, strides=(2, 2), activation='relu'),\n tf.keras.layers.Flatten(),\n # No activation\n tf.keras.layers.Dense(latent_dim + latent_dim),\n ]\n )\n\n self.decoder = tf.keras.Sequential(\n [\n tf.keras.layers.InputLayer(input_shape=(latent_dim,)),\n tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),\n tf.keras.layers.Reshape(target_shape=(7, 7, 32)),\n tf.keras.layers.Conv2DTranspose(\n filters=64, kernel_size=3, strides=2, padding='same',\n activation='relu'),\n tf.keras.layers.Conv2DTranspose(\n filters=32, kernel_size=3, strides=2, padding='same',\n activation='relu'),\n # No activation\n tf.keras.layers.Conv2DTranspose(\n filters=1, kernel_size=3, strides=1, padding='same'),\n ]\n )\n\n @tf.function\n def sample(self, eps=None):\n if eps is None:\n eps = tf.random.normal(shape=(100, self.latent_dim))\n return self.decode(eps, apply_sigmoid=True)\n\n def encode(self, x):\n mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)\n return mean, logvar\n\n def reparameterize(self, mean, logvar):\n eps = tf.random.normal(shape=mean.shape)\n return eps * tf.exp(logvar * .5) + mean\n\n def decode(self, z, apply_sigmoid=False):\n logits = self.decoder(z)\n if apply_sigmoid:\n probs = tf.sigmoid(logits)\n return probs\n return logits",
"損失関数とオプティマイザを定義する\nVAE は、限界対数尤度の証拠下限(ELBO)を最大化することでトレーニングします。\n$$\\log p(x) \\ge \\text{ELBO} = \\mathbb{E}_{q(z|x)}\\left[\\log \\frac{p(x, z)}{q(z|x)}\\right].$$\n実際には、この期待値の単一サンプルのモンテカルロ推定を最適化します。\n$$\\log p(x| z) + \\log p(z) - \\log q(z|x),$$ とし、$z$ は $q(z|x)$ からサンプリングされます。\n注意: KL 項を分析的に計算することもできますが、ここでは簡単にするために、3 つの項すべてをモンテカルロ Estimator に組み込んでいます。",
"optimizer = tf.keras.optimizers.Adam(1e-4)\n\n\ndef log_normal_pdf(sample, mean, logvar, raxis=1):\n log2pi = tf.math.log(2. * np.pi)\n return tf.reduce_sum(\n -.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),\n axis=raxis)\n\n\ndef compute_loss(model, x):\n mean, logvar = model.encode(x)\n z = model.reparameterize(mean, logvar)\n x_logit = model.decode(z)\n cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)\n logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])\n logpz = log_normal_pdf(z, 0., 0.)\n logqz_x = log_normal_pdf(z, mean, logvar)\n return -tf.reduce_mean(logpx_z + logpz - logqz_x)\n\n\n@tf.function\ndef train_step(model, x, optimizer):\n \"\"\"Executes one training step and returns the loss.\n\n This function computes the loss and gradients, and uses the latter to\n update the model's parameters.\n \"\"\"\n with tf.GradientTape() as tape:\n loss = compute_loss(model, x)\n gradients = tape.gradient(loss, model.trainable_variables)\n optimizer.apply_gradients(zip(gradients, model.trainable_variables))",
"トレーニング\n\nデータセットのイテレーションから始めます。\n各イテレーションで、画像をエンコーダに渡して、おおよその事後分布 $q(z|x)$ の一連の平均値と対数分散パラメータを取得します。\n次に、$q(z|x)$ から得たサンプルにパラメータ再設定のコツを適用します。\n最後に、パラメータを再設定したサンプルをデコーダに渡して、生成分布 $p(x|z)$ のロジットを取得します。\n注意: トレーニングセットに 60k のデータポイントとテストセットに 10k のデータポイント持つ Keras で読み込んだデータセットを使用しているため、テストセットの ELBO は、Larochelle の MNIST の動的なバイナリ化を使用する文献で報告された結果よりもわずかに高くなります。\n\n画像の生成\n\nトレーニングの後は、画像をいくつか生成します。\n分布 $p(z)$ 前の単位ガウスから一連の潜在ベクトルをサンプリングすることから始めます。\nすると、ジェネレータはその潜在サンプル $z$ を観測のロジットに変換し、分布 $p(x|z)$ が得られます。\nここで、ベルヌーイ分布の確率を図に作成します。",
"epochs = 10\n# set the dimensionality of the latent space to a plane for visualization later\nlatent_dim = 2\nnum_examples_to_generate = 16\n\n# keeping the random vector constant for generation (prediction) so\n# it will be easier to see the improvement.\nrandom_vector_for_generation = tf.random.normal(\n shape=[num_examples_to_generate, latent_dim])\nmodel = CVAE(latent_dim)\n\ndef generate_and_save_images(model, epoch, test_sample):\n mean, logvar = model.encode(test_sample)\n z = model.reparameterize(mean, logvar)\n predictions = model.sample(z)\n fig = plt.figure(figsize=(4, 4))\n\n for i in range(predictions.shape[0]):\n plt.subplot(4, 4, i + 1)\n plt.imshow(predictions[i, :, :, 0], cmap='gray')\n plt.axis('off')\n\n # tight_layout minimizes the overlap between 2 sub-plots\n plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))\n plt.show()\n\n# Pick a sample of the test set for generating output images\nassert batch_size >= num_examples_to_generate\nfor test_batch in test_dataset.take(1):\n test_sample = test_batch[0:num_examples_to_generate, :, :, :]\n\ngenerate_and_save_images(model, 0, test_sample)\n\nfor epoch in range(1, epochs + 1):\n start_time = time.time()\n for train_x in train_dataset:\n train_step(model, train_x, optimizer)\n end_time = time.time()\n\n loss = tf.keras.metrics.Mean()\n for test_x in test_dataset:\n loss(compute_loss(model, test_x))\n elbo = -loss.result()\n display.clear_output(wait=False)\n print('Epoch: {}, Test set ELBO: {}, time elapse for current epoch: {}'\n .format(epoch, elbo, end_time - start_time))\n generate_and_save_images(model, epoch, test_sample)",
"最後のトレーニングエポックから生成された画像を表示する",
"def display_image(epoch_no):\n return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))\n\nplt.imshow(display_image(epoch))\nplt.axis('off') # Display images",
"保存したすべての画像のアニメーション GIF を表示する",
"anim_file = 'cvae.gif'\n\nwith imageio.get_writer(anim_file, mode='I') as writer:\n filenames = glob.glob('image*.png')\n filenames = sorted(filenames)\n for filename in filenames:\n image = imageio.imread(filename)\n writer.append_data(image)\n image = imageio.imread(filename)\n writer.append_data(image)\n\nimport tensorflow_docs.vis.embed as embed\nembed.embed_file(anim_file)",
"潜在空間から数字の 2D 多様体を表示する\n次のコードを実行すると、各数字が 2D 潜在空間で別の数字に変化する、さまざまな数字クラスの連続分布が表示されます。潜在空間の標準正規分布の生成には、TensorFlow Probability を使用します。",
"def plot_latent_images(model, n, digit_size=28):\n \"\"\"Plots n x n digit images decoded from the latent space.\"\"\"\n\n norm = tfp.distributions.Normal(0, 1)\n grid_x = norm.quantile(np.linspace(0.05, 0.95, n))\n grid_y = norm.quantile(np.linspace(0.05, 0.95, n))\n image_width = digit_size*n\n image_height = image_width\n image = np.zeros((image_height, image_width))\n\n for i, yi in enumerate(grid_x):\n for j, xi in enumerate(grid_y):\n z = np.array([[xi, yi]])\n x_decoded = model.sample(z)\n digit = tf.reshape(x_decoded[0], (digit_size, digit_size))\n image[i * digit_size: (i + 1) * digit_size,\n j * digit_size: (j + 1) * digit_size] = digit.numpy()\n\n plt.figure(figsize=(10, 10))\n plt.imshow(image, cmap='Greys_r')\n plt.axis('Off')\n plt.show()\n\nplot_latent_images(model, 20)",
"次のステップ\nこのチュートリアルでは、TensorFlow を使用して畳み込み変分オートエンコーダを実装する方法を実演しました。\n次のステップとして、ネットワークサイズを増加し、モデル出力の改善を試みることができます。たとえば、Conv2D と Conv2DTranspose の各レイヤーの filter パラメータを 512 に設定することができます。最終的な 2D 潜在画像プロットを生成するには、latent_dim を 2 に維持することが必要であることに注意してください。また、ネットワークサイズが高まると、トレーニング時間も増大します。\nまた、CIFAR-10 などのほかのデータセットを使って VAE を実装してみるのもよいでしょう。\nVAE は、さまざまなスタイルや複雑さの異なるスタイルで実装することができます。その他の実装については、次のリンクをご覧ください。\n\n変分オートエンコーダ (keras.io)\n「カスタムレイヤーとモデルを記述する」ガイドの VAE サンプル(tensorflow.org)\nTFP 確率レイヤー: 変分オートエンコーダ\n\nVAE をさらに詳しく学習するには、「An Introduction to Variational Autoencoders」をご覧ください。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/snu/cmip6/models/sam0-unicon/aerosol.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: SNU\nSource ID: SAM0-UNICON\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:38\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'snu', 'sam0-unicon', 'aerosol')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Meteorological Forcings\n5. Key Properties --> Resolution\n6. Key Properties --> Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --> Absorption\n12. Optical Radiative Properties --> Mixtures\n13. Optical Radiative Properties --> Impact Of H2o\n14. Optical Radiative Properties --> Radiative Scheme\n15. Optical Radiative Properties --> Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of aerosol model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrognostic variables in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of tracers in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre aerosol calculations generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the aerosol model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Variables 2D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Frequency\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of transport in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for aerosol transport modeling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n",
"7.3. Mass Conservation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to ensure mass conservation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.4. Convention\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTransport by convention",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prescribed Climatology\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nSpecify the climatology type for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n",
"8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Other Method Characteristics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCharacteristics of the "other method" used for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as mass mixing ratios.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of optical and radiative properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Optical Radiative Properties --> Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.2. Dust\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Organics\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12. Optical Radiative Properties --> Mixtures\n**\n12.1. External\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there external mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Internal\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.3. Mixing Rule\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Optical Radiative Properties --> Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact size?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.2. Internal Mixture\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact internal mixture?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Optical Radiative Properties --> Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Shortwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of shortwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Optical Radiative Properties --> Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol-cloud interactions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Twomey\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the Twomey effect included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.3. Twomey Minimum Ccn\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Drizzle\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect drizzle?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.5. Cloud Lifetime\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect cloud lifetime?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the Aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n",
"16.3. Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther model components coupled to the Aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.4. Gas Phase Precursors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of gas phase aerosol precursors.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.5. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.6. Bulk Scheme Species\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of species covered by the bulk scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
marcinofulus/ProgramowanieRownolegle
|
CUDA/iCSE_PR_SDE.ipynb
|
gpl-3.0
|
[
"Rozwiązywanie stochastycznych równań różniczkowych z CUDA\nRównania stochastyczne są niezwykle pożytecznym narzędziem w modelowaniu zarówno procesów fizycznych, biolgicznych czy chemicznych a nawet ekonomicznych (wycena instrumentów pochodnych).\nKlasycznym przykładem problemu z jakim się spotykami przy numerycznym rozwiązywaniu równań stochastycznych jest konieczność uśrednienia po wielu niezależnych od siebie realizacjach procesu losowego. Mówiąc wprost musimy rozwiązać numerycznie wiele razy to samo równanie różniczkowe, za każdym razem zmieniając \"seed\" generatora liczb losowych. Jest to idealny problem dla urządzenia GPU, gdzie generacja niezależnych trajektorii wielu kopii tego samego układu jest w stanie wykorzystać maksymalnie jego możliwości obliczeniowe.\nPoniżej przedstawiamy implementację algorytmu, wg. pierwszego przykładu z pracy: http://arxiv.org/abs/0903.3852 \nRóźnicą będzie skorzystanie z pycuda, zamiast C. Co ciekawe, taka modyfikacja jest w stanie przyśpieszyć kernel obliczniowe o ok 25%. Spowodowane jest to zastosowaniem metoprogramowania. Pewne parametry, które nie zmieniają się podczas wykonywania kodu są \"klejane\" do źródła jako konkrente wartości liczbowe, co ułatwia kompilatorowi nvcc optymalizacje.\nW tym przykładzie wykorzystamy własny generator liczb losowych i transformację Boxa-Mullera (zamiast np. curand). \nPrzykład ten może być z łatwością zmodyfikowany na dowolny układ SDE, dlatego można do traktować jako szablon dla własnych zagadnień.\nStruktura programu\nSzablony\nNiezwykle pomocne w programowaniu w pycuda jest zastosowanie metaprogramowania - to jest - piszemy program piszący nasz kernel. Tutaj mamy najprostszy wariant, po prostu pewne parametry równań, wpisujemy automatycznie do tekstu jądra. W pythonie jest przydatne formatowanie \"stringów\" np.:",
"print('%(language)04d a nawiasy {} ' % {\"language\": 1234, \"number\": 2})",
"lub:",
"print('{zmienna} a nawiasy: {{}}'.format( **{\"zmienna\": 123} ))",
"W pewnych bardziej zaawansowanych przypadkach, można zastosować system szablonów np. mako templates (w projekcie http://sailfish.us.edu.pl). \nStruktura kernela\nJądro:\n__global__ void SDE(float *cx,float *cv,unsigned int *rng_state, float ct)\n\njest funkcją CUDA typu __global__, jako parametry przyjmuje tablice cx i cv, będące zmiennymi stanu układu dwóch równań różniczkowch:\n$$ \\dot x = v$$\n$$ \\dot v = -2\\pi \\cos(2.0\\pi x) + A \\cos(\\omega t) + F - \\gamma v$$\nPonadto w wywołaniu przekazujemy czas (przez wartość) oraz wskaźnik do stanu generatora liczb losowych na GPU.\nFunkje dostępne dla jądra z GPU to:\ngenerator liczb losowych o rozkładzie jednostajnym:\n __device__ float rng_uni(unsigned int *state)\n\ntransformacja Boxa-Mullera:\n __device__ void bm_trans(float& u1, float& u2)\n\ni wreszczcie funkcja obliczająca prawe strony układu równań:\n __device__ inline void diffEq(float &nx, float &nv, float x, float v, float t)\n\nZauważmy, że dla poprawienia wydajności, każde wywołanie kernela, powoduje wielokrotne (określone przez parametr spp) wykonanie pętli iteracyjnej.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport pycuda.gpuarray as gpuarray\n\nfrom pycuda.curandom import rand as curand\nfrom pycuda.compiler import SourceModule\nimport pycuda.driver as cuda\n\n\ncuda.init()\ndevice = cuda.Device(0)\nctx = device.make_context()\nprint (device.name(), device.compute_capability(),device.total_memory()/1024.**3,\"GB\")\n\n\nblocks = 2**11\nblock_size = 2**8\nN = blocks*block_size\n\nomega = 4.9\nspp = 100\ndt = 2.0*np.pi/omega/spp\npars = {'samples':spp,'N':N,'dt':dt,'gam':0.9,'d0':0.001,'omega':omega,'force':0.1,'amp':4.2}\n\nrng_src = \"\"\"\n#define PI 3.14159265358979f\n/*\n * Return a uniformly distributed random number from the\n * [0;1] range.\n */\n__device__ float rng_uni(unsigned int *state)\n{\n\tunsigned int x = *state;\n\n\tx = x ^ (x >> 13);\n\tx = x ^ (x << 17);\n\tx = x ^ (x >> 5);\n\n\t*state = x;\n\n\treturn x / 4294967296.0f;\n}\n/*\n * Generate two normal variates given two uniform variates.\n */\n__device__ void bm_trans(float& u1, float& u2)\n{\n\tfloat r = sqrtf(-2.0f * logf(u1));\n\tfloat phi = 2.0f * PI * u2;\n\tu1 = r * cosf(phi);\n\tu2 = r * sinf(phi);\n}\n\n\"\"\"\n\nsrc = \"\"\"\n __device__ inline void diffEq(float &nx, float &nv, float x, float v, float t)\n{{\n\tnx = v;\n\tnv = -2.0f * PI * cosf(2.0f * PI * x) + {amp}f * cosf({omega}f * t) + {force}f - {gam}f * v;\n}}\n\n__global__ void SDE(float *cx,float *cv,unsigned int *rng_state, float ct)\n {{\n int idx = blockDim.x*blockIdx.x + threadIdx.x;\n float n1, n2; \t\n unsigned int lrng_state;\n float xim, vim, xt1, vt1, xt2, vt2,t,x,v;\n lrng_state = rng_state[idx]; \n t = ct;\n x = cx[idx];\n\t v = cv[idx]; \n \n for (int i=1;i<={samples};i++) {{\n \tn1 = rng_uni(&lrng_state);\n\t\tn2 = rng_uni(&lrng_state);\n\t\tbm_trans(n1, n2);\n\tdiffEq(xt1, vt1, x, v, t);\n\t\txim = x + xt1 * {dt}f;\n\t\tvim = v + vt1 * {dt}f + sqrtf({dt}f * {gam}f * {d0}f * 2.0f) * n1;\n\t\tt = ct + i * {dt}f;\n\tdiffEq(xt2, vt2, xim, vim, t);\n\t\tx += 0.5f * {dt}f * (xt1 + xt2);\n\t\tv += 0.5f * {dt}f * (vt1 + vt2) + sqrtf(2.0f * {dt}f * {gam}f * {d0}f) * n2;\n }}\n cx[idx] = x;\n\t cv[idx] = v;\n\n\t rng_state[idx] = lrng_state;;\n \n }}\n \"\"\".format(**pars)\n\nmod = SourceModule(rng_src + src,options=[\"--use_fast_math\"])\nSDE = mod.get_function(\"SDE\")\n\nprint( \"kernel ready for \",block_size,\"N =\",N,N/1e6)\n\nprint(spp,N)",
"Mając gotowe jądro, można wykonac testowe uruchomienie:",
"import time\n\nx = np.zeros(N,dtype=np.float32)\nv = np.ones(N,dtype=np.float32)\nrng_state = np.array(np.random.randint(1,2147483648,size=N),dtype=np.uint32)\n\nx_g = gpuarray.to_gpu(x)\nv_g = gpuarray.to_gpu(v)\nrng_state_g = gpuarray.to_gpu(rng_state)\n\nstart = time.time()\nfor i in range(0,200000,spp):\n t = i * 2.0 * np.pi /omega /spp;\n SDE(x_g, v_g, rng_state_g, np.float32(t), block=(block_size,1,1), grid=(blocks,1))\n\nctx.synchronize()\nelapsed = (time.time() - start)\nx=x_g.get()\nprint (elapsed,N/1e6, 200000*N/elapsed/1024.**3,\"Giter/sek\")",
"Wynikiem działania programu jest $N$ liczb określających końcowe położenie cząstki. Możemy zwizualizować je wykorzystując np. hostogram:",
"h = np.histogram(x,bins=50,range=(-150, 100) ) \nplt.plot(h[1][1:],h[0])",
"Dane referencyjne dla walidacji\nW tablicy hist_ref znajdują się dane referencyjne dla celów walidacji. Możemy sprawdzić czy program działa tak jak ten w pracy referencyjnej:",
"hist_ref = (np.array([ 46, 72, 134, 224, 341, 588, 917, 1504, 2235,\\\n 3319, 4692, 6620, 8788, 11700, 15139, 18702, 22881, 26195,\\\n 29852, 32700, 35289, 36232, 36541, 35561, 33386, 30638, 27267,\\\n 23533, 19229, 16002, 12646, 9501, 7111, 5079, 3405, 2313,\\\n 1515, 958, 573, 370, 213, 103, 81, 28, 15,\\\n 7, 3, 2, 0, 0]),\\\n np.array([-150., -145., -140., -135., -130., -125., -120., -115., -110.,\\\n -105., -100., -95., -90., -85., -80., -75., -70., -65.,\\\n -60., -55., -50., -45., -40., -35., -30., -25., -20.,\\\n -15., -10., -5., 0., 5., 10., 15., 20., 25.,\\\n 30., 35., 40., 45., 50., 55., 60., 65., 70.,\\\n 75., 80., 85., 90., 95., 100.]) )\n\n\n\nplt.hist(x,bins=50,range=(-150, 100) ) \nplt.plot((hist_ref[1][1:]+hist_ref[1][:-1])/2.0,hist_ref[0],'r')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
scotthuang1989/Python-3-Module-of-the-Week
|
networking/Addressing, Protocol Families and Socket Types.ipynb
|
apache-2.0
|
[
"A socket is one endpoint of a communication channel used by programs to pass data back and forth locally or across the Internet. Sockets have two primary properties controlling the way they send data: the address family controls the OSI network layer protocol used and the socket type controls the transport layer protocol.\nPython supports three address families. The most common, AF_INET, is used for IPv4 Internet addressing. IPv4 addresses are four bytes long and are usually represented as a sequence of four numbers, one per octet, separated by dots (e.g., 10.1.1.5 and 127.0.0.1). These values are more commonly referred to as “IP addresses.” Almost all Internet networking is done using IP version 4 at this time.\nAF_INET6 is used for IPv6 Internet addressing. IPv6 is the “next generation” version of the Internet protocol, and supports 128-bit addresses, traffic shaping, and routing features not available under IPv4. Adoption of IPv6 continues to grow, especially with the proliferation of cloud computing and the extra devices being added to the network because of Internet-of-things projects.\nAF_UNIX is the address family for Unix Domain Sockets (UDS), an inter-process communication protocol available on POSIX-compliant systems. The implementation of UDS typically allows the operating system to pass data directly from process to process, without going through the network stack. This is more efficient than using AF_INET, but because the file system is used as the namespace for addressing, UDS is restricted to processes on the same system. The appeal of using UDS over other IPC mechanisms such as named pipes or shared memory is that the programming interface is the same as for IP networking, so the application can take advantage of efficient communication when running on a single host, but use the same code when sending data across the network.\nNote:\nTHE AF_UNIX constant is only defined on systems where UDS is supported.\nThe socket type is usually either SOCK_DGRAM for message-oriented datagram transport or SOCK_STREAM for stream-oriented transport. Datagram sockets are most often associated with UDP, the user datagram protocol. They provide unreliable delivery of individual messages. Stream-oriented sockets are associated with TCP, transmission control protocol. They provide byte streams between the client and server, ensuring message delivery or failure notification through timeout management, retransmission, and other features.\nMost application protocols that deliver a large amount of data, such as HTTP, are built on top of TCP because it makes it simpler to create complex applications when message ordering and delivery is handled automatically. UDP is commonly used for protocols where order is less important (since the messages are self-contained and often small, such as name look-ups via DNS), or for multicasting (sending the same data to several hosts). Both UDP and TCP can be used with either IPv4 or IPv6 addressing.\nLooking up Hosts on the Network\nsocket includes functions to interface with the domain name services on the network so a program can convert the host name of a server into its numerical network address. Applications do not need to convert addresses explicitly before using them to connect to a server, but it can be useful when reporting errors to include the numerical address as well as the name value being used.\nTo find the official name of the current host, use gethostname()",
"import socket\n \nprint(socket.gethostname())\n ",
"Use gethostbyname() to consult the operating system hostname resolution API and convert the name of a server to its numerical address.",
"import socket\n\nHOSTS = [\n 'apu',\n 'pymotw.com',\n 'www.python.org',\n 'nosuchname',\n]\n\nfor host in HOSTS:\n try:\n print('{} : {}'.format(host, socket.gethostbyname(host)))\n except socket.error as msg:\n print('{} : {}'.format(host, msg))",
"For access to more naming information about a server, use gethostbyname_ex(). It returns the canonical hostname of the server, any aliases, and all of the available IP addresses that can be used to reach it.",
"import socket\n\nHOSTS = [\n 'apu',\n 'pymotw.com',\n 'www.python.org',\n 'nosuchname',\n]\n\nfor host in HOSTS:\n print(host)\n try:\n name, aliases, addresses = socket.gethostbyname_ex(host)\n print(' Hostname:', name)\n print(' Aliases :', aliases)\n print(' Addresses:', addresses)\n except socket.error as msg:\n print('ERROR:', msg)\n print()",
"Use getfqdn() to convert a partial name to a fully qualified domain name.",
"import socket\n\nfor host in ['scott-t460', 'pymotw.com']:\n print('{:>10} : {}'.format(host, socket.getfqdn(host)))",
"When the address of a server is available, use gethostbyaddr() to do a “reverse” lookup for the name.",
"import socket\n\nhostname, aliases, addresses = socket.gethostbyaddr('10.104.190.53')\n\nprint('Hostname :', hostname)\nprint('Aliases :', aliases)\nprint('Addresses:', addresses)",
"Finding Service Information\nIn addition to an IP address, each socket address includes an integer port number. Many applications can run on the same host, listening on a single IP address, but only one socket at a time can use a port at that address. The combination of IP address, protocol, and port number uniquely identify a communication channel and ensure that messages sent through a socket arrive at the correct destination.\nSome of the port numbers are pre-allocated for a specific protocol. For example, communication between email servers using SMTP occurs over port number 25 using TCP, and web clients and servers use port 80 for HTTP. The port numbers for network services with standardized names can be looked up with getservbyname().",
"import socket\nfrom urllib.parse import urlparse\n\nURLS = [\n 'http://www.python.org',\n 'https://www.mybank.com',\n 'ftp://prep.ai.mit.edu',\n 'gopher://gopher.micro.umn.edu',\n 'smtp://mail.example.com',\n 'imap://mail.example.com',\n 'imaps://mail.example.com',\n 'pop3://pop.example.com',\n 'pop3s://pop.example.com',\n]\n\nfor url in URLS:\n parsed_url = urlparse(url)\n port = socket.getservbyname(parsed_url.scheme)\n print('{:>6} : {}'.format(parsed_url.scheme, port))",
"To reverse the service port lookup, use getservbyport().",
"import socket\nfrom urllib.parse import urlunparse\n\nfor port in [80, 443, 21, 70, 25, 143, 993, 110, 995]:\n url = '{}://example.com/'.format(socket.getservbyport(port))\n print(url)",
"The number assigned to a transport protocol can be retrieved with getprotobyname().",
"import socket\n\n\ndef get_constants(prefix):\n \"\"\"Create a dictionary mapping socket module\n constants to their names.\n \"\"\"\n return {\n getattr(socket, n): n\n for n in dir(socket)\n if n.startswith(prefix)\n }\n\n\nprotocols = get_constants('IPPROTO_')\n\nfor name in ['icmp', 'udp', 'tcp']:\n proto_num = socket.getprotobyname(name)\n const_name = protocols[proto_num]\n print('{:>4} -> {:2d} (socket.{:<12} = {:2d})'.format(\n name, proto_num, const_name,\n getattr(socket, const_name)))",
"Looking Up Server Addresses\ngetaddrinfo() converts the basic address of a service into a list of tuples with all of the information necessary to make a connection. The contents of each tuple will vary, containing different network families or protocols.",
"import socket\n\n\ndef get_constants(prefix):\n \"\"\"Create a dictionary mapping socket module\n constants to their names.\n \"\"\"\n return {\n getattr(socket, n): n\n for n in dir(socket)\n if n.startswith(prefix)\n }\n\n\nfamilies = get_constants('AF_')\ntypes = get_constants('SOCK_')\nprotocols = get_constants('IPPROTO_')\n\nfor response in socket.getaddrinfo('www.python.org', 'http'):\n\n # Unpack the response tuple\n family, socktype, proto, canonname, sockaddr = response\n\n print('Family :', families[family])\n print('Type :', types[socktype])\n print('Protocol :', protocols[proto])\n print('Canonical name:', canonname)\n print('Socket address:', sockaddr)\n print()",
"IP Address Representations\nNetwork programs written in C use the data type struct sockaddr to represent IP addresses as binary values (instead of the string addresses usually found in Python programs). To convert IPv4 addresses between the Python representation and the C representation, use inet_aton() and inet_ntoa().",
"import binascii\nimport socket\nimport struct\nimport sys\n\nfor string_address in ['192.168.1.1', '127.0.0.1']:\n packed = socket.inet_aton(string_address)\n print('Original:', string_address)\n print('Packed :', binascii.hexlify(packed))\n print('Unpacked:', socket.inet_ntoa(packed))\n print()",
"The four bytes in the packed format can be passed to C libraries, transmitted safely over the network, or saved to a database compactly.\nThe related functions inet_pton() and inet_ntop() work with both IPv4 and IPv6 addresses, producing the appropriate format based on the address family parameter passed in.",
"import binascii\nimport socket\nimport struct\nimport sys\n\nstring_address = '2002:ac10:10a:1234:21e:52ff:fe74:40e'\npacked = socket.inet_pton(socket.AF_INET6, string_address)\n\nprint('Original:', string_address)\nprint('Packed :', binascii.hexlify(packed))\nprint('Unpacked:', socket.inet_ntop(socket.AF_INET6, packed))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gpotter2/scapy
|
doc/notebooks/Scapy in 15 minutes.ipynb
|
gpl-2.0
|
[
"Scapy in 15 minutes (or longer)\nGuillaume Valadon & Pierre Lalet\nScapy is a powerful Python-based interactive packet manipulation program and library. It can be used to forge or decode packets for a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more.\nThis iPython notebook provides a short tour of the main Scapy features. It assumes that you are familiar with networking terminology. All examples were built using the development version from https://github.com/secdev/scapy, and tested on Linux. They should work as well on OS X, and other BSD.\nThe current documentation is available on http://scapy.readthedocs.io/ !\nScapy eases network packets manipulation, and allows you to forge complicated packets to perform advanced tests. As a teaser, let's have a look a two examples that are difficult to express without Scapy:\n1_ Sending a TCP segment with maximum segment size set to 0 to a specific port is an interesting test to perform against embedded TCP stacks. It can be achieved with the following one-liner:",
"send(IP(dst=\"1.2.3.4\")/TCP(dport=502, options=[(\"MSS\", 0)]))",
"2_ Advanced firewalking using IP options is sometimes useful to perform network enumeration. Here is a more complicated one-liner:",
"ans = sr([IP(dst=\"8.8.8.8\", ttl=(1, 8), options=IPOption_RR())/ICMP(seq=RandShort()), IP(dst=\"8.8.8.8\", ttl=(1, 8), options=IPOption_Traceroute())/ICMP(seq=RandShort()), IP(dst=\"8.8.8.8\", ttl=(1, 8))/ICMP(seq=RandShort())], verbose=False, timeout=3)[0]\nans.make_table(lambda x, y: (\", \".join(z.summary() for z in x[IP].options) or '-', x[IP].ttl, y.sprintf(\"%IP.src% %ICMP.type%\")))",
"Now that we've got your attention, let's start the tutorial !\nQuick setup\nThe easiest way to try Scapy is to clone the github repository, then launch the run_scapy script as root. The following examples can be pasted at the Scapy prompt. There is no need to install any external Python modules.\n```shell\ngit clone https://github.com/secdev/scapy --depth=1\nsudo ./run_scapy\nWelcome to Scapy (2.4.0)\n\n\n\n```\n\n\n\nNote: iPython users must import scapy as follows",
"from scapy.all import *",
"First steps\nWith Scapy, each network layer is a Python class.\nThe '/' operator is used to bind layers together. Let's put a TCP segment on top of IP and assign it to the packet variable, then stack it on top of Ethernet.",
"packet = IP()/TCP()\nEther()/packet",
"This last output displays the packet summary. Here, Scapy automatically filled the Ethernet type as well as the IP protocol field.\nProtocol fields can be listed using the ls() function:",
" >>> ls(IP, verbose=True)\n version : BitField (4 bits) = (4)\n ihl : BitField (4 bits) = (None)\n tos : XByteField = (0)\n len : ShortField = (None)\n id : ShortField = (1)\n flags : FlagsField (3 bits) = (0)\n MF, DF, evil\n frag : BitField (13 bits) = (0)\n ttl : ByteField = (64)\n proto : ByteEnumField = (0)\n chksum : XShortField = (None)\n src : SourceIPField (Emph) = (None)\n dst : DestIPField (Emph) = (None)\n options : PacketListField = ([])",
"Let's create a new packet to a specific IP destination. With Scapy, each protocol field can be specified. As shown in the ls() output, the interesting field is dst.\nScapy packets are objects with some useful methods, such as summary().",
"p = Ether()/IP(dst=\"www.secdev.org\")/TCP()\np.summary()",
"There are not many differences with the previous example. However, Scapy used the specific destination to perform some magic tricks !\nUsing internal mechanisms (such as DNS resolution, routing table and ARP resolution), Scapy has automatically set fields necessary to send the packet. These fields can of course be accessed and displayed.",
"print(p.dst) # first layer that has an src field, here Ether\nprint(p[IP].src) # explicitly access the src field of the IP layer\n\n# sprintf() is a useful method to display fields\nprint(p.sprintf(\"%Ether.src% > %Ether.dst%\\n%IP.src% > %IP.dst%\"))",
"Scapy uses default values that work most of the time. For example, TCP() is a SYN segment to port 80.",
"print(p.sprintf(\"%TCP.flags% %TCP.dport%\"))",
"Moreover, Scapy has implicit packets. For example, they are useful to make the TTL field value vary from 1 to 5 to mimic traceroute.",
"[p for p in IP(ttl=(1,5))/ICMP()]",
"Sending and receiving\nCurrently, you know how to build packets with Scapy. The next step is to send them over the network !\nThe sr1() function sends a packet and returns the corresponding answer. srp1() does the same for layer two packets, i.e. Ethernet. If you are only interested in sending packets send() is your friend.\nAs an example, we can use the DNS protocol to get www.example.com IPv4 address.",
"p = sr1(IP(dst=\"8.8.8.8\")/UDP()/DNS(qd=DNSQR()))\np[DNS].an",
"Another alternative is the sr() function. Like srp1(), the sr1() function can be used for layer 2 packets.",
"r, u = srp(Ether()/IP(dst=\"8.8.8.8\", ttl=(5,10))/UDP()/DNS(rd=1, qd=DNSQR(qname=\"www.example.com\")))\nr, u",
"sr() sent a list of packets, and returns two variables, here r and u, where:\n1. r is a list of results (i.e tuples of the packet sent and its answer)\n2. u is a list of unanswered packets",
"# Access the first tuple\nprint(r[0][0].summary()) # the packet sent\nprint(r[0][1].summary()) # the answer received\n\n# Access the ICMP layer. Scapy received a time-exceeded error message\nr[0][1][ICMP]",
"With Scapy, list of packets, such as r or u, can be easily written to, or read from PCAP files.",
"wrpcap(\"scapy.pcap\", r)\n\npcap_p = rdpcap(\"scapy.pcap\")\npcap_p[0]",
"Sniffing the network is as straightforward as sending and receiving packets. The sniff() function returns a list of Scapy packets, that can be manipulated as previously described.",
"s = sniff(count=2)\ns",
"sniff() has many arguments. The prn one accepts a function name that will be called on received packets. Using the lambda keyword, Scapy could be used to mimic the tshark command behavior.",
"sniff(count=2, prn=lambda p: p.summary())",
"Alternatively, Scapy can use OS sockets to send and receive packets. The following example assigns an UDP socket to a Scapy StreamSocket, which is then used to query www.example.com IPv4 address.\nUnlike other Scapy sockets, StreamSockets do not require root privileges.",
"import socket\n\nsck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # create an UDP socket\nsck.connect((\"8.8.8.8\", 53)) # connect to 8.8.8.8 on 53/UDP\n\n# Create the StreamSocket and gives the class used to decode the answer\nssck = StreamSocket(sck)\nssck.basecls = DNS\n\n# Send the DNS query\nssck.sr1(DNS(rd=1, qd=DNSQR(qname=\"www.example.com\")))",
"Visualization\nParts of the following examples require the matplotlib module.\nWith srloop(), we can send 100 ICMP packets to 8.8.8.8 and 8.8.4.4.",
"ans, unans = srloop(IP(dst=[\"8.8.8.8\", \"8.8.4.4\"])/ICMP(), inter=.1, timeout=.1, count=100, verbose=False)",
"Then we can use the results to plot the IP id values.",
"%matplotlib inline\nans.multiplot(lambda x, y: (y[IP].src, (y.time, y[IP].id)), plot_xy=True)",
"The raw() constructor can be used to \"build\" the packet's bytes as they would be sent on the wire.",
"pkt = IP() / UDP() / DNS(qd=DNSQR())\nprint(repr(raw(pkt)))",
"Since some people cannot read this representation, Scapy can:\n - give a summary for a packet",
"print(pkt.summary())",
"\"hexdump\" the packet's bytes",
"hexdump(pkt)",
"dump the packet, layer by layer, with the values for each field",
"pkt.show()",
"render a pretty and handy dissection of the packet",
"pkt.canvas_dump()",
"Scapy has a traceroute() function, which basically runs a sr(IP(ttl=(1..30)) and creates a TracerouteResult object, which is a specific subclass of SndRcvList().",
"ans, unans = traceroute('www.secdev.org', maxttl=15)",
"The result can be plotted with .world_trace() (this requires GeoIP module and data, from MaxMind)",
"ans.world_trace()",
"The PacketList.make_table() function can be very helpful. Here is a simple \"port scanner\":",
"ans = sr(IP(dst=[\"scanme.nmap.org\", \"nmap.org\"])/TCP(dport=[22, 80, 443, 31337]), timeout=3, verbose=False)[0]\nans.extend(sr(IP(dst=[\"scanme.nmap.org\", \"nmap.org\"])/UDP(dport=53)/DNS(qd=DNSQR()), timeout=3, verbose=False)[0])\nans.make_table(lambda x, y: (x[IP].dst, x.sprintf('%IP.proto%/{TCP:%r,TCP.dport%}{UDP:%r,UDP.dport%}'), y.sprintf('{TCP:%TCP.flags%}{ICMP:%ICMP.type%}')))",
"Implementing a new protocol\nScapy can be easily extended to support new protocols.\nThe following example defines DNS over TCP. The DNSTCP class inherits from Packet and defines two field: the length, and the real DNS message. The length_of and length_from arguments link the len and dns fields together. Scapy will be able to automatically compute the len value.",
"class DNSTCP(Packet):\n name = \"DNS over TCP\"\n \n fields_desc = [ FieldLenField(\"len\", None, fmt=\"!H\", length_of=\"dns\"),\n PacketLenField(\"dns\", 0, DNS, length_from=lambda p: p.len)]\n \n # This method tells Scapy that the next packet must be decoded with DNSTCP\n def guess_payload_class(self, payload):\n return DNSTCP",
"This new packet definition can be direcly used to build a DNS message over TCP.",
"# Build then decode a DNS message over TCP\nDNSTCP(raw(DNSTCP(dns=DNS())))",
"Modifying the previous StreamSocket example to use TCP allows to use the new DNSCTP layer easily.",
"import socket\n\nsck = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # create an TCP socket\nsck.connect((\"8.8.8.8\", 53)) # connect to 8.8.8.8 on 53/TCP\n\n# Create the StreamSocket and gives the class used to decode the answer\nssck = StreamSocket(sck)\nssck.basecls = DNSTCP\n\n# Send the DNS query\nssck.sr1(DNSTCP(dns=DNS(rd=1, qd=DNSQR(qname=\"www.example.com\"))))",
"Scapy as a module\nSo far, Scapy was only used from the command line. It is also a Python module than can be used to build specific network tools, such as ping6.py:",
" from scapy.all import *\n import argparse\n\n parser = argparse.ArgumentParser(description=\"A simple ping6\")\n parser.add_argument(\"ipv6_address\", help=\"An IPv6 address\")\n args = parser.parse_args()\n\n print(sr1(IPv6(dst=args.ipv6_address)/ICMPv6EchoRequest(), verbose=0).summary())",
"Answering machines\nA lot of attack scenarios look the same: you want to wait for a specific packet, then send an answer to trigger the attack.\nTo this extent, Scapy provides the AnsweringMachine object. Two methods are especially useful:\n1. is_request(): return True if the pkt is the expected request\n2. make_reply(): return the packet that must be sent\nThe following example uses Scapy Wi-Fi capabilities to pretend that a \"Scapy !\" access point exists.\nNote: your Wi-Fi interface must be set to monitor mode !",
"# Specify the Wi-Fi monitor interface\n#conf.iface = \"mon0\" # uncomment to test\n\n# Create an answering machine\nclass ProbeRequest_am(AnsweringMachine):\n function_name = \"pram\"\n\n # The fake mac of the fake access point\n mac = \"00:11:22:33:44:55\"\n\n def is_request(self, pkt):\n return Dot11ProbeReq in pkt\n\n def make_reply(self, req):\n\n rep = RadioTap()\n # Note: depending on your Wi-Fi card, you might need a different header than RadioTap()\n rep /= Dot11(addr1=req.addr2, addr2=self.mac, addr3=self.mac, ID=RandShort(), SC=RandShort())\n rep /= Dot11ProbeResp(cap=\"ESS\", timestamp=time.time())\n rep /= Dot11Elt(ID=\"SSID\",info=\"Scapy !\")\n rep /= Dot11Elt(ID=\"Rates\",info=b'\\x82\\x84\\x0b\\x16\\x96')\n rep /= Dot11Elt(ID=\"DSset\",info=chr(10))\n\n OK,return rep\n\n# Start the answering machine\n#ProbeRequest_am()() # uncomment to test",
"Cheap Man-in-the-middle with NFQUEUE\nNFQUEUE is an iptables target than can be used to transfer packets to userland process. As a nfqueue module is available in Python, you can take advantage of this Linux feature to perform Scapy based MiTM.\nThis example intercepts ICMP Echo request messages sent to 8.8.8.8, sent with the ping command, and modify their sequence numbers. In order to pass packets to Scapy, the following iptable command put packets into the NFQUEUE #2807:\n$ sudo iptables -I OUTPUT --destination 8.8.8.8 -p icmp -o eth0 -j NFQUEUE --queue-num 2807",
" from scapy.all import *\n import nfqueue, socket\n\n def scapy_cb(i, payload):\n s = payload.get_data() # get and parse the packet\n p = IP(s)\n\n # Check if the packet is an ICMP Echo Request to 8.8.8.8\n if p.dst == \"8.8.8.8\" and ICMP in p:\n # Delete checksums to force Scapy to compute them\n del(p[IP].chksum, p[ICMP].chksum)\n \n # Set the ICMP sequence number to 0\n p[ICMP].seq = 0\n \n # Let the modified packet go through\n ret = payload.set_verdict_modified(nfqueue.NF_ACCEPT, raw(p), len(p))\n \n else:\n # Accept all packets\n payload.set_verdict(nfqueue.NF_ACCEPT)\n\n # Get an NFQUEUE handler\n q = nfqueue.queue()\n # Set the function that will be call on each received packet\n q.set_callback(scapy_cb)\n # Open the queue & start parsing packes\n q.fast_open(2807, socket.AF_INET)\n q.try_run()",
"Automaton\nWhen more logic is needed, Scapy provides a clever way abstraction to define an automaton. In a nutshell, you need to define an object that inherits from Automaton, and implement specific methods:\n- states: using the @ATMT.state decorator. They usually do nothing\n- conditions: using the @ATMT.condition and @ATMT.receive_condition decorators. They describe how to go from one state to another\n- actions: using the ATMT.action decorator. They describe what to do, like sending a back, when changing state\nThe following example does nothing more than trying to mimic a TCP scanner:",
"class TCPScanner(Automaton):\n\n @ATMT.state(initial=1)\n def BEGIN(self):\n pass\n\n @ATMT.state()\n def SYN(self):\n print(\"-> SYN\")\n\n @ATMT.state()\n def SYN_ACK(self):\n print(\"<- SYN/ACK\")\n raise self.END()\n\n @ATMT.state()\n def RST(self):\n print(\"<- RST\")\n raise self.END()\n\n @ATMT.state()\n def ERROR(self):\n print(\"!! ERROR\")\n raise self.END()\n @ATMT.state(final=1)\n def END(self):\n pass\n \n @ATMT.condition(BEGIN)\n def condition_BEGIN(self):\n raise self.SYN()\n\n @ATMT.condition(SYN)\n def condition_SYN(self):\n\n if random.randint(0, 1):\n raise self.SYN_ACK()\n else:\n raise self.RST()\n\n @ATMT.timeout(SYN, 1)\n def timeout_SYN(self):\n raise self.ERROR()\n\nTCPScanner().run()\n\nTCPScanner().run()",
"Pipes\nPipes are an advanced Scapy feature that aims sniffing, modifying and printing packets. The API provides several buildings blocks. All of them, have high entries and exits (>>) as well as low (>) ones.\nFor example, the CliFeeder is used to send message from the Python command line to a low exit. It can be combined to the InjectSink that reads message on its low entry and inject them to the specified network interface. These blocks can be combined as follows:",
"# Instantiate the blocks\nclf = CLIFeeder()\nijs = InjectSink(\"enx3495db043a28\")\n\n# Plug blocks together\nclf > ijs\n\n# Create and start the engine\npe = PipeEngine(clf)\npe.start()",
"Packet can be sent using the following command on the prompt:",
"clf.send(\"Hello Scapy !\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
raharjaliu/BIFers
|
scripts/notebook/binding.ipynb
|
mit
|
[
"Binding Site Prediction\nIn this notebook we perform various machine learning methods and compare various aspects of machine learning paradigms:\n\nZero-knowledge vs. domain-knowledge based prediction\nSingle algorithms vs. ensemble methods\nPrediction over normalized vs. non-normalized space\n\nWe also reviewed several machine learning algorithms such as Support Vector Machine including its variants (c-SVN, regressive SVN etc), tree based methods (decision tree, random forest, extremely random forest etc) and other ensembel methods (AdaBoost)",
"## matrix and vector tools\n\nimport pandas as pd\nfrom pandas import DataFrame as df\nfrom pandas import Series\nimport numpy as np\n\n## sklearn\n\nfrom sklearn.datasets import make_friedman1\nfrom sklearn.feature_selection import RFE\nfrom sklearn.svm import SVR\nfrom sklearn.svm import SVC\n\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.datasets import make_blobs\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import AdaBoostClassifier\n\nfrom sklearn.feature_selection import VarianceThreshold\n\n# matplotlib et al.\n\nfrom matplotlib import pyplot as plt\n\n%matplotlib inline",
"Data Import and pre-processing",
"dna = df.from_csv('../../data/training_data_binding_site_prediction/dna_big.csv')\n\n## embed class\n\ndna = dna.reset_index(drop=False)\ndna['class_bool'] = dna['class'] == '+'\ndna['class_num'] = dna.class_bool.apply(lambda x: 1 if x else 0)\n\n## added protein ID and corresponding position\n\ndna['ID'] = dna.ID_pos.apply(lambda x: ''.join(x.split('_')[:-1]))\ndna['pos'] = dna.ID_pos.apply(lambda x: x.split('_')[-1])\n\n## data columns\n\ndna.columns\n\n## print available features\nfor feature in dna.columns[:-6]:\n \n print feature\n\ndna",
"Data Pre-processing: normalization\nWe apply Student's t normalization on each column:\n$$X' = \\frac{X - \\hat{X}}{s}$$\nThis set would be used in parallel to normal dataset for comparison",
"## create column-wise normalized data-set\n\ndna_norm = dna.copy()\n\nfor col in dna_norm[dna_norm.columns[1:][:-6]].columns:\n dna_norm[col] = (dna_norm[col] - dna_norm[col].mean()) / (dna_norm[col].std() + .00001)\n\ndna_norm",
"Analysis I: Measuring the variation of significance among PSSM features\nWe want to see whether certain evolution patterns have any influence on DNA binding mechanism. We apply Recursive Feature Elimination (RFE) to rank our all PSSM-based features according to their predictive power using linear SVM (that is, SVM with linear kernel function).\nReference:\n\nGuyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002.",
"# extract dataset and prediction\nX = dna[dna.columns[1:][:-6]]\nX = X[[x for x in X.columns.tolist() if 'pssm' in x]]\nX = X.iloc[range(1000)]\ny = dna['class_bool']\ny = y[range(1000)]\n\n# apply RFE on linear c-SVM\nestimator = SVC(kernel=\"linear\")\nselector = RFE(estimator, 5, step=1)\nselector = selector.fit(X, y)\n\nprint selector.ranking_\n\n# redid previous routine on the whole data\n\npssm_rank = pd.DataFrame()\ncat = dna['class']\n\nfor i in range(dna.index.size / 1000):\n \n this_cat = cat[range(i * 1000, (i + 1) * 1000)]\n if this_cat.unique().size > 1:\n \n X = dna[dna.columns[1:][:-6]]\n X = X[[c for c in X.columns.tolist() if 'pssm' in c]]\n X = X.iloc[range(i * 1000, (i + 1) * 1000)]\n y = dna['class_bool']\n y = y[range(i * 1000, (i + 1) * 1000)]\n\n estimator = SVC(kernel=\"linear\")\n selector = RFE(estimator, 5, step=1)\n selector = selector.fit(X, y)\n print selector.ranking_\n \n pssm_rank[str(i)] = selector.ranking_\n \npssm_rank.index = [c for c in X.columns.tolist() if 'pssm' in c]\n\n##sort PSSM features by its predictive power\n\nrank_av = [np.mean(pssm_rank.ix[i]) for i in pssm_rank.index]\narg_rank_av = np.argsort(rank_av)\npssm_rank_sorted = pssm_rank.ix[pssm_rank.index[arg_rank_av]]\npssm_rank_sorted['RANK_AV'] = np.sort(rank_av)\n\npssm_rank_sorted\n\n# plot average rank of all HSSP values\n\nplt.hist([np.mean(pssm_rank.ix[i]) for i in pssm_rank.index], bins=60, alpha=.5)\nplt.title(\"Histogram of Average HSSP Features Rank (RFE on linear SVM)\")\nfig = plt.gcf()\nfig.set_size_inches(10, 6)",
"Some PSSM shows deviation from expected normal distribution -- which should be the case in neutral information setting due to Central Limit Theorem (CLT).\nAnalysis II: Support Vector Machine\nWe use c-SVM on two models :\n\nall features model\nhandpicked features model\n\nReference\n\n“Support-vector networks”, C. Cortes, V. Vapnik - Machine Learning, 20, 273-297 (1995).\n“Automatic Capacity Tuning of Very Large VC-dimension Classifiers”, I. Guyon, B. Boser, V. Vapnik - Advances in neural information processing 1993.\n\nSVM on all features\nWithout Normalization",
"X = dna[dna.columns[1:][:-6]]\ny = dna.class_num\n\n## train c-SVM\n\nclf_svm1 = SVC(kernel='rbf', C=0.7)\nclf_svm1.fit(X[dna.fold == 0], y[dna.fold == 0])\n\n## predict class\n\npred = clf_svm1.predict(dna[dna.fold == 1][dna.columns[1:][:-6]])\n\ntruth = dna[dna.fold == 1]['class_num']\n\ntp = pred[(np.array(pred) == 1) & (np.array(truth) == 1)].size\ntn = pred[(np.array(pred) == 0) & (np.array(truth) == 0)].size\nfp = pred[(np.array(pred) == 1) & (np.array(truth) == 0)].size\nfn = pred[(np.array(pred) == 0) & (np.array(truth) == 1)].size\n\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)\n\nprint \"Size of (-)- and (+)-sets:\\n\\t(+)\\t %d\\n\\t(-)\\t%d\" % (truth[truth == 1].index.size, truth[truth == 0].index.size)",
"With normalization",
"X_norm = dna_norm[dna_norm.columns[1:][:-6]]\ny = dna_norm.class_num\n\n## train c-SVM\n\nclf_svm2 = SVC(kernel='rbf', C=0.7)\nclf_svm2.fit(X_norm[dna_norm.fold == 0], y[dna_norm.fold == 0])\n\n## predict class\n\npred2 = clf_svm2.predict(dna_norm[dna_norm.fold == 1][dna_norm.columns[1:][:-6]])\n\ntruth = dna_norm[dna_norm.fold == 1]['class_num']\n\ntp = pred2[(np.array(pred2) == 1) & (np.array(truth) == 1)].size\ntn = pred2[(np.array(pred2) == 0) & (np.array(truth) == 0)].size\nfp = pred2[(np.array(pred2) == 1) & (np.array(truth) == 0)].size\nfn = pred2[(np.array(pred2) == 0) & (np.array(truth) == 1)].size\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)",
"Normalization over each feature space reduces the complexity of the problem which in turn improves the result.\nSVM on PSSM + other putative useful features\nWe now test the performance on SVM using non-zero knowledge approach. We scourged through the complete list of features acquired by PredictProtein and include features that might have certain influence on DNA/RNA binding",
"## hand-pick features\n\nfeatures = [x for x in dna.columns[1:][:-6] if 'pssm' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'glbl_aa_comp' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'glbl_sec' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'glbl_acc' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_mass' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_hyd' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_cbeta' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_charge' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'inf_PP' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'isis_bin' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'isis_raw' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profbval_raw' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_sec_raw' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_sec_bin' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_acc_bin' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_normalize' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'pfam_within_domain' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'pfam_dom_cons' in x]\n\nX_norm = dna_norm[features]\ny = dna_norm.class_num\n\n## train c-SVM\n\nclf_svm3 = SVC(kernel='rbf', C=0.7)\nclf_svm3.fit(X_norm[dna_norm.fold == 0], y[dna_norm.fold == 0])\n\n## predict class\n\npred3 = clf_svm3.predict(X_norm[dna_norm.fold == 1])\n\ntruth = dna_norm[dna_norm.fold == 1]['class_num']\n\ntp = pred3[(np.array(pred3) == 1) & (np.array(truth) == 1)].size\ntn = pred3[(np.array(pred3) == 0) & (np.array(truth) == 0)].size\nfp = pred3[(np.array(pred3) == 1) & (np.array(truth) == 0)].size\nfn = pred3[(np.array(pred3) == 0) & (np.array(truth) == 1)].size\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)",
"Analysis III: Tree-based Classificators\nWe picked three tree-based algorithms: Decision Tree (DT), Random Forest (RT) and Extremely Random Forest (ERT). From left to right, the algorithm allows more complexity into the models by introducing more randomness and biased than the previous algorithm.",
"X = dna[dna.columns[1:][:-6]]\ny = dna.class_num",
"Decision Tree\nDecision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.\nReference\nL. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth, Belmont, CA, 1984.",
"# compute cross validated accuracy of the model\n\nclf_t1 = DecisionTreeClassifier(max_depth=None, min_samples_split=2,\n random_state=0)\n\nscores = cross_val_score(clf_t1, X, y, cv=5)\n\nprint scores\nprint scores.mean() \n\nclf_t1.fit(X[dna.fold == 0], y[dna.fold == 0])\n\npred_t1 = clf_t1.predict(X[dna.fold == 1])\n\ntruth = dna[dna.fold == 1]['class_num']\n\ntp = pred_t1[(np.array(pred_t1) == 1) & (np.array(truth) == 1)].size\ntn = pred_t1[(np.array(pred_t1) == 0) & (np.array(truth) == 0)].size\nfp = pred_t1[(np.array(pred_t1) == 1) & (np.array(truth) == 0)].size\nfn = pred_t1[(np.array(pred_t1) == 0) & (np.array(truth) == 1)].size\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)",
"Random Forest\nIn random forests, each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. In addition, when splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features. As a result of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.\n[SKL]\nReference:\n\nBreiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001.\nBreiman, “Arcing Classifiers”, Annals of Statistics 1998.",
"# compute cross validated accuracy of the model\n\nclf_t2 = RandomForestClassifier(n_estimators=10, max_depth=None,\n min_samples_split=2, random_state=0)\nscores = cross_val_score(clf_t2, X, y, cv=5)\nprint scores\nprint scores.mean()\n\nclf_t2.fit(X[dna.fold == 0], y[dna.fold == 0])\n\npred_t2 = clf_t2.predict(X[dna.fold == 1])\n\ntruth = dna[dna.fold == 1]['class_num']\n\ntp = pred_t2[(np.array(pred_t2) == 1) & (np.array(truth) == 1)].size\ntn = pred_t2[(np.array(pred_t2) == 0) & (np.array(truth) == 0)].size\nfp = pred_t2[(np.array(pred_t2) == 1) & (np.array(truth) == 0)].size\nfn = pred_t2[(np.array(pred_t2) == 0) & (np.array(truth) == 1)].size\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)",
"Extremely Randomized Tree\nIn extremely randomized trees (see ExtraTreesClassifier and ExtraTreesRegressor classes), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually allows to reduce the variance of the model a bit more, at the expense of a slightly greater increase in bias:\n[SKL]\nReference:\n\nP. Geurts, D. Ernst., and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006.",
"# compute cross validated accuracy of the model\n\nclf_t3 = ExtraTreesClassifier(n_estimators=10, max_depth=None,\n min_samples_split=2, random_state=0)\nscores = cross_val_score(clf_t3, X, y, cv=5)\n\nprint scores\nprint scores.mean()\n\nclf_t3.fit(X[dna.fold == 0], y[dna.fold == 0])\n\npred_t3 = clf_t3.predict(X[dna.fold == 1])\n\ntruth = dna[dna.fold == 1]['class_num']\n\ntp = pred_t3[(np.array(pred_t3) == 1) & (np.array(truth) == 1)].size\ntn = pred_t3[(np.array(pred_t3) == 0) & (np.array(truth) == 0)].size\nfp = pred_t3[(np.array(pred_t3) == 1) & (np.array(truth) == 0)].size\nfn = pred_t3[(np.array(pred_t3) == 0) & (np.array(truth) == 1)].size\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)",
"Random Forest on selected features",
"features = [x for x in dna.columns[1:][:-6] if 'pssm' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'glbl_aa_comp' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'glbl_sec' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'glbl_acc' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_mass' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_hyd' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_cbeta' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'chemprop_charge' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'inf_PP' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'isis_bin' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'isis_raw' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profbval_raw' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_sec_raw' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_sec_bin' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_acc_bin' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'profphd_normalize' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'pfam_within_domain' in x] +\\\n [x for x in dna.columns[1:][:-6] if 'pfam_dom_cons' in x]\n\nX = dna[features]\ny = dna.class_num\n\n# compute cross validated accuracy of the model\n\nclf_t4 = RandomForestClassifier(n_estimators=10, max_depth=None,\n min_samples_split=2, random_state=0)\nscores = cross_val_score(clf_t4, X, y, cv=5)\nprint scores\nprint scores.mean()\n\nclf_t4.fit(X[dna.fold == 0], y[dna.fold == 0])\n\npred_t4 = clf_t4.predict(X[dna.fold == 1])\n\ntruth = dna[dna.fold == 1]['class_num']\n\ntp = pred_t4[(np.array(pred_t4) == 1) & (np.array(truth) == 1)].size\ntn = pred_t4[(np.array(pred_t4) == 0) & (np.array(truth) == 0)].size\nfp = pred_t4[(np.array(pred_t4) == 1) & (np.array(truth) == 0)].size\nfn = pred_t4[(np.array(pred_t4) == 0) & (np.array(truth) == 1)].size\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)",
"While there is a significant accuracy improvement going from Decision Tree to Random Forest, the resulting prediction from Extremely Random Forest only improves the accuracy by the margin. Likewise, manually handpicking the features does not seem to improve the performance of the accuracy.\nIV: Other Ensemble Learning Methods\nAdaBoost\nThe core principle of AdaBoost is to fit a sequence of weak learners (i.e., models that are only slightly better than random guessing, such as small decision trees) on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction.\n[SKL]\nReference:\n\nY. Freund, and R. Schapire, “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting”, 1997.\nJ. Zhu, H. Zou, S. Rosset, T. Hastie. “Multi-class AdaBoost”, 2009.",
"X = dna[dna.columns[1:][:-6]]\ny = dna.class_num\n\n# compute cross validated accuracy of the model\n\nada = AdaBoostClassifier(n_estimators=100)\nscores = cross_val_score(ada, X, y, cv=5)\n\nprint scores\nprint scores.mean() \n\nada.fit(X[dna.fold == 0], y[dna.fold == 0])\n\npred_ada = ada.predict(X[dna.fold == 1])\n\ntruth = dna[dna.fold == 1]['class_num']\n\ntp = pred_ada[(np.array(pred_ada) == 1) & (np.array(truth) == 1)].size\ntn = pred_ada[(np.array(pred_ada) == 0) & (np.array(truth) == 0)].size\nfp = pred_ada[(np.array(pred_ada) == 1) & (np.array(truth) == 0)].size\nfn = pred_ada[(np.array(pred_ada) == 0) & (np.array(truth) == 1)].size\n\ncm = \"Confusion Matrix:\\n\\tX\\t\\t(+)-pred\\t(-)-pred\\n\" +\\\n \"\\t(+)-truth\\t%d\\t\\t%d\\n\" +\\\n \"\\t(-)-truth\\t%d\\t\\t%d\"\n \nprint cm % (tp, fn, fp, tn)",
"Conclusions:\n\nxNA Binding a hard(er than expected) classification problem\nGood accuracy and precision; okay recall for most basic ML algos\nMost features are not i.i.d.\nManual selection of features doesn't improve performances\n\nSome solutions that might work:\nI. Quantitative features selection using RFE/RFA over complete feature spaces\nProblem: Feature spaces might be too large for conventional canned algorithms.\nPossible Hacks:\n-- Bagging of features (55+ feature groups vs. 500+ features)\n-- Removing similar features before RFE (elimination via cosine similarity et al.?)\n-- Dimensionality reductions (t-SNE, PCA et al.?)\nII. Regularization: Might work considering the system is not entirely overdetermined and many features are not actually informative + the tendency of the problem to overcomplicate.\nGenerally combination of I and II would make some sense.\nFor continous class value [0:1] (for submission)\nSVM, Random Forest and AdaBoost regressor.\nReference\n\n[SKL] SciKit Learn: http://scikit-learn.org/stable/index.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
WeirdCircumstances/hu_bp_python_course
|
02_introduction/intro_to_python.ipynb
|
mit
|
[
"Introduction to Python\nWhy Python?\n\neasy to learn and use\nexcellent for beginners, yet superb for experts\nhighly scalable, suitable for large projects as well as small ones\nrapid development\nportable, cross-platform\npowerful standard libs\nwealth of 3rd party packages\n\nAnd don't forget that with Python, programming is fun again!\n<img src=\"http://imgs.xkcd.com/comics/python.png\">",
"print \"Hello, world!\"\n\nprint(\"Hello, world!\")",
"Python 2 or Python 3?\nShort version: Python 2.x is legacy, Python 3.x is the present and future of the language\n- final 2.x version 2.7 was released 2010\n- 3.0 was released in 2008\n- 3.4 was released 2014\n- recent standard libraby improvements are only available in 3.x\nBUT:\n- we are a bit lazy\n- some third party packages are not (yet) available in Python 3\nSoftware\nWe will use:\n - Python (https://www.python.org/)\n - IPython (http://ipython.org/)\n - IPython Notebook (http://ipython.org/notebook.html)\n - an editor of your choice\nYou could use:\n - an IDE (PyCharm, Eclipse)\n - Python debuggers (pdb)\n - Code checkers (pylint)\nInteractive mode\nStart the python command line interpreter by typing python, or the Python interactive shell by typing ipython into your terminal.\nYou can use this a calculator:",
"2 + 3\n\n5 * (7 + 2)",
"Variables\nYou can assign numbers to variables:",
"width = 5\nheight = 2 + 6\nwidth * height",
"You can change variables:",
"height = 12\nwidth * height",
"Naming Rules\n\nVariables can only contain letters, numbers, and underscores. Variable names can start with a letter or an underscore, but can not start with a number.\nSpaces are not allowed in variable names, so use underscores instead of spaces.\nYou cannot use Python keywords as variable names. If you absolutely need to, add an underscore to the end.\nVariable names should be descriptive, without being too long. For example mc_wheels is better than just wheels or number_of_wheels_on_a_motorycle.\nBe careful about using the lowercase letter l and the uppercase letters O and I in places where they could be confused with the numbers 1 and 0. And never use them as single character variable names.\n\nDatatypes\nInt & Float\n(Integers and Floating point numbers)\nYou'll have to be careful about the datatype (sometimes)",
"width / 2\n\nwidth\n\n5.0 / 2",
"You can convert an int to a float:",
"float(width)",
"This kind of type casting works for most datatypes in Python!\nString\nStrings are sets of characters and are either contained in single or double quotes.",
"bla = 'bla bla bla'\nbla\n\nprint(bla)\n\nblub = \"bla blub\"\nprint(blub)",
"This allows you to make strings that contain quotations.",
"quote = \"Linus Torvalds once said, 'Any program is only as good as it is useful.'\"\nprint(quote)",
"Use \\ to continue your command on the next line.",
"bla = 'bla bla\\\n bla bla'\nprint(bla)",
"Use \\n to put a newline into the string.",
"bla = 'bla bla\\n\\\n bla bla'\nprint(bla)",
"Or use multi line strings in triple quotes (either ''' or \"\"\") to do both.",
"bla = '''bla bla \n bla bla'''\nprint(bla)",
"You can concatenate strings:",
"blub = 'bla ' + blub\nblub\n\nblub * 3",
"In Ipython (and Ipython Notebook) you can see all the string functions by tabbing them.",
"blub.title?\n\nblub.title()",
"Unicode string\nLike strings, but with more characters!",
"my_unicode = u'Hellö World!'\nmy_unicode\n\nprint(my_unicode)",
"Some unicode strings can not be cast to strings.",
"str(my_unicode)",
"While most Error messages are easy to understand, it might also be a good idea, to google (or whatever) them. Other questions occuring during programming are also likely to have been asked before (so, google!). Answers on stackoverflow.com are often useful.\nIf you absolutely want to convert to string, you can do it like that:",
"my_unicode.encode('ascii', 'ignore')\n\nmy_unicode.encode('ascii', 'replace')",
"But if you want to make your strings compatible, you should do it the other way round:",
"unicode(blub)",
"List\nThere are many compound data types in Python.\nThe most versatile one is a list. This is the closest to 'arrays' from other programming languages.",
"my_list = blub.split()\nmy_list",
"Lists can be indexed and sliced.",
"my_list[0]\n\nmy_list[-1]\n\nmy_list = my_list + ['more', 'and', 'more', 'words', 'plus', 'a', 'number', 4]\nmy_list\n\nmy_list[2:6]\n\nmy_list[0:2]\n\nmy_list[:2]\n\nmy_list[4:-2]\n\nmy_list[4:-2:3]",
"The indexing and slicing works for all built-in sequence types. (Also strings!)\nIn a list, you can change elements or slices. (Lists are mutable, strings for example are immutable.)",
"my_list[-4:] = ['a']\nmy_list",
"You can add new items to the end of the list with the append() method:",
"my_list.append(4)\nmy_list",
"You can nest lists:",
"nested = [my_list, ['another', 'list']]\nnested\n\nnested[1][0]",
"Set\nA set will not store duplicate entries.",
"my_list\n\nmy_set = set(my_list)\nmy_set",
"Membership testing",
"'bla' in my_list\n\n'turtle' in my_set",
"Dict\nA dictionary contains key / value pairs. Instead of the position you use the key as the key.",
"phonebook = {'Max': 8389, 'Jannis': 8389, 'Thomas': 8325}\nphonebook['Max']\n\nphonebook['Katharina'] = 8592\nphonebook\n\nphonebook.keys()",
"More important datatypes\nBool\nValues: True, False\n - [] False\n - [a, b] True\n - 0 False\n - all other True\nNone\nValues: None\n- frequently used to represent the absence of a value\nComments",
"print(\"Hello, world!\") # Python 3, also works in Python 2",
"if Statements",
"if 1:\n print('yes!')",
"If the boolean statement is True, the code below is executed.",
"x = int(raw_input(\"Please enter an integer: \")) \n\nif x < 0:\n x = 0\n print('Negative changed to zero')\nelif x == 0:\n print('Zero')\nelif x == 1:\n print('Single')\nelse:\n print('More')",
"Indentation\nIndentation determines the context of commands.",
"if []:\n print('empty list!')\nprint('Never True!')\n\nif not []:\n print('That list isn\\'t empty!')\nprint('Always True!')",
"You should use 4 spaces, not a tab!!\nIf you are presenting your code, be sure the indentation is right!\nwhile Statements",
"i = 5\nwhile i > 0:\n print(i**2)\n i = i - 1",
"for Statements\nLooping through lists, strings, sets, ...",
"for element in my_list:\n print element, len(element)",
"List Comprehensions",
"my_first_lc = [len(element) for element in my_list[:-1]]\nmy_first_lc",
"The range() Function",
"range?\n\nrange(4)\n\nrange(4,9)\n\nrange(4, 9, 2)",
"range() is often used in for-loops",
"for i in xrange(len(my_list)):\n print i, my_list[i]",
"But using xrange() is the smarter solution. It does not return a list, but an object generating the numbers on demand. In Python 3, there is only range(), but behaving like xrange(). So, if you need the list, type list(range(4)).\nAnd always keep in mind, that you can loop over the elements in a list in an easy, pythonic way. No need to use the indices.",
"for i in xrange(len(my_list)): # NOOOO!\n print my_list[i]\n \nfor element in my_list: # YES!!\n print element",
"break and continue Statements, and else Clauses on Loops\nThe break statement, like in C, breaks out of the smallest enclosing for or while loop.\nLoop statements may have an else clause; it is executed when the loop terminates through exhaustion of the list (with for) or when the condition becomes false (with while), but not when the loop is terminated by a break statement.",
"for n in range(2, 10):\n for x in range(2, n):\n if n % x == 0:\n print n, 'equals', x, '*', n/x\n break\n else:\n # loop fell through without finding a factor\n print n, 'is a prime number'",
"The continue statement, also borrowed from C, continues with the next iteration of the loop:",
"for num in range(2, 10):\n if num % 2 == 0:\n print \"Found an even number\", num\n continue\n print \"Found a number\", num",
"pass Statements\nIt will do nothing!",
"x = int(raw_input(\"Please enter an integer: \")) \n\nif x < 0:\n pass\nelif x == 42:\n pass #TODO must fill the answer to life, the universe, and everything here later\nelse:\n print('More') ",
"Functions\nDefinition Syntax",
"def fib(n): # write Fibonacci series up to n\n \"\"\"Print a Fibonacci series up to n.\"\"\" # docstring\n a, b = 0, 1 # multiple assignement\n while a < n:\n print a, # prevents the newline\n a, b = b, a+b # another multiple assignement\n \nfib(500)",
"The keyword def introduces a function definition. It has to be followed by the function name and the paranthesized list of formal parameters.\nDocumentation strings\nYou should put a triple quoted string into the first line after the function definition, containing a description of the function. This is called docstring, and can be used to automatically produce documentation.\nThe return statement\nIt is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of printing it:",
"def fib2(n): # return Fibonacci series up to n\n \"\"\"Return a list containing the Fibonacci series up to n.\"\"\"\n result = []\n a, b = 0, 1\n while a < n:\n result.append(a)\n a, b = b, a+b\n return result\n\nf100 = fib2(100) # call it\nf100 # write the result",
"The return statement returns with a value from a function. return without an expression argument returns None. Falling off the end of a function also returns None.\nDefault Argument Values and Keyword Arguments",
"def parrot(voltage, state='a stiff', action='voom', type_='Norwegian Blue'):\n print \"-- This parrot wouldn't\", action,\n print \"if you put\", voltage, \"volts through it.\"\n print \"-- Lovely plumage, the\", type_\n print \"-- It's\", state, \"!\"\n \nparrot(1000) # 1 positional argument",
"state, action, and type_ have default values, so they can be omitted from the function call.",
"parrot(voltage=1000, action='jump') # 2 keyword arguments",
"You can call the function using keyword arguments, positional arguments, or mix both.",
"parrot('a lot of', 'bereft of life') # 2 positional arguments\n\nparrot('a lot of', type_='Lory', state='pushing up the daisies') # 1 positional and 1 keyword argument",
"The sentences are actually from a Monty Pythons episode. You can watch it later: https://www.youtube.com/watch?v=4vuW6tQ0218\nHandling exceptions\nYou have seen error messages by now. Errors detected during excution are called exceptions and can be handled. Just as a reminder, error messages look like this:",
"my_list[20]",
"Exceptions come in different types, which are given in the message. The type of this exception is IndexError. \nSo, let's handle that exception!",
"def handle_index_error(i):\n try:\n print(my_list[i])\n except IndexError:\n print(my_list[-1])\n \nhandle_index_error(20)\n\nhandle_index_error(5)\n\nhandle_index_error('some useless string')",
"Coding style\n“The best programs are written so that computing machines can perform them quickly and so that human beings can understand them clearly.\" - Donald E. Knuth, Selected Papers on Computer Science \nPEP8\nStyle guide for Python code\nA style guide is about consistency. Consistency with this style guide is important. Consistency within a project is more important. Consistency within one module or function is most important. \nA few rules\n\nnever use tabs, always 4 spaces\ntry to limit lines to 79 characters\nuse whitespace to make your code more readable",
"spam(ham[1], {eggs: 2}) # YES!\nspam( ham[ 1 ], { eggs: 2 } ) # NO!!\n\nx, y = y, x # YES!\nx , y = y , x # NO!!\n\ncounter = counter + 1 # YES!\ncounter=counter+1 # NO!!\n\nresult = add(x+1, 3) # YES!\nresult = add(x + 1, 3) # YES!\n\n\ndef complex(real, imag=0.0): # YES!\n return magic(r=real, i=imag)\n\ndef complex(real, imag = 0.0): # NO!!\n return magic(r = real, i = imag)",
"Follow these naming conventions:\nlower_case_under for variables and functions and methods\nWordCap for classes\nALL_CAPS for constants\n\n\n\nAnd of course, there is more: https://www.python.org/dev/peps/pep-0008/\nThe Zen of Python",
"import this"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
yashdeeph709/Algorithms
|
PythonBootCamp/Complete-Python-Bootcamp-master/Milestone Project 1 - Advanced Solution.ipynb
|
apache-2.0
|
[
"Tic Tac Toe\nThis is the solution for the Milestone Project! A two player game made within a Jupyter Notebook. Feel free to download the notebook to understand how it works!\nFirst some imports we'll need to use for displaying output and set the global variables",
"# Specifically for the iPython Notebook environment for clearing output.\nfrom IPython.display import clear_output\n\n# Global variables\nboard = [' '] * 10\ngame_state = True\nannounce = ''",
"Next make a function that will reset the board, in this case we'll store values as a list.",
"# Note: Game will ignore the 0 index\ndef reset_board():\n global board,game_state\n board = [' '] * 10\n game_state = True",
"Now create a function to display the board, I'll use the num pad as the board reference. \nNote: Should probably just make board and player classes later....",
"def display_board():\n ''' This function prints out the board so the numpad can be used as a reference '''\n # Clear current cell output\n clear_output()\n # Print board\n print \" \"+board[7]+\" |\"+board[8]+\" | \"+board[9]+\" \"\n print \"------------\"\n print \" \"+board[4]+\" |\"+board[5]+\" | \"+board[6]+\" \"\n print \"------------\"\n print \" \"+board[1]+\" |\"+board[2]+\" | \"+board[3]+\" \"\n",
"Define a function to check for a win by comparing inputs in the board list. Note: Maybe should just have a list of winning combos and cycle through them?",
"def win_check(board, player):\n ''' Check Horizontals,Verticals, and Diagonals for a win '''\n if (board[7] == board[8] == board[9] == player) or \\\n (board[4] == board[5] == board[6] == player) or \\\n (board[1] == board[2] == board[3] == player) or \\\n (board[7] == board[4] == board[1] == player) or \\\n (board[8] == board[5] == board[2] == player) or \\\n (board[9] == board[6] == board[3] == player) or \\\n (board[1] == board[5] == board[9] == player) or \\\n (board[3] == board[5] == board[7] == player):\n return True\n else:\n return False\n",
"Define function to check if the board is already full in case of a tie. (This is straightforward with our board stored as a list)\nJust remember index 0 is always empty.",
"def full_board_check(board):\n ''' Function to check if any remaining blanks are in the board '''\n if \" \" in board[1:]:\n return False\n else:\n return True",
"Now define a function to get player input and do various checks on it.",
"def ask_player(mark):\n ''' Asks player where to place X or O mark, checks validity '''\n global board\n req = 'Choose where to place your: ' + mark\n while True:\n try:\n choice = int(raw_input(req))\n except ValueError:\n print(\"Sorry, please input a number between 1-9.\")\n continue\n\n if choice not in range(1,10):\n print(\"Sorry, please input a number between 1-9.\")\n continue\n\n if board[choice] == \" \":\n board[choice] = mark\n break\n else:\n print \"That space isn't empty!\"\n continue\n \n \n\n ",
"Now have a function that takes in the player's choice (via the ask_player function) then returns the game_state.",
"def player_choice(mark):\n global board,game_state,announce\n #Set game blank game announcement\n announce = ''\n #Get Player Input\n mark = str(mark)\n # Validate input\n ask_player(mark)\n\n #Check for player win\n if win_check(board,mark):\n clear_output()\n display_board()\n announce = mark +\" wins! Congratulations\"\n game_state = False\n \n #Show board\n clear_output()\n display_board()\n\n #Check for a tie \n if full_board_check(board):\n announce = \"Tie!\"\n game_state = False\n \n return game_state,announce\n",
"Finally put it all together in a function to play the game.",
"def play_game():\n reset_board()\n global announce\n \n # Set marks\n X='X'\n O='O'\n while True:\n # Show board\n clear_output()\n display_board()\n \n # Player X turn\n game_state,announce = player_choice(X)\n print announce\n if game_state == False:\n break\n \n # Player O turn\n game_state,announce = player_choice(O)\n print announce\n if game_state == False:\n break\n \n # Ask player for a rematch\n rematch = raw_input('Would you like to play again? y/n')\n if rematch == 'y':\n play_game()\n else:\n print \"Thanks for playing!\"\n ",
"Let's play!",
"play_game()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |