| repo_name (string) | path (string) | license (string) | cells (list) | types (list) |
|---|---|---|---|---|
griffinfoster/fundamentals_of_interferometry
|
2_Mathematical_Groundwork/2_13_spherical_trigonometry.ipynb
|
gpl-2.0
|
[
"Outline\nGlossary\n2. Mathematical Groundwork\nPrevious: 2.12 Solid Angle \nNext: 2.14 CLEAN in 1D\n\n\n\n\nImport standard modules:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS",
"Import section specific modules:",
"pass",
"2.13 Spherical Trigonometry <a id='math:sec:st'></a> <!--\\label{math:sec:st}-->\nMost people have a basic understanding of planar trigonometry. In this section we explore how trigonometry can be extended into the spherical realm. Spherical trigonometry is a branch of spherical geometry in which we study the relationship between the sides and angles of spherical triangles. Spherical triangles are formed by the pairwise intersection of three great circlular arcs in three vertices. A great circular arc is an arc segment of a great circle. A great circle is formed by the intersection of a sphere and a plane that passes through the center of the sphere. It is possible to derive the following basic spherical trigonometric identities by studying arbitrary spherical triangles that are located on the unit sphere:\n<p class=conclusion>\n <font size=4> <b>Spherical Trigonometric Identities</b></font>\n <br>\n <br>\n• <b>Spherical cosine rule</b>: $\\cos b = \\cos a \\cos c + \\sin a \\sin c \\cos B$ <br><br>\n• <b>Spherical sine rule</b>: $\\sin b \\sin A = \\sin B \\sin a$ <br><br> \n• <b>Five part rule</b>: $\\sin b \\cos A = \\cos a \\sin c - \\sin a\\cos c\\cos B$ \n</p>\n\nThe first two rules are analogous to the planar sine and cosine rule. The sides and angles used in the above expressions are graphically depicted in Fig. 2.13.1 ⤵ <!--\\ref{math:fig:spher_trig}-->.\n<div class=advice>\n<b>Advice:</b> We use spherical trigonometry to convert between different astronomical coordinate systems (see [Appendix ➞](../0_Introduction/2_Appendix.ipynb)).\n</div>\n\n<img src='figures/spher_trig.svg' width=40%>\nFigure 2.13.1: The spherical triangle $ABC$. <a id='pos:math:spher_trig'></a> <!--\\label{math:fig:spher_trig}-->\n\nNext: 2.14 CLEAN in 1D"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
luiscruz/udacity_data_analyst
|
P01/Project1_Statistics_The_Science_of_Decisions_Project_Instructions.ipynb
|
mit
|
[
"Statistics: The Science of Decisions Project Instructions\nBackground Information\nIn a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions: a congruent words condition, and an incongruent words condition. In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed: for example RED, BLUE. In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed: for example PURPLE, ORANGE. In each case, we measure the time it takes to name the ink colors in equally-sized lists. Each participant will go through and record a time from each condition.\nQuestions For Investigation\nAs a general note, be sure to keep a record of any resources that you use or refer to in the creation of your project. You will need to report your sources as part of the project submission.\n\nWhat is our independent variable? What is our dependent variable?\n\nR: Independent: Words congruence condition. Dependent: Naming time.\n\nWhat is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices.\n\nR:\n Where $\\mu_{congruent}$ and $\\mu_{incongruent}$ stand for congruent and incongruent population means, respectively:\n $H_0: \\mu_{congruent} = \\mu_{incongruent} $ — The time to name the ink colors doesn't change with the congruency condition\n$H_A: \\mu_{congruent} \\neq \\mu_{incongruent} $ — The time to name the ink colors changes with the congruency condition\nTo perform the test I will use a 2-tailed paired t-test. A t-test is apropriated since we don't the standard deviations of the population. A two-sample kind of t-test is necessary since we don't know the population mean. The sample sizes is below 30 (N=24), which is compatible with a t-test. I am also assuming that the population is normally distributed.\n<p class=\"c2\"><span>Now it’s your chance to try out the Stroop task for yourself. Go to </span><span class=\"c4\"><a class=\"c8\" href=\"https://www.google.com/url?q=https://faculty.washington.edu/chudler/java/ready.html&sa=D&usg=AFQjCNFRXmkTGaTjMtk1Xh0SPh-RiaZerA\">this link</a></span><span>, which has a Java-based applet for performing the Stroop task. Record the times that you received on the task (you do not need to submit your times to the site.) Now, download </span><span class=\"c4\"><a class=\"c8\" href=\"https://www.google.com/url?q=https://drive.google.com/file/d/0B9Yf01UaIbUgQXpYb2NhZ29yX1U/view?usp%3Dsharing&sa=D&usg=AFQjCNGAjbK9VYD5GsQ8c_iRT9zH9QdOVg\">this dataset</a></span><span> which contains results from a number of participants in the task. Each row of the dataset contains the performance for one participant, with the first number their results on the congruent task and the second number their performance on the incongruent task.</span></p>\n\n\nReport some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability.\n\nR: Central tendency: mean; measure of variability: standard deviation.",
"%matplotlib inline\nimport pandas\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (16.0, 8.0)\n\ndf = pandas.read_csv('./stroopdata.csv')\n\ndf.describe()",
"Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.",
"df.hist()",
"R:\n\nThis histograms show that, in this sample, times are longer in the incrongruent experiment than in the congruent experiment.\n\nIn the congruent experiment, the interval with more values is aproximately between 14 and 16 values. In the incronguent experiment the interval with more values is aproximately (20,22).\n\n\nNow, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?\n\n\nR: I'm going to perform the test for a confidence level of 95%, which means that our t-critical values are {-2.069,2.069}",
"import math\ndf['differences'] = df['Incongruent']-df['Congruent']\nN =df['differences'].count()\nprint \"Sample size:\\t\\t%d\"% N\nprint \"DoF:\\t\\t\\t%d\"%(df['differences'].count()-1)\nmean = df['differences'].mean()\nstd = df['differences'].std()\ntscore = mean/(std/math.sqrt(N))\nprint \"Differences Mean:\\t%.3f\" % mean\nprint \"Differences Std:\\t%.3f\" % std\nprint \"t-score:\\t\\t%.3f\" %tscore",
"We can reject the null hypothesis, since the t-score is greater than 2.069. In this case I have used $\\alpha=0.05$, but a bigger confidence level could also reject $H_0$. This means that incongruency affects the naming time, which validates the evidence found in the histograms.\n\nOptional: What do you think is responsible for the effects observed? Can you think of an alternative or similar task that would result in a similar effect? Some research about the problem will be helpful for thinking about these two questions!\n\nThe effects observed are related with the reaction time of our brain. When there is congruency our brain does not need to make a conscient operation and the participant can trust in the first response provided by the brain. When there is incongruency, the participant has conscienscly go through the process of finding the color, which results in a longer response time. Another experiment would be writing with different types of keyboards (e.g., QWERTY, AZERTY, etc.)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phoebe-project/phoebe2-docs
|
2.3/tutorials/distribution_propagation.ipynb
|
gpl-3.0
|
[
"Advanced: Distribution Propagation\nNOTE: support for distribution propagation was improved in the 2.3.25 release. Please make sure you have at least 2.3.25 installed.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.3,<2.4\"\n\nimport phoebe\nfrom phoebe import u # units\nimport numpy as np\n\nlogger = phoebe.logger()",
"We'll use a semi-detached system so that we can see some interesting cases of distribution propagation from orbital parameters to the equivalent radius of the star filling its roche lobe.",
"b = phoebe.default_binary(semidetached='primary')",
"Here we'll add some distributions directly to parameters... but the concepts below apply just as well to distributions extracted from posteriors.",
"b.add_distribution({'sma@binary': phoebe.gaussian_around(1),\n 'incl@binary': phoebe.uniform(85, 90),\n 'q@binary': phoebe.gaussian_around(0.05)},\n distribution='mydist')",
"Plotting Distributions\nBy calling plot_distribution_collection, we can see a corner plot of all of these parameters. Since we created these as univariate distributions (see Advanced: Distribution Types for multivariate examples), we can see that there are no correlations between the distributions.\nBy default, this shows a corner plot that samples from all the matching distributions.",
"_ = b.plot_distribution_collection(distribution='mydist', show=True)",
"We can pass a list of parameters (as twigs) to the parameters keyword argument to only plot a subset of the available parameters.",
"_ = b.plot_distribution_collection(distribution='mydist', \n parameters=['sma@binary', 'q@binary'],\n show=True)",
"But we can also use parameters to propagate the distributions through the constraints linking parameters together. For example, since we have distributions on sma and incl, including asini should combine the two distributions according to the constraint and showing the resulting correlations.",
"_ = b.plot_distribution_collection(distribution='mydist', \n parameters=['sma@binary', 'q@binary', 'asini@binary'],\n show=True)",
"Accessing Uncertainties from Distributions\nSimilarly, we can access the resulting uncertainties (taken from the 1-sigma percentiles by default), by calling uncertainties_from_distribution_collection.\nNote that the input gaussian distributions are automatically exposed with symmetric uncertainties, whereas the propagated asini distribution will rely on asymmetric uncertainties from the sampled values.",
"b.uncertainties_from_distribution_collection(distribution='mydist', \n parameters=['sma@binary', 'q@binary', 'asini@binary'],\n tex=True)",
"To expose at a different \"sigma-level\", we can pass sigma.",
"b.uncertainties_from_distribution_collection(distribution='mydist', \n parameters=['sma@binary', 'q@binary', 'asini@binary'],\n sigma=3,\n tex=True)",
"And to expose a machine-readable list with lower, centeral, and upper bounds represented, we just exclude the tex=True.",
"b.uncertainties_from_distribution_collection(distribution='mydist', \n parameters=['sma@binary', 'q@binary', 'asini@binary'],\n sigma=3)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cmcc/cmip6/models/sandbox-3/landice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: SANDBOX-3\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-3', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adative grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of claving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n**\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description if ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
keras-team/keras-io
|
examples/keras_recipes/ipynb/quasi_svm.ipynb
|
apache-2.0
|
[
"A Quasi-SVM in Keras\nAuthor: fchollet<br>\nDate created: 2020/04/17<br>\nLast modified: 2020/04/17<br>\nDescription: Demonstration of how to train a Keras model that approximates a SVM.\nIntroduction\nThis example demonstrates how to train a Keras model that approximates a Support Vector\n Machine (SVM).\nThe key idea is to stack a RandomFourierFeatures layer with a linear layer.\nThe RandomFourierFeatures layer can be used to \"kernelize\" linear models by applying\n a non-linear transformation to the input\nfeatures and then training a linear model on top of the transformed features. Depending\non the loss function of the linear model, the composition of this layer and the linear\nmodel results to models that are equivalent (up to approximation) to kernel SVMs (for\nhinge loss), kernel logistic regression (for logistic loss), kernel linear regression\n (for MSE loss), etc.\nIn our case, we approximate SVM using a hinge loss.\nSetup",
"\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.layers.experimental import RandomFourierFeatures\n",
"Build the model",
"\nmodel = keras.Sequential(\n [\n keras.Input(shape=(784,)),\n RandomFourierFeatures(\n output_dim=4096, scale=10.0, kernel_initializer=\"gaussian\"\n ),\n layers.Dense(units=10),\n ]\n)\nmodel.compile(\n optimizer=keras.optimizers.Adam(learning_rate=1e-3),\n loss=keras.losses.hinge,\n metrics=[keras.metrics.CategoricalAccuracy(name=\"acc\")],\n)\n",
"Prepare the data",
"\n# Load MNIST\n(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n\n# Preprocess the data by flattening & scaling it\nx_train = x_train.reshape(-1, 784).astype(\"float32\") / 255\nx_test = x_test.reshape(-1, 784).astype(\"float32\") / 255\n\n# Categorical (one hot) encoding of the labels\ny_train = keras.utils.to_categorical(y_train)\ny_test = keras.utils.to_categorical(y_test)\n",
"Train the model",
"\nmodel.fit(x_train, y_train, epochs=20, batch_size=128, validation_split=0.2)\n",
"I can't say that it works well or that it is indeed a good idea, but you can probably\n get decent results by tuning your hyperparameters.\nYou can use this setup to add a \"SVM layer\" on top of a deep learning model, and train\n the whole model end-to-end."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
anhaidgroup/py_stringsimjoin
|
notebooks/jaccard_books.ipynb
|
bsd-3-clause
|
[
"This tutorial explains how to join two tables A and B using jaccard similarity measure.",
"# Import libraries\nimport py_stringsimjoin as ssj\nimport py_stringmatching as sm\nimport pandas as pd\nimport os\nimport sys\n\nprint('python version ' + sys.version)\nprint('py_stringsimjoin version: ' + ssj.__version__)\nprint('py_stringmatching version: ' + sm.__version__)\nprint('pandas version: ' + pd.__version__)",
"Loading data\nWe begin by loading two tables. For the purpose of this tutorial, \nwe use the books dataset that comes with the package.",
"table_A_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'books_A.csv.gz'])\ntable_B_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'books_B.csv.gz'])\n\n# Load csv files as dataframes. Since we are reading a compressed csv file, we provide the compression argument.\n# If you are reading an uncompressed csv file, you should not specify the compression argument.\nA = pd.read_csv(table_A_path, compression='gzip')\nB = pd.read_csv(table_B_path, compression='gzip')\nprint('Number of records in A: ' + str(len(A)))\nprint('Number of records in B: ' + str(len(B)))\n\nA.head(1)\n\nB.head(1)",
"Profiling data\nIn order to perform the join, you need to identify on which attribute \nto perform the join. Using the profiling command, you can inspect which\nattributes are suitable for join. For example, if an attribute contains \nmany missing values, you may not want to perform join on that attribute.",
"# profile attributes in table A\nssj.profile_table_for_join(A)\n\n# profile attributes in table B\nssj.profile_table_for_join(B)",
"Based on the profile output, we find that the 'Title' attribute in both \ntables does not contain any missing values. Hence, for the purpose of this \ntutorial, we will now join tables A and B on 'Title' attribute using jaccard \nmeasure. Next, we need to decide on what threshold to use for the join. For \nthis tutorial, we will use a threshold of 0.5. Specifically, the join will \nnow find tuple pairs from A and B such that the jaccard score over \nthe 'Title' attributes is atleast 0.5.\nCreating a tokenizer\nThe next step after loading the tables is to create a tokenizer. \nA tokenizer is used to tokenize a string into a set of tokens. \nTo create a tokenizer, you can use the different tokenizers provided \nby py_stringmatching package. A whitespace tokenizer can be created as follows:",
"# create whitespace tokenizer for tokenizing 'Title' attribute. The return_set flag should be set to True since\n# jaccard is a set based measure.\nws = sm.WhitespaceTokenizer(return_set=True)",
"Performing join\nNext, you need to perform the join using the following command:",
"# find all pairs from A and B such that the jaccard score on 'Title' is at least 0.5. Setting n_jobs=-1 exploits all\n# CPU cores available.\noutput_pairs = ssj.jaccard_join(A, B, 'ID', 'ID', 'Title', 'Title', ws, 0.5, \n l_out_attrs=['Title'], r_out_attrs=['Title'], n_jobs=-1)\n\nlen(output_pairs)\n\n# examine the output pairs\noutput_pairs.head()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
conversationai/conversationai-crowdsource
|
constructiveness_toxicity_crowdsource/jupyter-notebooks/annotation_aggregation/combine_annotated_batches.ipynb
|
apache-2.0
|
[
"import sys\nimport pandas as pd\n\naggregated_data_path = '../../CF_output/combined/'\n\nbatches = [3,4,5,6]",
"Combine constructiveness and toxicity annotations from different batches",
"dfs = []\nfor batch in batches:\n filename = aggregated_data_path + 'batch' + str(batch) + '_constructiveness_and_toxicity_combined.csv'\n dfs.append(pd.read_csv(filename))\n\ncombined_annotations_df = pd.concat(dfs)\n\n# Sort the merged dataframe on constructiveness and toxicity\ncombined_annotations_df.shape\n\n# Relevant columns\ncols = (['article_id', 'article_author', 'article_published_date',\n 'article_title', 'article_url', 'article_text',\n 'comment_author', 'comment_counter', 'comment_text',\n 'agree_constructiveness_expt', 'agree_toxicity_expt', 'constructive', 'constructive_internal_gold', \n 'crowd_toxicity_level', 'crowd_toxicity_level_internal_gold',\n 'has_content', 'crowd_discard', \n 'constructive_characteristics', 'non_constructive_characteristics',\n 'toxicity_characteristics', \n 'crowd_comments_constructiveness_expt', \n 'crowd_comments_toxicity_expt',\n 'other_con_chars', 'other_noncon_chars', 'other_toxic_chars' \n ])",
"Write contructiveness and toxicity combined CSV",
"output_dir = '../../CF_output/annotated_data/'\n\ncombined_annotations_df.to_csv( output_dir + 'constructiveness_and_toxicity_annotations.csv', columns = cols, index = False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
scottprahl/miepython
|
docs/07_algorithm.ipynb
|
mit
|
[
"Mie Scattering Algorithms\nScott Prahl\nJan 2022\nThis Jupyter notebook shows the formulas used in miepython. This code is heavily influenced by Wiscomes MIEV0 code as documented in his paper on Mie scattering and his 1979 NCAR and 1996 NCAR publications.\nThere are a couple of things that set this code apart from other python Mie codes. \n1) Instead of using the built-in special functions from SciPy, the calculation relies on the logarthmic derivative of the Ricatti-Bessel functions. This technique is significantly more accurate.\n2) The code uses special cases for small spheres. This is faster and more accurate\n3) The code works when the index of refraction m.real is zero or when m.imag is very large (negative).\nThe code has been tested up to sizes ($x=2\\pi r/\\lambda=10000$).\nIf miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)",
"#!pip install --user miepython\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ntry:\n import miepython.miepython as miepython\n\nexcept ModuleNotFoundError:\n print('miepython not installed. To install, uncomment and run the cell above.')\n print('Once installation is successful, rerun this cell again.')",
"The logarithmic derivative $D_n$.\nThis routine uses a continued fraction method to compute $D_n(z)$\nproposed by Lentz. Lentz uses the notation $A_n$ \ninstead of $D_n$, but I prefer the notation used by Bohren and Huffman.\nThis method eliminates many weaknesses in previous algorithms using\nforward recursion. \nThe logarithmic derivative $D_n$ is defined as\n$$\nD_n = -\\frac{n}{z} + \\frac{J_{n-1/2}(z)}{J_{n+1/2}(z)} \n$$\nEquation (5) in Lentz's paper can be used to obtain\n$$\n\\frac{J_{n-1/2}(z)}{J_{n+1/2}(z)} =\n{2n+1 \\over z} + {1\\over\\displaystyle -\\frac{2n+3}{z} +\n {\\strut 1 \\over\\displaystyle \\frac{2n+5}{z} +\n {\\strut 1 \\over\\displaystyle -\\frac{2n+7}{z} + \\cdots}}}\n$$\nNow if\n$$\n\\alpha_{i,j}=[a_i,a_{i-1},\\ldots,a_j] = a_i + \\frac{1}{\\displaystyle a_{i-1} +\n \\frac{\\strut 1}{\\displaystyle a_{i-2} + \\cdots\n \\frac{\\strut 1 }{\\displaystyle a_j}}}\n$$\nwe seek to create \n$$\n\\alpha = \\alpha_{1,1}\\,\\alpha_{2,1}\\cdots \\alpha_{j,1} \n\\qquad\n\\beta = \\alpha_{2,2}\\,\\alpha_{3,2}\\cdots \\alpha_{j,2} \n$$\nsince Lentz showed that\n$$\n\\frac{J_{n-1/2}(z)}{J_{n+1/2}(z)} \\approx \\frac{\\alpha}{\\beta}\n$$\nThe whole goal is to iterate until the $\\alpha$ and $\\beta$\nare identical to the number of digits desired. Once this is\nachieved, then use equations this equation and the first equation for\nthe logarithmic derivative to calculate\n$D_n(z)$.\nFirst terms\nThe value of $a_j$ is\n$$\na_j = (-1)^{j+1} {2n+2j-1\\over z}\n$$\nThe first terms for $\\alpha$ and $\\beta$ are then\n$$\n\\alpha = a_1 \\left(a_2 + \\frac{1}{a_1}\\right)\n\\qquad\n\\beta = a_2\n$$\nLater terms\nTo calculate the next $\\alpha$ and $\\beta$, I use\n$$\na_{j+1} = -a_j+(-1)^j\\,{2\\over z}\n$$\nto find the next $a_j$ and\n$$\n\\alpha_{j+1} = a_j + \\frac{1}{\\alpha_j},\n\\qquad\\hbox{and}\\qquad\n\\beta_{j+1} = a_j + \\frac{1}{\\beta_j}\n$$\nCalculating $D_n$\nUse formula 7 from Wiscombe's paper to figure out if upwards or downwards recurrence should be used. Namely if\n$$\nm_{\\rm Im}x\\le 13.78 m_{\\rm Re}^2 - 10.8 m_{\\rm Re} + 3.9\n$$\nthe upward recurrence would be stable.\nThe returned array D is set-up so that $D_n(z)=$ D[n]. Therefore the first value for $D_1(z)$ will not be D[0], but rather D[1].\n$D_n$ by downwards recurrence.\nStart downwards recurrence using by accurately calculating D[nstop] using the Lentz method, then find earlier\nterms of the logarithmic derivative $D_n(z)$ using the recurrence relation,\n$$\nD_{n-1}(z) = \\frac{n}{z} - \\frac{1}{D_n(z) + n/z}\n$$\nThis is a pretty straightforward procedure.\n$D_n$ by upward recurrence.\nCalculating the logarithmic derivative $D_n(\\rho)$ using the upward recurrence relation,\n$$\nD_n(z) = \\frac{1}{n/z - D_{n-1}(z)}-\\frac{n}{z}\n$$\nTo calculate the initial value D[1] we use Wiscombe's representation that avoids overflow errors when the usual $D_0(x)=1/tan(z)$ is used.\n$$\nD_1(z) = -\\frac{1}{z}+\\frac{1-\\exp(-2jz)}{[1-\\exp(-2jz)]/z - j[1+\\exp(-2jz)]}\n$$",
"m = 1\nx = 1\nnstop = 10\n\ndn = np.zeros(nstop, dtype=np.complex128)\n\nprint(\"both techniques work up to 5\")\nn=5\nprint(\" Lentz\",n,miepython._Lentz_Dn(m*x,n).real)\nmiepython._D_downwards(m*x,nstop, dn)\nprint(\"downwards\",n, dn[n].real)\nmiepython._D_upwards(m*x,nstop, dn)\nprint(\" upwards\",n, dn[n].real)\n\nprint(\"but upwards fails badly by n=9\")\nn=9\nprint(\" Lentz\",n,miepython._Lentz_Dn(m*x,n).real)\nmiepython._D_downwards(m*x, nstop, dn)\nprint(\"downwards\",n,dn[n].real)\nmiepython._D_upwards(m*x, nstop, dn)\nprint(\" upwards\",n,dn[n].real)",
"Calculating $A_n$ and $B_n$\nOK, Here we go. We need to start up the arrays. First, recall\n(page 128 Bohren and Huffman) that\n$$\n\\psi_n(x) = x j_n(x)\\qquad\\hbox{and}\\qquad \\xi_n(x) = x j_n(x) + i x y_n(x)\n$$\nwhere $j_n$ and $y_n$ are spherical Bessel functions. The first few terms\nmay be worked out as,\n$$\n\\psi_0(x) = \\sin x \n\\qquad\\hbox{and}\\qquad\n\\psi_1(x) = \\frac{\\sin x}{x} - \\cos x\n$$\nand\n$$\n\\xi_0(x) = \\psi_0 + i \\cos x\n\\qquad\\hbox{and}\\qquad\n\\xi_1(x) = \\psi_1 + i \\left[\\frac{\\cos x}{x} + \\sin x\\right]\n$$\nThe main equations for $a_n$ and $b_n$ in Bohren and Huffman Equation (4.88).\n$$\na_n = \\frac{\\Big[ D_n(mx)/m + n/x\\Big] \\psi_n(x)-\\psi_{n-1}(x)}\n {\\Big[ D_n(mx)/m + n/x\\Big] \\xi_n(x)- \\xi_{n-1}(x)}\n$$\nand\n$$\nb_n = \\frac{\\Big[m D_n(mx) + n/x\\Big] \\psi_n(x)-\\psi_{n-1}(x)}\n {\\Big[m D_n(mx) + n/x\\Big] \\xi_n(x)- \\xi_{n-1}(x)}\n$$\nThe recurrence relations for $\\psi$ and $\\xi$ depend on the recursion relations\nfor the spherical Bessel functions (page 96 equation 4.11)\n$$\nz_{n-1}(x) + z_{n+1}(x) = {2n+1\\over x} z_n(x)\n$$\nwhere $z_n$ might be either $j_n$ or $y_n$. Thus\n$$\n\\psi_{n+1}(x) = {2n+1\\over x} \\psi_n(x) - \\psi_{n-1}(x)\n\\qquad\\hbox{and}\\qquad\n\\xi_{n+1}(x) = {2n+1\\over x} \\xi_n(x) - \\xi_{n-1}(x)\n$$\nIf the spheres are perfectly reflecting m.real=0 then Kerker gives\nequations for $a_n$ and $b_n$ that do not depend on $D_n$ at all\n$$\na_n = \\frac{n\\psi_n(x)/x-\\psi_{n-1}(x)}\n {n\\xi_n(x)/x- \\xi_{n-1}(x)}\n$$\nand\n$$\nb_n = \\frac{\\psi_n(x)}{\\xi_n(x)}\n$$\nTherefore D[n] will directly correspond to $D_n$ in Bohren. However, a and b will be zero based arrays and so $a_1$=a[0] or $b_n$=b[n-1]",
"m=4/3\nx=50\nprint(\"m=4/3 test, m=\",m, \" x=\",x)\na, b = miepython._mie_An_Bn(m,x)\nprint(\"a_1=\", a[0])\nprint(\"a_1= (0.531105889295-0.499031485631j) #test\")\nprint(\"b_1=\", b[0])\nprint(\"b_1= (0.791924475935-0.405931152229j) #test\")\nprint()\n\nm=3/2-1j\nx=2\nprint(\"upward recurrence test, m=\",m, \" x=\",x)\na, b = miepython._mie_An_Bn(m,x)\n\nprint(\"a_1=\", a[0])\nprint(\"a_1= (0.546520203397-0.152373857258j) #test\")\nprint(\"b_1=\", b[0])\nprint(\"b_1= (0.389714727888+0.227896075256j) #test\")\nprint()\n\nm=11/10-25j\nx=2\nprint(\"downward recurrence test, m=\",m, \" x=\",x)\na, b = miepython._mie_An_Bn(m,x)\n\nprint(\"a_1=\", a[0])\nprint(\"a_1= (0.322406907480-0.465063542971j) #test\")\nprint(\"b_1=\", b[0])\nprint(\"b_1= (0.575167279092+0.492912495262j) #test\")",
"Small Spheres\nThis calculates everything accurately for small spheres. This approximation\nis necessary because in the small particle or Rayleigh limit $x\\rightarrow0$ the\nMie formulas become ill-conditioned. The method was taken from Wiscombe's paper\nand has been tested for several complex indices of refraction. \nWiscombe uses this when \n$$\nx\\vert m\\vert\\le0.1\n$$ \nand says this routine should be accurate to six places. \nThe formula for ${\\hat a}_1$ is\n$$\n{\\hat a}_1 = 2i\\frac{m^2-1}{3}\\frac{1-0.1x^2+\\frac{\\displaystyle4m^2+5}{\\displaystyle1400}x^4}{D}\n$$\nwhere\n$$\nD=m^2+2+(1-0.7m^2)x^2-\\frac{8m^4-385m^2+350}{1400}x^4+2i\\frac{m^2-1}{3}x^3(1-0.1x^2)\n$$\nNote that I have disabled the case when the sphere has no index of refraction.\nThe perfectly conducting sphere equations are \nThe formula for ${\\hat b}_1$ is\n$$\n{\\hat b}_1 = ix^2\\frac{m^2-1}{45} \\frac{1+\\frac{\\displaystyle2m^2-5}{\\displaystyle70}x^2}{1-\\frac{\\displaystyle2m^2-5}{\\displaystyle30}x^2}\n$$\nThe formula for ${\\hat a}_2$ is\n$$\n{\\hat a}_2 = ix^2 \\frac{m^2-1}{15} \\frac{1-\\frac{\\displaystyle1}{\\displaystyle14}x^2}{2m^2+3-\\frac{\\displaystyle2m^2-7}{\\displaystyle14}x^2}\n$$\nThe scattering and extinction efficiencies are given by\n$$\nQ_\\mathrm{ext} = 6x \\cdot \\mathcal{Re}\\left[{\\hat a}_1+{\\hat b}_1+\\frac{5}{3}{\\hat a}_2\\right]\n$$\nand\n$$\nQ_\\mathrm{sca} = 6x^4 T \n$$\nwith\n$$\nT =\\vert{\\hat a}_1\\vert^2+\\vert{\\hat b}_1\\vert^2+\\frac{5}{3}\\vert{\\hat a}_2\\vert^2\n$$\nand the anisotropy (average cosine of the phase function) is\n$$\ng =\\frac{1}{T}\\cdot {\\cal Re}\\left[{\\hat a}_1({\\hat a}_2+{\\hat b}_1)^*\\right] \n$$\nThe backscattering efficiency $Q_\\mathrm{back}$ is \n$$\nQ_\\mathrm{back} = \\frac{\\vert S_1(-1)\\vert^2 }{ x^2}\n$$\nwhere $S_1(\\mu)$ is\n$$\n\\frac{S_1(-1)}{x}=\\frac{3}{2}x^2\\left[{\\hat a}_1-{\\hat b}_1-\\frac{5}{3}{\\hat a}_2\\right] \n$$",
"m=1.5-0.1j\nx=0.0665\nprint(\"abs(m*x)=\",abs(m*x))\nqext, qsca, qback, g = miepython._small_mie(m,x)\nprint(\"Qext=\",qext)\nprint(\"Qsca=\",qsca)\nprint(\"Qabs=\",qext-qsca)\nprint(\"Qback=\",qback)\nprint(\"g=\",g)\n\nprint()\nprint('The following should be nearly the same as those above:')\nprint()\n\nx=0.067\nprint(\"abs(m*x)=\",abs(m*x))\nqext, qsca, qback, g = miepython.mie(m,x)\nprint(\"Qext=\",qext)\nprint(\"Qsca=\",qsca)\nprint(\"Qabs=\",qext-qsca)\nprint(\"Qback=\",qback)\nprint(\"g=\",g)",
"Small Perfectly Reflecting Spheres\nThe above equations fail when m.real=0 so use these approximations when the sphere is small and refective",
"m = 0 - 0.01j\nx=0.099\nqext, qsca, qback, g = miepython._small_conducting_mie(m,x)\nprint(\"Qext =\",qext)\nprint(\"Qsca =\",qsca)\nprint(\"Qabs =\",qext-qsca)\nprint(\"Qback=\",qback)\nprint(\"g =\",g)\n\n\nprint()\nprint('The following should be nearly the same as those above:')\nprint()\n\nm = 0 - 0.01j\nx=0.1001\nqext, qsca, qback2, g = miepython.mie(m,x)\nprint(\"Qext =\",qext)\nprint(\"Qsca =\",qsca)\nprint(\"Qabs =\",qext-qsca)\nprint(\"Qback=\",qback2)\nprint(\"g =\",g)",
"Mie scattering calculations\nFrom page 120 of Bohren and Huffman the anisotropy is given by\n$$\nQ_{\\rm sca}\\langle \\cos\\theta\\rangle = \\frac{4}{x^2} \\left[\n\\sum_{n=1}^{\\infty} \\frac{n(n+2)}{n+1} \\mbox{Re}\\lbrace a_na_{n+1}^+b_nb_{n+1}^\\rbrace\n+ \\sum_{n=1}^{\\infty} \\frac{2n+1}{n(n+1)} \\mbox{Re}\\lbrace a_nb_n^*\\rbrace\\right]\n$$\nFor computation purposes, this must be rewritten as\n$$\nQ_{\\rm sca}\\langle \\cos\\theta\\rangle = \\frac{4}{x^2} \\left[\n\\sum_{n=2}^{\\infty} \\frac{(n^2-1)}{n} \\mbox{Re}\\lbrace a_{n-1}a_n^+b_{n-1}b_n^\\rbrace\n+ \\sum_{n=1}^{\\infty} \\frac{2n+1}{n(n+1)} \\mbox{Re}\\lbrace a_nb_n^*\\rbrace\\right]\n$$\nFrom page 122 we find an expression for the backscattering efficiency\n$$\nQ_{\\rm back} = \\frac{\\sigma_b}{\\pi a^2} = \\frac{1}{x^2} \\left\\vert\n\\sum_{n=1}^{\\infty} (2n+1)(-1)^n(a_n-b_n)\\right\\vert^2\n$$\nFrom page 103 we find an expression for the scattering cross section\n$$\nQ_{\\rm sca} = \\frac{\\sigma_s}{\\pi a^2}\n= \\frac{2}{x^2}\\sum_{n=1}^{\\infty} (2n+1)(\\vert a_n\\vert^2+\\vert b_n\\vert^2)\n$$\nThe total extinction efficiency is also found on page 103\n$$\nQ_{\\rm ext}= \\frac{\\sigma_t}{\\pi a^2}\n= \\frac{2}{x^2}\\sum_{n=1}^{\\infty} (2n+1)\\cdot\\mbox{Re}{a_n+b_n}\n$$",
"qext, qsca, qback, g = miepython.mie(1.55-0.0j,2*np.pi/0.6328*0.525)\nprint(\"Qext=\",qext)\nprint(\"Qsca=\",qsca)\nprint(\"Qabs=\",qext-qsca)\nprint(\"Qback=\",qback)\nprint(\"g=\",g)\n\nx=1000.0\nm=1.5-0.1j\nqext, qsca, qback, g = miepython.mie(m,x)\nprint(\"Qext=\",qext)\nprint(\"Qsca=\",qsca)\nprint(\"Qabs=\",qext-qsca)\nprint(\"Qback=\",qback)\nprint(\"g=\",g)\n\nx=10000.0\nm=1.5-1j\nqext, qsca, qback, g = miepython.mie(m,x)\nprint(\"Qext=\",qext)\nprint(\"Qsca=\",qsca)\nprint(\"Qabs=\",qext-qsca)\nprint(\"Qback=\",qback)\nprint(\"g=\",g)",
"Scattering Matrix\nThe scattering matrix is given by Equation 4.74 in Bohren and Huffman.\nNamely,\n$$\nS_1(\\cos\\theta) = \\sum_{n=1}^\\infty \\frac{2n+1}{n(n+1)} \\left[ a_n \\pi_n(\\cos\\theta)+b_n\\tau_n(\\cos\\theta)\\right]\n$$\nand\n$$\nS_2(\\cos\\theta) = \\sum_{n=1}^\\infty \\frac{2n+1}{n(n+1)} \\left[a_n \\tau_n(\\cos\\theta)+b_n\\pi_n(\\cos\\theta) \\right]\n$$\nIf $\\mu=\\cos\\theta$ then\n$$\nS_1(\\mu) = \\sum_{n=1}^\\infty \\frac{2n+1}{n(n+1)} \\left[ a_n \\pi_n(\\mu)+b_n\\tau_n(\\mu)\\right]\n$$\nand\n$$\nS_2(\\mu) = \\sum_{n=1}^\\infty \\frac{2n+1}{n(n+1)} \\left[a_n \\tau_n(\\mu)+b_n\\pi_n(\\mu) \\right]\n$$\nThis means that for each angle $\\mu$ we need to know $\\tau_n(\\mu)$ and $\\pi_n(\\mu)$ for every $a_n$ and $b_n$.\nEquation 4.47 in Bohren and Huffman states\n$$\n\\pi_n(\\mu) = \\frac{2n-1}{ n-1}\\mu \\pi_{n-1}(\\mu) - \\frac{n}{ n-1} \\pi_{n-2}(\\mu)\n$$\nand knowning that $\\pi_0(\\mu)=0$ and $\\pi_1(\\mu)=1$, all the rest can be found. Similarly\n$$\n\\tau_n(\\mu) = n\\mu\\pi_n(\\mu)-(n+1)\\pi_{n-1}(\\mu)\n$$\nso the plan is to use these recurrence relations to find $\\pi_n(\\mu)$ and $\\tau_n(\\mu)$ during the summation process.\nThe only real trick is to account for 0-based arrays when the sums above are 1-based.",
"m=1.55-0.1j\nx=5.213\nmu = np.array([0.0,0.5,1.0])\n\nS1,S2 = miepython.mie_S1_S2(m,x,mu)\nfor i in range(len(mu)):\n print(mu[i], S2[i].real, S2[i].imag)\n ",
"Test to match Bohren's Sample Calculation",
"# Test to match Bohren's Sample Calculation\ntheta = np.arange(0,181,9)\nmu=np.cos(theta*np.pi/180)\nS1,S2 = miepython.mie_S1_S2(1.55,5.213,mu)\nqext, qsca, qback, g = miepython.mie(m,x)\nnorm = np.sqrt(qext * x**2 * np.pi)\nS1 /= norm\nS2 /= norm\n\nS11 = (abs(S2)**2 + abs(S1)**2)/2\nS12 = (abs(S2)**2 - abs(S1)**2)/2\nS33 = (S2 * S1.conjugate()).real\nS34 = (S2 * S1.conjugate()).imag\n\n# the minus in POL=-S12/S11 matches that Bohren\n# the minus in front of -S34/S11 does not match Bohren's code!\n\nprint(\"ANGLE S11 POL S33 S34\")\nfor i in range(len(mu)):\n print(\"%5d %10.8f % 10.8f % 10.8f % 10.8f\" % (theta[i], S11[i]/S11[0], -S12[i]/S11[i], S33[i]/S11[i], -S34[i]/S11[i]))\n\nnum=100\nm=1.1\nx=np.linspace(0.01,0.21,num)\nqext, qsca, qback, g = miepython.mie(m,x)\n \nplt.plot(x,qback)\nplt.plot((abs(0.1/m),abs(0.1/m)),(0,qback[num-1]))\nplt.xlabel(\"Size Parameter (-)\")\nplt.ylabel(\"Backscattering Efficiency\")\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
spectralDNS/shenfun
|
binder/Poisson1D.ipynb
|
bsd-2-clause
|
[
"Demo - Poisson's equation 1D\nIn this demo we will solve Poisson's equation\n\\begin{align}\n\\label{eq:poisson}\n\\nabla^2 u(x) &= f(x), \\quad \\forall \\, x \\in [-1, 1]\\\nu(\\pm 1) &= 0, \n\\end{align}\nwhere $u(x)$ is the solution and $f(x)$ is some given function of $x$.\nWe want to solve this equation with the spectral Galerkin method, using a basis composed of either Chebyshev $T_k(x)$ or Legendre $L_k(x)$ polynomials. Using $P_k$ to refer to either one, Shen's composite basis is then given as \n$$\nV^N = \\text{span}{P_k - P_{k+2}\\, | \\, k=0, 1, \\ldots, N-3},\n$$\nwhere all basis functions satisfy the homogeneous boundary conditions.\nFor the spectral Galerkin method we will also need the weighted inner product\n$$\n (u, v)w = \\int{-1}^1 u v w \\, {dx},\n$$\nwhere $w(x)$ is a weight associated with the chosen basis, and $v$ and $u$ are test and trial functions, respectively. For Legendre the weight is simply $w(x)=1$, whereas for Chebyshev it is $w(x)=1/\\sqrt{1-x^2}$. Quadrature is used to approximate the integral\n$$\n\\int_{-1}^1 u v w \\, {dx} \\approx \\sum_{i=0}^{N-1} u(x_i) v(x_i) \\omega_i,\n$$\nwhere ${\\omega_i}{i=0}^{N-1}$ are the quadrature weights associated with the chosen basis and quadrature rule. The associated quadrature points are denoted as ${x_i}{i=0}^{N-1}$. For Chebyshev we can choose between Chebyshev-Gauss or Chebyshev-Gauss-Lobatto, whereas for Legendre the choices are Legendre-Gauss or Legendre-Gauss-Lobatto. \nWith the weighted inner product in place we can create variational problems from the original PDE by multiplying with a test function $v$ and integrating over the domain. For a Legendre basis we can use integration by parts and formulate the variational problem: \nFind $u \\in V^N$ such that\n$$ (\\nabla u, \\nabla v) = -(f, v), \\quad \\forall \\, v \\in V^N.$$\nFor a Chebyshev basis the integration by parts is complicated due to the non-constant weight and the variational problem used is instead: \nFind $u \\in V^N$ such that\n$$ (\\nabla^2 u, v)_w = (f, v)_w, \\quad \\forall \\, v \\in V^N.$$\nWe now break the problem down to linear algebra. With any choice of basis or quadrature rule we use $\\phi_k(x)$ to represent the test function $v$ and thus\n$$\n\\begin{align}\nv(x) &= \\phi_k(x), \\\nu(x) &= \\sum_{j=0}^{N-3} \\hat{u}j \\phi_j(x),\n\\end{align}\n$$\nwhere $\\hat{\\mathbf{u}}={\\hat{u}_j}{j=0}^{N-3}$ are the unknown expansion coefficients, also called the degrees of freedom.\nInsert into the variational problem for Legendre and we get the linear algebra system to solve for $\\hat{\\mathbf{u}}$\n$$\n\\begin{align}\n(\\nabla \\sum_{j=0}^{N-3} \\hat{u}j \\phi_j, \\nabla \\phi_k) &= -(f, \\phi_k), \\\n\\sum{j=0}^{N-3} \\underbrace{(\\nabla \\phi_j, \\nabla \\phi_k)}{a{kj}} \\hat{u}j &= -\\underbrace{(f, \\phi_k)}{\\tilde{f}_k}, \\\nA \\hat{\\textbf{u}} &= -\\tilde{\\textbf{f}},\n\\end{align}\n$$\nwhere $A = (a_{kj}){0 \\ge k, j \\ge N-3}$ is the stiffness matrix and $\\tilde{\\textbf{f}} = {\\tilde{f}_k}{k=0}^{N-3}$.\nImplementation with shenfun\nThe given problem may be easily solved with a few lines of code using the shenfun Python module. The high-level code matches closely the mathematics and the stiffness matrix is assembled simply as",
"from shenfun import *\nimport matplotlib.pyplot as plt\n\nN = 100\nV = FunctionSpace(N, 'Legendre', quad='LG', bc=(0, 0))\nv = TestFunction(V)\nu = TrialFunction(V)\nA = inner(grad(u), grad(v))\n",
"Using a manufactured solution that satisfies the boundary conditions we can create just about any corresponding right hand side $f(x)$",
"import sympy\nx = sympy.symbols('x')\nue = (1-x**2)*(sympy.cos(4*x)*sympy.sin(6*x))\nfe = ue.diff(x, 2)",
"Note that fe is the right hand side that corresponds to the exact solution ue. We now want to use fe to compute a numerical solution $u$ that can be compared directly with the given ue. First, to compute the inner product $(f, v)$, we need to evaluate fe on the quadrature mesh",
"fl = sympy.lambdify(x, fe, 'numpy')\nul = sympy.lambdify(x, ue, 'numpy')\nfj = Array(V, buffer=fl(V.mesh()))",
"fj holds the analytical fe on the nodes of the quadrature mesh.\nAssemble right hand side $\\tilde{\\textbf{f}} = -(f, v)_w$ using the shenfun function inner",
"f_tilde = inner(-fj, v)",
"All that remains is to solve the linear algebra system \n$$\n\\begin{align}\nA \\hat{\\textbf{u}} &= \\tilde{\\textbf{f}} \\\n\\hat{\\textbf{u}} &= A^{-1} \\tilde{\\textbf{f}} \n\\end{align}\n$$",
"u_hat = Function(V)\nu_hat = A/f_tilde\n",
"Get solution in real space, i.e., evaluate $u(x_i) = \\sum_{j=0}^{N-3} \\hat{u}j \\phi_j(x_i)$ for all quadrature points ${x_i}{i=0}^{N-1}$.",
"uj = u_hat.backward()\n\nX = V.mesh()\nplt.plot(X, uj)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JackDi/phys202-2015-work
|
assignments/assignment10/ODEsEx02.ipynb
|
mit
|
[
"Ordinary Differential Equations Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed",
"Lorenz system\nThe Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:\n$$ \\frac{dx}{dt} = \\sigma(y-x) $$\n$$ \\frac{dy}{dt} = x(\\rho-z) - y $$\n$$ \\frac{dz}{dt} = xy - \\beta z $$\nThe solution vector is $[x(t),y(t),z(t)]$ and $\\sigma$, $\\rho$, and $\\beta$ are parameters that govern the behavior of the solutions.\nWrite a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.",
"def lorentz_derivs(yvec, t, sigma, rho, beta):\n \"\"\"Compute the the derivatives for the Lorentz system at yvec(t).\"\"\"\n x= yvec[0]\n y= yvec[1]\n z= yvec[2]\n dx= sigma*(y-x)\n dy= x*(rho-z)-y\n dz= x*y -beta*z\n return np.array([dx,dy,dz])\n\nassert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])",
"Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.",
"def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Solve the Lorenz system for a single initial condition.\n \n Parameters\n ----------\n ic : array, list, tuple\n Initial conditions [x,y,z].\n max_time: float\n The max time to use. Integrate with 250 points per time unit.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \n Returns\n -------\n soln : np.ndarray\n The array of the solution. Each row will be the solution vector at that time.\n t : np.ndarray\n The array of time points used.\n \n \"\"\"\n # YOUR CODE HERE\n t= np.linspace(0,max_time, int(250*max_time))\n soln= odeint(lorentz_derivs,\n ic,\n t,\n args=(sigma,rho,beta)\n )\n return soln, t\n\n\nassert True # leave this to grade solve_lorenz\n\nx= np.random.seed(1)\ny= np.random.seed(1)\nz= np.random.seed(1)\nx=np.random.randint(-15, 15,1)\ny=np.random.randint(-15, 15,1)\nz=np.random.randint(-15, 15,1)\nsoln, time=solve_lorentz([3,3,3], max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0)\nsoln#[:,1]",
"Write a function plot_lorentz that:\n\nSolves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.\nPlot $[x(t),z(t)]$ using a line to show each trajectory.\nColor each line using the hot colormap from Matplotlib.\nLabel your plot and choose an appropriate x and y limit.\n\nThe following cell shows how to generate colors that can be used for the lines:",
"N = 5\ncolors = plt.cm.hot(np.linspace(0,1,N))\nfor i in range(N):\n # To use these colors with plt.plot, pass them as the color argument\n print(colors[i])\n\ndef plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Plot [x(t),z(t)] for the Lorenz system.\n \n Parameters\n ----------\n N : int\n Number of initial conditions and trajectories to plot.\n max_time: float\n Maximum time to use.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \"\"\"\n x= np.random.seed(1)\n y= np.random.seed(1)\n z= np.random.seed(1)\n x=np.random.randint(-15, 15,N)\n y=np.random.randint(-15, 15,N)\n z=np.random.randint(-15, 15,N)\n soln, time=solve_lorentz([25,25,25], max_time, sigma, rho, beta)\n plt.plot(soln[:,0],soln[:,2])\n\n\nplot_lorentz()\n\nassert True # leave this to grade the plot_lorenz function",
"Use interact to explore your plot_lorenz function with:\n\nmax_time an integer slider over the interval $[1,10]$.\nN an integer slider over the interval $[1,50]$.\nsigma a float slider over the interval $[0.0,50.0]$.\nrho a float slider over the interval $[0.0,50.0]$.\nbeta fixed at a value of $8/3$.",
"# YOUR CODE HERE\ninteract(plot_lorentz, N=(1,50,1), max_time=(1,10), sigma=(0,50,.1), rho=(0,50,0.1),beta=fixed(8.0/3.0))",
"Describe the different behaviors you observe as you vary the parameters $\\sigma$, $\\rho$ and $\\beta$ of the system:\nYOUR ANSWER HERE\nChanging $\\rho$ changes the number of spirals the function makes by making it loop more or less times.\nChanging $\\sigma$ changes how big the spiral is, which will also impact the number of loops.\nChanging the max time changes the number of spirlas there are but not their density, simply by making the line shorter or longer.\nChanging N changes"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
eford/rebound
|
ipython_examples/EscapingParticles.ipynb
|
gpl-3.0
|
[
"Escaping particles\nSometimes we are not interested in particles that get too far from the central body. Here we will define a radius beyond which we remove particles from the simulation. Let's set up an artificial situation with 3 planets, and the inner one moves radially outward with $v > v_{escape}$.",
"import rebound\nimport numpy as np\ndef setupSimulation():\n sim = rebound.Simulation()\n sim.add(m=1., hash=\"Sun\")\n sim.add(x=0.4,vx=5., hash=\"Mercury\")\n sim.add(a=0.7, hash=\"Venus\")\n sim.add(a=1., hash=\"Earth\")\n sim.move_to_com()\n return sim\n\nsim = setupSimulation()\nsim.status()",
"Now let's run a simulation for 20 years (in default units where $G=1$, and thus AU, yr/2$\\pi$, and $M_\\odot$, see Units.ipynb for how to change units), and set up a 50 AU sphere beyond which we remove particles from the simulation. We can do this by setting the exit_max_distance flag of the simulation object. If a particle's distance (from the origin of whatever inertial reference frame chosen) exceeds sim.exit_max_distance, an exception is thrown.\nIf we simply call sim.integrate(), the program will crash due to the unhandled exception when the particle escapes, so we'll create a try-except block to catch the exception. We'll also store the x,y positions of Venus, which we expect to survive.",
"sim = setupSimulation() # Resets everything\nsim.exit_max_distance = 50.\nNoutputs = 1000\ntimes = np.linspace(0,20.*2.*np.pi,Noutputs)\nxvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs)\nfor i,time in enumerate(times):\n try:\n sim.integrate(time) \n except rebound.Escape as error:\n print(error)\n for j in range(sim.N):\n p = sim.particles[j]\n d2 = p.x*p.x + p.y*p.y + p.z*p.z\n if d2>sim.exit_max_distance**2:\n index=j # cache index rather than remove here since our loop would go beyond end of particles array\n sim.remove(index=index)\n xvenus[i] = sim.particles[2].x\n yvenus[i] = sim.particles[2].y\n\nprint(\"Went down to {0} particles\".format(sim.N))",
"So this worked as expected. Now let's plot what we got:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nfig,ax = plt.subplots(figsize=(15,5))\nax.plot(xvenus, yvenus)\nax.set_aspect('equal')\nax.set_xlim([-2,10]);",
"This doesn't look right. The problem here is that when we removed particles[1] from the simulation, all the particles got shifted down in the particles array. So following the removal, xvenus all of a sudden started getting populated by the values for Earth (the new sim.particles[2]). A more robust way to access particles is using hashes (see UniquelyIdentifyingParticles.ipynb)",
"sim = setupSimulation() # Resets everything\nsim.exit_max_distance = 50.\nNoutputs = 1000\ntimes = np.linspace(0,20.*2.*np.pi,Noutputs)\nxvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs)\nfor i,time in enumerate(times):\n try:\n sim.integrate(time) \n except rebound.Escape as error:\n print(error)\n for j in range(sim.N):\n p = sim.particles[j]\n d2 = p.x*p.x + p.y*p.y + p.z*p.z\n if d2>sim.exit_max_distance**2:\n index=j # cache index rather than remove here since our loop would go beyond end of particles array\n sim.remove(index=index)\n xvenus[i] = sim.get_particle_by_hash(\"Venus\").x\n yvenus[i] = sim.get_particle_by_hash(\"Venus\").y\n\nfig,ax = plt.subplots(figsize=(15,5))\nax.plot(xvenus, yvenus)\nax.set_aspect('equal')\nax.set_xlim([-2,10]);",
"Much better! We solved the problem by assigning particles hashes and using those to access the particles for output."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mattmcd/PyBayes
|
scripts/gp_first_principles_01.ipynb
|
apache-2.0
|
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\n\nN = 101\nx = np.linspace(0, 1, N).reshape(-1, 1)\n\na = 0.2\nK = np.exp(-(x-x.T)**2/a**2) + np.eye(N)*0.00001\n\n_ = plt.matshow(K)\n\nL = np.linalg.cholesky(K)\n\n_ = plt.matshow(L)\n\n_ = plt.matshow(L.dot(L.T))",
"Sample from the Gaussian Process by use of the Cholesky decomposition of the Kernel matrix",
"n_sample = 50000\nu = np.random.randn(N, n_sample)\n\nX = L.dot(u)\n_ = plt.plot(X[:, np.random.permutation(n_sample)[:500]], c='k', alpha=0.05)\n_ = plt.plot(X.mean(axis=1), c='k', linewidth=2)\n_ = plt.plot(2*X.std(axis=1), c='r', linewidth=2)\n_ = plt.plot(-2*X.std(axis=1), c='r', linewidth=2)",
"Sample from the posterior given points at (0.1, 0.0), (0.5, 1.0)",
"_ = plt.plot(x, X[:, (np.abs(X[np.where(x == 0.1)[0][0], :] - 0.0) < 0.05) &\n (np.abs(X[np.where(x == 0.5)[0][0], :] -1) < 0.05)], \n c='k', alpha=0.25)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
albahnsen/ML_SecurityInformatics
|
notebooks/01-IntroMachineLearning.ipynb
|
mit
|
[
"01 - Introduction to Machine Learning\nby Alejandro Correa Bahnsen\nversion 0.2, May 2016\nPart of the class Machine Learning for Security Informatics\nThis notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Jake Vanderplas\nWhat is Machine Learning?\nIn this section we will begin to explore the basic principles of machine learning.\nMachine Learning is about building programs with tunable parameters (typically an\narray of floating point values) that are adjusted automatically so as to improve\ntheir behavior by adapting to previously seen data.\nMachine Learning can be considered a subfield of Artificial Intelligence since those\nalgorithms can be seen as building blocks to make computers learn to behave more\nintelligently by somehow generalizing rather that just storing and retrieving data items\nlike a database system would do.\nWe'll take a look at two very simple machine learning tasks here.\nThe first is a classification task: the figure shows a\ncollection of two-dimensional data, colored according to two different class\nlabels.",
"# Import libraries\n%matplotlib inline\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n\n# Create a random set of examples\nfrom sklearn.datasets.samples_generator import make_blobs\nX, Y = make_blobs(n_samples=50, centers=2,random_state=23, cluster_std=2.90)\n\nplt.scatter(X[:, 0], X[:, 1], c=Y)\nplt.show()",
"A classification algorithm may be used to draw a dividing boundary\nbetween the two clusters of points:",
"from sklearn.linear_model import SGDClassifier\nclf = SGDClassifier(loss=\"hinge\", alpha=0.01, n_iter=200, fit_intercept=True)\nclf.fit(X, Y)\n\n# Plot the decision boundary. For that, we will assign a color to each\n# point in the mesh [x_min, m_max]x[y_min, y_max].\nx_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\ny_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\nxx, yy = np.meshgrid(np.arange(x_min, x_max, .05), np.arange(y_min, y_max, .05))\nZ = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)\n\nplt.contour(xx, yy, Z)\nplt.scatter(X[:, 0], X[:, 1], c=Y)\nplt.show()",
"This may seem like a trivial task, but it is a simple version of a very important concept.\nBy drawing this separating line, we have learned a model which can generalize to new\ndata: if you were to drop another point onto the plane which is unlabeled, this algorithm\ncould now predict whether it's a blue or a red point.\nThe next simple task we'll look at is a regression task: a simple best-fit line\nto a set of data:",
"a = 0.5\nb = 1.0\n\n# x from 0 to 10\nx = 30 * np.random.random(20)\n\n# y = a*x + b with noise\ny = a * x + b + np.random.normal(size=x.shape)\n\nplt.scatter(x, y)\n\nfrom sklearn.linear_model import LinearRegression\nclf = LinearRegression()\nclf.fit(x[:, None], y)\n\n# underscore at the end indicates a fit parameter\nprint(clf.coef_)\nprint(clf.intercept_)\n\nx_new = np.linspace(0, 30, 100)\ny_new = clf.predict(x_new[:, None])\nplt.scatter(x, y)\nplt.plot(x_new, y_new)",
"Again, this is an example of fitting a model to data, such that the model can make\ngeneralizations about new data. The model has been learned from the training\ndata, and can be used to predict the result of test data:\nhere, we might be given an x-value, and the model would\nallow us to predict the y value. Again, this might seem like a trivial problem,\nbut it is a basic example of a type of operation that is fundamental to\nmachine learning tasks.\nRepresentation of Data in Scikit-learn\nMachine learning is about creating models from data: for that reason, we'll start by\ndiscussing how data can be represented in order to be understood by the computer. Along\nwith this, we'll build on our matplotlib examples from the previous section and show some\nexamples of how to visualize data.\nMost machine learning algorithms implemented in scikit-learn expect data to be stored in a\ntwo-dimensional array or matrix. The arrays can be\neither numpy arrays, or in some cases scipy.sparse matrices.\nThe size of the array is expected to be [n_samples, n_features]\n\nn_samples: The number of samples: each sample is an item to process (e.g. classify).\n A sample can be a document, a picture, a sound, a video, an astronomical object,\n a row in database or CSV file,\n or whatever you can describe with a fixed set of quantitative traits.\nn_features: The number of features or distinct traits that can be used to describe each\n item in a quantitative manner. Features are generally real-valued, but may be boolean or\n discrete-valued in some cases.\n\nThe number of features must be fixed in advance. However it can be very high dimensional\n(e.g. millions of features) with most of them being zeros for a given sample. This is a case\nwhere scipy.sparse matrices can be useful, in that they are\nmuch more memory-efficient than numpy arrays.\nA Simple Example: the Iris Dataset\nAs an example of a simple dataset, we're going to take a look at the\niris data stored by scikit-learn.\nThe data consists of measurements of three different species of irises.\nThere are three species of iris in the dataset, which we can picture here:",
"from IPython.core.display import Image, display\n\ndisplay(Image(url='images/iris_setosa.jpg'))\nprint(\"Iris Setosa\\n\")\n\ndisplay(Image(url='images/iris_versicolor.jpg'))\nprint(\"Iris Versicolor\\n\")\n\ndisplay(Image(url='images/iris_virginica.jpg'))\nprint(\"Iris Virginica\")\n\ndisplay(Image(url='images/iris_with_length.png'))\nprint('Iris versicolor and the petal and sepal width and length')\nprint('From, Python Data Analytics, Apress, 2015.')",
"Quick Question:\nIf we want to design an algorithm to recognize iris species, what might the data be?\nRemember: we need a 2D array of size [n_samples x n_features].\n\n\nWhat would the n_samples refer to?\n\n\nWhat might the n_features refer to?\n\n\nRemember that there must be a fixed number of features for each sample, and feature\nnumber i must be a similar kind of quantity for each sample.\nLoading the Iris Data with Scikit-Learn\nScikit-learn has a very straightforward set of data on these iris species. The data consist of\nthe following:\n\n\nFeatures in the Iris dataset:\n\n\nsepal length in cm\n\nsepal width in cm\npetal length in cm\n\npetal width in cm\n\n\nTarget classes to predict:\n\n\nIris Setosa\n\nIris Versicolour\nIris Virginica\n\nscikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:",
"from sklearn.datasets import load_iris\niris = load_iris()\niris.keys()\n\nn_samples, n_features = iris.data.shape\nprint((n_samples, n_features))\nprint(iris.data[0])\n\nprint(iris.data.shape)\nprint(iris.target.shape)\n\nprint(iris.target)\nprint(iris.target_names)",
"Dimensionality Reduction: PCA\nPrincipal Component Analysis (PCA) is a dimension reduction technique that can find the combinations of variables that explain the most variance.\nConsider the iris dataset. It cannot be visualized in a single 2D plot, as it has 4 features. We are going to extract 2 combinations of sepal and petal dimensions to visualize it:",
"X, y = iris.data, iris.target\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=3)\npca.fit(X)\nX_reduced = pca.transform(X)\nplt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y)\n\nX, y = iris.data, iris.target\nfrom sklearn.manifold import Isomap\npca = Isomap(n_components=3)\npca.fit(X)\nX_reduced2 = pca.transform(X)\nplt.scatter(X_reduced2[:, 0], X_reduced2[:, 1], c=y)\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = Axes3D(fig)\nax.set_title('Iris Dataset by PCA', size=14)\nax.scatter(X_reduced[:,0],X_reduced[:,1],X_reduced[:,2], c=y)\nax.set_xlabel('First eigenvector')\nax.set_ylabel('Second eigenvector')\nax.set_zlabel('Third eigenvector')\nax.w_xaxis.set_ticklabels(())\nax.w_yaxis.set_ticklabels(())\nax.w_zaxis.set_ticklabels(())\nplt.show()",
"Clustering: K-means\nClustering groups together observations that are homogeneous with respect to a given criterion, finding ''clusters'' in the data.\nNote that these clusters will uncover relevent hidden structure of the data only if the criterion used highlights it.",
"from sklearn.cluster import KMeans\nk_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans\nk_means.fit(X)\ny_pred = k_means.predict(X)\n\nplt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred);",
"Lets then evaluate the performance of the clustering versus the ground truth",
"from sklearn.metrics import confusion_matrix\n\n# Compute confusion matrix\ncm = confusion_matrix(y, y_pred)\nnp.set_printoptions(precision=2)\nprint(cm)\n\ndef plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(iris.target_names))\n plt.xticks(tick_marks, iris.target_names, rotation=45)\n plt.yticks(tick_marks, iris.target_names)\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n\nplt.figure()\nplot_confusion_matrix(cm)",
"Classification Logistic Regression",
"from sklearn.linear_model import LogisticRegression\n\nfrom sklearn import cross_validation\n\nerrors = []\nfor i in range(1000):\n X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target, test_size=0.4, random_state=i)\n\n clf = LogisticRegression()\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n\n acc = (y_pred == y_test).sum()\n err = 1- acc / n_samples\n errors.append(err)\n\nplt.plot(list(range(1000)), errors)\n\nerrors = np.array(errors)\nprint(errors.max(), errors.min(), errors.mean(), errors.std())\n\nfrom sklearn.ensemble import RandomForestClassifier\n\nerrors = []\nfor i in range(1000):\n X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target, test_size=0.4, random_state=i)\n\n clf = RandomForestClassifier()\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n\n acc = (y_pred == y_test).sum()\n err = 1- acc / n_samples\n errors.append(err)\nplt.plot(list(range(1000)), errors)\n\nerrors = np.array(errors)\nprint(errors.max(), errors.min(), errors.mean(), errors.std())",
"Recap: Scikit-learn's estimator interface\nScikit-learn strives to have a uniform interface across all methods,\nand we'll see examples of these below. Given a scikit-learn estimator\nobject named model, the following methods are available:\n\nAvailable in all Estimators\nmodel.fit() : fit training data. For supervised learning applications,\n this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)).\n For unsupervised learning applications, this accepts only a single argument,\n the data X (e.g. model.fit(X)).\nAvailable in supervised estimators\nmodel.predict() : given a trained model, predict the label of a new set of data.\n This method accepts one argument, the new data X_new (e.g. model.predict(X_new)),\n and returns the learned label for each object in the array.\nmodel.predict_proba() : For classification problems, some estimators also provide\n this method, which returns the probability that a new observation has each categorical label.\n In this case, the label with the highest probability is returned by model.predict().\nmodel.score() : for classification or regression problems, most (all?) estimators implement\n a score method. Scores are between 0 and 1, with a larger score indicating a better fit.\nAvailable in unsupervised estimators\nmodel.predict() : predict labels in clustering algorithms.\nmodel.transform() : given an unsupervised model, transform new data into the new basis.\n This also accepts one argument X_new, and returns the new representation of the data based\n on the unsupervised model.\nmodel.fit_transform() : some estimators implement this method,\n which more efficiently performs a fit and a transform on the same input data.\n\nFlow Chart: How to Choose your Estimator\nThis is a flow chart created by scikit-learn super-contributor Andreas Mueller which gives a nice summary of which algorithms to choose in various situations. Keep it around as a handy reference!",
"from IPython.display import Image\nImage(url=\"http://scikit-learn.org/dev/_static/ml_map.png\")",
"Original source on the scikit-learn website\nMachine Learning for Security Informatics\nThere are several applications of machine learning for security informatics\nIntrusion Detection\n\nAn Intrusion Detection System (IDS) is a software that monitors a single or a\nnetwork of computers for malicious activities (attacks) that are aimed at stealing\nor censoring information or corrupting network protocols. Most techniques used\nin today’s IDS are not able to deal with the dynamic and complex nature of cyber\nattacks on computer networks. Hence, efficient adaptive methods like various\ntechniques of machine learning can result in higher detection rates, lower false\nalarm rates and reasonable computation and communication costs.\n\nFraud Detection\nFraud detection is one of the earliest industrial applications of data mining and machine learning. \nFraud detection is typically handled as a binary classification problem, but the class population is unbalanced because instances of fraud are usually very rare compared to the overall volume of transactions. Moreover, when fraudulent transactions are discovered, the business typically takes measures to block the accounts from transacting to prevent further losses. Therefore, model performance is measured by using account-level metrics, which will be discussed in detail later.\n\nPhishing Detection\nPhishing, by definition, is the\nact of defrauding an online user in order to obtain personal information by posing as\na trustworthy institution or entity. Users usually have a hard time differentiating\nbetween legitimate and malicious sites because they are made to look exactly the\nsame. Therefore, there is a need to create better tools to combat attackers.\n\nMalware Classification\nIn recent years, the malware industry has become a well organized market involving large amounts of money. Well funded, multi-player syndicates invest heavily in technologies and capabilities built to evade traditional protection, requiring anti-malware vendors to develop counter mechanisms for finding and deactivating them. In the meantime, they inflict real financial and emotional pain to users of computer systems.\nOne of the major challenges that anti-malware faces today is the vast amounts of data and files which need to be evaluated for potential malicious intent. For example, Microsoft's real-time detection anti-malware products are present on over 160M computers worldwide and inspect over 700M computers monthly. This generates tens of millions of daily data points to be analyzed as potential malware. One of the main reasons for these high volumes of different files is the fact that, in order to evade detection, malware authors introduce polymorphism to the malicious components. This means that malicious files belonging to the same malware \"family\", with the same forms of malicious behavior, are constantly modified and/or obfuscated using various tactics, such that they look like many different files.\n\nMan-in-the-Browser Attacks\nMan-in-the-Browser (MITB) attacks are the most destructive threat on the Internet stealing money from customer accounts right\nnow. These attacks infect a webpage by taking advantage of security vulnerabilities in browsers and common web plugins,\nmodifying web pages and transactions as they are happening in real time. 
Cybercriminals launching an MITB attack can intercept\nand change the content on a website by injecting new HTML code and then perform unauthorized transactions while a customer\nhas an online banking session open, but the client will only see the transaction performed as they intended on their screen. If the\ncustomer checks the URL or SSL certificates of the transactional site, they will be the same. Even the most sophisticated security\nprofessional may not know an incident is happening, because the entire point of an MITB attack is to mimic the page that malicious\ncode is being injected into as much as possible, so that the banking customer doesn't realize that something is amiss."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
daniestevez/jupyter_notebooks
|
dslwp/DSLWP-B Doppler analysis.ipynb
|
gpl-3.0
|
[
"DSLWP-B Doppler analysis\nIn this notebook we perform and analyse orbit determination of DSLWP-B using S-band Doppler measurments by Scott Tilley VE7TIL and GMAT.",
"%matplotlib inline",
"Set this to the path of your GMAT installation:",
"GMAT_PATH = '/home/daniel/GMAT/R2018a/'\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport subprocess\n\n# Larger figure size\nfig_size = [10, 6]\nplt.rcParams['figure.figsize'] = fig_size",
"Below we set the published frequency of the S-band beacon of DSLWP-B. There is an offset in the Doppler measurements by VE7TIL that we must correct. Probably, this Doppler is due to the beacon not been perfectly in frequency.",
"freq = 2275222e3\nc = 299792458 # speed of light\nf_shift = -3400 # Shift to apply to VE7TIL Doppler measurement",
"Utility functions to load reports from GMAT and write Doppler information in the format expected by GMAT.",
"def load_report(path):\n ncols = 14\n data = np.fromfile(path, sep=' ')\n return data.reshape((data.size // ncols, ncols))\n\ndef load_ve7til_doppler(path):\n ncols = 4\n leap_seconds = 37 # for UTC to TAI conversion\n data = np.fromfile(path, sep=' ')\n data = data.reshape((data.size // ncols, ncols))[:,:2]\n data[:,0] += leap_seconds/(24*3600) - 29999.5\n s = np.argsort(data[:,0])\n data = data[s,:]\n return data\n\ndef load_rangerate(path):\n with open(path) as f:\n return np.array([(l.split()[0], l.split()[7]) for l in f.readlines()[2:]], dtype='float')\n\ndef load_twoway(path):\n with open(path) as f:\n return np.array([(l.split()[0], l.split()[10]) for l in f.readlines()[2:]], dtype='float')\n\ndef write_doppler_gmd(data, path):\n doppler_interval = 10\n with open(path, 'w') as f:\n for j in range(data.shape[0]):\n line = '{} RangeRate 9012 VE7TIL DSLWP-B 1 {} {}\\n'.format(data[j,0] + 0.5*doppler_interval/(24*3600), doppler_interval, -2*(data[j,1]-freq+f_shift)*1e-3*c/freq)\n f.write(line)",
"Position of each field in the GMAT report.",
"utc = 0\ntai = 1\ndslwp_x = 2\ndslwp_v = 5\nluna_x = 8\nluna_v = 11",
"Load report file as computed by GMAT (note that the GMAT script dslwp_doppler.script must have been run before this in simulation mode to generate Doppler data). The report contains the position and velocity of DSLWP-B and the Moon in VE7TIL's frame of reference. This can be used to compute Doppler, disregarding the finite speed of light and the Doppler measurement interval.",
"report = load_report('VE7TIL.txt')\nreport_old = load_report('VE7TIL_old.txt')\nreport_gs = load_report('GS.txt')",
"Load Doppler data by VE7TIL, select some points to exclude because they do not give a good match, and write the Doppler information in GMAT's format.",
"doppler = load_ve7til_doppler('ve7til_doppler/dslwpb_lunarday1.dat')\nground_lock = ((doppler[:,0] > 28285.543) & (doppler[:,0] < 28286)) | ((doppler[:,0] > 28286.5417) & (doppler[:,0] < 28286.585)) | ((doppler[:,0] > 28282.4175) & (doppler[:,0] < 28283)) | ((doppler[:,0] > 28283.4165) & (doppler[:,0] < 28284)) | ((doppler[:,0] > 28284.44) & (doppler[:,0] < 28285)) \\\n | ((doppler[:,0] > 28288.5) & (doppler[:,0] < 28288.53)) | ((doppler[:,0] > 28289.502) & (doppler[:,0] < 28289.5315))\nexclude = ground_lock | ((doppler[:,0] > 28268.5) & (doppler[:,0] < 28269.2)) | ((doppler[:,0] > 28266.75) & (doppler[:,0] < 28267.5)) \\\n | ((doppler[:,0] > 28272.5) & (doppler[:,0] < 28273.6)) | ((doppler[:,0] > 28281.37) & (doppler[:,0] < 28281.6)) \\\n | ((doppler[:,0] > 28288.58) & (doppler[:,0] < 28289))\nwrite_doppler_gmd(doppler[~exclude,:], '/tmp/VE7TIL.gmd')",
"Load simulated RangeRate information from GMAT (note that the GMAT script dslwp_doppler.script must have been run before this in estimation mode). Compute Doppler using the data from GMAT report.",
"gmat_doppler = load_rangerate(GMAT_PATH + 'output/DSLWP_Doppler.gmd')\n#gs_gmat_doppler = load_twoway(GMAT_PATH + 'output/TwoWay_Doppler.gmd')\n\ndef rangerate(rep):\n return np.sum(rep[:,dslwp_v:dslwp_v+3] * rep[:,dslwp_x:dslwp_x+3], axis=1) / np.sqrt(np.sum(rep[:,dslwp_x:dslwp_x+3]**2, axis=1))\n\ndslwp_rangerate = rangerate(report)\nold_rangerate = rangerate(report_old)\ngs_rangerate = rangerate(report_gs)\n\nf_up = 2095.1e6\nf_down = f_up * 240 / 221\ntwo_way_doppler = -dslwp_rangerate * 1e3 / c * f_down - gs_rangerate * 1e3 / c * f_up",
"Plot Doppler simulation from GMAT and VE7TIL's measurements.",
"start = 0\nend = 28293\nselect = (doppler[:,0] > start) & (doppler[:,0] < end)\nreport_select = (report[:,tai] > start) & (report[:,tai] < end)\nreport_old_select = (report_old[:,tai] > start) & (report_old[:,tai] < end)\nplt.figure(figsize = [15,10], facecolor='w')\nplt.plot(report_old[report_old_select, tai], -old_rangerate[report_old_select]*1e3*freq/c)\nplt.plot(report[report_select, tai], -dslwp_rangerate[report_select]*1e3*freq/c)\n#plt.plot(gmat_doppler[:,0], -0.5*gmat_doppler[:,1]*1e3*freq/c, '.', markersize=1, alpha=0.5)\nplt.plot(doppler[~exclude & select,0], doppler[~exclude & select,1]-freq+f_shift,'.', alpha=0.1, markersize=5, color='green')\nplt.plot(doppler[exclude & ~ground_lock & select,0], doppler[exclude & ~ground_lock & select,1]-freq+f_shift,'.', alpha=0.1, markersize=5, color='red')\nplt.plot(doppler[ground_lock & select,0], doppler[ground_lock & select,1]-freq+f_shift,'.', alpha=0.1, markersize=5, color='orange')\nplt.title('DSLWP-B Doppler fit')\nplt.xlabel('TAIModJulian')\nplt.ylabel('Doppler (Hz, S-band)')\nplt.legend(['Old elements','New elements', 'VE7TIL (shifted {}Hz)'.format(f_shift), 'Excluded (shifted)', 'Possible ground lock (shifted)']);\n\nplt.figure(figsize = [15,10], facecolor='w')\nt = doppler[~exclude,0]\nplt.plot(t, doppler[~exclude,1]-freq+f_shift - np.interp(t, report_old[:, tai], -old_rangerate*1e3*freq/c) ,'.')\nplt.plot(t, doppler[~exclude,1]-freq+f_shift - np.interp(t, report[:, tai], -dslwp_rangerate*1e3*freq/c) ,'.')\nplt.title('DSLWP-B Doppler residual')\nplt.xlabel('TAIModJulian')\nplt.ylabel('Doppler residual (Hz, S-band)')\nplt.legend(['Old elements', 'New elements']);\n\nstart = 28282\n#start = 28284\nend = 28290\n#end = 28285\nselect = (doppler[:,0] > start) & (doppler[:,0] < end)\nreport_select = (report[:,tai] > start) & (report[:,tai] < end)\nplt.figure(figsize = [15,10], facecolor='w')\nplt.plot(report[report_select, tai], two_way_doppler[report_select])\nplt.plot(report[report_select, tai], -dslwp_rangerate[report_select]*1e3*freq/c)\n#plt.plot(gmat_doppler[:,0], -0.5*gmat_doppler[:,1]*1e3*freq/c, '.', markersize=1, alpha=0.5)\nplt.plot(doppler[ground_lock & select,0], doppler[ground_lock & select,1]-f_down,'.', alpha=0.1, markersize=5, color='green')\nplt.plot(doppler[~exclude & select,0], doppler[~exclude & select,1]-freq+f_shift,'.', alpha=0.1, markersize=5, color='purple')\n#plt.plot(gs_gmat_doppler[:,0], -gs_gmat_doppler[:,1]-f_down,'.', color='blue', alpha=0.005, markersize=5)\nplt.title('DSLWP-B Doppler fit')\nplt.xlabel('TAIModJulian')\nplt.ylabel('Doppler (Hz, S-band)');\n#plt.legend(['GMAT (x and v)', 'GMAT (RangeRate simulation)', 'VE7TIL measurements (frequency shifted {}Hz)'.format(f_shift), 'Excluded VE7TIL measurements (frequency shifted {}Hz)'.format(f_shift)]);\n\ndef load_tracking_file(path):\n ncols = 7\n data = np.fromfile(path, sep=' ')\n return data.reshape((data.size // ncols, ncols))\n\ndef utc2taimodjulian(x):\n mjd_unixtimestamp_offset = 10587.5\n seconds_in_day = 3600 * 24\n leap_seconds = 37\n return (x + leap_seconds) / seconds_in_day + mjd_unixtimestamp_offset\n\nve7til_ecef = np.array([-2303967.2134504286, -3458727.86250663, 4822174.148309025])*1e-3\n\nparts = ['20180529', '20180531', '20180601', '20180602', '20180603', '20180607', '20180609', '20180615', '20180619', '20180622', '20180629']\n\nfig1 = plt.figure(figsize = [15,10], facecolor='w')\nfig2 = plt.figure(figsize = [15,10], facecolor='w')\nsub1 = fig1.add_subplot(111)\nsub2 = fig2.add_subplot(111)\n\nt = doppler[~exclude,0]\nsub1.plot(t, 
doppler[~exclude,1]-freq+f_shift - np.interp(t, report[:, tai], -dslwp_rangerate*1e3*freq/c) ,'.', color='yellow')\n\nfor part in parts:\n tracking = load_tracking_file('tracking_files/program_tracking_dslwp-b_{}.txt'.format(part))\n tracking_range_rate = np.sum((tracking[:,1:4] - ve7til_ecef) * tracking[:,4:7], axis = 1) / np.sqrt(np.sum((tracking[:,1:4] - ve7til_ecef)**2, axis = 1)) * 1e3\n doppler_tracking = -tracking_range_rate * freq / c\n time = utc2taimodjulian(tracking[:,0])\n sub2.plot(time, doppler_tracking - np.interp(time, report[:,tai], -dslwp_rangerate*1e3*freq/c))\n time_sel = ~exclude & (doppler[:,0] > time[0]) & (doppler[:,0] < time[-1])\n sub1.plot(doppler[time_sel,0], doppler[time_sel,1]-freq+f_shift - np.interp(doppler[time_sel,0], time, doppler_tracking) ,'.')\n\nsub1.set_title('DSLWP-B Doppler residual')\nsub1.set_xlabel('TAIModJulian')\nsub1.set_ylabel('Doppler residual (Hz, S-band)')\nsub2.set_title('DSLWP-B Doppler difference between tracking and new elements')\nsub2.set_xlabel('TAIModJulian')\nsub2.set_ylabel('Doppler difference (Hz, S-band)')\nsub1.legend(['New elements'] + ['Tracking {}'.format(part) for part in parts])\nsub2.legend(['Tracking {}'.format(part) for part in parts]);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dewitt-li/deep-learning
|
first-neural-network/Your_first_neural_network.ipynb
|
mit
|
[
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head(50)",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*100].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std\n \ndata.head()",
"Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n\n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y-final_outputs # Output layer error is the difference between desired target and actual output.\n output_error_term = error\n # TODO: Calculate the hidden layer's contribution to the error\n\n hidden_error = np.dot(output_error_term, self.weights_hidden_to_output.T)\n hidden_error_term = hidden_error*hidden_outputs*(1-hidden_outputs)\n # TODO: Backpropagated error terms - Replace these values with your calculations.\n\n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term*X[:,None]\n # Weight step (hidden to output)\n delta_weights_h_o += output_error_term*hidden_outputs[:,None]\n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output +=self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden +=self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate 
calculations.\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\niterations = 2000\nlearning_rate = 0.08\nhidden_nodes = 60\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jamesfolberth/NGC_STEM_camp_AWS
|
notebooks/20Q/data/MakeTurkInput.ipynb
|
bsd-3-clause
|
[
"Prepare to submit a job to Mechanical Turk\nUsed Google Sheets to list the attributes (100 of them) and movies (top 250 popular ones from IMDb), copy those to csv files manually, and use this script to generate a file with inputs for Mechanical Turk\nscript by Stephen Becker, June 9--12 2017",
"import csv",
"Read in the list of 100 questions, as a list of 25 sub-lists, 4 questions per sub-list",
"# string info: http://www.openbookproject.net/books/bpp4awd/ch03.html\n# [lists] are mutable, (tuples) and 'strings' are not\n\n# Read in the list of 100 questions, putting it into 25 groups of 4\nquestions = []\nrowQuestions= []\nwith open('questions.csv', 'rb') as csvfile:\n myreader = csv.reader(csvfile)\n for index,row in enumerate(myreader):\n rowQuestions.append( row[0].rstrip() )\n if index%4 is 3:\n #print index, ' '.join(row)\n #print index, rowQuestions\n questions.append( rowQuestions )\n rowQuestions = []\nlen(questions)",
"Read in all 250 movies",
"# Read in the list of 250 movies, making sure to remove commas from their names\n# (actually, if it has commas, it will be read in as different fields)\nmovies = []\nwith open('movies.csv','rb') as csvfile:\n myreader = csv.reader(csvfile)\n for index, row in enumerate(myreader):\n movies.append( ' '.join(row) ) # the join() call merges all fields",
"Write an output file to be used as the input file for Amazon Mechanical Turk. Each row will be one HIT",
"N = len(movies)\nwith open('input.csv', 'wb') as csvfile:\n mywriter = csv.writer(csvfile)\n mywriter.writerow( ['MOVIE','QUESTION1','QUESTION2','QUESTION3','QUESTION4'])\n for i in range(5):\n for q in questions:\n mywriter.writerow( [movies[i], q[0], q[1], q[2], q[3] ])\n #mywriter.writerow( [movies[i]+','+','.join(q)] ) # has extra \" \"",
"After submitting to Mechanical Turk...\nRead in the results. Note that the order of the questions is the same as the input file,\nso we can use that to simplify recording the answers.\nLet's encode 1 = Yes, 2 = No, 0 = Unsure",
"with open('Batch_2832525_batch_results.csv', 'rb') as csvfile:\n myreader = csv.DictReader(csvfile)\n #myreader = csv.reader(csvfile)\n # see dir(myreader) to list available methods\n for row in myreader:\n #print row\n print row['Input.MOVIE'] +\": \" + row['Input.QUESTION1'] , row['Answer.MovieAnswer1']\n print ' ' + row['Input.QUESTION2'] , row['Answer.MovieAnswer2']\n print ' ' + row['Input.QUESTION3'] , row['Answer.MovieAnswer3']\n print ' ' + row['Input.QUESTION4'] , row['Answer.MovieAnswer4']\n\n#import os\ncwd = os.getcwd()\nprint cwd\n#dir(myreader)\nmyreader.line_num\n#row\n#row['Input.QUESTION1']"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_tf_lcmv.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Time-frequency beamforming using LCMV\nCompute LCMV source power in a grid of time-frequency windows and display\nresults.\nThe original reference is:\nDalal et al. Five-dimensional neuroimaging: Localization of the time-frequency\ndynamics of cortical activity. NeuroImage (2008) vol. 40 (4) pp. 1686-1700",
"# Author: Roman Goj <roman.goj@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import compute_covariance\nfrom mne.datasets import sample\nfrom mne.event import make_fixed_length_events\nfrom mne.beamformer import tf_lcmv\nfrom mne.viz import plot_source_spectrogram\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nnoise_fname = data_path + '/MEG/sample/ernoise_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nsubjects_dir = data_path + '/subjects'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name",
"Read raw data, preload to allow filtering",
"raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\n\n# Pick a selection of magnetometer channels. A subset of all channels was used\n# to speed up the example. For a solution based on all MEG channels use\n# meg=True, selection=None and add grad=4000e-13 to the reject dictionary.\n# We could do this with a \"picks\" argument to Epochs and the LCMV functions,\n# but here we use raw.pick_types() to save memory.\nleft_temporal_channels = mne.read_selection('Left-temporal')\nraw.pick_types(meg='mag', eeg=False, eog=False, stim=False, exclude='bads',\n selection=left_temporal_channels)\nreject = dict(mag=4e-12)\n# Re-normalize our empty-room projectors, which should be fine after\n# subselection\nraw.info.normalize_proj()\n\n# Setting time limits for reading epochs. Note that tmin and tmax are set so\n# that time-frequency beamforming will be performed for a wider range of time\n# points than will later be displayed on the final spectrogram. This ensures\n# that all time bins displayed represent an average of an equal number of time\n# windows.\ntmin, tmax = -0.55, 0.75 # s\ntmin_plot, tmax_plot = -0.3, 0.5 # s\n\n# Read epochs. Note that preload is set to False to enable tf_lcmv to read the\n# underlying raw object.\n# Filtering is then performed on raw data in tf_lcmv and the epochs\n# parameters passed here are used to create epochs from filtered data. However,\n# reading epochs without preloading means that bad epoch rejection is delayed\n# until later. To perform bad epoch rejection based on the reject parameter\n# passed here, run epochs.drop_bad(). This is done automatically in\n# tf_lcmv to reject bad epochs based on unfiltered data.\nevent_id = 1\nevents = mne.read_events(event_fname)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n baseline=None, preload=False, reject=reject)\n\n# Read empty room noise, preload to allow filtering, and pick subselection\nraw_noise = mne.io.read_raw_fif(noise_fname, preload=True)\nraw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\nraw_noise.pick_types(meg='mag', eeg=False, eog=False, stim=False,\n exclude='bads', selection=left_temporal_channels)\nraw_noise.info.normalize_proj()\n\n# Create artificial events for empty room noise data\nevents_noise = make_fixed_length_events(raw_noise, event_id, duration=1.)\n# Create an epochs object using preload=True to reject bad epochs based on\n# unfiltered data\nepochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin, tmax,\n proj=True, baseline=None,\n preload=True, reject=reject)\n\n# Make sure the number of noise epochs is the same as data epochs\nepochs_noise = epochs_noise[:len(epochs.events)]\n\n# Read forward operator\nforward = mne.read_forward_solution(fname_fwd, surf_ori=True)\n\n# Read label\nlabel = mne.read_label(fname_label)",
"Time-frequency beamforming based on LCMV",
"# Setting frequency bins as in Dalal et al. 2008 (high gamma was subdivided)\nfreq_bins = [(4, 12), (12, 30), (30, 55), (65, 299)] # Hz\nwin_lengths = [0.3, 0.2, 0.15, 0.1] # s\n\n# Setting the time step\ntstep = 0.05\n\n# Setting the whitened data covariance regularization parameter\ndata_reg = 0.001\n\n# Subtract evoked response prior to computation?\nsubtract_evoked = False\n\n# Calculating covariance from empty room noise. To use baseline data as noise\n# substitute raw for raw_noise, epochs.events for epochs_noise.events, tmin for\n# desired baseline length, and 0 for tmax_plot.\n# Note, if using baseline data, the averaged evoked response in the baseline\n# period should be flat.\nnoise_covs = []\nfor (l_freq, h_freq) in freq_bins:\n raw_band = raw_noise.copy()\n raw_band.filter(l_freq, h_freq, method='iir', n_jobs=1)\n epochs_band = mne.Epochs(raw_band, epochs_noise.events, event_id,\n tmin=tmin_plot, tmax=tmax_plot, baseline=None,\n proj=True)\n\n noise_cov = compute_covariance(epochs_band, method='shrunk')\n noise_covs.append(noise_cov)\n del raw_band # to save memory\n\n# Computing LCMV solutions for time-frequency windows in a label in source\n# space for faster computation, use label=None for full solution\nstcs = tf_lcmv(epochs, forward, noise_covs, tmin, tmax, tstep, win_lengths,\n freq_bins=freq_bins, subtract_evoked=subtract_evoked,\n reg=data_reg, label=label)\n\n# Plotting source spectrogram for source with maximum activity.\n# Note that tmin and tmax are set to display a time range that is smaller than\n# the one for which beamforming estimates were calculated. This ensures that\n# all time bins shown are a result of smoothing across an identical number of\n# time windows.\nplot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,\n source_index=None, colorbar=True)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kubeflow/examples
|
pipelines/simple-notebook-pipeline/Simple Notebook Pipeline.ipynb
|
apache-2.0
|
[
"Simple notebook pipeline\nWelcome to your first steps with Kubeflow Pipelines (KFP). This notebook demos: \n\nDefining a Kubeflow pipeline with the KFP SDK\nCreating an experiment and submitting pipelines to the KFP run time environment using the KFP SDK \n\nReference documentation: \n* https://www.kubeflow.org/docs/pipelines/sdk/sdk-overview/\n* https://www.kubeflow.org/docs/pipelines/sdk/build-component/\nPrerequisites: Install or update the pipelines SDK\nYou may need to restart your notebook kernel after updating the KFP sdk.\nThis notebook is intended to be run from a Kubeflow notebook server. (From other environments, you would need to pass different arguments to the kfp.Client constructor.)",
"# You may need to restart your notebook kernel after updating the kfp sdk\n!python3 -m pip install kfp --upgrade --user",
"Setup",
"EXPERIMENT_NAME = 'Simple notebook pipeline' # Name of the experiment in the UI\nBASE_IMAGE = 'tensorflow/tensorflow:2.0.0b0-py3' # Base image used for components in the pipeline\n\nimport kfp\nimport kfp.dsl as dsl\nfrom kfp import compiler\nfrom kfp import components",
"Create pipeline component\nCreate a python function",
"@dsl.python_component(\n name='add_op',\n description='adds two numbers',\n base_image=BASE_IMAGE # you can define the base image here, or when you build in the next step. \n)\ndef add(a: float, b: float) -> float:\n '''Calculates sum of two arguments'''\n print(a, '+', b, '=', a + b)\n return a + b",
"Build a pipeline component from the function",
"# Convert the function to a pipeline operation.\nadd_op = components.func_to_container_op(\n add,\n base_image=BASE_IMAGE, \n)",
"Build a pipeline using the component",
"@dsl.pipeline(\n name='Calculation pipeline',\n description='A toy pipeline that performs arithmetic calculations.'\n)\ndef calc_pipeline(\n a: float =0,\n b: float =7\n):\n #Passing pipeline parameter and a constant value as operation arguments\n add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance. \n \n #You can create explicit dependency between the tasks using xyz_task.after(abc_task)\n add_2_task = add_op(a, b)\n \n add_3_task = add_op(add_task.output, add_2_task.output)",
"Compile and run the pipeline\nKubeflow Pipelines lets you group pipeline runs by Experiments. You can create a new experiment, or call kfp.Client().list_experiments() to see existing ones.\nIf you don't specify the experiment name, the Default experiment will be used.\nYou can directly run a pipeline given its function definition:",
"# Specify pipeline argument values\narguments = {'a': '7', 'b': '8'}\n# Launch a pipeline run given the pipeline function definition\nkfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments, \n experiment_name=EXPERIMENT_NAME)\n# The generated links below lead to the Experiment page and the pipeline run details page, respectively",
"Alternately, you can separately compile the pipeline and then upload and run it as follows:",
"# Compile the pipeline\npipeline_func = calc_pipeline\npipeline_filename = pipeline_func.__name__ + '.pipeline.zip'\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)\n\n# Get or create an experiment\nclient = kfp.Client()\nexperiment = client.create_experiment(EXPERIMENT_NAME)",
"Submit the compiled pipeline for execution:",
"# Specify pipeline argument values\narguments = {'a': '7', 'b': '8'}\n\n# Submit a pipeline run\nrun_name = pipeline_func.__name__ + ' run'\nrun_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)\n\n# The generated link below leads to the pipeline run information page.",
"That's it!\nYou just created and deployed your first pipeline in Kubeflow! You can put more complex python code within the functions, and you can import any libraries that are included in the base image (you can use VersionedDependencies to import libraries not included in the base image). \n\nCopyright 2019 Google Inc. All Rights Reserved.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
xebia-france/luigi-airflow
|
Luigi_et_le_Machine_Learning_lui_dit_merci.ipynb
|
apache-2.0
|
[
"Import libraries",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\npd.set_option('display.max_columns', None)\n%matplotlib inline",
"Define source paths",
"source_path = \"/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/\"",
"Import data",
"raw_dataset = pd.read_csv(source_path + \"Speed_Dating_Data.csv\")",
"Data exploration\nShape, types",
"raw_dataset.shape\n\nraw_dataset.head()\n\nraw_dataset.dtypes.value_counts()",
"Missing values",
"raw_dataset.isnull().sum().head(10)\n\nsummary = raw_dataset.describe().transpose()\nprint summary.head(15)\n\nplt.hist(raw_dataset['age'].dropna());",
"We want to know what you look for in the opposite sex.",
"# Attractiveness\nplt.hist(raw_dataset['attr_o'].dropna());\n\n# Sincere\nplt.hist(raw_dataset['sinc_o'].dropna());\n\n# Intelligent\nplt.hist(raw_dataset['intel_o'].dropna()) ;\n\n# Fun\nplt.hist(raw_dataset['fun_o'].dropna());\n\n# Ambitious\nplt.hist(raw_dataset['amb_o'].dropna());",
"What is your primary goal in participating in this event?\n\nSeemed like a fun night out=1, \nTo meet new people=2, \nTo get a date=3, \nLooking for a serious relationship=4, \nTo say I did it=5, \nOther=6",
"raw_dataset.groupby('date').iid.nunique().sort_values(ascending=False)",
"In general, how frequently do you go on dates?\n\nSeveral times a week=1\nTwice a week=2\nOnce a week=3\nTwice a month=4\nOnce a month=5\nSeveral times a year=6\nAlmost never=7",
"raw_dataset.groupby('go_out').iid.nunique().sort_values(ascending=False)\n\nraw_dataset.groupby('career').iid.nunique().sort_values(ascending=False).head(10)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/hammoz-consortium/cmip6/models/mpiesm-1-2-ham/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: HAMMOZ-CONSORTIUM\nSource ID: MPIESM-1-2-HAM\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:03\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
subutai/htmresearch
|
projects/l2_pooling/notebooks/similar_objects.ipynb
|
agpl-3.0
|
[
"First, some code. Scroll down.",
"import itertools\nimport random\n\nfrom htmresearch.algorithms.column_pooler import ColumnPooler\n\nINPUT_SIZE = 10000\n\ndef createFeatureLocationPool(size=10):\n duplicateFound = False\n for _ in xrange(5):\n candidateFeatureLocations = [frozenset(random.sample(xrange(INPUT_SIZE), 40))\n for featureNumber in xrange(size)]\n\n # Sanity check that they're pretty unique.\n duplicateFound = False\n for pattern1, pattern2 in itertools.combinations(candidateFeatureLocations, 2):\n if len(pattern1 & pattern2) >= 5:\n duplicateFound = True\n break\n \n if not duplicateFound:\n break\n \n if duplicateFound:\n raise ValueError(\"Failed to generate unique feature-locations\")\n \n featureLocationPool = {}\n for i, featureLocation in enumerate(candidateFeatureLocations):\n if i < 26:\n name = chr(ord('A') + i)\n else:\n name = \"Feature-location %d\" % i\n featureLocationPool[name] = featureLocation\n \n return featureLocationPool\n\n\ndef getLateralInputs(columnPoolers):\n cellsPerColumnPooler = columnPoolers[0].numberOfCells()\n assert all(column.numberOfCells() == cellsPerColumnPooler\n for column in columnPoolers)\n\n inputsByColumn = []\n for recipientColumnIndex in xrange(len(columnPoolers)):\n columnInput = []\n for inputColumnIndex, column in enumerate(columnPoolers):\n if inputColumnIndex == recipientColumnIndex:\n continue\n elif inputColumnIndex < recipientColumnIndex:\n relativeIndex = inputColumnIndex\n elif inputColumnIndex > recipientColumnIndex:\n relativeIndex = inputColumnIndex - 1\n \n offset = relativeIndex * cellsPerColumnPooler\n \n columnInput.extend(cell + offset \n for cell in column.getActiveCells())\n inputsByColumn.append(columnInput)\n \n return inputsByColumn\n\n\ndef getColumnPoolerParams(inputWidth, numColumns):\n cellCount = 2048\n \n return {\n \"inputWidth\": inputWidth,\n \"lateralInputWidth\": cellCount * (numColumns - 1),\n \"columnDimensions\": (cellCount,),\n \"activationThresholdDistal\": 13,\n \"initialPermanence\": 0.41,\n \"connectedPermanence\": 0.50,\n \"minThresholdProximal\": 10,\n \"minThresholdDistal\": 10,\n \"maxNewProximalSynapseCount\": 20,\n \"maxNewDistalSynapseCount\": 20,\n \"permanenceIncrement\": 0.10,\n \"permanenceDecrement\": 0.10,\n \"predictedSegmentDecrement\": 0.0,\n \"synPermProximalInc\": 0.1,\n \"synPermProximalDec\": 0.001,\n \"initialProximalPermanence\": 0.6,\n \"seed\": 42,\n \"numActiveColumnsPerInhArea\": 40,\n \"maxSynapsesPerProximalSegment\": inputWidth,\n }\n\ndef experiment(objects, numColumns):\n #\n # Initialize\n #\n params = getColumnPoolerParams(INPUT_SIZE, numColumns)\n columnPoolers = [ColumnPooler(**params) for _ in xrange(numColumns)]\n\n #\n # Learn\n #\n columnObjectRepresentations = [{} for _ in xrange(numColumns)]\n\n for objectName, objectFeatureLocations in objects.iteritems():\n for featureLocationName in objectFeatureLocations:\n pattern = featureLocationPool[featureLocationName]\n for _ in xrange(10):\n lateralInputs = getLateralInputs(columnPoolers)\n\n for i, column in enumerate(columnPoolers):\n column.compute(feedforwardInput=pattern,\n lateralInput = lateralInputs[i],\n learn=True)\n\n for i, column in enumerate(columnPoolers):\n columnObjectRepresentations[i][objectName] = frozenset(column.getActiveCells())\n column.reset()\n \n objectName = \"Object 1\"\n objectFeatureLocations = objects[objectName]\n\n success = False\n featureLocationLog = []\n activeCellsLog = []\n for attempt in xrange(60):\n featureLocations = random.sample(objectFeatureLocations, numColumns)\n 
featureLocationLog.append(featureLocations)\n \n # Give the feedforward input 3 times so that the lateral inputs have time to spread.\n for _ in xrange(3):\n lateralInputs = getLateralInputs(columnPoolers)\n\n for i, column in enumerate(columnPoolers):\n pattern = featureLocationPool[featureLocations[i]]\n column.compute(feedforwardInput=pattern,\n lateralInput=lateralInputs[i],\n learn=False)\n\n allActiveCells = [set(column.getActiveCells()) for column in columnPoolers]\n activeCellsLog.append(allActiveCells)\n\n if all(set(column.getActiveCells()) == columnObjectRepresentations[i][objectName]\n for i, column in enumerate(columnPoolers)):\n success = True\n print \"Converged after %d steps\" % (attempt + 1)\n break\n\n if not success:\n print \"Failed to converge after %d steps\" % (attempt + 1)\n \n return (objectName, columnPoolers, featureLocationLog, activeCellsLog, columnObjectRepresentations)",
"Initialize some feature-locations",
"featureLocationPool = createFeatureLocationPool(size=8)",
"Issue: One column spots a difference, but its voice is drowned out\nCreate 8 objects, each with 7 feature-locations. Each object is 1 different from each other object.",
"objects = {\"Object 1\": set([\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\"]),\n \"Object 2\": set([\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"H\"]),\n \"Object 3\": set([\"A\", \"B\", \"C\", \"D\", \"E\", \"G\", \"H\"]),\n \"Object 4\": set([\"A\", \"B\", \"C\", \"D\", \"F\", \"G\", \"H\"]),\n \"Object 5\": set([\"A\", \"B\", \"C\", \"E\", \"F\", \"G\", \"H\"]),\n \"Object 6\": set([\"A\", \"B\", \"D\", \"E\", \"F\", \"G\", \"H\"]),\n \"Object 7\": set([\"A\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]),\n \"Object 8\": set([\"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"])}",
"We're testing L2 in isolation, so these \"A\", \"B\", etc. patterns are L4 representations, i.e. \"feature-locations\".\nTrain an array of 5 columns to recognize these objects, then show it Object 1. It will randomly move its sensors to different feature-locations on the object. It will never put two sensors on the same feature-location at the same time.",
"(testObject,\n columnPoolers,\n featureLocationLog,\n activeCellsLog,\n columnObjectRepresentations) = experiment(objects, numColumns=5)",
"Print what just happened",
"columnContentsLog = []\nfor timestep, allActiveCells in enumerate(activeCellsLog):\n columnContents = []\n for columnIndex, activeCells in enumerate(allActiveCells):\n contents = {}\n for objectName, objectCells in columnObjectRepresentations[columnIndex].iteritems():\n containsRatio = len(activeCells & objectCells) / float(len(objectCells))\n if containsRatio >= 0.20:\n contents[objectName] = containsRatio\n columnContents.append(contents)\n columnContentsLog.append(columnContents)\n\nfor timestep in xrange(len(featureLocationLog)):\n allFeedforwardInputs = featureLocationLog[timestep]\n allActiveCells = activeCellsLog[timestep]\n allColumnContents = columnContentsLog[timestep]\n \n print \"Step %d\" % timestep\n \n for columnIndex in xrange(len(allFeedforwardInputs)):\n feedforwardInput = allFeedforwardInputs[columnIndex]\n activeCells = allActiveCells[columnIndex]\n columnContents = allColumnContents[columnIndex]\n \n print \"Column %d: Input: %s, Active cells: %d %s\" % (columnIndex,\n allFeedforwardInputs[columnIndex],\n len(activeCells),\n columnContents)\n \n print",
"Each column is activating a union of cells. Column 2 sees input G, so it knows this isn't \"Object 2\", but multiple other columns are including \"Object 2\" in their unions, so Column 2's voice gets drowned out.\nHow does this vary with number of columns?",
"for numColumns in xrange(2, 8):\n print \"With %d columns:\" % numColumns\n experiment(objects, numColumns)\n print"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
km-Poonacha/python4phd
|
Session 2/ipython/Lesson 5- Crawl and scrape-Worksheet.ipynb
|
gpl-3.0
|
[
"Lesson 5 - Crawl and Scrape\nMaking the request\nUsing 'requests' module\nUse the requests module to make a HTTP request to http://www.tripadvisor.com\n- Check the status of the request \n- Display the response header information\nGet the '/robots.txt' file contents\nGet the HTML content from the website\nScraping websites\nSometimes, you may want a little bit of information - a movie rating, stock price, or product availability - but the information is available only in HTML pages, surrounded by ads and extraneous content.\nTo do this we build an automated web fetcher called a crawler or spider. After the HTML contents have been retrived from the remote web servers, a scraper parses it to find the needle in the haystack.\nBeautifulSoup Module\nThe bs4 module can be used for searching a webpage (HTML file) and pulling required data from it. It does three things to make a HTML page searchable-\n* First, converts the HTML page to Unicode, and HTML entities are converted to Unicode characters\n* Second, parses (analyses) the HTML page using the best available parser. It will use an HTML parser unless you specifically tell it to use an XML parser\n* Finally transforms a complex HTML document into a complex tree of Python objects.\nThis module takes the HTML page and creates four kinds of objects: Tag, NavigableString, BeautifulSoup, and Comment.\n* The BeautifulSoup object itself represents the webpage as a whole\n* A Tag object corresponds to an XML or HTML tag in the webpage\n* The NavigableString class to contains the bit of text within a tag\nRead more about BeautifulSoup : https://www.crummy.com/software/BeautifulSoup/bs4/doc/",
"<h1 id=\"HEADING\" property=\"name\" class=\"heading_name \">\n <div class=\"heading_height\"></div>\n \"\n Le Jardin Napolitain\n \"\n</h1>",
"Step 1: Making the soup\nFirst we need to use the BeautifulSoup module to parse the HTML data into Python readable Unicode Text format.\n*Let us write the code to parse a html page. We will use the trip advisor URL for an infamous restaurant - https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html *\nStep 2: Inspect the element you want to scrape\nIn this step we will inspect the HTML data of the website to understand the tags and attributes that matches the element. Let us inspect the HTML data of the URL and understand where (under which tag) the review data is located.",
"<div class=\"entry\">\n <p class=\"partial_entry\">\n Popped in on way to Eiffel Tower for lunch, big mistake. \n Pizza was disgusting and service was poor. \n It’s a shame Trip Advisor don’t let you score venues zero....\n <span class=\"taLnk ulBlueLinks\" onclick=\"widgetEvCall('handlers.clickExpand',event,this);\">More\n </span>\n </p>\n</div>",
"Step 3: Searching the soup for the data\nBeautiful Soup defines a lot of methods for searching the parse tree (soup), the two most popular methods are: find() and find_all(). \nThe simplest filter is a tag. Pass a tag to a search method and Beautiful Soup will perform a match against that exact string. \nLet us try and find all the < p > (paragraph) tags in the soup:\nStep 4: Enable pagination\nAutomatically access subsequent pages\nUsing yesterdays sentiment analysis code and the corpus of sentiment found in the word_sentiment.csv file, calculate the sentiment of the reviews.",
"#Enter your code here\n\n\n",
"Expanding this further\nTo add additional details we can inspect the tags further and add the reviewer rating and reviwer details.\nUsing the review data and the ratings available is there any way we can improve the corpus of sentiments \"word_sentiment.csv\" file?\nDynamic Pages\nSome websites make request in the background to fetch the data from the server and load it into the page dynamically (often an AJAX request). In this case, the url will not indicate the location of the data. To find such requests, open the Chrome or Firefox Developer Tools, you can load the page, go to the “Network” tab and then look through the all of the requests that are being sent in the background to find the one that’s returning the data you’re looking for. Start by filtering the requests to only XHR or JS to make this easier.\nOnce you find the AJAX request that returns the data you’re hoping to scrape, then you can make your scraper send requests to this URL, instead of to the parent page’s URL. If you’re lucky, the response will be encoded with JSON which is even easier to parse than HTML.\nSpoofing the User Agent\nBy default, the requests library sets the User-Agent header on each request to something like “python-requests/3.xx.x”. You can change it to identify your web scraper, perhaps providing a contact email address so that an admin from the target website can reach out if they see you in their logs.\nMore commonly, this can be used to make it appear that the request is coming from a normal web browser, and not a web scraping program.",
"header = {\n 'cookie': 'TAUnique=%1%enc%3AHvAwOscAcmfzIwJbsS10GnXn4FrCUpCm%2Bnw21XKuzXoV7vSwMEnyTA%3D%3D; fbm_162729813767876=base_domain=.tripadvisor.com; TACds=B.3.11419.1.2019-03-31; TASSK=enc%3AABCGM1r6xBekOjRaaQZ3QVS7dP4cwZ8sombvPTq8xK6xN55i7TN8puwZdwvXvG1i%2FJ2UQXYG1CwsU%2BXLwLs5qIxnmW5qbLt4I48DfK5FhHpwUw3ZgrbskK%2FjDc4ENfcCXw%3D%3D; ServerPool=C; TART=%1%enc%3A8yMCW7EtdBqPX0oluvfOS5mBk6DRMHXwNEAPJlcpaDumiCWsxs%2BxfBbTYsxpa%2F9l%2FJzCllshf9g%3D; VRMCID=%1%V1*id.10568*llp.%2FRestaurant_Review-g187147-d3405673-Reviews-La_Terrasse_Vedettes_de_Paris-Paris_Ile_de_France%5C.html*e.1557691551614; PMC=V2*MS.36*MD.20190505*LD.20190506; PAC=ALNtqHPT2KJjQwExTPJt3gCvzvDYH_x63ZOT4b3LetvkHuHXcEUY4eLx0TqKGzOIpoXF3K_j57rNigUkWJzSv7TtTna4L3DKcfiaeK9zT9ixGEevH6QwZVd-PdMyr9y5aRzjEVAfid42zC4WXeTcQTJkPVwGMCW2mB2k3xxfB78GgJFIR_I9vf6Bzhq89x_UTTUcQgFpCr8GEFV9GpJWG8UNGeriJSbmPtCXA10oXl5ox7U9TQvSILLSH8PdrP8nwUQMRnfUA_fKbXTaRgH4tzBwZQpbd1vlOOg7fKyfIN9V95PzNOXBEQCJIo3z09Nux0tyZZVX0PX_zI_moLpr9Od3eSi1E8Hm5QcLyG9QNfA1C5WckG9GOV5VKEL0bxDY5TG1smCaQDXpRLkvp8w2bD7vyI2e27WFbtuYvJDJ126v2_KyZmVbG3laZlvWrX2kWGL13IyhVS2Ivjr_9uJAwMpBKuNByH0FBU3ziJcRdqkXiz6lnYMSRSQ1Y8Dmkjkrc0DNTABvuHjbZ7Fh0LOINswW_wrkVsP4PjDq1IVh7IY0hLE_W1G1DKlROc5BZEOjcw%3D%3D; BEPIN=%1%16a8c46770b%3Bbak92b.b.tripadvisor.com%3A10023%3B; TATravelInfo=V2*A.2*MG.-1*HP.2*FL.3*DSM.1557131589173*RS.1*RY.2019*RM.5*RD.6*RH.20*RG.2; CM=%1%RestAds%2FRPers%2C%2C-1%7CRCPers%2C%2C-1%7Csesstch15%2C%2C-1%7CCYLPUSess%2C%2C-1%7Ctvsess%2C%2C-1%7CPremiumMCSess%2C%2C-1%7CRestPartSess%2C%2C-1%7CUVOwnersSess%2C%2C-1%7CRestPremRSess%2C%2C-1%7CPremRetPers%2C%2C-1%7CViatorMCPers%2C%2C-1%7Csesssticker%2C%2C-1%7C%24%2C%2C-1%7Ct4b-sc%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS2%2C%2C-1%7CPremMCBtmSess%2C%2C-1%7CLaFourchette+Banners%2C%2C-1%7Csesshours%2C%2C-1%7CTARSWBPers%2C%2C-1%7CTheForkORSess%2C%2C-1%7CTheForkRRSess%2C%2C-1%7CRestAds%2FRSess%2C%2C-1%7CPremiumMobPers%2C%2C-1%7CLaFourchette+MC+Banners%2C%2C-1%7Csesslaf%2C%2C-1%7CRestPartPers%2C%2C-1%7CCYLPUPers%2C%2C-1%7CCCUVOwnSess%2C%2C-1%7Cperslaf%2C%2C-1%7CUVOwnersPers%2C%2C-1%7Csh%2C%2C-1%7CTheForkMCCSess%2C%2C-1%7CCCPers%2C%2C-1%7Cb2bmcsess%2C%2C-1%7CSPMCPers%2C%2C-1%7Cperswifi%2C%2C-1%7CPremRetSess%2C%2C-1%7CViatorMCSess%2C%2C-1%7CPremiumMCPers%2C%2C-1%7CPremiumRRPers%2C%2C-1%7CRestAdsCCPers%2C%2C-1%7CTrayssess%2C%2C-1%7CPremiumORPers%2C%2C-1%7CSPORPers%2C%2C-1%7Cperssticker%2C%2C-1%7Cbooksticks%2C%2C-1%7CSPMCWBSess%2C%2C-1%7Cbookstickp%2C%2C-1%7CPremiumMobSess%2C%2C-1%7Csesswifi%2C%2C-1%7Ct4b-pc%2C%2C-1%7CWShadeSeen%2C%2C-1%7CTheForkMCCPers%2C%2C-1%7CHomeASess%2C9%2C-1%7CPremiumSURPers%2C%2C-1%7CCCUVOwnPers%2C%2C-1%7CTBPers%2C%2C-1%7Cperstch15%2C%2C-1%7CCCSess%2C2%2C-1%7CCYLSess%2C%2C-1%7Cpershours%2C%2C-1%7CPremiumORSess%2C%2C-1%7CRestAdsPers%2C%2C-1%7Cb2bmcpers%2C%2C-1%7CTrayspers%2C%2C-1%7CPremiumSURSess%2C%2C-1%7CMC_IB_UPSELL_IB_LOGOS%2C%2C-1%7Csess_rev%2C%2C-1%7Csessamex%2C%2C-1%7CPremiumRRSess%2C%2C-1%7CTADORSess%2C%2C-1%7CAdsRetPers%2C%2C-1%7CMCPPers%2C%2C-1%7CSPMCSess%2C%2C-1%7Cpers_rev%2C%2C-1%7Cmdpers%2C%2C-1%7Cmds%2C1557131565748%2C1557217965%7CSPMCWBPers%2C%2C-1%7CRBAPers%2C%2C-1%7CHomeAPers%2C%2C-1%7CRCSess%2C%2C-1%7CRestAdsCCSess%2C%2C-1%7CRestPremRPers%2C%2C-1%7Cpssamex%2C%2C-1%7CCYLPers%2C%2C-1%7Ctvpers%2C%2C-1%7CTBSess%2C%2C-1%7CAdsRetSess%2C%2C-1%7CMCPSess%2C%2C-1%7CTADORPers%2C%2C-1%7CTheForkORPers%2C%2C-1%7CPremMCBtmPers%2C%2C-1%7CTheForkRRPers%2C%2C-1%7CTARSWBSess%2C%2C-1%7CRestAdsSess%2C%2C-1%7CRBASess%2C%2C-1%7Cmdsess%2C%2C-1%7C; 
fbsr_162729813767876=wtGNSIucBSm5EusyRkPyX_GfZwxNkyHLxTRli46iHoM.eyJjb2RlIjoiQVFBUHV3SlZpOVNXQXVkMDh1bUdaYjZ2R3hBMkdfdFBZdm9Bb2l2cDEzSDNvaG1ESjRkamo1V1A3dnB5WloxWmwzeWxFTmdCT0dCbTB6dzc1S2pwUHFKak5nQVNKMGNqOEtvUVY1YzZXNHhNQ1FlMURNNXJOUUpMeEJldjlBS2xKNnhVVjVXQ1ZaajZjN1k4X1ZWeGdxbzlIclhKT3BvUDZSLTVzNkVUZ3Q5Q0xMNmg0ZnZIY0pMSm1KdXJwN0lGVFBSOUdvX0Z4M0FiM0VWQ1RnVFNGNzc2NFFuU29fdER5VFk3TWY0V0VKSFZXZi11ME1pa2ZWS1ZzUHdHQlBOOE1xZkVQNjZfZHpZMVdnSEVfcWR4d2FHN2xNODNyR1BWaDVwdDdodlFQQmFBbGtzU21IYjZiSktEaGVGajM4WTg3TGxUUF9hNEVGUjVjOVdoOVNhY2RmV04iLCJ1c2VyX2lkIjoiMTY1NjQ2NDcxNSIsImFsZ29yaXRobSI6IkhNQUMtU0hBMjU2IiwiaXNzdWVkX2F0IjoxNTU3MTMzMTgxfQ; TAReturnTo=%1%%2FRestaurants-g304551-New_Delhi_National_Capital_Territory_of_Delhi.html; roybatty=TNI1625!APyGsDM6tcKypRo49myenvbO5Zyk367lJP3JEhTSBrfno%2F4Bbienyfvs6Q2DU%2F2UmkzjN1pKquiSNGeY2cXQm8s8oX1jKwXT8hgK3GL%2B6psZHdp4k7TF4F52uoI2kQ1e9Ni2k9Ub8D5ak%2FXgN%2F9as9m2HZIB0G6SZnZMT%2FPD73Fo%2C1; SRT=%1%enc%3A8yMCW7EtdBqPX0oluvfOS5mBk6DRMHXwNEAPJlcpaDumiCWsxs%2BxfBbTYsxpa%2F9l%2FJzCllshf9g%3D; TASession=V2ID.2C4059CFCBC27797DA97994A5CF94A28*SQ.233*LS.PageMoniker*GR.7*TCPAR.44*TBR.80*EXEX.60*ABTR.87*PHTB.57*FS.2*CPU.54*HS.recommended*ES.popularity*DS.5*SAS.popularity*FPS.oldFirst*LF.en*FA.1*DF.0*IR.4*TRA.false*LD.304551; TAUD=LA-1557055610999-1*RDD-1-2019_05_05*RD-75954750-2019_05_06.9784431*HDD-75978369-2019_05_19.2019_05_20.1*HC-76743574*LG-77588176-2.1.F.*LD-77588177-.....',\n'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'\n}\n\n\nresponse = requests.post(\"https://www.tripadvisor.com/RestaurantSearch?Action=PAGE&geo=304551&ajax=1&itags=10591&sortOrder=relevance&o=a30&availSearchEnabled=false\", headers=header)\n",
"Selenium"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dwhswenson/annotated_trajectories
|
examples/annotation_example.ipynb
|
lgpl-2.1
|
[
"Annotated Trajectory Example\nThis example shows how to annotate a trajectory (and save the annotations) using the annotated_trajectories package, which supplements OpenPathSampling.\nFirst you'll need to import the two packages (after installing them, of course).",
"from __future__ import print_function\nimport openpathsampling as paths\nfrom annotated_trajectories import AnnotatedTrajectory, Annotation, plot_annotated",
"Now I'm going to create some fake data:",
"from openpathsampling.tests.test_helpers import make_1d_traj\ntraj = make_1d_traj([-1, 1, 4, 3, 6, 11, 22, 33, 23, 101, 205, 35, 45])\n\n# to get a real trajectory:\n# from openpathsampling.engines.openmm.tools import ops_load_trajectory\n# traj = ops_load_trajectory(\"name_of_file.xtc\", top=\"topology.pdb\") # can also be a .gro",
"Next I'll open the file. You'll only do this once, and then add all of your annotations for each trajectory into the open file.",
"storage = paths.Storage(\"output.nc\", \"w\")",
"Annotating trajectories\nNow we get to the core. For each trajectory, you can choose state names, and you create a list of annotations for those states. Each annotation includes the state name, the first frame in the state, and the final frame in the state (first and final, named begin and end, are included in the state). Remember that, in Python, the first frame is 0.\nOnce you've made your annotations, you assign them to your trajectory by putting them both into an AnnotatedTrajectory object.",
"annotations = [\n Annotation(state=\"1-digit\", begin=1, end=4),\n Annotation(state=\"2-digit\", begin=6, end=8),\n Annotation(state=\"3-digit\", begin=10, end=10),\n Annotation(state=\"2-digit\", begin=11, end=12)\n]\na_traj = AnnotatedTrajectory(trajectory=traj, annotations=annotations)",
"Note that I worry more about incorrectly identifying something as in the state when it actually is not, than missing any frame that could be in the state. There's always some room for optimization here, but you should err on the side of ensuring that your labels actually identify that state. Allow false negatives; don't allow false positives.\nNext, you save the trajectory to the file using the tag attribute of the storage. This will save both the trajectory and all its annotations to the file.\nIn the future, we hope to avoid use of the tag store. However, for now I recommend using something like the file name of the trajectory as the string for the tag. It must be unique.",
"storage.tag['my_file_name'] = a_traj",
"Repeat the steps in the last two cells for each trajectory. When you're done, you can run:",
"storage.sync()\nstorage.close()",
"Plotting annotations",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef ln_x(snapshot):\n import math\n return math.log10(abs(snapshot.xyz[0][0]))\n\ncv = paths.CoordinateFunctionCV(\"log(x)\", ln_x)\n\nstate_1 = paths.CVDefinedVolume(cv, 0.0, 1.0)\nstate_2 = paths.CVDefinedVolume(cv, 1.0, 2.0)\nstate_3 = paths.CVDefinedVolume(cv, 2.0, 3.0)\n\nnames_to_states = {\n '1-digit': state_1,\n '2-digit': state_2,\n '3-digit': state_3, \n}\nnames_to_colors = {\n '1-digit': 'b',\n '2-digit': 'c',\n '3-digit': 'r'\n}\n\nplot_annotated(a_traj, cv, names_to_states, names_to_colors, dt=0.1)\nplt.xlim(xmax=1.5)\nplt.ylim(ymin=-0.1)",
"Checking for conflicts\nNow I'm going to label one of my frames in a way that conflicts with my state definition. This means I'll have a false positive: the state definition says this is in the state when the annotation says it isn't (and specifically, when the annotation says it is in a different state).\nNote that this still isn't a sufficient check: a more complicated test would ensure that, if the state gives a false positive where there is no annotation, that one of the nearest annotations (forward or backward) is of the same state. This has not been implemented yet, but might be added in the future.",
"# only difference is that I claim frame 5 (x=11) is in the 1-digit state\nbad_annotations = [\n Annotation(state=\"1-digit\", begin=1, end=5),\n Annotation(state=\"2-digit\", begin=6, end=8),\n Annotation(state=\"3-digit\", begin=10, end=10),\n Annotation(state=\"2-digit\", begin=11, end=12)\n]\nbad_traj = AnnotatedTrajectory(traj, bad_annotations)\n\nplot_annotated(bad_traj, cv, names_to_states, names_to_colors, dt=0.1)\nplt.xlim(xmax=1.5)\nplt.ylim(ymin=-0.1)",
"Note how the value at $t=0.5$ conflicts: the annotation (the line) says that this this is in the blue (1-digit) state, whereas the state definition (the points) says that this is in the can (2-digit) state.",
"(results, conflicts) = bad_traj.validate_states(names_to_states)",
"The results object will count something as a false postive if the state identifies it, but the annotation doesn't. Not all false positives are bad: if the state identified by the annotation is smaller than the state definition, then the state definition will catch some frames that the annotation left marked as not in any state.\nIt will count something as a false negative if the annotation identifies it, but the state doesn't. Not all false negatives are bad: this merely indicates that the state definitions may claim a frame is in no state, even though the annotation says it is in a state. In fact, we typically want this: a reasonable number of false negatives is necessary for your state to a have a decent flux in TIS.",
"print(results['1-digit'])\n\nprint(results['2-digit'])\n\nprint(results['3-digit'])\n\nprint(bad_traj.trajectory.index(results['1-digit'].false_negative[0]))",
"While false positives and false negatives are not inherently bad, conflicts are. A frame is said to be in conflict if the state definition assigns a different state than the annotation. The conflicts dictionary tells which frame numbers are in conflict, organized by which state definition volume they are in.",
"print(conflicts)",
"You can identify which state the annotations claim using get_label_for_frame:",
"bad_traj.get_label_for_frame(5)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
5hubh4m/CS231n
|
Assignment2/BatchNormalization.ipynb
|
mit
|
[
"Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.",
"# As usual, a bit of setup\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape",
"Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.",
"# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint 'Before batch normalization:'\nprint ' means: ', a.mean(axis=0)\nprint ' stds: ', a.std(axis=0)\n\n# Means should be close to zero and stds close to one\nprint 'After batch normalization (gamma=1, beta=0)'\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint ' mean: ', a_norm.mean(axis=0)\nprint ' std: ', a_norm.std(axis=0)\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint 'After batch normalization (nontrivial gamma, beta)'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in xrange(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint 'After batch normalization (test-time):'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)",
"Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.",
"# Gradient check batchnorm backward pass\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dgamma error: ', rel_error(da_num, dgamma)\nprint 'dbeta error: ', rel_error(db_num, dbeta)",
"Batch Normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.",
"N, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint 'dx difference: ', rel_error(dx1, dx2)\nprint 'dgamma difference: ', rel_error(dgamma1, dgamma2)\nprint 'dbeta difference: ', rel_error(dbeta1, dbeta2)\nprint 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))",
"Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.",
"N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n if reg == 0: print",
"Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.",
"# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()",
"Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.",
"plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.",
"# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maxis42/ML-DA-Coursera-Yandex-MIPT
|
4 Stats for data analysis/Lectures notebooks/7 student tests/stat.student_tests.ipynb
|
mit
|
[
"Критерии Стьюдента",
"import numpy as np\nimport pandas as pd\n\nimport scipy\nfrom statsmodels.stats.weightstats import *\n\n%pylab inline",
"Treatment effects of methylphenidate\nВ рамках исследования эффективности препарата метилфенидат 24 пациента с синдромом дефицита внимания и гиперактивности в течение недели принимали либо метилфенидат, либо плацебо. В конце недели каждый пациент проходили тест на способность к подавлению импульсивных поведенческих реакций. На втором этапе плацебо и препарат менялись, и после недельного курса каждый испытуемые проходили второй тест.\nТребуется оценить эффект применения препарата.\nPearson D.A, Santos C.W., Casat C.D., et al. (2004). Treatment effects of methylphenidate on cognitive functioning in children with mental retardation and ADHD. Journal of the American Academy of Child and Adolescent Psychiatry, 43(6), 677–685.",
"data = pd.read_csv('ADHD.txt', sep = ' ', header = 0)\ndata.columns = ['Placebo', 'Methylphenidate']\n\ndata.plot.scatter('Placebo', 'Methylphenidate', c = 'r', s = 30)\npylab.grid()\npylab.plot(range(100), c = 'black')\npylab.xlim((20, 80))\npylab.ylim((20, 80))\npylab.show()\n\ndata.plot.hist()\npylab.show()",
"Одновыборочный критерий Стьюдента\nИсходя из того, что способность к подавлению испульсивных поведенческих реакций измеряется по шкале [0, 100], можно предположить, что при хорошей калибровке теста средняя способоность к подавлению реакций в популяции составляет 50. Тогда для того, чтобы проверить гипотезу о том, что пациенты в выборке действительно в среднем хуже справляются с подавлением импульсивных реакций (нуждаются в лечении), давайте проверим, что их способность к подавлению реакций отличается от средней (не равна 50). \n$H_0\\colon$ среднее значение способности к подавлению испульсивных поведенческих реакций равно 50.\n$H_1\\colon$ не равно.",
"stats.ttest_1samp(data.Placebo, 50.0)\n\nprint \"95%% confidence interval: [%f, %f]\" % zconfint(data.Placebo)",
"Двухвыборочный критерий Стьюдента (независимые выборки)\nДля того, чтобы использовать двухвыборочный критерий Стьюдента, убедимся, что распределения в выборках существенно не отличаются от нормальных.",
"pylab.figure(figsize=(12,8))\npylab.subplot(2,2,1)\nstats.probplot(data.Placebo, dist=\"norm\", plot=pylab)\npylab.subplot(2,2,2)\nstats.probplot(data.Methylphenidate, dist=\"norm\", plot=pylab)\npylab.show()",
"Критерий Шапиро-Уилка:\n$H_0\\colon$ способности к подавлению импульсивных реакций распредлены нормально\n$H_1\\colon$ не нормально.",
"print \"Shapiro-Wilk normality test, W-statistic: %f, p-value: %f\" % stats.shapiro(data.Placebo)\n\nprint \"Shapiro-Wilk normality test, W-statistic: %f, p-value: %f\" % stats.shapiro(data.Methylphenidate)",
"С помощью критерия Стьюдента проверим гипотезу о развенстве средних двух выборок.\nКритерий Стьюдента:\n$H_0\\colon$ средние значения способности к подавлению испульсивных поведенческих реакций одинаковы для пациентов, принимавших препарат, и для пациентов, принимавших плацебо.\n$H_0\\colon$ не одинаковы.",
"scipy.stats.ttest_ind(data.Placebo, data.Methylphenidate, equal_var = False)\n\ncm = CompareMeans(DescrStatsW(data.Methylphenidate), DescrStatsW(data.Placebo))\nprint \"95%% confidence interval: [%f, %f]\" % cm.tconfint_diff(usevar='unequal')",
"Двухвыборочный критерий Стьюдента (зависмые выборки)\nДля того, чтобы использовать критерй Стьюдента для связанных выборок, давайте проверим, что распределение попарных разностей существенно не отличается от нормального.",
"stats.probplot(data.Placebo - data.Methylphenidate, dist = \"norm\", plot = pylab)\npylab.show()",
"Критерий Шапиро-Уилка:\n$H_0\\colon$ попарные разности распределены нормально.\n$H_1\\colon$ не нормально.",
"print \"Shapiro-Wilk normality test, W-statistic: %f, p-value: %f\" % stats.shapiro(data.Methylphenidate - data.Placebo)",
"Критерий Стьюдента:\n$H_0\\colon$ средние значения способности к подавлению испульсивных поведенческих реакций одинаковы для пациентов, принимавших препарат, и для пациентов, принимавших плацебо.\n$H_1\\colon$ не одинаковы.",
"stats.ttest_rel(data.Methylphenidate, data.Placebo)\n\nprint \"95%% confidence interval: [%f, %f]\" % DescrStatsW(data.Methylphenidate - data.Placebo).tconfint_mean()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
molgor/spystats
|
notebooks/Sandboxes/GMRF/.ipynb_checkpoints/Stationary_with_fft-checkpoint.ipynb
|
bsd-2-clause
|
[
"Stationary GMRF simulation with Discrete Fourier Transformation",
"# Load Biospytial modules and etc.\n%matplotlib inline\nimport sys\nsys.path.append('/apps')\nsys.path.append('..')\n#sys.path.append('../../spystats')\nimport django\ndjango.setup()\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n## Use the ggplot style\nplt.style.use('ggplot')\n\nfrom external_plugins.spystats.spystats import tools as sptools\nimport scipy",
"Algorithm to simulate GMRF with block-circulant Matrix.\nTaken from: Rue, H., & Held, L. (2005). Gaussian Markov random fields: theory and applications. CRC press.\nAlgorithm 2.10\nNow let's build the circulant matrix for the tourus\nOke, for the moment I´ll follow the example in GMRF book.\ni.e. a Torus (stationary of 128x128)",
"#c_delta = lambda d : np.hstack(((4 + d),-1,np.zeros(128 - 3),-1))\n#c_delta = lambda d : np.hstack(((0),-1,np.zeros(128 - 3),-1))\n#C = scipy.linalg.circulant(c_delta(0.1))\n\ndef createToroidalCircularBase(d=0.1,N=128):\n \"\"\"\n Creates a circular base similar to the one described in GMRF Rue and Held, 2005.\n \"\"\"\n c00 = np.hstack(((4 + d),-1,np.zeros(N - 3),-1))\n c01 = np.hstack((-1,np.zeros(N - 1)))\n c0 = np.zeros((N - 2 ,N))\n c1 = np.vstack((c00,c01))\n c = np.vstack((c1,c0))\n c[N -1, 0] = -1\n return c\n\n%%time \n## Create circular base\nd = 0.00001\nN = 100\nc = createToroidalCircularBase(d=d,N=N)\n## Simulate random noise (Normal distributed)\nfrom scipy.fftpack import ifft2, fft2\nzr = scipy.stats.norm.rvs(size=(c.size,2),loc=0,scale=1,random_state=1234)\nzr.dtype=np.complex_\n#plt.hist(zr.real)\n#Lm = scipy.sqrt(C.shape[0]*C.shape[0]) * fft2(C)\n\nLm = fft2(c)\nv = 1.0/ len(c) * fft2((Lm ** -0.5)* zr.reshape(Lm.shape))\nx = v.real\nplt.imshow(x,interpolation='None')\n\n## Calculate inverse of c\nC_inv = ifft2 ((fft2(c) ** -1))\n\nplt.plot(C_inv[:,0])",
"For benchmarking we will perfom a GF simulation.\nBased on non-conditional simulation.",
"%%time \nvm = sptools.ExponentialVariogram(sill=0.3,range_a=0.4)\nxx,yy,z = sptools.simulatedGaussianFieldAsPcolorMesh(vm,grid_sizex=100,grid_sizey=100,random_seed=1234)\nplt.imshow(z)",
"comparison\n| Size | Method | Seconds |\n|----------|---------------|-----------|\n| 100x100 | Full Gaussian | 346 | \n| 100x100 | Markov FFT | 0.151 |\nThe stationary circulant markov FFT method is 2291x faster",
"346 / 0.151\n\nplt.figure(figsize=(10, 5))\nplt.subplot(1,2,1)\nplt.imshow(z)\nplt.subplot(1,2,2)\nplt.imshow(x,interpolation='None')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/exams/td_note_2022.ipynb
|
mit
|
[
"1A - Enoncé 3 novembre 2021\nCorrection de l'examen du 3 novembre 2021.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Exercice 1 : multiplication de matrices\nOn a besoin d'une fonction qui mesure le temps d'exécution d'une fonction.",
"import time\n\ndef mesure_temps_fonction(fct, N=100):\n begin = time.perf_counter()\n for i in range(N):\n fct()\n return (time.perf_counter() - begin) / N\n\nmesure_temps_fonction(lambda: time.sleep(0.1), N=10)",
"Q1 : Pourquoi (m1 @ m2) @ m3 est-il plus lent que m1 @ (m2 @ m3) ? (2 points)\nIl y a deux options possible. Il suffit de compter le nombre d'opérations dans chaque option. Le coût d'une multiplication $M_{ab} \\times m_{bc}$ est de l'ordre de $O(abc)$. Donc :\n\ncout((m1 @ m2) @ m3) ~ O(997 * 93 * 1003 + 997 * 1003 * 97) = 189998290\ncout(m1 @ (m2 @ m3)) ~ O(93 * 1003 * 97 + 997 * 93 * 97) = 18042000\n\nLa seconde option est dix fois plus rapide.",
"print(997 * 93 * 1003 + 997 * 1003 * 97, 93 * 1003 * 97 + 997 * 93 * 97)\n\nimport numpy\n\nm1 = numpy.random.randn(997, 93)\nm2 = numpy.random.randn(93, 1003)\nm3 = numpy.random.randn(1003, 97)\n\nmesure_temps_fonction(lambda: m1 @ m2 @ m3)\n\nmesure_temps_fonction(lambda: (m1 @ m2) @ m3)\n\nmesure_temps_fonction(lambda: m1 @ (m2 @ m3))",
"Q2 : Ecrire une fonction qui calcule le nombre d'operations dans une multiplication de deux matrices (2 points)",
"def n_ops(m1_shape, m2_shape):\n return m1_shape[0] * m2_shape[1] * m1_shape[1] * 2\n\nn_ops(m1.shape, m2.shape)",
"Q3 : Ecrire une fonction qui retourne le meilleur coût d'une multiplication de deux matrices et la meilleure option (2 points)",
"def n_ops_3(sh1, sh2, sh3):\n m1_m2m3 = n_ops(sh1, (sh2[0], sh3[1])) + n_ops(sh2, sh3)\n m1m2_m3 = n_ops(sh1, sh2) + n_ops((sh1[0], sh2[1]), sh3)\n if m1m2_m3 < m1_m2m3:\n return m1m2_m3, 2\n else:\n return m1_m2m3, 1\n \nn_ops_3(m1.shape, m2.shape, m3.shape)",
"Q4 : Ecrire une fonction qui effectue le produit de trois matrices le plus rapidement possible (2 points)",
"from numpy.testing import assert_almost_equal\n\ndef produit3(m1, m2, m3):\n cout, meilleur = n_ops_3(m1.shape, m2.shape, m3.shape)\n if meilleur == 2:\n return (m1 @ m2) @ m3\n else:\n return m1 @ (m2 @ m3)\n\n\nassert_almost_equal(produit3(m1, m2, m3), m1 @ (m2 @ m3))",
"Q5 : Vérifiez que vous retrouvez les mêmes résultats avec la fonction mesure_temps (2 points)",
"mesure_temps_fonction(lambda: produit3(m1, m2, m3))",
"On vérifie que c'est égal à :",
"mesure_temps_fonction(lambda: m1 @ (m2 @ m3))",
"Ici, vous avez le choix entre faire les questions 6 à 9 ou les questions 9 et 10.\nQ6 : Ecrire une fonction qui retourne le meilleur coût d'une multiplication de 4 matrices et la meilleure option (3 points)",
"m4 = numpy.random.randn(97, 20)\n\ndef n_ops_4(sh1, sh2, sh3, sh4):\n m1_m2m3m4 = n_ops(sh1, (sh2[0], sh4[1])) + n_ops_3(sh2, sh3, sh4)[0]\n m1m2_m3m4 = n_ops(sh1, sh2) + n_ops((sh1[0], sh2[1]), (sh3[0], sh4[1])) + n_ops(sh3, sh4)\n m1m2m3_m4 = n_ops_3(sh1, sh2, sh3)[0] + n_ops((sh1[0], sh3[1]), sh4)\n m = min(m1_m2m3m4, m1m2_m3m4, m1m2m3_m4)\n if m == m1_m2m3m4:\n return m, 1\n if m == m1m2_m3m4:\n return m, 2\n return m, 3\n\nn_ops_4(m1.shape, m2.shape, m3.shape, m4.shape)",
"Q7 : Ecrire une fonction qui effectue le produit de 4 matrices le plus rapidement possible (3 points)",
"def produit4(m1, m2, m3, m4):\n cout, meilleur = n_ops_4(m1.shape, m2.shape, m3.shape, m4.shape)\n if meilleur == 1:\n return m1 @ produit3(m2, m3, m4)\n if meilleur == 2:\n return (m1 @ m2) @ (m3 @ m4)\n return produit3(m1, m2, m3) @ m4\n\nmesure_temps_fonction(lambda: produit4(m1, m2, m3, m4))",
"Q8 : Vérifiez que vous retrouvez les mêmes résultats avec la fonction mesure_temps et la matrice m4. (2 points)",
"mesure_temps_fonction(lambda: ((m1 @ m2) @ m3) @ m4)\n\nmesure_temps_fonction(lambda: (m1 @ m2) @ (m3 @ m4))\n\nmesure_temps_fonction(lambda: m1 @ (m2 @ (m3 @ m4)))\n\nmesure_temps_fonction(lambda: produit4(m1, m2, m3, m4))",
"Q9 : On se penche sur le cas à une multiplication de N matrices, combien y a-t-il de multiplications de 2 matrices ? (2 points)\nIl y a en toujours N-1. On considère le produit $M_1 \\times... \\times M_n$. La multiplication commence toujours par une multiplication de deux matrices consécutives quelles qu'elles soient. On les suppose aux positions $(i, i+1)$. On note le résultat $MM_i$. Après ce produit, il faudra faire : $(M_1 \\times ... \\times M_{i-1} \\times MM_i \\times M_{i+2} \\times ... \\times M_n$, soit une multiplication de $N-2$ matrices. On obtient le résultat par récurrence.\nIci s'arrête l'énoncé pour ceux qui ont choisit de répondre aux question 6 à 9.\nQ10 : Résoudre l'optimisation de multiplication de N matrices.\nOn l'envisage de façon récursive. La première solution effectue plein de calculs en double mais nous verront comment la modifier.",
"def n_ops_N(shapes):\n if len(shapes) <= 1:\n raise RuntimeError(\"Unexpected list of shapes: %r.\" % shapes)\n if len(shapes) == 2:\n return n_ops(*shapes), 1\n if len(shapes) == 3:\n return n_ops_3(*shapes)\n best_cost = None\n best_pos = None\n for i in range(1, len(shapes)):\n if i == 1:\n cost = n_ops(shapes[0], (shapes[1][0], shapes[-1][1])) + n_ops_N(shapes[1:])[0]\n best_cost = cost\n best_pos = i\n elif i == len(shapes)-1:\n cost = n_ops_N(shapes[:-1])[0] + n_ops((shapes[0][0], shapes[-2][1]), shapes[-1])\n if cost < best_cost:\n best_cost = cost\n best_pos = i\n else:\n cost = (n_ops_N(shapes[:i])[0] + n_ops_N(shapes[i:])[0] + \n n_ops((shapes[0][0], shapes[i-1][1]), (shapes[i][0], shapes[-1][1])))\n if cost < best_cost:\n best_cost = cost\n best_pos = i\n\n if best_pos is None:\n raise RuntimeError(shapes)\n return best_cost, best_pos\n\n \nn_ops_N([m1.shape, m2.shape, m3.shape, m4.shape])\n\nn_ops_4(m1.shape, m2.shape, m3.shape, m4.shape)\n\ndef product_N(inputs):\n if len(inputs) <= 1:\n raise RuntimeError(\n \"List inputs must contain at least two elements bot has %d.\" % len(inputs))\n cost, pos = n_ops_N([i.shape for i in inputs])\n if len(inputs) == 2:\n return inputs[0] @ inputs[1]\n if pos == 1:\n right = product_N(inputs[1:])\n return inputs[0] @ right\n if pos == len(shapes) - 1:\n left = product_N(inputs[:-1])\n return left @ inputs[-1]\n else:\n left = product_N(inputs[:pos + 1])\n right = product_N(inputs[pos + 1:])\n return left @ right\n\n\nassert_almost_equal(m1 @ m2 @ m3 @ m4, product_N([m1, m2, m3, m4]))\n\nmesure_temps_fonction(lambda: produit4(m1, m2, m3, m4))\n\nmesure_temps_fonction(lambda: product_N([m1, m2, m3, m4]))",
"Ici s'arrête ce qui est attendu comme réponse à la question 10.\nLes calculs en double...\nOn vérifie en ajoutant une ligne pour afficher tous les appels à n_ops_N.",
"def n_ops_N(shapes, verbose=False):\n if verbose:\n print(\"n_ops_N(%r)\" % shapes)\n if len(shapes) <= 1:\n raise RuntimeError(\"Unexpected list of shapes: %r.\" % shapes)\n if len(shapes) == 2:\n return n_ops(*shapes), 1\n if len(shapes) == 3:\n return n_ops_3(*shapes)\n best_cost = None\n best_pos = None\n for i in range(1, len(shapes)):\n if i == 1:\n cost = (n_ops(shapes[0], (shapes[1][0], shapes[-1][1])) + \n n_ops_N(shapes[1:], verbose=verbose)[0])\n best_cost = cost\n best_pos = i\n elif i == len(shapes)-1:\n cost = (n_ops_N(shapes[:-1], verbose=verbose)[0] + \n n_ops((shapes[0][0], shapes[-2][1]), shapes[-1]))\n if cost < best_cost:\n best_cost = cost\n best_pos = i\n else:\n cost = (n_ops_N(shapes[:i], verbose=verbose)[0] + \n n_ops_N(shapes[i:], verbose=verbose)[0] + \n n_ops((shapes[0][0], shapes[i-1][1]), (shapes[i][0], shapes[-1][1])))\n if cost < best_cost:\n best_cost = cost\n best_pos = i\n\n if best_pos is None:\n raise RuntimeError(shapes)\n return best_cost, best_pos\n\n\nm5 = numpy.random.randn(20, 17)\n\nn_ops_N([m1.shape, m2.shape, m3.shape, m4.shape, m5.shape], verbose=True)",
"On voit deux appels identiques n_ops_N([(97, 20), (20, 17)]) et n_ops_N([(93, 1003), (1003, 97), (97, 20)]). Ce n'est pas trop problématique pour un petit nombre de matrices mais cela pourrait le devenir si ce même algorithme était appliquée à autre chose.\nPlutôt que de réécrire l'algorithme différemment, on se propose d'ajouter un paramètre pour garder la trace des résultats déjà retournés.",
"def n_ops_N_opt(shapes, cache=None, verbose=False):\n if cache is None:\n cache = {}\n key = tuple(shapes)\n if key in cache:\n # On s'arrête, déjà calculé.\n return cache[key]\n\n if verbose:\n print(\"n_ops_N(%r)\" % shapes)\n if len(shapes) <= 1:\n raise RuntimeError(\"Unexpected list of shapes: %r.\" % shapes)\n \n if len(shapes) == 2:\n res = n_ops(*shapes), 1\n cache[key] = res\n return res\n \n if len(shapes) == 3:\n res = n_ops_3(*shapes)\n cache[key] = res\n return res\n \n best_cost = None\n best_pos = None\n for i in range(1, len(shapes)):\n if i == 1:\n cost = (n_ops(shapes[0], (shapes[1][0], shapes[-1][1])) + \n n_ops_N_opt(shapes[1:], verbose=verbose, cache=cache)[0])\n best_cost = cost\n best_pos = i\n elif i == len(shapes)-1:\n cost = (n_ops_N_opt(shapes[:-1], verbose=verbose, cache=cache)[0] + \n n_ops((shapes[0][0], shapes[-2][1]), shapes[-1]))\n if cost < best_cost:\n best_cost = cost\n best_pos = i\n else:\n cost = (n_ops_N_opt(shapes[:i], verbose=verbose, cache=cache)[0] + \n n_ops_N_opt(shapes[i:], verbose=verbose, cache=cache)[0] + \n n_ops((shapes[0][0], shapes[i-1][1]), (shapes[i][0], shapes[-1][1])))\n if cost < best_cost:\n best_cost = cost\n best_pos = i\n\n if best_pos is None:\n raise RuntimeError(shapes)\n \n res = best_cost, best_pos\n cache[key] = res\n return res\n\n\nn_ops_N_opt([m1.shape, m2.shape, m3.shape, m4.shape, m5.shape], verbose=True)",
"La liste est moins longue et tous les appels sont uniques. On met à jour la fonction product_N.",
"def product_N_opt(inputs, cache=None):\n if len(inputs) <= 1:\n raise RuntimeError(\n \"List inputs must contain at least two elements bot has %d.\" % len(inputs))\n cost, pos = n_ops_N_opt([i.shape for i in inputs], cache=cache)\n if len(inputs) == 2:\n return inputs[0] @ inputs[1]\n if pos == 1:\n right = product_N_opt(inputs[1:], cache=cache)\n return inputs[0] @ right\n if pos == len(shapes) - 1:\n left = product_N_opt(inputs[:-1], cache=cache)\n return left @ inputs[-1]\n else:\n left = product_N_opt(inputs[:pos + 1], cache=cache)\n right = product_N_opt(inputs[pos + 1:], cache=cache)\n return left @ right\n\n\nassert_almost_equal(m1 @ m2 @ m3 @ m4, product_N([m1, m2, m3, m4]))\n\nmesure_temps_fonction(lambda: product_N([m1, m2, m3, m4, m5]))\n\nmesure_temps_fonction(lambda: product_N_opt([m1, m2, m3, m4, m5]))\n\nmesure_temps_fonction(lambda: m1 @ m2 @ m3 @ m4 @ m5)",
"Tout fonctionne.\nRemarques lors de la correction\nIl y a eu peu d'erreurs lors des premières questions. Par la suite des erreurs fréquentes sont apparues.\nIl ne fallait pas utiliser de produits matriciel dans les fonctions de coûts. L'intérêt est d'utiliser ces fonctions pour décider du calcul à faire, pour déterminer le calcul optimal. Et le calcu de ce coût doit être négligeable par rapport au coût matriciel lui-même sinon l'intérêt en est fortement réduit.\nLe produit de 4 matrices ne pouvait pas faire intervenir m1 @ m2 @ m3 car cette notation ne précise pas explicitement l'ordre à suivre.\nEnfin, les mesures de temps étaient destinées à repérer les erreurs de code éventuelles. Si la mesure donne l'inverse ce qui est attendu, c'est qu'il y a sans doute une erreur de code. De même, si la mesure de temps dure très longtemps, c'est aussi une indication que le code est probablement erroné."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
UserAd/data_science
|
Twitter bots/Botnet search #2.ipynb
|
mit
|
[
"import tweepy\nimport time\nimport pandas as pd\n\nfrom IPython.core.display import HTML, display \n\nimport matplotlib.pyplot as plt\nimport seaborn\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (15, 8)\n\n\nOAUTH_KEY = ''\nOAUTH_SECRET = ''\nACCESS_TOKEN = ''\nACCESS_TOKEN_SECRET = ''\n\n\nauth = tweepy.OAuthHandler(OAUTH_KEY, OAUTH_SECRET)\nauth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)\napi = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)\ngraph = Graph(user=NEO4J_USER, password=NEO4J_SECRET)\n\ndef get_follwers_by_id(account_id):\n ids = []\n for page in tweepy.Cursor(api.followers_ids, user_id=account_id).pages():\n print(\"FOLLOWERS: Next page for %s\" % account_id)\n ids.extend(page)\n return ids\n\ndef get_friends_by_id(account_id):\n ids = []\n for page in tweepy.Cursor(api.friends_ids, user_id=account_id).pages():\n print(\"FRIENDS: Next page for %s\" % account_id)\n ids.extend(page)\n return ids\n\ndef get_friends(account):\n ids = []\n for page in tweepy.Cursor(api.friends_ids, screen_name=account).pages():\n print(\"Next page for %s\" % account)\n ids.extend(page)\n return ids\n\ndef chunks(l, n):\n for i in range(0, len(l), n):\n yield l[i:i + n]\n\nbotnet = pd.read_csv('./moscow_ny_bots.csv')\n\nbotnet.mean()\n\nbotnet.std()\n\nfriend_ids = []\nfor account in botnet['screen_name'].values:\n friend_ids = list(set(friend_ids) | set(get_friends(account)))",
"Now get info about all these users",
"print(\"We get %s friends\" % len(friend_ids))",
"Now extract user info from these users",
"possible_bots = []\nfor group in chunks(friend_ids, 100):\n for user in api.lookup_users(user_ids=list(group)):\n possible_bots.append(user)\n \npossible_bots_df = pd.DataFrame([{'name': user.name, 'id': user.id, 'location': user.location, 'screen_name': user.screen_name, 'followers': user.followers_count, 'friends': user.friends_count, 'created_at': user.created_at, 'favorites': user.favourites_count, 'tweets': user.statuses_count} for user in possible_bots])\n\npossible_bots_df.to_csv(\"./users_pt2.csv\", encoding='utf8')\n#Here i have pause in my investigation, so i saved and load back all info\n#possible_bots_df = pd.read_csv(\"./users_pt2.csv\", encoding='utf8')\n\nlocations = possible_bots_df[[\"id\", \"location\"]].groupby('location').count()\n\nlocations[locations['id'] > 100].plot(kind=\"bar\")",
"Let's look at moscow users",
"moscow_users = possible_bots_df[(possible_bots_df[\"location\"] == u'Москва') & (possible_bots_df[\"id\"] > 6 * 1e17)]\n\nmoscow_users.hist()\n\nmoscow_users[:10]\n\nprint(\"Total found: %s\" % moscow_users.count()[0])\n\nmoscow_users.to_csv(\"./botnet_moscow.csv\", encoding=\"utf8\")",
"Conclusion\nSomeone back in 2016 fall built small very deep linked botnet about 3k members.\nEvery bot has random photo (all same size) and random background image. \nEvery bot post cites and photos. Some have ad retweets. \nLooks like names created by some \"faker\" library. Screen name and Profile name doesn't have any correlations."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/cloud
|
src/python/tensorflow_cloud/core/tests/examples/dogs_classification.ipynb
|
apache-2.0
|
[
"# Copyright 2020 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TensorFlow Cloud - Putting it all together\nIn this example, we will use all of the features outlined in the Keras cloud guide to train a state-of-the-art model to classify dog breeds using feature extraction. Let's begin by installing TensorFlow Cloud and importing a few important packages.\nSetup",
"!pip install tensorflow-cloud\n\nimport datetime\nimport os\n\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tensorflow_cloud as tfc\nimport tensorflow_datasets as tfds\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.models import Model",
"Cloud Configuration\nIn order to run TensorFlow Cloud from a Colab notebook, we'll need to upload our authentication key and specify our Cloud storage bucket for image building and publishing.",
"if not tfc.remote():\n from google.colab import files\n\n key_upload = files.upload()\n key_path = list(key_upload.keys())[0]\n os.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = key_path\n os.system(f\"gcloud auth activate-service-account --key-file {key_path}\")\n\nGCP_BUCKET = \"[your-bucket-name]\" #@param {type:\"string\"}",
"Model Creation\nDataset preprocessing\nWe'll be loading our training data from TensorFlow Datasets:",
"(ds_train, ds_test), metadata = tfds.load(\n \"stanford_dogs\",\n split=[\"train\", \"test\"],\n shuffle_files=True,\n with_info=True,\n as_supervised=True,\n)\n \nNUM_CLASSES = metadata.features[\"label\"].num_classes",
"Let's visualize this dataset:",
"print(\"Number of training samples: %d\" % tf.data.experimental.cardinality(ds_train))\nprint(\"Number of test samples: %d\" % tf.data.experimental.cardinality(ds_test))\nprint(\"Number of classes: %d\" % NUM_CLASSES)\n\nplt.figure(figsize=(10, 10))\nfor i, (image, label) in enumerate(ds_train.take(9)):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(image)\n plt.title(int(label))\n plt.axis(\"off\")",
"Here we will resize and rescale our images to fit into our model's input, as well as create batches.",
"IMG_SIZE = 224\nBATCH_SIZE = 64\nBUFFER_SIZE = 2\n \nsize = (IMG_SIZE, IMG_SIZE)\nds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))\nds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))\n \ndef input_preprocess(image, label):\n image = tf.keras.applications.resnet50.preprocess_input(image)\n return image, label\n\nds_train = ds_train.map(\n input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE\n)\n \nds_train = ds_train.batch(batch_size=BATCH_SIZE, drop_remainder=True)\nds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)\n \nds_test = ds_test.map(input_preprocess)\nds_test = ds_test.batch(batch_size=BATCH_SIZE, drop_remainder=True)",
"Model Architecture\nWe're using ResNet50 pretrained on ImageNet, from the Keras Applications module.",
"inputs = tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))\nbase_model = tf.keras.applications.ResNet50(\n weights=\"imagenet\", include_top=False, input_tensor=inputs\n)\nx = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)\nx = tf.keras.layers.Dropout(0.5)(x)\noutputs = tf.keras.layers.Dense(NUM_CLASSES)(x)\n \nmodel = tf.keras.Model(inputs, outputs)\n\nbase_model.trainable = False",
"Callbacks using Cloud Storage",
"MODEL_PATH = \"resnet-dogs\"\ncheckpoint_path = os.path.join(\"gs://\", GCP_BUCKET, MODEL_PATH, \"save_at_{epoch}\")\ntensorboard_path = os.path.join(\n \"gs://\", GCP_BUCKET, \"logs\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n)\n\ncallbacks = [\n # TensorBoard will store logs for each epoch and graph performance for us. \n keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1),\n # ModelCheckpoint will save models after each epoch for retrieval later.\n keras.callbacks.ModelCheckpoint(checkpoint_path),\n # EarlyStopping will terminate training when val_loss ceases to improve. \n keras.callbacks.EarlyStopping(monitor=\"val_loss\", patience=3),\n]\n\nmodel.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[\"accuracy\"],\n)",
"Here, we're using the tfc.remote() flag to designate a smaller number of epochs than intended for the full training job when running locally. This enables easy debugging on Colab.",
"if tfc.remote():\n epochs = 500\n train_data = ds_train\n test_data = ds_test\nelse:\n epochs = 1\n train_data = ds_train.take(5)\n test_data = ds_test.take(5)\n callbacks = None\n \nmodel.fit(\n train_data, epochs=epochs, callbacks=callbacks, validation_data=test_data, verbose=2\n)\n\nif tfc.remote():\n SAVE_PATH = os.path.join(\"gs://\", GCP_BUCKET, MODEL_PATH)\n model.save(SAVE_PATH)",
"Our model requires two additional libraries. We'll create a requirements.txt which specifies those libraries:",
"requirements = [\"tensorflow-datasets\", \"matplotlib\"]\n\nf = open(\"requirements.txt\", 'w')\nf.write('\\n'.join(requirements))\nf.close()",
"Let's add a job label so we can document our job logs later:",
"job_labels = {\"job\":\"resnet-dogs\"}",
"Train on Cloud\nAll that's left to do is run our model on Cloud. To recap, our run() call enables:\n- A model that will be trained and stored on Cloud, including checkpoints\n- Tensorboard callback logs that will be accessible through tensorboard.dev\n- Specific python library requirements that will be fulfilled\n- Customizable job labels for log documentation\n- Real-time streaming logs printed in Colab\n- Deeply customizable machine configuration (ours will use two Tesla T4s)\n- An automatic resolution of distribution strategy for this configuration",
"tfc.run(\n requirements_txt=\"requirements.txt\",\n distribution_strategy=\"auto\",\n chief_config=tfc.MachineConfig(\n cpu_cores=8,\n memory=30,\n accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,\n accelerator_count=2,\n ),\n docker_config=tfc.DockerConfig(\n image_build_bucket=GCP_BUCKET,\n ),\n job_labels=job_labels,\n stream_logs=True,\n)",
"Evaluate your model\nWe'll use the cloud storage directories we saved for callbacks in order to load tensorboard and retrieve the saved model. Tensorboard logs can be used to monitor training performance in real-time",
"!tensorboard dev upload --logdir $tensorboard_path --name \"ResNet Dogs\"\n\nif tfc.remote():\n model = tf.keras.models.load_model(SAVE_PATH)\nmodel.evaluate(test_data)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PyPSA/PyPSA
|
examples/notebooks/battery-electric-vehicle-charging.ipynb
|
mit
|
[
"Battery Electric Vehicle Charging\nIn this example a battery electric vehicle (BEV) is driven 100 km in the morning and 100 km in the evening, to simulate commuting, and charged during the day by a solar panel at the driver's place of work. The size of the panel is computed by the optimisation.\nThe BEV has a battery of size 100 kWh and an electricity consumption of 0.18 kWh/km.\nNB: this example will use units of kW and kWh, unlike the PyPSA defaults",
"import pypsa\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n# use 24 hour period for consideration\nindex = pd.date_range(\"2016-01-01 00:00\", \"2016-01-01 23:00\", freq=\"H\")\n\n# consumption pattern of BEV\nbev_usage = pd.Series([0.0] * 7 + [9.0] * 2 + [0.0] * 8 + [9.0] * 2 + [0.0] * 5, index)\n\n# solar PV panel generation per unit of capacity\npv_pu = pd.Series(\n [0.0] * 7\n + [0.2, 0.4, 0.6, 0.75, 0.85, 0.9, 0.85, 0.75, 0.6, 0.4, 0.2, 0.1]\n + [0.0] * 5,\n index,\n)\n\n# availability of charging - i.e. only when parked at office\ncharger_p_max_pu = pd.Series(0, index=index)\ncharger_p_max_pu[\"2016-01-01 09:00\":\"2016-01-01 16:00\"] = 1.0\n\ndf = pd.concat({\"BEV\": bev_usage, \"PV\": pv_pu, \"Charger\": charger_p_max_pu}, axis=1)\ndf.plot.area(subplots=True, figsize=(10, 7))\nplt.tight_layout()",
"Initialize the network",
"network = pypsa.Network()\nnetwork.set_snapshots(index)\n\nnetwork.add(\"Bus\", \"place of work\", carrier=\"AC\")\n\nnetwork.add(\"Bus\", \"battery\", carrier=\"Li-ion\")\n\nnetwork.add(\n \"Generator\",\n \"PV panel\",\n bus=\"place of work\",\n p_nom_extendable=True,\n p_max_pu=pv_pu,\n capital_cost=1000.0,\n)\n\nnetwork.add(\"Load\", \"driving\", bus=\"battery\", p_set=bev_usage)\n\nnetwork.add(\n \"Link\",\n \"charger\",\n bus0=\"place of work\",\n bus1=\"battery\",\n p_nom=120, # super-charger with 120 kW\n p_max_pu=charger_p_max_pu,\n efficiency=0.9,\n)\n\n\nnetwork.add(\"Store\", \"battery storage\", bus=\"battery\", e_cyclic=True, e_nom=100.0)\n\nnetwork.lopf()\nprint(\"Objective:\", network.objective)",
"The optimal panel size in kW is",
"network.generators.p_nom_opt[\"PV panel\"]\n\nnetwork.generators_t.p.plot.area(figsize=(9, 4))\nplt.tight_layout()\n\ndf = pd.DataFrame(\n {attr: network.stores_t[attr][\"battery storage\"] for attr in [\"p\", \"e\"]}\n)\ndf.plot(grid=True, figsize=(10, 5))\nplt.legend(labels=[\"Energy output\", \"State of charge\"])\nplt.tight_layout()",
"The losses in kWh per pay are:",
"(\n network.generators_t.p.loc[:, \"PV panel\"].sum()\n - network.loads_t.p.loc[:, \"driving\"].sum()\n)\n\nnetwork.links_t.p0.plot.area(figsize=(9, 5))\nplt.tight_layout()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ProbablyOverthinkingIt
|
ess.ipynb
|
mit
|
[
"Internet use and religion in Europe\nThis notebook presents a quick-and-dirty analysis of the association between Internet use and religion in Europe, using data from the European Social Survey (http://www.europeansocialsurvey.org).\nCopyright 2015 Allen Downey\nMIT License: http://opensource.org/licenses/MIT",
"from __future__ import print_function, division\n\nimport numpy as np\nimport pandas as pd\n\nimport statsmodels.formula.api as smf\n\n%matplotlib inline",
"The following function selects the columns I need.",
"def select_cols(df):\n cols = ['cntry', 'tvtot', 'tvpol', 'rdtot', 'rdpol', 'nwsptot', 'nwsppol', 'netuse', \n 'rlgblg', 'rlgdgr', 'eduyrs', 'hinctnta', 'yrbrn', 'eisced', 'pspwght', 'pweight']\n df = df[cols]\n return df",
"Read data from Cycle 1.\nTODO: investigate the difference between hinctnt and hinctnta; is there a recode that reconciles them?",
"df1 = pd.read_stata('ESS1e06_4.dta', convert_categoricals=False)\ndf1['hinctnta'] = df1.hinctnt\ndf1 = select_cols(df1)\ndf1.head()",
"Read data from Cycle 2.",
"df2 = pd.read_stata('ESS2e03_4.dta', convert_categoricals=False)\ndf2['hinctnta'] = df2.hinctnt\ndf2 = select_cols(df2)\ndf2.head()",
"Read data from Cycle 3.",
"df3 = pd.read_stata('ESS3e03_5.dta', convert_categoricals=False)\ndf3['hinctnta'] = df3.hinctnt\ndf3 = select_cols(df3)\ndf3.head()",
"Read data from Cycle 4.",
"df4 = pd.read_stata('ESS4e04_3.dta', convert_categoricals=False)\ndf4 = select_cols(df4)\ndf4.head()",
"Read data from Cycle 5.",
"df5 = pd.read_stata('ESS5e03_2.dta', convert_categoricals=False)\ndf5 = select_cols(df5)\ndf5.head()",
"Concatenate the cycles.\nTODO: Have to resample each cycle before concatenating.",
"df = pd.concat([df1, df2, df3, df4, df5], ignore_index=True)\ndf.head()",
"TV watching time on average weekday",
"df.tvtot.replace([77, 88, 99], np.nan, inplace=True)\ndf.tvtot.value_counts().sort_index()",
"Radio listening, total time on average weekday.",
"df.rdtot.replace([77, 88, 99], np.nan, inplace=True)\ndf.rdtot.value_counts().sort_index()",
"Newspaper reading, total time on average weekday.",
"df.nwsptot.replace([77, 88, 99], np.nan, inplace=True)\ndf.nwsptot.value_counts().sort_index()",
"TV watching: news, politics, current affairs",
"df.tvpol.replace([66, 77, 88, 99], np.nan, inplace=True)\ndf.tvpol.value_counts().sort_index()",
"Radio listening: news, politics, current affairs",
"df.rdpol.replace([66, 77, 88, 99], np.nan, inplace=True)\ndf.rdpol.value_counts().sort_index()",
"Newspaper reading: politics, current affairs",
"df.nwsppol.replace([66, 77, 88, 99], np.nan, inplace=True)\ndf.nwsppol.value_counts().sort_index()",
"Personal use of Internet, email, www",
"df.netuse.replace([77, 88, 99], np.nan, inplace=True)\ndf.netuse.value_counts().sort_index()",
"Belong to a particular religion or denomination",
"df.rlgblg.replace([7, 8, 9], np.nan, inplace=True)\ndf.rlgblg.value_counts().sort_index()",
"How religious",
"df.rlgdgr.replace([77, 88, 99], np.nan, inplace=True)\ndf.rlgdgr.value_counts().sort_index()",
"Total household net income, all sources\nTODO: It looks like one cycle measured HINCTNT on a 12 point scale. Might need to reconcile",
"df.hinctnta.replace([77, 88, 99], np.nan, inplace=True)\ndf.hinctnta.value_counts().sort_index()",
"Shift income to mean near 0.",
"df['hinctnta5'] = df.hinctnta - 5\ndf.hinctnta5.describe()",
"Year born",
"df.yrbrn.replace([7777, 8888, 9999], np.nan, inplace=True)\ndf.yrbrn.describe()",
"Shifted to mean near 0",
"df['yrbrn60'] = df.yrbrn - 1960\ndf.yrbrn60.describe()",
"Number of years of education",
"df.eduyrs.replace([77, 88, 99], np.nan, inplace=True)\ndf.loc[df.eduyrs > 25, 'eduyrs'] = 25\ndf.eduyrs.value_counts().sort_index()",
"There are a bunch of really high values for eduyrs, need to investigate.",
"df.eduyrs.describe()",
"Shift to mean near 0",
"df['eduyrs12'] = df.eduyrs - 12\n\ndf.eduyrs12.describe()",
"Country codes",
"df.cntry.value_counts().sort_index()",
"Make a binary dependent variable",
"df['hasrelig'] = (df.rlgblg==1).astype(int)",
"Run the model",
"def run_model(df, formula):\n model = smf.logit(formula, data=df) \n results = model.fit(disp=False)\n return results",
"Here's the model with all control variables and all media variables:",
"formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'\n 'tvtot + tvpol + rdtot + rdpol + nwsptot + nwsppol + netuse')\nres = run_model(df, formula)\nres.summary()",
"Most of the media variables are not statistically significant. If we drop the politial media variables, we get a cleaner model:",
"formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'\n 'tvtot + rdtot + nwsptot + netuse')\nres = run_model(df, formula)\nres.summary()",
"And if we fill missing values for income, cleaner still.",
"def fill_var(df, var):\n fill = df[var].dropna().sample(len(df), replace=True)\n fill.index = df.index\n df[var].fillna(fill, inplace=True)\n\nfill_var(df, var='hinctnta5')\n\nformula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'\n 'tvtot + rdtot + nwsptot + netuse')\nres = run_model(df, formula)\nres.summary()",
"Now all variables have small p-values. All parameters have the expected signs:\n\nYounger people are less affiliated.\nMore educated people are less affiliated.\nHigher income people are less affiliated (although this could go either way)\nConsumers of all media are less affiliated.\n\nThe strength of the Internet effect is stronger than for other media.\nThese results are consistent in each cycle of the data, and across a few changes I've made in the cleaning process.\nHowever, these results should be considered preliminary:\n\nI have not dealt with the stratification weights.\nI have not dealt with missing data (particularly important for education)\n\nNevertheless, I'll run a breakdown by country.\nHere's a function to extract the parameter associated with netuse:",
"def extract_res(res, var='netuse'):\n param = res.params[var]\n pvalue = res.pvalues[var]\n stars = '**' if pvalue < 0.01 else '*' if pvalue < 0.05 else ''\n return res.nobs, param, stars\n\nextract_res(res)",
"Running a similar model with degree of religiosity.",
"formula = ('rlgdgr ~ yrbrn60 + eduyrs12 + hinctnta5 +'\n 'tvtot + rdtot + nwsptot + netuse')\nmodel = smf.ols(formula, data=df) \nres = model.fit(disp=False)\nres.summary()",
"Group by country:",
"grouped = df.groupby('cntry')\nfor name, group in grouped:\n print(name, len(group))",
"Run a sample country",
"gb = grouped.get_group('DK')\nrun_model(gb, formula).summary()",
"Run all countries",
"for name, group in grouped:\n try:\n fill_var(group, var='hinctnta5')\n res = run_model(group, formula)\n nobs, param, stars = extract_res(res)\n arrow = '<--' if stars and param > 0 else ''\n print(name, len(group), nobs, '%0.3g'%param, stars, arrow, sep='\\t')\n except:\n print(name, len(group), ' ', 'NA', sep='\\t')",
"In more than half of the countries, the association between Internet use and religious affiliation is statistically significant. In all except two, the association is negative.\nIn many countries we've lost a substantial number of observations due to missing data. Really need to fill that in!",
"group = grouped.get_group('FR')\nlen(group)\n\nfor col in group.columns:\n print(col, sum(group[col].isnull()))\n\nfill_var(group, 'hinctnta5')\n\nformula\n\nres = run_model(group, formula)\nres.summary()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Upward-Spiral-Science/team1
|
code/Spike Imaging.ipynb
|
apache-2.0
|
[
"Imaging the Spike",
"# Spike images\nfrom mpl_toolkits.mplot3d import axes3d\nimport numpy as np\nimport urllib2\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\nfrom image_builder import get_image\n\nnp.set_printoptions(precision=3, suppress=True)\nurl = ('https://raw.githubusercontent.com/Upward-Spiral-Science'\n '/data/master/syn-density/output.csv')\ndata = urllib2.urlopen(url)\ncsv = np.genfromtxt(data, delimiter=\",\")[1:] # don't want first row (labels)\n\n# chopping data based on thresholds on x and y coordinates\nx_bounds = (409, 3529)\ny_bounds = (1564, 3124)\n\ndef check_in_bounds(row, x_bounds, y_bounds):\n if row[0] < x_bounds[0] or row[0] > x_bounds[1]:\n return False\n if row[1] < y_bounds[0] or row[1] > y_bounds[1]:\n return False\n if row[3] == 0:\n return False\n \n return True\n\nindices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv,\n x_bounds, y_bounds))\ndata_thresholded = csv[indices_in_bound]\nn = data_thresholded.shape[0]\n\n\ndef synapses_over_unmasked(row):\n s = (row[4]/row[3])*(64**3)\n return [row[0], row[1], row[2], s]\nsyn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)\nsyn_normalized = syn_unmasked",
"We're going to extract images of representing the bins in the spike",
"a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)\nspike = a[np.logical_and(a <= 0.0015, a >= 0.0012)]\nn, bins, _ = plt.hist(spike, 2000)\nbin_max = np.where(n == n.max())\nbin_width = bins[1]-bins[0]\nsyn_normalized[:,3] = syn_normalized[:,3]/(64**3)\nspike = syn_normalized[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]\nspike_thres = data_thresholded[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]\nlen_spike = len(spike_thres)\n\n# Compare some of the bins represented the spike\nxs = np.unique(spike_thres[:,0])\nys = np.unique(spike_thres[:,1])\nname = 'spike'\nget_image((0,10),(0,10),xs,ys,name)\n",
"<img src='spike0_0.bmp' style=\"width: 800px;\"/>\nDistributions of Synapses across x, y, z in spike",
"%matplotlib inline\nunique_x = np.unique(spike_thres[:,0])\nunique_y = np.unique(spike_thres[:,1])\nunique_z = np.unique(spike_thres[:,2])\n\nx_sum = [0] * len(unique_x)\nfor i in range(len(unique_x)):\n x_sum[i] = sum(spike_thres[spike_thres[:,0]==unique_x[i]][:,4])\n \ny_sum = [0] * len(unique_y)\nfor i in range(len(unique_y)):\n y_sum[i] = sum(spike_thres[spike_thres[:,1]==unique_y[i]][:,4])\n \nz_sum = [0] * len(unique_z)\nfor i in range(len(unique_z)):\n z_sum[i] = sum(spike_thres[spike_thres[:,2]==unique_z[i]][:,4])\n\nplt.figure()\nplt.figure(figsize=(28,7))\n\nplt.subplot(131)\nplt.bar(unique_x, x_sum, 1)\nplt.xlim(450, 3600)\nplt.ylabel('density in synapses/voxel',fontsize=20)\nplt.xlabel('x-coordinate',fontsize=20)\nplt.title('Total Density across Each X-Layer',fontsize=20)\n\nplt.subplot(132)\nplt.bar(unique_y, y_sum, 1)\nplt.xlim(1570, 3190)\nplt.ylabel('density in synapses/voxel',fontsize=20)\nplt.xlabel('y-coordinate',fontsize=20)\nplt.title('Total Density across Each Y-Layer',fontsize=20)\n\nplt.subplot(133)\nplt.bar(unique_z, z_sum, 1)\nplt.ylabel('density in synapses/voxel',fontsize=20)\nplt.xlabel('z-coordinate',fontsize=20)\nplt.title('Total Density across Each Z-Layer',fontsize=20)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.15/_downloads/plot_source_alignment.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Source alignment\nThe aim of this tutorial is to show how to visually assess that the data\nare well aligned in space for computing the forward solution.",
"import os.path as op\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nsubjects_dir = op.join(data_path, 'subjects')\nraw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')\ntr_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif')\nraw = mne.io.read_raw_fif(raw_fname)",
":func:mne.viz.plot_alignment is a very useful function for inspecting\nthe surface alignment before source analysis. If the subjects_dir and\nsubject parameters are provided, the function automatically looks for the\nFreesurfer surfaces from the subject's folder. Here we use trans=None, which\n(incorrectly!) equates the MRI and head coordinate frames.",
"mne.viz.plot_alignment(raw.info, trans=None, subject='sample',\n subjects_dir=subjects_dir, surfaces=['head', 'brain'])",
"It is quite clear that things are not well aligned for estimating the\nsources. We need to provide the function with a transformation that aligns\nthe MRI with the MEG data. Here we use a precomputed matrix, but you can try\ncreating it yourself using :func:mne.gui.coregistration.\nAligning the data using GUI\nUncomment the following line to align the data yourself.\n\nFirst you must load the digitization data from the raw file\n (Head Shape Source). The MRI data is already loaded if you provide the\n subject and subjects_dir. Toggle Always Show Head Points to see\n the digitization points.\nTo set the landmarks, toggle Edit radio button in MRI Fiducials.\nSet the landmarks by clicking the radio button (LPA, Nasion, RPA) and then\n clicking the corresponding point in the image.\nAfter doing this for all the landmarks, toggle Lock radio button. You\n can omit outlier points, so that they don't interfere with the finetuning.\n\n.. note:: You can save the fiducials to a file and pass\n mri_fiducials=True to plot them in\n :func:mne.viz.plot_alignment. The fiducials are saved to the\n subject's bem folder by default.\n* Click Fit Head Shape. This will align the digitization points to the\n head surface. Sometimes the fitting algorithm doesn't find the correct\n alignment immediately. You can try first fitting using LPA/RPA or fiducials\n and then align according to the digitization. You can also finetune\n manually with the controls on the right side of the panel.\n* Click Save As... (lower right corner of the panel), set the filename\n and read it with :func:mne.read_trans.",
"# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)\ntrans = mne.read_trans(tr_fname)\nsrc = mne.read_source_spaces(op.join(data_path, 'MEG', 'sample',\n 'sample_audvis-meg-oct-6-meg-inv.fif'))\nmne.viz.plot_alignment(raw.info, trans=trans, subject='sample', src=src,\n subjects_dir=subjects_dir, surfaces=['head', 'white'])",
"The previous is possible if you have the surfaces available from Freesurfer.\nThe function automatically searches for the correct surfaces from the\nprovided subjects_dir. Otherwise it is possible to use the sphere\nconductor model. It is passed through bem parameter.\n<div class=\"alert alert-info\"><h4>Note</h4><p>``bem`` also accepts bem solutions (:func:`mne.read_bem_solution`)\n or a list of bem surfaces (:func:`mne.read_bem_surfaces`).</p></div>",
"sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')\nmne.viz.plot_alignment(raw.info, subject='sample', eeg='projected',\n meg='helmet', bem=sphere, dig=True,\n surfaces=['brain', 'inner_skull', 'outer_skull',\n 'outer_skin'])",
"For more information see step by step instructions\nfor subjects with structural MRI\n<http://www.slideshare.net/mne-python/mnepython-coregistration> and for\nsubjects for which no MRI is available\n<http://www.slideshare.net/mne-python/mnepython-scale-mri>."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bmorris3/gsoc2015
|
timezones.ipynb
|
mit
|
[
"Time zones\n🌐!\nThis is a demo of some convenience methods for manipulating timezones with the Observer class. The two methods are: \n\nself.datetime_to_astropy_time(datetime) which converts a naive or timezone-aware datetime to an astropy.time.Time object. \n\nIf the input datetime is naive, it assumes that the implied timezone is the one saved in the instance of Observer (in self.timezone).\n\n\nself.astropy_time_to_datetime(astropy_time) which converts an astropy.time.Time object into a localized datetime, in the timezone saved in the instance of Observer (in self.timezone).",
"from __future__ import (absolute_import, division, print_function,\n unicode_literals)\n\nfrom astropy.time import Time\nimport astropy.units as u\nfrom astropy.coordinates import EarthLocation\nimport pytz\nimport datetime\n\nfrom astroplan import Observer\n\n# Set up an observer at ~Subaru\nlocation = EarthLocation.from_geodetic(-155.4*u.deg, 19.8*u.deg)\nobs = Observer(location=location, timezone=pytz.timezone('US/Hawaii'))\n\n# Pick a local (Hawaii) time to observe: midnight\nlocal_naive_datetime = datetime.datetime(2015, 7, 14, 0)\n\n# What is the astropy.time.Time equivalent for this datetime?\nastropy_time = obs.datetime_to_astropy_time(local_naive_datetime)\nprint('astropy.time.Time (UTC):', astropy_time)",
"Convert that astropy.time.Time back to a localized datetime, arriving back at the original datetime (only this one is localized):",
"localized_datetime = obs.astropy_time_to_datetime(astropy_time)\nprint('datetime:', localized_datetime)\nprint('new datetime equivalent to original naive datetime?:', \n local_naive_datetime == localized_datetime.replace(tzinfo=None))",
"Let's say the Subaru observer is remotely observing from the East Coast. Let's convert their local time (Eastern) to an astropy time. Since this datetime is localized, datetime_to_astropy_time will use the datetime's timezone (rather than assuming self.timezone):",
"east_coast_datetime = pytz.timezone('US/Eastern').localize(datetime.datetime(2015, 7, 14, 6))\neast_coast_astropy_time = obs.datetime_to_astropy_time(east_coast_datetime)\nprint('Convert local East Coast time to UTC:', east_coast_astropy_time)\nprint('Equivalent to original astropy time?:', east_coast_astropy_time == astropy_time)",
"Warning\nHow you construct your localized timezone is important! Don't initialize datetimes with\nthe tzinfo kwarg. Here's an example of when it doesn't work. These two times should be equal, but are not. See pytz documentation for discussion.",
"tzinfo_kwarg = datetime.datetime(2015, 7, 14, 6, tzinfo=pytz.timezone('US/Eastern'))\nlocalized = pytz.timezone('US/Eastern').localize(datetime.datetime(2015, 7, 14, 6))\nprint('with tz assigned in kwarg:', tzinfo_kwarg)\nprint('with localization by tz.localize(dt):', localized)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DIPlib/diplib
|
examples/python/tensor_images.ipynb
|
apache-2.0
|
[
"Tensor images\nThis notebook gives an overview of the concept of tensor images, and demonstrates how to use this feature.",
"import diplib as dip",
"After reading the \"PyDIP basics\" notebook, you should be familiar with the concepts of scalar images and color images. We remind the reader that an image can have any number of values associated to each pixel. An image with a single value per pixel is a scalar image. Multiple values can be arranged in one or two dimensions, as a vector image or a matrix image. A color image is an example of a vector image, for example in the RGB color space the vector for each pixel has 3 values, it is a 3D vector.\nThe generalization of vectors and matrices is a tensor. A rank 0 tensor is a scalar, a rank 1 tensor is a vector, and a rank 2 tensor is a matrix.\nThis is a scalar image:",
"img = dip.ImageRead('../trui.ics')\nimg.Show()",
"We can compute its gradient, which is a vector image:",
"g = dip.Gradient(img)\ng.Show()",
"The vector image is displayed by showing the first vector component in the red channel, and the second one in the green channel. g has two components:",
"print(g.TensorElements())\n\nprint(g.TensorShape())",
"Multiplying a vector with its transposed leads to a symmetric matrix:",
"S = g @ dip.Transpose(g)\nprint(\"Tensor size:\", S.TensorSizes())\nprint(\"Tensor shape:\", S.TensorShape())\nprint(\"Tensor elements:\", S.TensorElements())",
"Note how the 2x2 symmetric matrix stores only 3 elements per pixel. Because of the symmetry, the [0,1] and the [1,0] elements are identical, and need not be both stored. See the documentation for details on how the individual elements are stored.\nLocal averaging of this matrix image (i.e. applying a low-pass filter) leads to the structure tensor:",
"S = dip.Gauss(S, [5])\nS.Show()",
"We can still display this tensor image, because it has only 3 tensor elements, which can be mapped to the three RGB channels of the display.\nThe structure tensor is one of the more important applications for the concept of the tensor image. In this documentation page there are some example applications of the structure tensor. Here we show how to get the local orientation from it using the eigenvalue decomposition.",
"eigenvalues, eigenvectors = dip.EigenDecomposition(S)\nprint(eigenvalues.TensorShape())\nprint(eigenvectors.TensorShape())",
"The eigendecomposition is such that S * eigenvectors == eigenvectors * eigenvalues. eigenvectors is a full 2x2 matrix, and hence has 4 tensor elements. These are stored in column-major order. The first column is the eigenvector that corresponds to the first eigenvalue. Eigenvalues are sorted in descending order, and hence the first eigenvector is perpendicular to the edges in the image.",
"v1 = eigenvectors.TensorColumn(0)\nangle = dip.Angle(v1)\nangle.Show('orientation')",
"Note that extracting a column from the tensor yields a vector image, and that this vector image shares data with the column-major matrix image. Transposing a matrix is a cheap operation that just changes the storage order of the matrix, without a need to copy or reorder the data:",
"tmp = dip.Transpose(eigenvectors)\nprint(tmp.TensorShape())\nprint(tmp.SharesData(eigenvectors))",
"A second important matrix image is the Hessian matrix, which contains all second order derivatives. Just like the strucutre tensor, it is a symmetric 2x2 matrix:",
"H = dip.Hessian(img)\nprint(\"Tensor size:\", S.TensorSizes())\nprint(\"Tensor shape:\", S.TensorShape())\nprint(\"Tensor elements:\", S.TensorElements())\nH.Show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NICTA/revrand
|
demos/sarcos_demo.ipynb
|
apache-2.0
|
[
"Comparison of revrand's algorithms on the SARCOS dataset\nIn this notebook we test how the GLM in revrand performs on the inverse dynamics experiment conducted in Gaussian Processes for Machine Learning, Chapter 8, page 182. In this experiment there are 21 dimensions, and 44,484 training examples. All GP's are using square exponential covariance functions, with a separate lengthscale for each dimension.",
"import logging\nimport numpy as np\nfrom scipy.stats import gamma \n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import WhiteKernel, RBF\n\nfrom revrand import GeneralizedLinearModel, StandardLinearModel, Parameter, Positive\nfrom revrand.basis_functions import RandomRBF, OrthogonalRBF\nfrom revrand.likelihoods import Gaussian\nfrom revrand.metrics import smse, msll\nfrom revrand.utils.datasets import fetch_gpml_sarcos_data\nfrom revrand.optimize import Adam, AdaDelta\n\nfrom plotting import fancy_yyplot\nimport matplotlib.pyplot as pl\n%matplotlib inline\n\nlogging.basicConfig(level=logging.INFO)",
"Settings",
"ALG = 'SLM'\n\nrandom_state = 100\n\nif ALG == 'GLM':\n lenscale = gamma(2, scale=50)\n regularizer = gamma(2, scale=10)\n var = gamma(2, scale=50)\n nbases = 8192\n nsamples = 10\n batch_size = 10\n maxiter = int(1e6)\n updater = Adam()\nelif ALG == 'SLM':\n lenscale = gamma(1, scale=50)\n regularizer = gamma(2, scale=10)\n var = gamma(2, scale=5)\n nbases = 512\n m = 10000\nelif ALG == 'GP':\n m = 1024\n n_restarts=1\nelse:\n raise ValueError(\"Invalid algorithm\")",
"Load the data",
"gpml_sarcos = fetch_gpml_sarcos_data()\n\nX_train = gpml_sarcos.train.data\ny_train = gpml_sarcos.train.targets\n\nX_test = gpml_sarcos.test.data\ny_test = gpml_sarcos.test.targets\n\nNtrain, D = X_train.shape\n\nprint(\"Training data shape = {}\".format(X_train.shape))\nprint(\"Testing data shape = {}\".format(X_test.shape))\n",
"Transform targets and inputs\nAs per GPML p23",
"# Targets\nymean = y_train.mean()\ny_train -= ymean\ny_test -= ymean\n\n# Inputs\nXscaler = StandardScaler()\nXscaler.fit(X_train)\nX_train = Xscaler.transform(X_train)\nX_test = Xscaler.transform(X_test)",
"Initialise the algorithms",
"regularizer_init = Parameter(regularizer, Positive())\nlenscale_init = Parameter(lenscale, Positive(), shape=(D,))\nbase = RandomRBF(nbases=nbases,\n Xdim=D,\n lenscale=lenscale_init,\n random_state=random_state,\n regularizer=regularizer_init\n )\nvar_init = Parameter(var, Positive())\n\nif ALG == 'GLM':\n llhood = Gaussian(var=var_init)\n alg = GeneralizedLinearModel(llhood,\n base,\n updater=updater,\n batch_size=batch_size,\n maxiter=maxiter,\n nsamples=nsamples,\n random_state=random_state\n )\nelif ALG == 'GP':\n kern = 3**2 * RBF(length_scale=np.ones(D), length_scale_bounds=(1e-3, 1e7)) \\\n + WhiteKernel(noise_level=1)\n alg = GaussianProcessRegressor(kernel=kern, n_restarts_optimizer=n_restarts)\nelif ALG == 'SLM':\n alg = StandardLinearModel(\n basis=base,\n var=var_init,\n random_state=random_state\n )\nelse:\n raise ValueError(\"Invalid algorithm\")\n ",
"Train the algorithms",
"rnd = np.random.RandomState(random_state)\nif ALG == 'GLM':\n alg.fit(X_train, y_train)\nelse:\n t_ind = rnd.choice(Ntrain, size=m, replace=False)\n alg.fit(X_train[t_ind], y_train[t_ind])\n",
"Predict and score",
"if ALG == 'GLM':\n Ey, Vf = alg.predict_moments(X_test)\n Vy = Vf + alg.like_hypers_\n Sy = np.sqrt(Vy)\nelif ALG == 'GP':\n Ey, Sy = alg.predict(X_test, return_std=True)\n Vy = Sy**2\nelse:\n Ey, Vy = alg.predict_moments(X_test)\n Sy = np.sqrt(Vy)\n \nprint(\"SMSE = {}\".format(smse(y_test, Ey)))\nprint(\"MSLL = {}\".format(msll(y_test, Ey, Vy, y_train)))\n\n# YY plot\npl.figure(figsize=(15, 10))\nfancy_yyplot(y_test, Ey, Ey - 2 * Sy, Ey + 2 * Sy, \"Joint torque\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
FlorentSilve/Udacity_ML_nanodegree
|
projects/digit_recognition/digit_recognition.ipynb
|
mit
|
[
"Machine Learning Engineer Nanodegree\nDeep Learning\nProject: Build a Digit Recognition Program\nIn this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.\n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\n\nStep 1: Design and Test a Model Architecture\nDesign and implement a deep learning model that learns to recognize sequences of digits. Train the model using synthetic data generated by concatenating character images from notMNIST or MNIST. To produce a synthetic sequence of digits for testing, you can for example limit yourself to sequences up to five digits, and use five classifiers on top of your deep network. You would have to incorporate an additional ‘blank’ character to account for shorter number sequences.\nThere are various aspects to consider when thinking about this problem:\n- Your model can be derived from a deep neural net or a convolutional network.\n- You could experiment sharing or not the weights between the softmax classifiers.\n- You can also use a recurrent network in your deep neural net to replace the classification layers and directly emit the sequence of digits one-at-a-time.\nYou can use Keras to implement your model. Read more at keras.io.\nHere is an example of a published baseline model on this problem. (video). You are not expected to model your architecture precisely using this model nor get the same performance levels, but this is more to show an exampe of an approach used to solve this particular problem. We encourage you to try out different architectures for yourself and see what works best for you. Here is a useful forum post discussing the architecture as described in the paper and here is another one discussing the loss function.\nImplementation\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport random\nimport os\nimport sys\nimport tarfile\nimport cPickle\nimport gzip\nimport theano\nimport theano.tensor as T\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom sklearn.linear_model import LogisticRegression\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle\n\n# Config the matplotlib backend as plotting inline in IPython\n%matplotlib inline",
"Using MNIST dataset\nLoading pickled MNIST dataset",
"# Load the dataset\nf = gzip.open('data/mnist.pkl.gz', 'rb')\ntrain_set, valid_set, test_set = cPickle.load(f)\nf.close()\n\ndef shared_dataset(data_xy):\n \"\"\" Function that loads the dataset into shared variables\n\n The reason we store our dataset in shared variables is to allow\n Theano to copy it into the GPU memory (when code is run on GPU).\n Since copying data into the GPU is slow, copying a minibatch everytime\n is needed (the default behaviour if the data is not in a shared\n variable) would lead to a large decrease in performance.\n \"\"\"\n data_x, data_y = data_xy\n shared_x = theano.shared(np.asarray(data_x, dtype=theano.config.floatX))\n shared_y = theano.shared(np.asarray(data_y, dtype=theano.config.floatX))\n # When storing data on the GPU it has to be stored as floats\n # therefore we will store the labels as ``floatX`` as well\n # (``shared_y`` does exactly that). But during our computations\n # we need them as ints (we use labels as index, and if they are\n # floats it doesn't make sense) therefore instead of returning\n # ``shared_y`` we will have to cast it to int. This little hack\n # lets us get around this issue\n return shared_x, T.cast(shared_y, 'int32')\n\ntest_set_x, test_set_y = shared_dataset(test_set)\nvalid_set_x, valid_set_y = shared_dataset(valid_set)\ntrain_set_x, train_set_y = shared_dataset(train_set)\n\nbatch_size = 500 # size of the minibatch\n\n# accessing the third minibatch of the training set\n\ndata = train_set_x[2 * batch_size: 3 * batch_size]\nlabel = train_set_y[2 * batch_size: 3 * batch_size]\n\nplt.imshow(train_set[np.random.randint(train_set.shape[0])])\n\nindex = 0\nimg = np.asarray(test_set[index]).reshape(28,28)\nplt.imshow(img)\nprint(img.shape)\nprint(bytes(t_labels[index]).decode('utf-8'))\n\nprint train_set\n\nimages = loadMNISTImages('data/train-images-idx3-ubyte');\nlabels = loadMNISTLabels('data/train-labels-idx1-ubyte');\n \n% We are using display_network from the autoencoder code\ndisplay_network(images(:,1:100)); % Show the first 100 images\ndisp(labels(1:10));\n\n#Functions definition\n\ndef download_progress_hook(count, blockSize, totalSize):\n \"\"\"A hook to report the progress of a download. This is mostly intended for users with\n slow internet connections. Reports every 5% change in download progress.\n \"\"\"\n global last_percent_reported\n percent = int(count * blockSize * 100 / totalSize)\n\n if last_percent_reported != percent:\n if percent % 5 == 0:\n sys.stdout.write(\"%s%%\" % percent)\n sys.stdout.flush()\n else:\n sys.stdout.write(\".\")\n sys.stdout.flush()\n \n last_percent_reported = percent\n \ndef maybe_download(filename, expected_bytes, force=False):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n dest_filename = os.path.join(data_root, filename)\n if force or not os.path.exists(dest_filename):\n print('Attempting to download:', filename) \n filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)\n print('\\nDownload Complete!')\n statinfo = os.stat(dest_filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified', dest_filename)\n else:\n raise Exception(\n 'Failed to verify ' + dest_filename + '. 
Can you get to it with a browser?')\n return dest_filename\n\ndef maybe_extract(filename, force=False):\n root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n if os.path.isdir(root) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping extraction of %s.' % (root, filename))\n else:\n print('Extracting data for %s. This may take a while. Please wait.' % root)\n tar = tarfile.open(filename)\n sys.stdout.flush()\n tar.extractall(data_root)\n tar.close()\n data_folders = [\n os.path.join(root, d) for d in sorted(os.listdir(root))\n if os.path.isdir(os.path.join(root, d))]\n if len(data_folders) != num_classes:\n raise Exception(\n 'Expected %d folders, one per class. Found %d instead.' % (\n num_classes, len(data_folders)))\n print(data_folders)\n return data_folders\n\ndef load_letter(folder, min_num_images):\n \"\"\"Load the data for a single letter label.\"\"\"\n image_files = os.listdir(folder)\n dataset = np.ndarray(shape=(len(image_files), image_size, image_size),\n dtype=np.float32)\n print(folder)\n num_images = 0\n for image in image_files:\n image_file = os.path.join(folder, image)\n try:\n image_data = (ndimage.imread(image_file).astype(float) - \n pixel_depth / 2) / pixel_depth\n if image_data.shape != (image_size, image_size):\n raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n dataset[num_images, :, :] = image_data\n num_images = num_images + 1\n except IOError as e:\n print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n \n dataset = dataset[0:num_images, :, :]\n if num_images < min_num_images:\n raise Exception('Many fewer images than expected: %d < %d' %\n (num_images, min_num_images))\n \n print('Full dataset tensor:', dataset.shape)\n print('Mean:', np.mean(dataset))\n print('Standard deviation:', np.std(dataset))\n return dataset\n \ndef maybe_savez(data_folders, min_num_images_per_class, force=False):\n dataset_name = data_folders[0][:-1]+'images.npz'\n dataset = {}\n if force or not os.path.exists(dataset_name):\n for folder in data_folders:\n print(folder[-1], end='')\n dataset[folder[-1:]]=load_letter(folder, min_num_images_per_class)\n try:\n np.savez(dataset_name, **dataset)\n except Exception as e:\n print('Unable to save data to', dataset_name, ':', e)\n return dataset_name\n\ndef gen_data_dict(dataset):\n data_dict = {}\n all_data = np.load(dataset)\n for letter in all_data.files:\n try:\n data_dict[letter] = all_data[letter]\n except Exception as e:\n print('Unable to process data from', dataset, ':', e)\n raise\n all_data.close()\n return data_dict\n\nurl = 'http://commondatastorage.googleapis.com/books1000/'\ndataset_name = 'notMNIST_data.npz'\nnum_classes = 10\nimage_size = 28\npixel_depth = 255.0\nrandom.seed(0)\nlast_percent_reported = None\ndata_root = 'data/' # Change me to store data elsewhere\n\nprint('Download files')\ntrain_filename = maybe_download('notMNIST_large.tar.gz', 247336696)\ntest_filename = maybe_download('notMNIST_small.tar.gz', 8458043)\nprint('Download Complete')\n\ntrain_folders = maybe_extract(train_filename)\ntest_folders = maybe_extract(test_filename)\nprint('Extract Complete')\n\ntrain_datasets = maybe_savez(train_folders, 45000)\ntest_datasets = maybe_savez(test_folders, 1800)\nprint('Saving Complete')\n\ntrain_data = gen_data_dict(train_datasets)\ntest_data = gen_data_dict(test_datasets)\nprint('Data Dictionaries Built')\n\nnum_classes = 10\nnp.random.seed(133)\n\n\n \n",
"Question 1\nWhat approach did you take in coming up with a solution to this problem?\nAnswer: \nQuestion 2\nWhat does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.)\nAnswer:\nQuestion 3\nHow did you train your model? How did you generate your synthetic dataset? Include examples of images from the synthetic data you constructed.\nAnswer:\n\nStep 2: Train a Model on a Realistic Dataset\nOnce you have settled on a good architecture, you can train your model on real data. In particular, the Street View House Numbers (SVHN) dataset is a good large-scale dataset collected from house numbers in Google Street View. Training on this more challenging dataset, where the digits are not neatly lined-up and have various skews, fonts and colors, likely means you have to do some hyperparameter exploration to perform well.\nImplementation\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.",
"\n\n### Your code implementation goes here.\n### Feel free to use as many code cells as needed.\n\n",
"Question 4\nDescribe how you set up the training and testing data for your model. How does the model perform on a realistic dataset?\nAnswer:\nQuestion 5\nWhat changes did you have to make, if any, to achieve \"good\" results? Were there any options you explored that made the results worse?\nAnswer:\nQuestion 6\nWhat were your initial and final results with testing on a realistic dataset? Do you believe your model is doing a good enough job at classifying numbers correctly?\nAnswer:\n\nStep 3: Test a Model on Newly-Captured Images\nTake several pictures of numbers that you find around you (at least five), and run them through your classifier on your computer to produce example results. Alternatively (optionally), you can try using OpenCV / SimpleCV / Pygame to capture live images from a webcam and run those through your classifier.\nImplementation\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.",
"\n\n### Your code implementation goes here.\n### Feel free to use as many code cells as needed.\n\n",
"Question 7\nChoose five candidate images of numbers you took from around you and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult?\nAnswer:\nQuestion 8\nIs your model able to perform equally well on captured pictures or a live camera stream when compared to testing on the realistic dataset?\nAnswer:\nOptional: Question 9\nIf necessary, provide documentation for how an interface was built for your model to load and classify newly-acquired images.\nAnswer: Leave blank if you did not complete this part.\n\nStep 4: Explore an Improvement for a Model\nThere are many things you can do once you have the basic classifier in place. One example would be to also localize where the numbers are on the image. The SVHN dataset provides bounding boxes that you can tune to train a localizer. Train a regression loss to the coordinates of the bounding box, and then test it. \nImplementation\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.",
"\n\n### Your code implementation goes here.\n### Feel free to use as many code cells as needed.\n\n",
"Question 10\nHow well does your model localize numbers on the testing set from the realistic dataset? Do your classification results change at all with localization included?\nAnswer:\nQuestion 11\nTest the localization function on the images you captured in Step 3. Does the model accurately calculate a bounding box for the numbers in the images you found? If you did not use a graphical interface, you may need to investigate the bounding boxes by hand. Provide an example of the localization created on a captured image.\nAnswer:\n\nOptional Step 5: Build an Application or Program for a Model\nTake your project one step further. If you're interested, look to build an Android application or even a more robust Python program that can interface with input images and display the classified numbers and even the bounding boxes. You can for example try to build an augmented reality app by overlaying your answer on the image like the Word Lens app does.\nLoading a TensorFlow model into a camera app on Android is demonstrated in the TensorFlow Android demo app, which you can simply modify.\nIf you decide to explore this optional route, be sure to document your interface and implementation, along with significant results you find. You can see the additional rubric items that you could be evaluated on by following this link.\nOptional Implementation\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.",
"\n\n### Your optional code implementation goes here.\n### Feel free to use as many code cells as needed.\n\n",
"Documentation\nProvide additional documentation sufficient for detailing the implementation of the Android application or Python program for visualizing the classification of numbers in images. It should be clear how the program or application works. Demonstrations should be provided. \nWrite your documentation here.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/ko/r1/tutorials/keras/basic_regression.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"회귀: 자동차 연비 예측하기\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/basic_regression.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />구글 코랩(Colab)에서 실행하기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/basic_regression.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />깃허브(GitHub) 소스 보기</a>\n </td>\n</table>\n\nNote: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도\n불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.\n이 번역에 개선할 부분이 있다면\ntensorflow/docs 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.\n문서 번역이나 리뷰에 참여하려면\ndocs-ko@tensorflow.org로\n메일을 보내주시기 바랍니다.\n회귀(regression)는 가격이나 확률 같이 연속된 출력 값을 예측하는 것이 목적입니다. 이와는 달리 분류(classification)는 여러개의 클래스 중 하나의 클래스를 선택하는 것이 목적입니다(예를 들어, 사진에 사과 또는 오렌지가 포함되어 있을 때 어떤 과일인지 인식하는 것).\n이 노트북은 Auto MPG 데이터셋을 사용하여 1970년대 후반과 1980년대 초반의 자동차 연비를 예측하는 모델을 만듭니다. 이 기간에 출시된 자동차 정보를 모델에 제공하겠습니다. 이 정보에는 실린더 수, 배기량, 마력(horsepower), 공차 중량 같은 속성이 포함됩니다.\n이 예제는 tf.keras API를 사용합니다. 자세한 내용은 케라스 가이드를 참고하세요.",
"# 산점도 행렬을 그리기 위해 seaborn 패키지를 설치합니다\n!pip install seaborn\n\nimport pathlib\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\nimport tensorflow.compat.v1 as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nprint(tf.__version__)",
"Auto MPG 데이터셋\n이 데이터셋은 UCI 머신 러닝 저장소에서 다운로드할 수 있습니다.\n데이터 구하기\n먼저 데이터셋을 다운로드합니다.",
"dataset_path = keras.utils.get_file(\"auto-mpg.data\", \"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data\")\ndataset_path",
"판다스를 사용하여 데이터를 읽습니다.",
"column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',\n 'Acceleration', 'Model Year', 'Origin']\nraw_dataset = pd.read_csv(dataset_path, names=column_names,\n na_values = \"?\", comment='\\t',\n sep=\" \", skipinitialspace=True)\n\ndataset = raw_dataset.copy()\ndataset.tail()",
"데이터 정제하기\n이 데이터셋은 일부 데이터가 누락되어 있습니다.",
"dataset.isna().sum()",
"문제를 간단하게 만들기 위해서 누락된 행을 삭제하겠습니다.",
"dataset = dataset.dropna()",
"\"Origin\" 열은 수치형이 아니고 범주형이므로 원-핫 인코딩(one-hot encoding)으로 변환하겠습니다:",
"origin = dataset.pop('Origin')\n\ndataset['USA'] = (origin == 1)*1.0\ndataset['Europe'] = (origin == 2)*1.0\ndataset['Japan'] = (origin == 3)*1.0\ndataset.tail()",
"데이터셋을 훈련 세트와 테스트 세트로 분할하기\n이제 데이터를 훈련 세트와 테스트 세트로 분할합니다.\n테스트 세트는 모델을 최종적으로 평가할 때 사용합니다.",
"train_dataset = dataset.sample(frac=0.8,random_state=0)\ntest_dataset = dataset.drop(train_dataset.index)",
"데이터 조사하기\n훈련 세트에서 몇 개의 열을 선택해 산점도 행렬을 만들어 살펴 보겠습니다.",
"sns.pairplot(train_dataset[[\"MPG\", \"Cylinders\", \"Displacement\", \"Weight\"]], diag_kind=\"kde\")",
"전반적인 통계도 확인해 보죠:",
"train_stats = train_dataset.describe()\ntrain_stats.pop(\"MPG\")\ntrain_stats = train_stats.transpose()\ntrain_stats",
"특성과 레이블 분리하기\n특성에서 타깃 값 또는 \"레이블\"을 분리합니다. 이 레이블을 예측하기 위해 모델을 훈련시킬 것입니다.",
"train_labels = train_dataset.pop('MPG')\ntest_labels = test_dataset.pop('MPG')",
"데이터 정규화\n위 train_stats 통계를 다시 살펴보고 각 특성의 범위가 얼마나 다른지 확인해 보죠.\n특성의 스케일과 범위가 다르면 정규화(normalization)하는 것이 권장됩니다. 특성을 정규화하지 않아도 모델이 수렴할 수 있지만, 훈련시키기 어렵고 입력 단위에 의존적인 모델이 만들어집니다.\n노트: 의도적으로 훈련 세트만 사용하여 통계치를 생성했습니다. 이 통계는 테스트 세트를 정규화할 때에도 사용됩니다. 이는 테스트 세트를 모델이 훈련에 사용했던 것과 동일한 분포로 투영하기 위해서입니다.",
"def norm(x):\n return (x - train_stats['mean']) / train_stats['std']\nnormed_train_data = norm(train_dataset)\nnormed_test_data = norm(test_dataset)",
"정규화된 데이터를 사용하여 모델을 훈련합니다.\n주의: 여기에서 입력 데이터를 정규화하기 위해 사용한 통계치(평균과 표준편차)는 원-핫 인코딩과 마찬가지로 모델에 주입되는 모든 데이터에 적용되어야 합니다. 여기에는 테스트 세트는 물론 모델이 실전에 투입되어 얻은 라이브 데이터도 포함됩니다.\n모델\n모델 만들기\n모델을 구성해 보죠. 여기에서는 두 개의 완전 연결(densely connected) 은닉층으로 Sequential 모델을 만들겠습니다. 출력 층은 하나의 연속적인 값을 반환합니다. 나중에 두 번째 모델을 만들기 쉽도록 build_model 함수로 모델 구성 단계를 감싸겠습니다.",
"def build_model():\n model = keras.Sequential([\n layers.Dense(64, activation=tf.nn.relu, input_shape=[9]),\n layers.Dense(64, activation=tf.nn.relu),\n layers.Dense(1)\n ])\n\n optimizer = tf.keras.optimizers.RMSprop(0.001)\n\n model.compile(loss='mean_squared_error',\n optimizer=optimizer,\n metrics=['mean_absolute_error', 'mean_squared_error'])\n return model\n\nmodel = build_model()",
"모델 확인\n.summary 메서드를 사용해 모델에 대한 간단한 정보를 출력합니다.",
"model.summary()",
"모델을 한번 실행해 보죠. 훈련 세트에서 10 샘플을 하나의 배치로 만들어 model.predict 메서드를 호출해 보겠습니다.",
"example_batch = normed_train_data[:10]\nexample_result = model.predict(example_batch)\nexample_result",
"제대로 작동하는 것 같네요. 결괏값의 크기와 타입이 기대했던 대로입니다.\n모델 훈련\n이 모델을 1,000번의 에포크(epoch) 동안 훈련합니다. 훈련 정확도와 검증 정확도는 history 객체에 기록됩니다.",
"# 에포크가 끝날 때마다 점(.)을 출력해 훈련 진행 과정을 표시합니다\nclass PrintDot(keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs):\n if epoch % 100 == 0: print('')\n print('.', end='')\n\nEPOCHS = 1000\n\nhistory = model.fit(\n normed_train_data, train_labels,\n epochs=EPOCHS, validation_split = 0.2, verbose=0,\n callbacks=[PrintDot()])",
"history 객체에 저장된 통계치를 사용해 모델의 훈련 과정을 시각화해 보죠.",
"hist = pd.DataFrame(history.history)\nhist['epoch'] = history.epoch\nhist.tail()\n\nimport matplotlib.pyplot as plt\n\ndef plot_history(history):\n hist = pd.DataFrame(history.history)\n hist['epoch'] = history.epoch\n\n plt.figure(figsize=(8,12))\n\n plt.subplot(2,1,1)\n plt.xlabel('Epoch')\n plt.ylabel('Mean Abs Error [MPG]')\n plt.plot(hist['epoch'], hist['mean_absolute_error'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mean_absolute_error'],\n label = 'Val Error')\n plt.ylim([0,5])\n plt.legend()\n\n plt.subplot(2,1,2)\n plt.xlabel('Epoch')\n plt.ylabel('Mean Square Error [$MPG^2$]')\n plt.plot(hist['epoch'], hist['mean_squared_error'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mean_squared_error'],\n label = 'Val Error')\n plt.ylim([0,20])\n plt.legend()\n plt.show()\n\nplot_history(history)",
"이 그래프를 보면 수 백번 에포크를 진행한 이후에는 모델이 거의 향상되지 않는 것 같습니다. model.fit 메서드를 수정하여 검증 점수가 향상되지 않으면 자동으로 훈련을 멈추도록 만들어 보죠. 에포크마다 훈련 상태를 점검하기 위해 EarlyStopping 콜백(callback)을 사용하겠습니다. 지정된 에포크 횟수 동안 성능 향상이 없으면 자동으로 훈련이 멈춥니다.\n이 콜백에 대해 더 자세한 내용은 여기를 참고하세요.",
"model = build_model()\n\n# patience 매개변수는 성능 향상을 체크할 에포크 횟수입니다\nearly_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)\n\nhistory = model.fit(normed_train_data, train_labels, epochs=EPOCHS,\n validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])\n\nplot_history(history)",
"이 그래프를 보면 검증 세트의 평균 오차가 약 +/- 2 MPG입니다. 좋은 결과인가요? 이에 대한 평가는 여러분에게 맡기겠습니다.\n모델을 훈련할 때 사용하지 않았던 테스트 세트에서 모델의 성능을 확인해 보죠. 이를 통해 모델이 실전에 투입되었을 때 모델의 성능을 짐작할 수 있습니다:",
"loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)\n\nprint(\"테스트 세트의 평균 절대 오차: {:5.2f} MPG\".format(mae))",
"예측\n마지막으로 테스트 세트에 있는 샘플을 사용해 MPG 값을 예측해 보겠습니다:",
"test_predictions = model.predict(normed_test_data).flatten()\n\nplt.scatter(test_labels, test_predictions)\nplt.xlabel('True Values [MPG]')\nplt.ylabel('Predictions [MPG]')\nplt.axis('equal')\nplt.axis('square')\nplt.xlim([0,plt.xlim()[1]])\nplt.ylim([0,plt.ylim()[1]])\n_ = plt.plot([-100, 100], [-100, 100])\n",
"모델이 꽤 잘 예측한 것 같습니다. 오차의 분포를 살펴 보죠.",
"error = test_predictions - test_labels\nplt.hist(error, bins = 25)\nplt.xlabel(\"Prediction Error [MPG]\")\n_ = plt.ylabel(\"Count\")",
"가우시안 분포가 아니지만 아마도 훈련 샘플의 수가 매우 작기 때문일 것입니다.\n결론\n이 노트북은 회귀 문제를 위한 기법을 소개합니다.\n\n평균 제곱 오차(MSE)는 회귀 문제에서 자주 사용하는 손실 함수입니다(분류 문제에서 사용하는 손실 함수와 다릅니다).\n비슷하게 회귀에서 사용되는 평가 지표도 분류와 다릅니다. 많이 사용하는 회귀 지표는 평균 절댓값 오차(MAE)입니다.\n수치 입력 데이터의 특성이 여러 가지 범위를 가질 때 동일한 범위가 되도록 각 특성의 스케일을 독립적으로 조정해야 합니다.\n훈련 데이터가 많지 않다면 과대적합을 피하기 위해 은닉층의 개수가 적은 소규모 네트워크를 선택하는 방법이 좋습니다.\n조기 종료(Early stopping)은 과대적합을 방지하기 위한 좋은 방법입니다."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
johnnyliu27/openmc
|
examples/jupyter/pandas-dataframes.ipynb
|
mit
|
[
"This notebook demonstrates how systematic analysis of tally scores is possible using Pandas dataframes. A dataframe can be automatically generated using the Tally.get_pandas_dataframe(...) method. Furthermore, by linking the tally data in a statepoint file with geometry and material information from a summary file, the dataframe can be shown with user-supplied labels.",
"import glob\n\nfrom IPython.display import Image\nimport matplotlib.pyplot as plt\nimport scipy.stats\nimport numpy as np\nimport pandas as pd\n\nimport openmc\n%matplotlib inline",
"Generate Input Files\nFirst we need to define materials that will be used in the problem. We will create three materials for the fuel, water, and cladding of the fuel pin.",
"# 1.6 enriched fuel\nfuel = openmc.Material(name='1.6% Fuel')\nfuel.set_density('g/cm3', 10.31341)\nfuel.add_nuclide('U235', 3.7503e-4)\nfuel.add_nuclide('U238', 2.2625e-2)\nfuel.add_nuclide('O16', 4.6007e-2)\n\n# borated water\nwater = openmc.Material(name='Borated Water')\nwater.set_density('g/cm3', 0.740582)\nwater.add_nuclide('H1', 4.9457e-2)\nwater.add_nuclide('O16', 2.4732e-2)\nwater.add_nuclide('B10', 8.0042e-6)\n\n# zircaloy\nzircaloy = openmc.Material(name='Zircaloy')\nzircaloy.set_density('g/cm3', 6.55)\nzircaloy.add_nuclide('Zr90', 7.2758e-3)",
"With our three materials, we can now create a materials file object that can be exported to an actual XML file.",
"# Instantiate a Materials collection\nmaterials_file = openmc.Materials([fuel, water, zircaloy])\n\n# Export to \"materials.xml\"\nmaterials_file.export_to_xml()",
"Now let's move on to the geometry. This problem will be a square array of fuel pins for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.",
"# Create cylinders for the fuel and clad\nfuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)\nclad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)\n\n# Create boundary planes to surround the geometry\n# Use both reflective and vacuum boundaries to make life interesting\nmin_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+10.71, boundary_type='vacuum')\nmin_y = openmc.YPlane(y0=-10.71, boundary_type='vacuum')\nmax_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')\nmin_z = openmc.ZPlane(z0=-10.71, boundary_type='reflective')\nmax_z = openmc.ZPlane(z0=+10.71, boundary_type='reflective')",
"With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.",
"# Create fuel Cell\nfuel_cell = openmc.Cell(name='1.6% Fuel', fill=fuel,\n region=-fuel_outer_radius)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='1.6% Clad', fill=zircaloy)\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='1.6% Moderator', fill=water,\n region=+clad_outer_radius)\n\n# Create a Universe to encapsulate a fuel pin\npin_cell_universe = openmc.Universe(name='1.6% Fuel Pin', cells=[\n fuel_cell, clad_cell, moderator_cell\n])",
"Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.",
"# Create fuel assembly Lattice\nassembly = openmc.RectLattice(name='1.6% Fuel - 0BA')\nassembly.pitch = (1.26, 1.26)\nassembly.lower_left = [-1.26 * 17. / 2.0] * 2\nassembly.universes = [[pin_cell_universe] * 17] * 17",
"OpenMC requires that there is a \"root\" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.",
"# Create root Cell\nroot_cell = openmc.Cell(name='root cell', fill=assembly)\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z\n\n# Create root Universe\nroot_universe = openmc.Universe(name='root universe')\nroot_universe.add_cell(root_cell)",
"We now must create a geometry that is assigned a root universe and export it to XML.",
"# Create Geometry and export to \"geometry.xml\"\ngeometry = openmc.Geometry(root_universe)\ngeometry.export_to_xml()",
"With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 minimum active batches each with 2500 particles. We also tell OpenMC to turn tally triggers on, which means it will keep running until some criterion on the uncertainty of tallies is reached.",
"# OpenMC simulation parameters\nmin_batches = 20\nmax_batches = 200\ninactive = 5\nparticles = 2500\n\n# Instantiate a Settings object\nsettings = openmc.Settings()\nsettings.batches = min_batches\nsettings.inactive = inactive\nsettings.particles = particles\nsettings.output = {'tallies': False}\nsettings.trigger_active = True\nsettings.trigger_max_batches = max_batches\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings.source = openmc.source.Source(space=uniform_dist)\n\n# Export to \"settings.xml\"\nsettings.export_to_xml()",
"Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.",
"# Instantiate a Plot\nplot = openmc.Plot(plot_id=1)\nplot.filename = 'materials-xy'\nplot.origin = [0, 0, 0]\nplot.width = [21.5, 21.5]\nplot.pixels = [250, 250]\nplot.color_by = 'material'\n\n# Instantiate a Plots collection and export to \"plots.xml\"\nplot_file = openmc.Plots([plot])\nplot_file.export_to_xml()",
"With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.",
"# Run openmc in plotting mode\nopenmc.plot_geometry(output=False)\n\n# Convert OpenMC's funky ppm to png\n!convert materials-xy.ppm materials-xy.png\n\n# Display the materials plot inline\nImage(filename='materials-xy.png')",
"As we can see from the plot, we have a nice array of pin cells with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.",
"# Instantiate an empty Tallies object\ntallies = openmc.Tallies()",
"Instantiate a fission rate mesh Tally",
"# Instantiate a tally Mesh\nmesh = openmc.Mesh(mesh_id=1)\nmesh.type = 'regular'\nmesh.dimension = [17, 17]\nmesh.lower_left = [-10.71, -10.71]\nmesh.width = [1.26, 1.26]\n\n# Instantiate tally Filter\nmesh_filter = openmc.MeshFilter(mesh)\n\n# Instantiate energy Filter\nenergy_filter = openmc.EnergyFilter([0, 0.625, 20.0e6])\n\n# Instantiate the Tally\ntally = openmc.Tally(name='mesh tally')\ntally.filters = [mesh_filter, energy_filter]\ntally.scores = ['fission', 'nu-fission']\n\n# Add mesh and Tally to Tallies\ntallies.append(tally)",
"Instantiate a cell Tally with nuclides",
"# Instantiate tally Filter\ncell_filter = openmc.CellFilter(fuel_cell)\n\n# Instantiate the tally\ntally = openmc.Tally(name='cell tally')\ntally.filters = [cell_filter]\ntally.scores = ['scatter']\ntally.nuclides = ['U235', 'U238']\n\n# Add mesh and tally to Tallies\ntallies.append(tally)",
"Create a \"distribcell\" Tally. The distribcell filter allows us to tally multiple repeated instances of the same cell throughout the geometry.",
"# Instantiate tally Filter\ndistribcell_filter = openmc.DistribcellFilter(moderator_cell)\n\n# Instantiate tally Trigger for kicks\ntrigger = openmc.Trigger(trigger_type='std_dev', threshold=5e-5)\ntrigger.scores = ['absorption']\n\n# Instantiate the Tally\ntally = openmc.Tally(name='distribcell tally')\ntally.filters = [distribcell_filter]\ntally.scores = ['absorption', 'scatter']\ntally.triggers = [trigger]\n\n# Add mesh and tally to Tallies\ntallies.append(tally)\n\n# Export to \"tallies.xml\"\ntallies.export_to_xml()",
"Now we a have a complete set of inputs, so we can go ahead and run our simulation.",
"# Remove old HDF5 (summary, statepoint) files\n!rm statepoint.*\n\n# Run OpenMC!\nopenmc.run()",
"Tally Data Processing",
"# We do not know how many batches were needed to satisfy the \n# tally trigger(s), so find the statepoint file(s)\nstatepoints = glob.glob('statepoint.*.h5')\n\n# Load the last statepoint file\nsp = openmc.StatePoint(statepoints[-1])",
"Analyze the mesh fission rate tally",
"# Find the mesh tally with the StatePoint API\ntally = sp.get_tally(name='mesh tally')\n\n# Print a little info about the mesh tally to the screen\nprint(tally)",
"Use the new Tally data retrieval API with pure NumPy",
"# Get the relative error for the thermal fission reaction \n# rates in the four corner pins \ndata = tally.get_values(scores=['fission'],\n filters=[openmc.MeshFilter, openmc.EnergyFilter], \\\n filter_bins=[((1,1),(1,17), (17,1), (17,17)), \\\n ((0., 0.625),)], value='rel_err')\nprint(data)\n\n# Get a pandas dataframe for the mesh tally data\ndf = tally.get_pandas_dataframe(nuclides=False)\n\n# Set the Pandas float display settings\npd.options.display.float_format = '{:.2e}'.format\n\n# Print the first twenty rows in the dataframe\ndf.head(20)\n\n# Create a boxplot to view the distribution of\n# fission and nu-fission rates in the pins\nbp = df.boxplot(column='mean', by='score')\n\n# Extract thermal nu-fission rates from pandas\nfiss = df[df['score'] == 'nu-fission']\nfiss = fiss[fiss['energy low [eV]'] == 0.0]\n\n# Extract mean and reshape as 2D NumPy arrays\nmean = fiss['mean'].values.reshape((17,17))\n\nplt.imshow(mean, interpolation='nearest')\nplt.title('fission rate')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.colorbar()",
"Analyze the cell+nuclides scatter-y2 rate tally",
"# Find the cell Tally with the StatePoint API\ntally = sp.get_tally(name='cell tally')\n\n# Print a little info about the cell tally to the screen\nprint(tally)\n\n# Get a pandas dataframe for the cell tally data\ndf = tally.get_pandas_dataframe()\n\n# Print the first twenty rows in the dataframe\ndf.head(20)",
"Use the new Tally data retrieval API with pure NumPy",
"# Get the standard deviations the total scattering rate\ndata = tally.get_values(scores=['scatter'], \n nuclides=['U238', 'U235'], value='std_dev')\nprint(data)",
"Analyze the distribcell tally",
"# Find the distribcell Tally with the StatePoint API\ntally = sp.get_tally(name='distribcell tally')\n\n# Print a little info about the distribcell tally to the screen\nprint(tally)",
"Use the new Tally data retrieval API with pure NumPy",
"# Get the relative error for the scattering reaction rates in\n# the first 10 distribcell instances \ndata = tally.get_values(scores=['scatter'], filters=[openmc.DistribcellFilter],\n filter_bins=[tuple(range(10))], value='rel_err')\nprint(data)",
"Print the distribcell tally dataframe",
"# Get a pandas dataframe for the distribcell tally data\ndf = tally.get_pandas_dataframe(nuclides=False)\n\n# Print the last twenty rows in the dataframe\ndf.tail(20)\n\n# Show summary statistics for absorption distribcell tally data\nabsorption = df[df['score'] == 'absorption']\nabsorption[['mean', 'std. dev.']].dropna().describe()\n\n# Note that the maximum standard deviation does indeed\n# meet the 5e-5 threshold set by the tally trigger",
"Perform a statistical test comparing the tally sample distributions for two categories of fuel pins.",
"# Extract tally data from pins in the pins divided along y=-x diagonal \nmulti_index = ('level 2', 'lat',)\nlower = df[df[multi_index + ('x',)] + df[multi_index + ('y',)] < 16]\nupper = df[df[multi_index + ('x',)] + df[multi_index + ('y',)] > 16]\nlower = lower[lower['score'] == 'absorption']\nupper = upper[upper['score'] == 'absorption']\n\n# Perform non-parametric Mann-Whitney U Test to see if the \n# absorption rates (may) come from same sampling distribution\nu, p = scipy.stats.mannwhitneyu(lower['mean'], upper['mean'])\nprint('Mann-Whitney Test p-value: {0}'.format(p))",
"Note that the symmetry implied by the y=-x diagonal ensures that the two sampling distributions are identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\\alpha$=0.05) one would not reject the null hypothesis that the two sampling distributions are identical.\nNext, perform the same test but with two groupings of pins which are not symmetrically identical to one another.",
"# Extract tally data from pins in the pins divided along y=x diagonal\nmulti_index = ('level 2', 'lat',)\nlower = df[df[multi_index + ('x',)] > df[multi_index + ('y',)]]\nupper = df[df[multi_index + ('x',)] < df[multi_index + ('y',)]]\nlower = lower[lower['score'] == 'absorption']\nupper = upper[upper['score'] == 'absorption']\n\n# Perform non-parametric Mann-Whitney U Test to see if the \n# absorption rates (may) come from same sampling distribution\nu, p = scipy.stats.mannwhitneyu(lower['mean'], upper['mean'])\nprint('Mann-Whitney Test p-value: {0}'.format(p))",
"Note that the asymmetry implied by the y=x diagonal ensures that the two sampling distributions are not identical. Indeed, as illustrated by the test above, for any reasonable significance level (e.g., $\\alpha$=0.05) one would reject the null hypothesis that the two sampling distributions are identical.",
"# Extract the scatter tally data from pandas\nscatter = df[df['score'] == 'scatter']\n\nscatter['rel. err.'] = scatter['std. dev.'] / scatter['mean']\n\n# Show a scatter plot of the mean vs. the std. dev.\nscatter.plot(kind='scatter', x='mean', y='rel. err.', title='Scattering Rates')\n\n# Plot a histogram and kernel density estimate for the scattering rates\nscatter['mean'].plot(kind='hist', bins=25)\nscatter['mean'].plot(kind='kde')\nplt.title('Scattering Rates')\nplt.xlabel('Mean')\nplt.legend(['KDE', 'Histogram'])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
charmoniumQ/Surprise
|
examples/notebooks/KNNBasic_analysis.ipynb
|
bsd-3-clause
|
[
"Analysis of the KNNBasic algorithm\nIn this notebook, we will run a basic neighborhood algorithm on the movielens dataset, dump the results, and use pandas to make some data analysis.",
"from __future__ import (absolute_import, division, print_function, \n unicode_literals) \nimport pickle\nimport os\n\nimport pandas as pd\n\nfrom surprise import KNNBasic\nfrom surprise import Dataset \nfrom surprise import Reader \nfrom surprise import dump\nfrom surprise.accuracy import rmse\n\n# We will train and test on the u1.base and u1.test files of the movielens-100k dataset.\n# if you haven't already, you need to download the movielens-100k dataset\n# You can do it manually, or by running:\n\n#Dataset.load_builtin('ml-100k')\n\n# Now, let's load the dataset\ntrain_file = os.path.expanduser('~') + '/.surprise_data/ml-100k/ml-100k/u1.base'\ntest_file = os.path.expanduser('~') + '/.surprise_data/ml-100k/ml-100k/u1.test'\ndata = Dataset.load_from_folds([(train_file, test_file)], Reader('ml-100k'))\n\n \n# We'll use a basic nearest neighbor approach, where similarities are computed\n# between users.\nalgo = KNNBasic() \n\nfor trainset, testset in data.folds(): \n algo.train(trainset) \n predictions = algo.test(testset)\n rmse(predictions)\n \n dump.dump('./dump_file', predictions, algo)\n\n# The dump has been saved and we can now use it whenever we want.\n# Let's load it and see what we can do\npredictions, algo = dump.load('./dump_file')\n\ntrainset = algo.trainset\nprint('algo: {0}, k = {1}, min_k = {2}'.format(algo.__class__.__name__, algo.k, algo.min_k))\n\n# Let's build a pandas dataframe with all the predictions\n\ndef get_Iu(uid):\n \"\"\"Return the number of items rated by given user\n \n Args:\n uid: The raw id of the user.\n Returns:\n The number of items rated by the user.\n \"\"\"\n \n try:\n return len(trainset.ur[trainset.to_inner_uid(uid)])\n except ValueError: # user was not part of the trainset\n return 0\n \ndef get_Ui(iid):\n \"\"\"Return the number of users that have rated given item\n \n Args:\n iid: The raw id of the item.\n Returns:\n The number of users that have rated the item.\n \"\"\"\n \n try:\n return len(trainset.ir[trainset.to_inner_iid(iid)])\n except ValueError: # item was not part of the trainset\n return 0\n\ndf = pd.DataFrame(predictions, columns=['uid', 'iid', 'rui', 'est', 'details']) \ndf['Iu'] = df.uid.apply(get_Iu)\ndf['Ui'] = df.iid.apply(get_Ui)\ndf['err'] = abs(df.est - df.rui)\n\ndf.head()\n\nbest_predictions = df.sort_values(by='err')[:10]\nworst_predictions = df.sort_values(by='err')[-10:]\n\n# Let's take a look at the best predictions of the algorithm\nbest_predictions",
"It's interesting to note that these perfect predictions are actually lucky shots: $|U_i|$ is always very small, meaning that very few users have rated the target item. This implies that the set of neighbors is very small (see the actual_k field)... And, it just happens that all the ratings from the neighbors are the same (and mostly, are equal to that of the target user).\nThis may be a bit surprising but these lucky shots are actually very important to the accuracy of the algorithm... Try running the same algorithm with a value of min_k equal to $10$. This means that if there are less than $10$ neighbors, the prediction is set to the mean of all ratings. You'll see your accuracy decrease!",
"# Now, let's look at the prediction with the biggest error\nworst_predictions",
"Let's focus first on the last two predictions. Well, we can't do much about them. We should have predicted $5$, but the only available neighbor had a rating of $1$, so we were screwed. The only way to avoid this kind of errors would be to increase the min_k parameter, but it would actually worsen the accuracy (see note above).\nHow about the other ones? It seems that for each prediction, the users are some kind of outsiders: they rated their item with a rating of $1$ when the most of the ratings for the item where high (or inversely, rated a bad item with a rating of $5$). See the plot below as an illustration for the first rating.\nThese are situations where baseline estimates would be quite helpful, in order to deal with highly biased users (and items).",
"from collections import Counter\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib notebook\nmatplotlib.style.use('ggplot')\n\ncounter = Counter([r for (_, r) in trainset.ir[trainset.to_inner_iid('302')]])\npd.DataFrame.from_dict(counter, orient='index').plot(kind='bar', legend=False)\nplt.xlabel('Rating value')\nplt.ylabel('Number of users')\nplt.title('Number of users having rated item 302')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Ensembl/cttv024
|
tests/reports/template.ipynb
|
apache-2.0
|
[
"# This cell contains default parameters values for execution by `papermill`.\nfilename = '../sample_data/postgap.20180817.asthma.txt.gz'",
"POSTGAP Report\nThis notebook was automatically generated as a summary of POSTGAP output.\nSetup\nNote that for command line usage (python reporter.py <filename>) the following will work just fine. However, to edit the template, temporarily change the following to import helpers.",
"from reports import helpers\n\nhelpers.calc_run_str()\n\n# pg = pd.read_csv(filename, sep='\\t', na_values=['None'])\npg = helpers.load_file(filename)",
"Headline\nQ: How many rows and columns?",
"print(pg.shape)",
"Q: How many unique target-disease associations?",
"helpers.calc_g2d_pair_counts(pg)",
"Q: What is the distribution of unique diseases per gene? And vice versa?",
"helpers.calc_pairwise_degree_dist(pg, 'gene_id', 'disease_efo_id', 'Gene', 'Disease')",
"Identifiers\nQ: How many unique values appear for each identifier?",
"helpers.calc_id_field_counts(pg)",
"Q: What is the maximum number of rows for a given fixed identifier?",
"helpers.calc_id_field_max_rows(pg)",
"Identifier pairs\nQ: How many unique identifier pairs appear?",
"helpers.calc_id_field_pair_counts(pg)",
"Gene-LD SNP associations\nQ: What is the distribution of each association subscore (VEP, GTEx, etc.)?",
"helpers.calc_g2v_field_hists(pg)",
"Q: What is the distribution of unique LD SNPs per gene? And vice versa?",
"helpers.calc_pairwise_degree_dist(pg, 'gene_id', 'ld_snp_rsID', 'Gene', 'LD SNP')",
"Q: What is the overlap between presence of association subscores?",
"helpers.calc_g2v_field_overlap(pg)",
"Q: What is the joint distribution between association subscore pairs (ie. how correlated are they)?",
"helpers.calc_g2v_field_cross_dists(pg)",
"LD SNP-GWAS SNP associations\nQ: What is the distribution of r2?",
"helpers.calc_dist_r2(pg)",
"Q: What is the distribution of unique GWAS SNPs per LD SNP? And vice versa?",
"helpers.calc_pairwise_degree_dist(pg, 'ld_snp_rsID', 'gwas_snp', 'LD SNP', 'GWAS SNP')",
"GWAS SNP-Disease associations\nQ: What are the distributions of (gwas_pvalue, gwas_beta, gwas_odds_ratio)?",
"helpers.calc_v2d_field_hists(pg)",
"Q: What is the distribution of unique diseases per GWAS SNP? And vice versa?",
"helpers.calc_pairwise_degree_dist(pg, 'gwas_snp', 'disease_efo_id', 'GWAS SNP', 'Disease')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Hvass-Labs/TensorFlow-Tutorials
|
05_Ensemble_Learning.ipynb
|
mit
|
[
"TensorFlow Tutorial #05\nEnsemble Learning\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nWARNING!\nThis tutorial does not work with TensorFlow v. 1.9 due to the PrettyTensor builder API apparently no longer being updated and supported by the Google Developers. It is recommended that you use the Keras API instead, which also makes it much easier to train or load multiple models to create an ensemble, see e.g. Tutorial #10 for inspiration on how to load and use pre-trained models using Keras.\nIntroduction\nThis tutorial shows how to use a so-called ensemble of convolutional neural networks. Instead of using a single neural network, we use several neural networks and average their outputs.\nThis is used on the MNIST data-set for recognizing hand-written digits. The ensemble improves the classification accuracy slightly on the test-set, but the difference is so small that it is possibly random. Furthermore, the ensemble mis-classifies some images that are correctly classified by some of the individual networks.\nThis tutorial builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text here is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.\nFlowchart\nThe following chart shows roughly how the data flows in a single Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.\nThis tutorial implements an ensemble of 5 such neural networks, where the network structure is the same but the weights and other variables are different for each network.",
"from IPython.display import Image\nImage('images/02_network_flowchart.png')",
"Imports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math\nimport os\n\n# Use PrettyTensor to simplify Neural Network construction.\nimport prettytensor as pt",
"This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:",
"tf.__version__",
"PrettyTensor version:",
"pt.__version__",
"Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.",
"from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)",
"The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets, but we will make random training-sets further below.",
"print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))",
"Class numbers\nThe class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.",
"data.test.cls = np.argmax(data.test.labels, axis=1)\ndata.validation.cls = np.argmax(data.validation.labels, axis=1)",
"Helper-function for creating random training-sets\nWe will train 5 neural networks on different training-sets that are selected at random. First we combine the original training- and validation-sets into one big set. This is done for both the images and the labels.",
"combined_images = np.concatenate([data.train.images, data.validation.images], axis=0)\ncombined_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0)",
"Check that the shape of the combined arrays is correct.",
"print(combined_images.shape)\nprint(combined_labels.shape)",
"Size of the combined data-set.",
"combined_size = len(combined_images)\ncombined_size",
"Define the size of the training-set used for each neural network. You can try and change this.",
"train_size = int(0.8 * combined_size)\ntrain_size",
"We do not use a validation-set during training, but this would be the size.",
"validation_size = combined_size - train_size\nvalidation_size",
"Helper-function for splitting the combined data-set into a random training- and validation-set.",
"def random_training_set():\n # Create a randomized index into the full / combined training-set.\n idx = np.random.permutation(combined_size)\n\n # Split the random index into training- and validation-sets.\n idx_train = idx[0:train_size]\n idx_validation = idx[train_size:]\n\n # Select the images and labels for the new training-set.\n x_train = combined_images[idx_train, :]\n y_train = combined_labels[idx_train, :]\n\n # Select the images and labels for the new validation-set.\n x_validation = combined_images[idx_validation, :]\n y_validation = combined_labels[idx_validation, :]\n\n # Return the new training- and validation-sets.\n return x_train, y_train, x_validation, y_validation",
"Data Dimensions\nThe data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.",
"# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10",
"Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.",
"def plot_images(images, # Images to plot, 2-d array.\n cls_true, # True class-no for images.\n ensemble_cls_pred=None, # Ensemble predicted class-no.\n best_cls_pred=None): # Best-net predicted class-no.\n\n assert len(images) == len(cls_true)\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n\n # Adjust vertical spacing if we need to print ensemble and best-net.\n if ensemble_cls_pred is None:\n hspace = 0.3\n else:\n hspace = 1.0\n fig.subplots_adjust(hspace=hspace, wspace=0.3)\n\n # For each of the sub-plots.\n for i, ax in enumerate(axes.flat):\n\n # There may not be enough images for all sub-plots.\n if i < len(images):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if ensemble_cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n msg = \"True: {0}\\nEnsemble: {1}\\nBest Net: {2}\"\n xlabel = msg.format(cls_true[i],\n ensemble_cls_pred[i],\n best_cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()",
"Plot a few images to see if data is correct",
"# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)",
"TensorFlow Graph\nThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.\nTensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.\nTensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.\nA TensorFlow graph consists of the following parts which will be detailed below:\n\nPlaceholder variables used for inputting data to the graph.\nVariables that are going to be optimized so as to make the convolutional network perform better.\nThe mathematical formulas for the neural network.\nA loss measure that can be used to guide the optimization of the variables.\nAn optimization method which updates the variables.\n\nIn addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.\nPlaceholder variables\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.",
"x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')",
"The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:",
"x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])",
"Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.",
"y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')",
"We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.",
"y_true_cls = tf.argmax(y_true, dimension=1)",
"Neural Network\nThis section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.\nThe basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.",
"x_pretty = pt.wrap(x_image)",
"Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.\nNote that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.",
"with pt.defaults_scope(activation_fn=tf.nn.relu):\n y_pred, loss = x_pretty.\\\n conv2d(kernel=5, depth=16, name='layer_conv1').\\\n max_pool(kernel=2, stride=2).\\\n conv2d(kernel=5, depth=36, name='layer_conv2').\\\n max_pool(kernel=2, stride=2).\\\n flatten().\\\n fully_connected(size=128, name='layer_fc1').\\\n softmax_classifier(num_classes=num_classes, labels=y_true)",
"Optimization Method\nPretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.\nIt is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.\nNote that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.",
"optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)",
"Performance Measures\nWe need a few more performance measures to display the progress to the user.\nFirst we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.",
"y_pred_cls = tf.argmax(y_pred, dimension=1)",
"Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.",
"correct_prediction = tf.equal(y_pred_cls, y_true_cls)",
"The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.",
"accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))",
"Saver\nIn order to save the variables of the neural network, we now create a Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below.\nNote that if you have more than 100 neural networks in the ensemble then you must increase max_to_keep accordingly.",
"saver = tf.train.Saver(max_to_keep=100)",
"This is the directory used for saving and retrieving the data.",
"save_dir = 'checkpoints/'",
"Create the directory if it does not exist.",
"if not os.path.exists(save_dir):\n os.makedirs(save_dir)",
"This function returns the save-path for the data-file with the given network number.",
"def get_save_path(net_number):\n return save_dir + 'network' + str(net_number)",
"TensorFlow Run\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.",
"session = tf.Session()",
"Initialize variables\nThe variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it several times below.",
"def init_variables():\n session.run(tf.initialize_all_variables())",
"Helper-function to create a random training batch.\nThere are thousands of images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.",
"train_batch_size = 64",
"Function for selecting a random training-batch of the given size.",
"def random_batch(x_train, y_train):\n # Total number of images in the training-set.\n num_images = len(x_train)\n\n # Create a random index into the training-set.\n idx = np.random.choice(num_images,\n size=train_batch_size,\n replace=False)\n\n # Use the random index to select random images and labels.\n x_batch = x_train[idx, :] # Images.\n y_batch = y_train[idx, :] # Labels.\n\n # Return the batch.\n return x_batch, y_batch",
"Helper-function to perform optimization iterations\nFunction for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.",
"def optimize(num_iterations, x_train, y_train):\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = random_batch(x_train, y_train)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations and after last iteration.\n if i % 100 == 0:\n\n # Calculate the accuracy on the training-batch.\n acc = session.run(accuracy, feed_dict=feed_dict_train)\n \n # Status-message for printing.\n msg = \"Optimization Iteration: {0:>6}, Training Batch Accuracy: {1:>6.1%}\"\n\n # Print it.\n print(msg.format(i + 1, acc))\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))",
"Create ensemble of neural networks\nNumber of neural networks in the ensemble.",
"num_networks = 5",
"Number of optimization iterations for each neural network.",
"num_iterations = 10000",
"Create the ensemble of neural networks. All networks use the same TensorFlow graph that was defined above. For each neural network the TensorFlow weights and variables are initialized to random values and then optimized. The variables are then saved to disk so they can be reloaded later.\nYou may want to skip this computation if you just want to re-run the Notebook with different analysis of the results.",
"if True:\n # For each of the neural networks.\n for i in range(num_networks):\n print(\"Neural network: {0}\".format(i))\n\n # Create a random training-set. Ignore the validation-set.\n x_train, y_train, _, _ = random_training_set()\n\n # Initialize the variables of the TensorFlow graph.\n session.run(tf.global_variables_initializer())\n\n # Optimize the variables using this training-set.\n optimize(num_iterations=num_iterations,\n x_train=x_train,\n y_train=y_train)\n\n # Save the optimized variables to disk.\n saver.save(sess=session, save_path=get_save_path(i))\n\n # Print newline.\n print()",
"Helper-functions for calculating and predicting classifications\nThis function calculates the predicted labels of images, that is, for each image it calculates a vector of length 10 indicating which of the 10 classes the image is.\nThe calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.",
"# Split the data-set in batches of this size to limit RAM usage.\nbatch_size = 256\n\ndef predict_labels(images):\n # Number of images.\n num_images = len(images)\n\n # Allocate an array for the predicted labels which\n # will be calculated in batches and filled into this array.\n pred_labels = np.zeros(shape=(num_images, num_classes),\n dtype=np.float)\n\n # Now calculate the predicted labels for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_images:\n # The ending index for the next batch is denoted j.\n j = min(i + batch_size, num_images)\n\n # Create a feed-dict with the images between index i and j.\n feed_dict = {x: images[i:j, :]}\n\n # Calculate the predicted labels using TensorFlow.\n pred_labels[i:j] = session.run(y_pred, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n return pred_labels",
"Calculate a boolean array whether the predicted classes for the images are correct.",
"def correct_prediction(images, labels, cls_true):\n # Calculate the predicted labels.\n pred_labels = predict_labels(images=images)\n\n # Calculate the predicted class-number for each image.\n cls_pred = np.argmax(pred_labels, axis=1)\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n return correct",
"Calculate a boolean array whether the images in the test-set are classified correctly.",
"def test_correct():\n return correct_prediction(images = data.test.images,\n labels = data.test.labels,\n cls_true = data.test.cls)",
"Calculate a boolean array whether the images in the validation-set are classified correctly.",
"def validation_correct():\n return correct_prediction(images = data.validation.images,\n labels = data.validation.labels,\n cls_true = data.validation.cls)",
"Helper-functions for calculating the classification accuracy\nThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4",
"def classification_accuracy(correct):\n # When averaging a boolean array, False means 0 and True means 1.\n # So we are calculating: number of True / len(correct) which is\n # the same as the classification accuracy.\n return correct.mean()",
"Calculate the classification accuracy on the test-set.",
"def test_accuracy():\n # Get the array of booleans whether the classifications are correct\n # for the test-set.\n correct = test_correct()\n \n # Calculate the classification accuracy and return it.\n return classification_accuracy(correct)",
"Calculate the classification accuracy on the original validation-set.",
"def validation_accuracy():\n # Get the array of booleans whether the classifications are correct\n # for the validation-set.\n correct = validation_correct()\n \n # Calculate the classification accuracy and return it.\n return classification_accuracy(correct)",
"Results and analysis\nFunction for calculating the predicted labels for all the neural networks in the ensemble. The labels are combined further below.",
"def ensemble_predictions():\n # Empty list of predicted labels for each of the neural networks.\n pred_labels = []\n\n # Classification accuracy on the test-set for each network.\n test_accuracies = []\n\n # Classification accuracy on the validation-set for each network.\n val_accuracies = []\n\n # For each neural network in the ensemble.\n for i in range(num_networks):\n # Reload the variables into the TensorFlow graph.\n saver.restore(sess=session, save_path=get_save_path(i))\n\n # Calculate the classification accuracy on the test-set.\n test_acc = test_accuracy()\n\n # Append the classification accuracy to the list.\n test_accuracies.append(test_acc)\n\n # Calculate the classification accuracy on the validation-set.\n val_acc = validation_accuracy()\n\n # Append the classification accuracy to the list.\n val_accuracies.append(val_acc)\n\n # Print status message.\n msg = \"Network: {0}, Accuracy on Validation-Set: {1:.4f}, Test-Set: {2:.4f}\"\n print(msg.format(i, val_acc, test_acc))\n\n # Calculate the predicted labels for the images in the test-set.\n # This is already calculated in test_accuracy() above but\n # it is re-calculated here to keep the code a bit simpler.\n pred = predict_labels(images=data.test.images)\n\n # Append the predicted labels to the list.\n pred_labels.append(pred)\n \n return np.array(pred_labels), \\\n np.array(test_accuracies), \\\n np.array(val_accuracies)\n\npred_labels, test_accuracies, val_accuracies = ensemble_predictions()",
"Summarize the classification accuracies on the test-set for the neural networks in the ensemble.",
"print(\"Mean test-set accuracy: {0:.4f}\".format(np.mean(test_accuracies)))\nprint(\"Min test-set accuracy: {0:.4f}\".format(np.min(test_accuracies)))\nprint(\"Max test-set accuracy: {0:.4f}\".format(np.max(test_accuracies)))",
"The predicted labels of the ensemble is a 3-dim array, the first dim is the network-number, the second dim is the image-number, the third dim is the classification vector.",
"pred_labels.shape",
"Ensemble predictions\nThere are different ways to calculate the predicted labels for the ensemble. One way is to calculate the predicted class-number for each neural network, and then select the class-number with most votes. But this requires a large number of neural networks relative to the number of classes.\nThe method used here is instead to take the average of the predicted labels for all the networks in the ensemble. This is simple to calculate and does not require a large number of networks in the ensemble.",
"ensemble_pred_labels = np.mean(pred_labels, axis=0)\nensemble_pred_labels.shape",
"The ensemble's predicted class number is then the index of the highest number in the label, which is calculated using argmax as usual.",
"ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)\nensemble_cls_pred.shape",
"Boolean array whether each of the images in the test-set was correctly classified by the ensemble of neural networks.",
"ensemble_correct = (ensemble_cls_pred == data.test.cls)",
"Negate the boolean array so we can use it to lookup incorrectly classified images.",
"ensemble_incorrect = np.logical_not(ensemble_correct)",
"Best neural network\nNow we find the single neural network that performed best on the test-set.\nFirst list the classification accuracies on the test-set for all the neural networks in the ensemble.",
"test_accuracies",
"The index of the neural network with the highest classification accuracy.",
"best_net = np.argmax(test_accuracies)\nbest_net",
"The best neural network's classification accuracy on the test-set.",
"test_accuracies[best_net]",
"Predicted labels of the best neural network.",
"best_net_pred_labels = pred_labels[best_net, :, :]",
"The predicted class-number.",
"best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1)",
"Boolean array whether the best neural network classified each image in the test-set correctly.",
"best_net_correct = (best_net_cls_pred == data.test.cls)",
"Boolean array whether each image is incorrectly classified.",
"best_net_incorrect = np.logical_not(best_net_correct)",
"Comparison of ensemble vs. the best single network\nThe number of images in the test-set that were correctly classified by the ensemble.",
"np.sum(ensemble_correct)",
"The number of images in the test-set that were correctly classified by the best neural network.",
"np.sum(best_net_correct)",
"Boolean array whether each image in the test-set was correctly classified by the ensemble and incorrectly classified by the best neural network.",
"ensemble_better = np.logical_and(best_net_incorrect,\n ensemble_correct)",
"Number of images in the test-set where the ensemble was better than the best single network:",
"ensemble_better.sum()",
"Boolean array whether each image in the test-set was correctly classified by the best single network and incorrectly classified by the ensemble.",
"best_net_better = np.logical_and(best_net_correct,\n ensemble_incorrect)",
"Number of images in the test-set where the best single network was better than the ensemble.",
"best_net_better.sum()",
"Helper-functions for plotting and printing comparisons\nFunction for plotting images from the test-set and their true and predicted class-numbers.",
"def plot_images_comparison(idx):\n plot_images(images=data.test.images[idx, :],\n cls_true=data.test.cls[idx],\n ensemble_cls_pred=ensemble_cls_pred[idx],\n best_cls_pred=best_net_cls_pred[idx])",
"Function for printing the predicted labels.",
"def print_labels(labels, idx, num=1):\n # Select the relevant labels based on idx.\n labels = labels[idx, :]\n\n # Select the first num labels.\n labels = labels[0:num, :]\n \n # Round numbers to 2 decimal points so they are easier to read.\n labels_rounded = np.round(labels, 2)\n\n # Print the rounded labels.\n print(labels_rounded)",
"Function for printing the predicted labels for the ensemble of neural networks.",
"def print_labels_ensemble(idx, **kwargs):\n print_labels(labels=ensemble_pred_labels, idx=idx, **kwargs)",
"Function for printing the predicted labels for the best single network.",
"def print_labels_best_net(idx, **kwargs):\n print_labels(labels=best_net_pred_labels, idx=idx, **kwargs)",
"Function for printing the predicted labels of all the neural networks in the ensemble. This only prints the labels for the first image.",
"def print_labels_all_nets(idx):\n for i in range(num_networks):\n print_labels(labels=pred_labels[i, :, :], idx=idx, num=1)",
"Examples: Ensemble is better than the best network\nPlot examples of images that were correctly classified by the ensemble and incorrectly classified by the best single network.",
"plot_images_comparison(idx=ensemble_better)",
"The ensemble's predicted labels for the first of these images (top left image):",
"print_labels_ensemble(idx=ensemble_better, num=1)",
"The best network's predicted labels for the first of these images:",
"print_labels_best_net(idx=ensemble_better, num=1)",
"The predicted labels of all the networks in the ensemble, for the first of these images:",
"print_labels_all_nets(idx=ensemble_better)",
"Examples: Best network is better than ensemble\nNow plot examples of images that were incorrectly classified by the ensemble but correctly classified by the best single network.",
"plot_images_comparison(idx=best_net_better)",
"The ensemble's predicted labels for the first of these images (top left image):",
"print_labels_ensemble(idx=best_net_better, num=1)",
"The best single network's predicted labels for the first of these images:",
"print_labels_best_net(idx=best_net_better, num=1)",
"The predicted labels of all the networks in the ensemble, for the first of these images:",
"print_labels_all_nets(idx=best_net_better)",
"Close TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.",
"# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\n# session.close()",
"Conclusion\nThis tutorial created an ensemble of 5 convolutional neural networks for classifying hand-written digits in the MNIST data-set. The ensemble worked by averaging the predicted class-labels of the 5 individual neural networks. This resulted in slightly improved classification accuracy on the test-set, with the ensemble having an accuracy of 99.1% compared to 98.9% for the best individual network.\nHowever, the ensemble did not always perform better than the individual neural networks, which sometimes classified images correctly while the ensemble misclassified those images. This suggests that the effect of using an ensemble of neural networks is somewhat random and may not provide a reliable way of improving the performance over a single neural network.\nThe form of ensemble learning used here is called bagging (or Bootstrap Aggregating), which is mainly useful for avoiding overfitting and may not be necessary for this particular neural network and data-set. So it is still possible that ensemble learning may work in other settings.\nTechnical Note\nThis implementation of ensemble learning used the TensorFlow Saver()-object to save and reload the variables of the neural network. But this functionality was really designed for another purpose and becomes very awkward to use for ensemble learning with different types of neural networks, or if you want to load multiple neural networks at the same time. There's an add-on package for TensorFlow called sk-flow which makes this much easier, but it is still in the early stages of development as of August 2016.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nChange different aspects of this program to see how it affects the performance:\nUse more neural networks in the ensemble.\nChange the size of the training-sets.\nChange the number of optimization iterations, try both more and less.\n\n\nExplain to a friend how the program works.\nDo you think Ensemble Learning is worth more research effort, or should you rather focus on improving a single neural network?\n\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
martinjrobins/hobo
|
examples/plotting/mcmc-pairwise-scatterplots.ipynb
|
bsd-3-clause
|
[
"Inference plots - Pairwise scatterplots\nThis example builds on adaptive covariance MCMC, and shows you how to plot the parameter distributions.\nInference plots:\n* Predicted time series\n* Trace plots\n* Autocorrelation\nSetting up an MCMC routine\nSee the adaptive covariance MCMC example for details.",
"import pints\nimport pints.toy as toy\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Load a forward model\nmodel = toy.LogisticModel()\n\n# Create some toy data\nreal_parameters = [0.015, 500] # growth rate, carrying capacity\ntimes = np.linspace(0, 1000, 100)\norg_values = model.simulate(real_parameters, times)\n\n# Add noise\nnoise = 50\nvalues = org_values + np.random.normal(0, noise, org_values.shape)\nreal_parameters = np.array(real_parameters + [noise])\n\n# Get properties of the noise sample\nnoise_sample_mean = np.mean(values - org_values)\nnoise_sample_std = np.std(values - org_values)\n\n# Create an object with links to the model and time series\nproblem = pints.SingleOutputProblem(model, times, values)\n\n# Create a log-likelihood function (adds an extra parameter!)\nlog_likelihood = pints.GaussianLogLikelihood(problem)\n\n# Create a uniform prior over both the parameters and the new noise variable\nlog_prior = pints.UniformLogPrior(\n [0.01, 400, noise*0.1],\n [0.02, 600, noise*100]\n )\n\n# Create a posterior log-likelihood (log(likelihood * prior))\nlog_posterior = pints.LogPosterior(log_likelihood, log_prior)\n\n# Perform sampling using MCMC, with a single chain\nx0 = real_parameters * 1.1\nmcmc = pints.MCMCController(log_posterior, 1, [x0])\nmcmc.set_max_iterations(6000)\nmcmc.set_log_to_screen(False)",
"Plotting 1d histograms\nWe can now run the MCMC routine and plot the histograms of the inferred parameters.",
"print('Running...')\nchains = mcmc.run()\nprint('Done!')\n\n# Select chain 0 and discard warm-up\nchain = chains[0]\nchain = chain[3000:]\n\nimport pints.plot\n\n# Plot the 1d histogram of each parameter\npints.plot.histogram([chain], parameter_names=['growth rate', 'carrying capacity', 'noise'])\nplt.show()",
"Plotting 2d histograms and a matrix of parameter distribution plots\nPlotting the histograms of two variables (showing their correlation) can be performed using pints.plot.pairwise with the parameter heatmap set to True. Additionally, by supplying the known real parameters, we see the locations of the true values appear as dotted lines in each plot.",
"pints.plot.pairwise(chain, heatmap=True, parameter_names=['growth rate', 'carrying capacity', 'noise'], ref_parameters=real_parameters)\nplt.show()",
"Matrix of parameter distribution plots with KDE\nIt is also possible to use kernel density estimation (KDE) to visualize the probability distributions of each parameter. Using the pints.plot.pairwise function, this time with the parameter kde set to True, we can create a matrix of scatterplots with KDE.",
"pints.plot.pairwise(chain, kde=True, parameter_names=['growth rate', 'carrying capacity', 'noise'], ref_parameters=real_parameters)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MartyWeissman/Python-for-number-theory
|
P3wNT Notebook 4.ipynb
|
gpl-3.0
|
[
"Part 4: Dictionaries, factorization, and multiplicative functions in Python 3.x\nThe list type is perfect for keeping track of ordered data. Here we introduce the dict (dictionary) type, which can be used for key-value pairs. This data structure is well suited for storing the prime decomposition of integers, in which each prime (key) is given an exponent (value). As we introduce the dict type, we also discuss some broader issues of objects and methods in Python programming.\nWe apply these programming concepts to prime decomposition and multiplicative functions (e.g., the divisor sum function). This material accompanies Chapter 2 of An Illustrated Theory of Numbers.\nTable of Contents\n\nDictionaries and factorization\nMultiplicative functions\n\n<a id='dictfact'></a>\nDictionaries and factorization\nLists, dictionaries, objects and methods\nLists, like [2,3,5,7] are data structures built for sequentially ordered data. The items of a list (in this case, the numbers 2,3,5,7) are indexed by natural numbers (in this case, the indices are 0,1,2,3). Python allows you to access the items of a list through their index.",
"L = [2,3,5,7]\ntype(L)\n\nprint(L[0]) # What is the output?\n\nprint(L[3]) # What is the output?\n\nprint(L[5]) # This should give an IndexError.",
"Python dictionaries are structures built for data that have a key-value structure. The keys are like indices. But instead of numerical indices (0,1,2,etc.), the keys can be any numbers or strings (technically, any hashable type)! Each key in the dictionary references a value, in the same way that each index of a list references an item. The syntax for defining a dictionary is {key1:value1, key2:value2, key3:value3, ...}. A first example is below. You can also read the official tutorial for more on dictionaries.",
"nemo = {'species':'clownfish', 'color':'orange', 'age':6}\n\nnemo['color'] # The key 'color' references the value 'orange'.\n\nnemo['age'] # Predict the result. Notice the quotes are necessary. The *string* 'age' is the key.\n\nnemo[1] # This yields a KeyError, since 1 is not a key in the dictionary.",
"Dictionaries can have values of any type, and their keys can be numbers or strings. In this case, the keys are all strings, while the values include strings and integers. In this way, dictionaries are useful for storing properties of different kinds -- they can be used to store [records](https://en.wikipedia.org/wiki/Record_(computer_science%29 ), as they are called in other programming languages.\nAn interlude on Python objects\nWe have discussed how Python stores data of various types: int, bool, str, list, dict, among others. But now seems like a good time to discuss the fundamental \"units\" which are stored: these are called Python objects. If you have executed the cells above, Python is currently storing a lot of objects in your computer's memory. These objects include nemo and L. Also L[0] is an object and nemo['age'] is an object. Each of these objects are occupying a little space in memory.\nWe reference these objects by the names we created, like nemo and L. But for internal purposes, Python assigns every object a unique ID number. You can see an object's ID number with the id function.",
"id(L)\n\nid(nemo)\n\nid(L[0])",
"It is sometimes useful to check the ID numbers of objects, to look \"under the hood\" a bit. For example, consider the following.",
"x = 3\ny = 3\nprint(x == y) # This should be true!\n\nid(x)\n\nid(y)",
"What happened? You probably noticed that both variables x and y have the same id number. That means that Python is being efficient, and not filling up two different slots of memory with the same number (3). Instead, it puts the number in one memory slot, and uses x and y as alternative names for this slot.\nBut what happens if we change a value of one variable?",
"x = 5\n\nid(x)\n\nid(y)",
"Python won't be confused by this. When we assigned x = 5, Python opened up a new memory slot for the number 5, and assigned x to refer to the number in this new slot. Note that y still \"points\" at the old slot. Python tries to be smart about memory, remembering where numbers are stored, and putting numbers into slots \"under the hood\" as it sees fit.",
"id(3) # Does Python remember where it put 3?\n\nid(5) # Does Python remember where it put 5?\n\nid(4) # 4 was probably not in memory before. But now it is!\n\ny = 5\n\nid(y) # Did Python change the number in a slot? Or did it point `y` at another slot?\n\nid(L[2]) # Python doesn't like to waste space.",
"This sort of memory management can be helpful to avoid repetetion. For example, consider a list with repetition.",
"R = [19,19,19]\n\nid(R) # The list itself is an object.\n\nid(R[0]) # The 0th item in the list is an object.\n\nid(R[1]) # The 1st item in the list is an object.\n\nid(R[2]) # The 2nd item in the list is an object.",
"By having each list entry point to the same location in memory, Python avoids having to fill three blocks of memory with the same number 19. \nPython objects can have methods attached to them. Methods are functions which can utilize and change the data within an object. The basic syntax for using methods is <object>.<method>(). Here are two examples to get started: The keys and values of a dictionary can be recovered using the keys() and values() methods.",
"nemo.keys() # What are the keys of nemo?\n\nnemo.values() # What are the values of nemo?",
"The output of the keys() and values() methods are list-like. As such, they are convenient for iteration and membership-testing.",
"'color' in nemo.keys()\n\n'taste' in nemo.keys()\n\n'orange' in nemo.keys() # Is 'orange' a key in the dictionary?\n\nfor k in nemo.keys(): # Iterates through the keys.\n print('Nemo\\'s {} is {}.'.format(k,nemo[k])) # \\' is used to get a single-quote in a string.",
"In fact, Python provides a simpler syntax for iterating over keys or testing membership in keys. The syntax for k in <dictionary>: iterates the variable k through the keys of the <dictionary>. Similarly the syntax k in <dictionary> is shorthand for k in <dictionary>.keys().",
"for k in nemo: # This will iterate through the *keys* of the dictionary nemo.\n print('Nemo\\'s {} is {}.'.format(k,nemo[k]))",
"Sometimes we'll want to change a dictionary. Perhaps we learn that nemo has gotten lost.",
"nemo['status'] = 'lost'\n\nid(nemo)\n\nid('status')\n\nprint(nemo)",
"The command nemo['status'] = 'lost' creates a new key in the dictionary called 'status' and assigns the value 'lost' to the key. If we find nemo, then we can change the value.",
"nemo['status'] = 'found'\nprint(nemo)",
"Since 'status' is already among the keys of nemo, the command nemo['status'] = 'found' does not create a new key this time. It just changes the associated value from 'lost' to 'found'.",
"nemo.keys() # What are the keys of nemo now?\n\nnemo.values() # What are the values of nemo now?",
"We mentioned earlier that keys() and values() are methods attached to the object nemo, and methods are functions which are attached to Python objects. \nPython objects often (and often by default!) have methods attached to them. Every dictionary and every list in Python comes with attached methods. Methods can be used to extract properties of objects or change them. Here are examples of some list methods.",
"L = [2,3,5,7]\nprint(L) # Let's remember what the list L is.\n\nL[0] # What is this?\n\nid(L[0]) # What is the ID number of the 0th item in the list?\n\nL.reverse() # The reverse() method changes L!\nprint(L)\n\nL[0] # We have definitely changed L.\n\nL[3] # The last item in the list L.\n\nid(L[3]) # The ID number of the last item in the list L.",
"Observe that Python changed the order of the items in the list. But it didn't move them around in memory! The object 2 maintains the same ID number, and stays in the same place in memory. But the list item L[0] points at 2 before reversing while L[3] points at 2 after reversing. This kind of thing is confusing at first, but the general framework is <variable> points at <memory location>. You choose the name of the variable and work with the variable directly. Python labels each memory location with an ID number, and puts stuff in memory and retrieves values from memory according to your wishes.",
"L.append(11) # Let's add another term to the list with the append(*) method.\nprint(L)\n\nL.sort() # Let's get this list back in order.\nprint(L)",
"Some more useful list methods can be found at the official Python tutorial. \nPrime decomposition dictionaries\nIf $N$ is a positive integer, then $N$ can be uniquely decomposed into a product of primes. Here \"uniquely\" means that $N$ has a unique expression of the form\n$$N = 2^{e_2} 3^{e_3} 5^{e_5} \\cdots$$\nin which the exponents $e_2, e_3, e_5$, etc., are natural numbers (and only finitely many are nonzero).\nA Python dictionary is well-suited to store the resulting prime decomposition. For example, we might store the prime decomposition $2^3 3^2 7$ with the dictionary {2:3, 3:2, 7:1}. The primes which occur in the decomposition become the keys of the dictionary, and the natural number exponents becomes the values of the dictionary.\nThe functions below decompose a positive integer N into primes, storing the result in a dictionary. The strategy is to repeatedly strip off (divide by) the smallest prime factor of a number, adjusting the dictionary along the way, until the number is reduced to 1. The first function below finds the smallest prime factor of a number.",
"from math import sqrt # We'll want to use the square root.\n\ndef smallest_factor(n):\n '''\n Gives the smallest prime factor of n.\n '''\n if n < 2:\n return None # No prime factors!\n \n test_factor = 2 # The smallest possible prime factor.\n max_factor = sqrt(n) # we don't have to search past sqrt(n).\n \n while test_factor <= max_factor:\n if n%test_factor == 0:\n return test_factor\n test_factor = test_factor + 1 # This could be sped up.\n \n return n # If we didn't find a factor up to sqrt(n), n itself is prime!\n \n\nsmallest_factor(105)\n\nsmallest_factor(1999**2) # 1999 might be called the Prince of primes.\n\nsmallest_factor(11**3 * 13**9) # The result should be 11.\n\ndef decompose(N):\n '''\n Gives the unique prime decomposition of a positive integer N,\n as a dictionary with primes as keys and exponents as values.\n '''\n current_number = N # We'll divide out factors from current_number until we get 1.\n decomp = {} # An empty dictionary to start.\n while current_number > 1:\n p = smallest_factor(current_number) # The smallest prime factor of the current number.\n if p in decomp.keys(): # Is p already in the list of keys?\n decomp[p] = decomp[p] + 1 # Increase the exponent (value with key p) by 1.\n else: # \"else\" here means \"if p is not in decomp.keys()\".\n decomp[p] = 1 # Creates a new entry in the dictionary, with key p and value 1.\n current_number = current_number // p # Factor out p. Integer division!\n return decomp\n\ndecompose(100) # What is the prime decomposition of 100?\n\ndecompose(56401910421778813463) # This should be quick.\n\ndecompose(1) # Good to test the base case!\n\n# Use this space to experiment a bit with the decompose function.\n",
"Now that we have a function to compute the prime decomposition of a positive integer, we write a function to recover a positive integer from such a prime decomposition. The function is deceptively simple, since Python makes it easy to iterate through the keys of a dictionary. Make sure that you understand every line.",
"def recompose(D):\n '''\n If D is a dictionary with prime keys and natural values,\n this function outputs the product of terms of the form\n key^value. In this way, it recovers a single number from a\n prime decomposition.\n '''\n N = 1\n for p in D.keys(): # iterate p through all the keys of D.\n N = N * (p ** D[p]) # Note that D[p] refers to the value (exponent) for the key p.\n return N\n\nD = decompose(1000)\nprint(D)\n\nrecompose(D) # This should recover 1000.\n\nrecompose({2:1, 3:1, 5:1, 7:1}) # What will this give?\n\n# Use this space to experiment with decompose and recompose.\n",
"Exercises\n\n\nCreate the list [1,100,2,99,3,98,4,97,...,50,51] with as few list commands as you can.\n\n\nIf you try the commands x = 7, y = 11, then x,y = y,x, what do you expect happens with id(x) and id(y) along the way?\n\n\nHow might you adapt the decompose function to work with all integers (positive and negative)? Note that zero does not have a prime decomposition, but negative numbers have an associated sign.\n\n\nWrite a function multiply(A,B), in which the parameters A and B are prime decomposition dictionaries and the output is the prime decomposition of their product. \n\n\nWrite a function divides(A,B), in which the parameters A and B are prime decomposition dictionaries and the output is a boolean: True if A divides B and false otherwise.\n\n\nThe radical of a positive integer N is the positive integer whose prime factors are the same as N, but in which every prime occurs with exponent 1. For example, $rad(500) = 2 \\cdot 5 = 10$. Write a function radical(N) which computes the radical of N. You can use the decompose(N) and recompose(N) functions along the way.",
"# Use this space for the exercises.\n",
"<a id='multfunc'></a>\nMultiplicative functions\nA multiplicative function is a function $f(n)$ which takes positive integer input $n$, and which satisfies $f(1) = 1$ and $f(ab) = f(a) f(b)$ whenever $a$ and $b$ are coprime. A good example is the divisor-sum function, implemented below.",
"def divisor_sum(n):\n S = 0 # Start the sum at zero.\n for d in range(1,n+1): # potential divisors between 1 and n.\n if n%d == 0:\n S = S + d\n return S\n\ndivisor_sum(100) # The sum 1 + 2 + 4 + 5 + 10 + 20 + 25 + 50 + 100\n\n%timeit divisor_sum(730) # Let's see how quickly this runs.",
"A perfect number is a positive integer which equals the sum of its proper factors (its positive factors, not including itself). Thus a number $n$ is perfect if its divisor sum equals $2n$. This can be implemented in a very short function.",
"def is_perfect(n):\n return divisor_sum(n) == 2*n\n\nis_perfect(10)\n\nis_perfect(28)",
"Let's find the perfect numbers up to 10000. It might take a few seconds.",
"for j in range(1,10000):\n if is_perfect(j):\n print(\"{} is perfect!\".format(j))",
"Multiplicative functions like the divisor sum function can be computed via prime decomposition. Indeed, if $f$ is a multiplicative function, and $$n = 2^{e_2} 3^{e_3} 5^{e_5} \\cdots,$$ then the value $f(n)$ satisfies\n$$f(n) = f(2^{e_2}) \\cdot f(3^{e_3}) \\cdot f(5^{e_5}) \\cdots.$$\nSo if we can compute the values of $f$ on prime powers, we can compute the values of $f$ for all positive integers.\nThe following function computes the divisor sum function, for a prime power $p^e$.",
"def divisor_sum_pp(p,e): # pp stands for prime power\n '''\n Computes the divisor sum of the prime power p**e,\n when p is prime and e is a positive integer.\n This is just 1 + p^2 + p^3 + ... + p^e,\n simplified using a geometric series formula.\n '''\n return (p**(e+1) - 1) // (p - 1)\n\ndivisor_sum_pp(2,3) # Should equal 1 + 2 + 4 + 8\n\ndivisor_sum_pp(3,1) # Should equal 1 + 3",
"Now let's re-implement the divisor sum function, using prime decomposition and the divisor_sum_pp function for prime powers.",
"def divisor_sum(n):\n '''\n Computes the sum of the positive divisors of a \n positive integer n.\n '''\n D = decompose(n) # We require the decompose function from before!\n result = 1\n for p in D.keys():\n result = result * divisor_sum_pp(p,D[p])\n return result\n \n\ndivisor_sum(15)\n\n% timeit(divisor_sum(730)) # this probably runs faster than the previous version.",
"There are a lot of interesting multiplicative functions. We could implement each one by a two-step process as above: implementing the function for prime powers, then defining a version for positive integers by using the decompose function. But there's a shortcut for the second step, which brings in a very cool aspect of Python.\nIn Python, functions are Python objects.",
"type(divisor_sum_pp) # Every object has a type.\n\nid(divisor_sum_pp) # Yes, every object gets an ID number.",
"Since functions are Python objects, it is possible to define a function which takes a function as input and outputs a function too! You can pass a function as an input parameter to another function, just as if it were any other variable. You can output a function with the return keyword, just as if it were another variable. And you can define a new function within the scope of a function!\nHere's a basic example as a warmup.",
"def addone(x): # Let's make a simple function.\n return x + 1 # It's not a very interesting function, is it.\n\naddone(10) # Predict the result.\n\ndef do_twice(f):\n '''\n If a function f is input, then the output is the function\n \"f composed with f.\"\n '''\n def ff(x): # Defines a new function ff!\n return f(f(x)) # This is what ff does.\n \n return ff\n\naddtwo = do_twice(addone) # addtwo is a function!\n\naddtwo(10) # What is the result?",
"Now we exploit this function-as-object approach to create a Python function called mult_function. Given a function f_pp(p,e), the function mult_function outputs the multiplicative function which coincides with f_pp on prime powers. In other words, if f = mult_function(f_pp), then f(p**e) will equal f_pp(p,e).",
"def mult_function(f_pp):\n '''\n When a function f_pp(p,e) of two arguments is input,\n this outputs a multiplicative function obtained from f_pp\n via prime decomposition.\n '''\n def f(n):\n D = decompose(n)\n result = 1\n for p in D:\n result = result * f_pp(p, D[p])\n return result\n \n return f",
"Let's see how this works for the divisor-counting function. This is the function $\\sigma_0(n)$ whose value is the number of positive divisors of $n$. For prime powers, it is easy to count divisors, $$\\sigma_0(p^e) = e + 1.$$",
"def sigma0_pp(p,e):\n return e+1",
"Since the divisor-counting function is multiplicative, we can implement it by applying mult_function to sigma0_pp.",
"sigma0 = mult_function(sigma0_pp)\n\nsigma0(100) # How many divisors does 100 have?",
"Exercises\n\n\nA positive integer $n$ is called deficient/perfect/abundant according to whether the sum of its proper divisors is less than/equal to/greater than $n$ itself. Among the numbers up to 10000, how many are deficient, perfect, and abundant?\n\n\nIf $f(n)$ is a function with natural number input and real output, define $F(n)$ to be the function given by the formula $F(n) = \\sum_{i=0}^n f(i)$. Create a function sumfun(f) which takes as input a function f and outputs the function F as described above.\n\n\nConsider the function $f(n)$ which counts the number of positive divisors of $n$ which are not divisible by 4. Verify that this is a multiplicative function, and implement it using mult_function.\n\n\nWrite a function foursquare(n) which counts the number of ways that a positive integer n can be expressed as a*a + b*b + c*c + d*d for integers a, b, c, d. Hint: loop the variables through integers between $-\\sqrt{n}$ and $\\sqrt{n}$. Compare the values of foursquare(n) to the multiplicative function in the previous problem.\n\n\nA positive integer is \"square-free\" if it has no square factors besides 1. The Mobius function $\\mu(n)$ is defined by $\\mu(n) = 0$ if $n$ is not square-free, and otherwise $\\mu(n) = 1$ or $\\mu(n) = -1$ according to whether $n$ has an even or odd number of prime factors. Verify that the Mobius function is multiplicative and implement it. Try to reproduce the graph of the Mertens function $M(n)$ as described at Wikipedia's article on the Mertens conjecture. (See the previous Python notebook for an introduction to matplotlib for creating graphs.)",
"# Use this space to work on the exercises."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
TomTranter/OpenPNM
|
examples/percolation/Part B - Invasion Percolation.ipynb
|
mit
|
[
"Part B: Invasion Percolation\nThe next percolation algorithm to be demonstrated is known as Invasion Percolation. Instead of identifying connected clusters and invading them all in one go, as Ordinary Percolation does, this algorithm progresses one invasion step at a time. This is a more dynamic process and better simulates scenarios where instead of controlling the pressure at the network boundaries something else such as mass flow rate is controlled as the pressure is allowed to fluctuate up and down in order to meet the lowest available entry pressure for the growing cluster(s).",
"import sys\nimport openpnm as op\nimport numpy as np\nnp.random.seed(10)\nimport matplotlib.pyplot as plt\nimport porespy as ps\nfrom ipywidgets import interact, IntSlider\nfrom openpnm.topotools import trim\n%matplotlib inline\nws = op.Workspace()\nws.settings[\"loglevel\"] = 40",
"In order to also showcase some other network generation options we first start with a small 2D network with StickAndBall geometry.",
"spacing=2.5e-5\nnet = op.network.Cubic([20, 20, 1], spacing=spacing)\ngeo = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)",
"We then trim all the surface pores to obtain disctint sets of boundary edge pores.",
"net.labels()\nnet.num_throats('surface')\ntrim(network=net, throats=net.throats('surface'))\nh = net.check_network_health()\ntrim(network=net, pores=h['trim_pores'])",
"Then we use a function from our porespy package to generate a tomography style image of the abstract network providing the number of pixels in each dimension.",
"#NBVAL_IGNORE_OUTPUT\nim = ps.io.openpnm_to_im(net, max_dim=1000)\n\nim.shape",
"This creates a 3D image but we can crop it to get the central slice in 2D for visualization.",
"#NBVAL_IGNORE_OUTPUT\nfig, ax = plt.subplots(figsize=(5, 5))\nplt.imshow(im[25:-25, 25:-25, 25].T)\ncrop = im[25:-25, 25:-25, :]",
"Next the snow algorithm is used to do network extraction on the tomography style image. Of course if you have your own tomogrpahy image this can be used instead.",
"#NBVAL_IGNORE_OUTPUT\nsnow_out = ps.networks.snow(crop > 0, voxel_size=4e-7)\n\nsnow_out.regions.shape",
"The SNOW algorithm provides a labelled region image containing the pore index. As zero is used for the background it is actually the pore index + 1 because python references arrays with first element as zero and we do not explicitly store the pore index.",
"#NBVAL_IGNORE_OUTPUT\nfig, ax = plt.subplots(figsize=(5, 5))\nreg = snow_out.regions.astype(float) - 1\nreg[reg == -1] = np.nan\nregion_slice = snow_out.regions[:, :, 28] - 1\nmask = region_slice >= 0\nplt.imshow(region_slice.T);",
"Now our new network is extracted we can fill a network object with all the properties and begin simulation.",
"wrk = op.Workspace()\nwrk.clear()\n\nnet = op.network.GenericNetwork()\nnet.update(snow_out)\ngeo = op.geometry.GenericGeometry(network=net, pores=net.Ps, throats=net.Ts)",
"A helper function is defined for plotting a particular data set.",
"def update_image(data):\n data = data.astype(float)\n out_im = np.ones(region_slice.shape, dtype=float)*-1\n out_im[mask] = data[region_slice[mask]]\n out_im[~mask] = np.nan\n return out_im\n\n#NBVAL_IGNORE_OUTPUT\nfig, ax = plt.subplots(figsize=(5, 5))\nout = update_image(net['pore.diameter'])\nplt.imshow(out.T);",
"Again, stadard physics is used to define the capillary entry pressures. And these are shown as a histogram for all the throats in the network.",
"water = op.phases.Water(network=net)\nphys = op.physics.Standard(network=net, geometry=geo, phase=water)\n\n#NBVAL_IGNORE_OUTPUT\nfig, ax = plt.subplots(figsize=[5, 5])\nax.hist(phys['throat.entry_pressure'], bins=10)",
"Next, the algorithm is defined and run with no arguments or outlets defined. This will proceed step by step assessing which pores are currently invaded (i.e. inlets first), which throats connect to an uninvaded pore and of these, which throat has the lowest capillary entry pressure for invasion. Invasion then proceeds along the path of least capillary resistance.",
"#NBVAL_IGNORE_OUTPUT\nalg_ip = op.algorithms.InvasionPercolation(network=net)\nalg_ip.setup(phase=water)\nalg_ip.set_inlets(pores=net.pores('left'))\nalg_ip.run()\nfig, ax = plt.subplots(figsize=(5, 5))\nout = update_image(alg_ip['pore.invasion_sequence'])\nplt.imshow(out.T);\n\ndef plot_invasion(seq):\n data = alg_ip['pore.invasion_sequence'] < seq\n fig, ax = plt.subplots(figsize=(5, 5))\n out = update_image(data)\n plt.imshow(out.T);",
"Using the slider below we can interactively plot the saturation at each invasion step (this works best using the left and right arrow keys).",
"#NBVAL_IGNORE_OUTPUT\nmax_seq = alg_ip['pore.invasion_sequence'].max()\ninteract(plot_invasion, seq=IntSlider(min=0, max=max_seq, step=1, value=200))",
"As with Ordinary Percolation we can plot a drainage or intrusion curve but this time the capillary pressure is plotted from one step to the next as a continuous process with dynamic pressure boundary conditions and so is allowed to increase and decrease to meet the next lowest entry pressure for the invading cluster.",
"#NBVAL_IGNORE_OUTPUT\nfig, ax = plt.subplots(figsize=(5, 5))\nalg_ip.plot_intrusion_curve(fig)\nplt.show()",
"We can compare the results of the two algorithms and see that the pressure envelope, i.e. maximum pressure reached historically by the invasion process is the same as the ordinary percolation value.",
"#NBVAL_IGNORE_OUTPUT\nfig, ax = plt.subplots(figsize=(5, 5))\nalg_op = op.algorithms.OrdinaryPercolation(network=net, phase=water)\nalg_op.set_inlets(net.pores('left'))\nalg_op.setup(pore_volume='pore.volume',\n throat_volume='throat.volume')\nalg_op.run(points=1000)\nalg_op.plot_intrusion_curve(fig)\nalg_ip.plot_intrusion_curve(fig)\nplt.show()",
"An additional feature of the algorithm is the ability to identify where the defending phase becomes trapped. Whether this happens in reality in-fact relies on the connectivity of the defending phase and whether it can reside in the invaded pores as thin wetting films. If not then the defending phase is completely pushed out of a pore when invaded and it can become isolated and trapped when encircled by the invading phase. OpenPNM actually calculates this trapping as a post-process, employing some clever logic described by Masson 2016.",
"#NBVAL_IGNORE_OUTPUT\nalg_ip_t = op.algorithms.InvasionPercolation(network=net)\nalg_ip_t.setup(phase=water)\nalg_ip_t.set_inlets(pores=net.pores('left'))\nalg_ip_t.run()\nalg_ip_t.apply_trapping(outlets=net.pores(['boundary']))\nfig, ax = plt.subplots(figsize=(5, 5))\nout = update_image(alg_ip_t['pore.trapped'])\nplt.imshow(out.T);",
"Here a reasonable fraction of the pore space is not invaded due to trapping of the defending phase. Generally this fraction will be lower in truly 3D networks as there are more routes out of the network because pores have higher connectivity. Also, typically if a defending phase is considered to be wetting then film flow is assumed to allow residual defending phase to escape. However, we can show the differences on one plot with and without trapping below.",
"#NBVAL_IGNORE_OUTPUT\nfig, ax = plt.subplots(figsize=(5, 5))\nalg_ip.plot_intrusion_curve(fig)\nalg_ip_t.plot_intrusion_curve(fig)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JasonSanchez/w261
|
week3/MIDS-W261-HW-03-Sanchez.ipynb
|
mit
|
[
"DATSCIW261 ASSIGNMENT\nVersion 2016-01-27 (FINAL)\nWeek 3 ASSIGNMENTS\nJason Sanchez - Group 2\nHW3.0.\nHow do you merge two sorted lists/arrays of records of the form [key, value]?\n\nMerge sort.\n\nWhere is this used in Hadoop MapReduce? [Hint within the shuffle]\n\nIt is used after files are spilled to disk from the circular buffer before the map-side combiner is run as well as after the data is partitioned during the \"Hadoop shuffle\" and before it is fed into the reduce-side combiner.\n\nWhat is a combiner function in the context of Hadoop? \n\nCombiners improve the speed of MapReduce jobs. Map-side combiners can reduce the amount of data that needs to be transferred over the network by acting as a simplified reducer. Also, long running map jobs block the merge sort that is part of the \"Hadoop shuffle\" phase. A map-side combiner can run on all of the data that has been processed by the mappers before being blocked by the merge-sort. \n\nGive an example where it can be used and justify why it should be used in the context of this problem.\n\nWord count. Greatly reduce data needed to transfer of the network by combining key-value pairs. \n\nWhat is the Hadoop shuffle?\n\nPartition --> Merge sort --> Pass to reduce-side combiner (or directly to reducer)\n\nHW3.1 consumer complaints dataset: Use Counters to do EDA (exploratory data analysis and to monitor progress)\nCounters are lightweight objects in Hadoop that allow you to keep track of system progress in both the map and reduce stages of processing. By default, Hadoop defines a number of standard counters in \"groups\"; these show up in the jobtracker webapp, giving you information such as \"Map input records\", \"Map output records\", etc. \nWhile processing information/data using MapReduce job, it is a challenge to monitor the progress of parallel threads running across nodes of distributed clusters. Moreover, it is also complicated to distinguish between the data that has been processed and the data which is yet to be processed. The MapReduce Framework offers a provision of user-defined Counters, which can be effectively utilized to monitor the progress of data across nodes of distributed clusters.\nUse the Consumer Complaints Dataset provide here to complete this question:\n https://www.dropbox.com/s/vbalm3yva2rr86m/Consumer_Complaints.csv?dl=0\n\nThe consumer complaints dataset consists of diverse consumer complaints, which have been reported across the United States regarding various types of loans. 
The dataset consists of records of the form:\nComplaint ID,Product,Sub-product,Issue,Sub-issue,State,ZIP code,Submitted via,Date received,Date sent to company,Company,Company response,Timely response?,Consumer disputed?\nHere’s is the first few lines of the of the Consumer Complaints Dataset:\nComplaint ID,Product,Sub-product,Issue,Sub-issue,State,ZIP code,Submitted via,Date received,Date sent to company,Company,Company response,Timely response?,Consumer disputed?\n1114245,Debt collection,Medical,Disclosure verification of debt,Not given enough info to verify debt,FL,32219,Web,11/13/2014,11/13/2014,\"Choice Recovery, Inc.\",Closed with explanation,Yes,\n1114488,Debt collection,Medical,Disclosure verification of debt,Right to dispute notice not received,TX,75006,Web,11/13/2014,11/13/2014,\"Expert Global Solutions, Inc.\",In progress,Yes,\n1114255,Bank account or service,Checking account,Deposits and withdrawals,,NY,11102,Web,11/13/2014,11/13/2014,\"FNIS (Fidelity National Information Services, Inc.)\",In progress,Yes,\n1115106,Debt collection,\"Other (phone, health club, etc.)\",Communication tactics,Frequent or repeated calls,GA,31721,Web,11/13/2014,11/13/2014,\"Expert Global Solutions, Inc.\",In progress,Yes,\nUser-defined Counters\nNow, let’s use Hadoop Counters to identify the number of complaints pertaining to debt collection, mortgage and other categories (all other categories get lumped into this one) in the consumer complaints dataset. Basically produce the distribution of the Product column in this dataset using counters (limited to 3 counters here).\nHadoop offers Job Tracker, an UI tool to determine the status and statistics of all jobs. Using the job tracker UI, developers can view the Counters that have been created. Screenshot your job tracker UI as your job completes and include it here. Make sure that your user defined counters are visible. \nPresumes you have downloaded the file and put it in the \"Temp_data\" folder.",
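"As an aside on the HW3.0 answer above: merging two sorted lists of [key, value] records is the merge step of merge sort, which Hadoop performs during the shuffle. A minimal illustrative sketch (not part of the graded MapReduce jobs) is shown below before moving on to HW3.1.",
"# Illustrative two-pointer merge of two lists that are already sorted by key.\ndef merge_sorted(a, b):\n    merged = []\n    i = j = 0\n    while i < len(a) and j < len(b):\n        if a[i][0] <= b[j][0]:\n            merged.append(a[i])\n            i += 1\n        else:\n            merged.append(b[j])\n            j += 1\n    merged.extend(a[i:])  # At most one of these still has leftovers.\n    merged.extend(b[j:])\n    return merged\n\nmerge_sorted([[\"apple\", 1], [\"cat\", 2]], [[\"bar\", 3], [\"dog\", 1]])",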
"%%writefile ComplaintDistribution.py\nfrom mrjob.job import MRJob\n\nclass ComplaintDistribution(MRJob):\n def mapper(self, _, lines):\n line = lines[:30]\n if \"Debt collection\" in line:\n self.increment_counter('Complaint', 'Debt collection', 1)\n elif \"Mortgage\" in line:\n self.increment_counter('Complaint', 'Mortgage', 1)\n else:\n self.increment_counter('Complaint', 'Other', 1)\n \nif __name__ == \"__main__\":\n ComplaintDistribution.run()\n\n%%time\n!python ComplaintDistribution.py Temp_data/Consumer_Complaints.csv",
"HW 3.2 Analyze the performance of your Mappers, Combiners and Reducers using Counters\nFor this brief study the Input file will be one record (the next line only):\nfoo foo quux labs foo bar quux\nPerform a word count analysis of this single record dataset using a Mapper and Reducer based WordCount (i.e., no combiners are used here) using user defined Counters to count up how many time the mapper and reducer are called. What is the value of your user defined Mapper Counter, and Reducer Counter after completing this word count job. The answer should be 1 and 4 respectively. Please explain.",
"%%writefile SimpleCounters.py\n\nfrom mrjob.job import MRJob\n\nclass SimpleCounters(MRJob):\n def mapper_init(self):\n self.increment_counter(\"Mappers\", \"Count\", 1)\n \n def mapper(self, _, lines):\n self.increment_counter(\"Mappers\", \"Tasks\", 1)\n for word in lines.split():\n yield (word, 1)\n \n def reducer_init(self):\n self.increment_counter(\"Reducers\", \"Count\", 1)\n \n def reducer(self, word, count):\n self.increment_counter(\"Reducers\", \"Tasks\", 1)\n yield (word, sum(count))\n \nif __name__ == \"__main__\":\n SimpleCounters.run()\n\n!echo \"foo foo quux labs foo bar quux\" | python SimpleCounters.py --jobconf mapred.map.tasks=2 --jobconf mapred.reduce.tasks=2",
"Please use multiple mappers and reducers for these jobs (at least 2 mappers and 2 reducers).\nPerform a word count analysis of the Issue column of the Consumer Complaints Dataset using a Mapper and Reducer based WordCount (i.e., no combiners used anywhere) using user defined Counters to count up how many time the mapper and reducer are called. What is the value of your user defined Mapper Counter, and Reducer Counter after completing your word count job.",
"%%writefile IssueCounter.py\n\nfrom mrjob.job import MRJob\nimport csv\nimport sys\n\nclass IssueCounter(MRJob):\n\n def mapper(self, _, lines):\n self.increment_counter(\"Mappers\", \"Tasks\", 1)\n terms = list(csv.reader([lines]))[0]\n yield (terms[3], 1)\n \n def reducer(self, word, count):\n self.increment_counter(\"Reducers\", \"Tasks\", 1)\n self.increment_counter(\"Reducers\", \"Lines processed\", len(list(count)))\n yield (word, sum(count))\n \nif __name__ == \"__main__\":\n IssueCounter.run()\n\n!cat Temp_data/Consumer_Complaints.csv | python IssueCounter.py | head -n 1",
"Mapper tasks = 312913. The mapper was called this many times because that is how many lines there are in the file.\nReducer tasks = 80. The reducer was called this many times because that is how many unique issues there are in the file.\nReducer lines processed = 312913. The reducer was passed all of the data from the mappers.",
"# We can easily confirm the first hypothesis\n!wc -l Temp_data/Consumer_Complaints.csv",
"Perform a word count analysis of the Issue column of the Consumer Complaints Dataset using a Mapper, Reducer, and standalone combiner (i.e., not an in-memory combiner) based WordCount using user defined Counters to count up how many time the mapper, combiner, reducer are called. What is the value of your user defined Mapper Counter, and Reducer Counter after completing your word count job.",
"%%writefile IssueCounterCombiner.py\n\nfrom mrjob.job import MRJob\nfrom mrjob.step import MRStep\nimport csv\nimport sys\n\nclass IssueCounterCombiner(MRJob):\n \n def mapper(self, _, lines):\n self.increment_counter(\"Mappers\", \"Tasks\", 1)\n terms = list(csv.reader([lines]))[0]\n yield (terms[3], 1)\n \n def combiner(self, word, count):\n self.increment_counter(\"Combiners\", \"Tasks\", 1)\n yield (word, sum(count))\n \n def reducer(self, word, count):\n self.increment_counter(\"Reducers\", \"Tasks\", 1)\n self.increment_counter(\"Reducers\", \"Lines processed\", len(list(count)))\n yield (word, sum(count))\n \nif __name__ == \"__main__\":\n IssueCounterCombiner.run()\n\n%%writefile python_mr_driver.py\n\nfrom IssueCounterCombiner import IssueCounterCombiner\n\nmr_job = IssueCounterCombiner(args=['Temp_data/Consumer_Complaints.csv'])\n\nwith mr_job.make_runner() as runner:\n runner.run() \n print(runner.counters())\n# for line in runner.stream_output(): \n# print(mr_job.parse_output_line(line))\n\nresults = !python python_mr_driver.py\n\nresults",
"Although the same amount of map and reduce tasks were called, because 146 combiner tasks were called, my hypothesis would be that the number of observations read by reducers was less. I went back and included a counter that kept track of the lines passed over the network. With the combiner, only 146 observations were passed over the network. This is equal to the number of times the combiner was called (which makes sense because combiners act as map-side reducers and each one would process on a different key of the data and output a single line).\nUsing a single reducer: What are the top 50 most frequent terms in your word count analysis? Present the top 50 terms and their frequency and their relative frequency. If there are ties please sort the tokens in alphanumeric/string order. Present bottom 10 tokens (least frequent items).",
"%%writefile Top50.py\n\nfrom mrjob.job import MRJob\nfrom mrjob.step import MRStep\nimport csv\nimport sys\n\ndef order_key(order_in_reducer, key_name):\n number_of_stars = order_in_reducer//10 + 1\n number = str(order_in_reducer%10)\n return \"%s %s\" % (\"*\"*number_of_stars+number, key_name)\n\nclass Top50(MRJob):\n\n MRJob.SORT_VALUES = True\n \n def mapper_get_issue(self, _, lines):\n terms = list(csv.reader([lines]))[0]\n issue = terms[3]\n if issue == \"\":\n issue = \"<blank>\"\n yield (issue, 1)\n \n def combiner_count_issues(self, word, count):\n yield (word, sum(count))\n \n def reducer_init_totals(self):\n self.issue_counts = []\n \n def reducer_count_issues(self, word, count):\n issue_count = sum(count)\n self.issue_counts.append(int(issue_count))\n yield (word, issue_count)\n \n def reducer_final_emit_counts(self):\n yield (order_key(1, \"Total\"), sum(self.issue_counts))\n yield (order_key(2, \"40th\"), sorted(self.issue_counts)[-40])\n \n def reducer_init(self):\n self.increment_counter(\"Reducers\", \"Count\", 1)\n self.var = {}\n \n def reducer(self, word, count):\n if word.startswith(\"*\"):\n _, term = word.split()\n self.var[term] = next(count)\n\n else:\n total = sum(count)\n if total >= self.var[\"40th\"]:\n yield (word, (total/self.var[\"Total\"], total))\n \n def mapper_sort(self, key, value):\n value[0] = 1-float(value[0])\n yield value, key\n \n def reducer_sort(self, key, value):\n key[0] = round(1-float(key[0]),3)\n yield key, next(value)\n\n def steps(self):\n mr_steps = [MRStep(mapper=self.mapper_get_issue,\n combiner=self.combiner_count_issues,\n reducer_init=self.reducer_init_totals,\n reducer=self.reducer_count_issues,\n reducer_final=self.reducer_final_emit_counts),\n MRStep(reducer_init=self.reducer_init,\n reducer=self.reducer),\n MRStep(mapper=self.mapper_sort,\n reducer=self.reducer_sort)\n ]\n return mr_steps\n \n \n \nif __name__ == \"__main__\":\n Top50.run()\n\n!head -n 3001 Temp_data/Consumer_Complaints.csv | python Top50.py --jobconf mapred.reduce.tasks=1",
"3.2.1\nUsing 2 reducers: What are the top 50 most frequent terms in your word count analysis? \nPresent the top 50 terms and their frequency and their relative frequency. Present the top 50 terms and their frequency and their relative frequency. If there are ties please sort the tokens in alphanumeric/string order. Present bottom 10 tokens (least frequent items). Please use a combiner.\nHW3.3. Shopping Cart Analysis\nProduct Recommendations: The action or practice of selling additional products or services \nto existing customers is called cross-selling. Giving product recommendation is \none of the examples of cross-selling that are frequently used by online retailers. \nOne simple method to give product recommendations is to recommend products that are frequently\nbrowsed together by the customers.\nFor this homework use the online browsing behavior dataset located at: \n https://www.dropbox.com/s/zlfyiwa70poqg74/ProductPurchaseData.txt?dl=0\n\nEach line in this dataset represents a browsing session of a customer. \nOn each line, each string of 8 characters represents the id of an item browsed during that session. \nThe items are separated by spaces.\nHere are the first few lines of the ProductPurchaseData \nFRO11987 ELE17451 ELE89019 SNA90258 GRO99222 \nGRO99222 GRO12298 FRO12685 ELE91550 SNA11465 ELE26917 ELE52966 FRO90334 SNA30755 ELE17451 FRO84225 SNA80192 \nELE17451 GRO73461 DAI22896 SNA99873 FRO86643 \nELE17451 ELE37798 FRO86643 GRO56989 ELE23393 SNA11465 \nELE17451 SNA69641 FRO86643 FRO78087 SNA11465 GRO39357 ELE28573 ELE11375 DAI54444 \nDo some exploratory data analysis of this dataset guided by the following questions:. \nHow many unique items are available from this supplier?\nUsing a single reducer: Report your findings such as number of unique products; largest basket; report the top 50 most frequently purchased items, their frequency, and their relative frequency (break ties by sorting the products alphabetical order) etc. using Hadoop Map-Reduce.",
"!head -n 10 Temp_data/ProductPurchaseData.txt\n\n%%writefile ProductPurchaseStats.py\n\nfrom mrjob.job import MRJob\nfrom mrjob.step import MRStep\nimport sys\nimport heapq\n\n\nclass TopList(list):\n def __init__(self, max_size):\n \"\"\"\n Just like a list, except the append method adds the new value to the \n list only if it is larger than the smallest value (or if the size of \n the list is less than max_size). If each element of the list is an int\n or float, uses that value for comparison. If the first element is a \n list or tuple, uses the first element of the list or tuple for the \n comparison.\n \"\"\"\n self.max_size = max_size\n \n def _get_key(self, x):\n return x[0] if isinstance(x, (list, tuple)) else x\n \n def append(self, val):\n key=lambda x: x[0] if isinstance(x, (list, tuple)) else x\n if len(self) < self.max_size:\n heapq.heappush(self, val)\n elif self._get_key(self[0]) < self._get_key(val):\n heapq.heapreplace(self, val)\n \n def final_sort(self):\n return sorted(self, key=self._get_key, reverse=True)\n\n\nclass ProductPurchaseStats(MRJob):\n \n def mapper_init(self):\n self.largest_basket = 0\n self.total_items = 0\n \n def mapper(self, _, lines):\n products = lines.split()\n n_products = len(products)\n self.total_items += n_products\n if n_products > self.largest_basket:\n self.largest_basket = n_products\n for prod in products:\n yield (prod, 1)\n \n def mapper_final(self):\n self.increment_counter(\"product stats\", \"largest basket\", self.largest_basket)\n yield (\"*** Total\", self.total_items)\n \n def combiner(self, keys, values):\n yield keys, sum(values)\n \n def reducer_init(self):\n self.top50 = TopList(50)\n self.total = 0\n \n def reducer(self, key, values):\n value_count = sum(values)\n \n if key == \"*** Total\":\n self.total = value_count\n else:\n self.increment_counter(\"product stats\", \"unique products\")\n self.top50.append([value_count, value_count/self.total, key])\n\n def reducer_final(self):\n for counts, relative_rate, key in self.top50.final_sort():\n yield key, (counts, round(relative_rate,3))\n \nif __name__ == \"__main__\":\n ProductPurchaseStats.run()\n\n!cat Temp_data/ProductPurchaseData.txt | python ProductPurchaseStats.py --jobconf mapred.reduce.tasks=1",
"3.3.1 OPTIONAL \nUsing 2 reducers: Report your findings such as number of unique products; largest basket; report the top 50 most frequently purchased items, their frequency, and their relative frequency (break ties by sorting the products alphabetical order) etc. using Hadoop Map-Reduce. \nHW3.4. (Computationally prohibitive but then again Hadoop can handle this) Pairs\nSuppose we want to recommend new products to the customer based on the products they\nhave already browsed on the online website. Write a map-reduce program \nto find products which are frequently browsed together. Fix the support count (cooccurence count) to s = 100 \n(i.e. product pairs need to occur together at least 100 times to be considered frequent) \nand find pairs of items (sometimes referred to itemsets of size 2 in association rule mining) that have a support count of 100 or more.\nList the top 50 product pairs with corresponding support count (aka frequency), and relative frequency or support (number of records where they coccur, the number of records where they coccur/the number of baskets in the dataset) in decreasing order of support for frequent (100>count) itemsets of size 2. \nUse the Pairs pattern (lecture 3) to extract these frequent itemsets of size 2. Free free to use combiners if they bring value. Instrument your code with counters for count the number of times your mapper, combiner and reducers are called. \nPlease output records of the following form for the top 50 pairs (itemsets of size 2): \n item1, item2, support count, support\n\nFix the ordering of the pairs lexicographically (left to right), \nand break ties in support (between pairs, if any exist) \nby taking the first ones in lexicographically increasing order. \nReport the compute time for the Pairs job. Describe the computational setup used (E.g., single computer; dual core; linux, number of mappers, number of reducers)\nInstrument your mapper, combiner, and reducer to count how many times each is called using Counters and report these counts.",
"%%writefile PairsRecommender.py\n\nfrom mrjob.job import MRJob\nimport heapq\nimport sys\n\ndef all_itemsets_of_size_two(array, key=None, return_type=\"string\", concat_val=\" \"):\n \"\"\"\n Generator that yields all valid itemsets of size two\n where each combo is returned in an order sorted by key.\n \n key = None defaults to standard sorting.\n \n return_type: can be \"string\" or \"tuple\". If \"string\", \n concatenates values with concat_val and returns string.\n If tuple, returns a tuple with two elements.\n \"\"\"\n array = sorted(array, key=key)\n for index, item in enumerate(array):\n for other_item in array[index:]:\n if item != other_item:\n if return_type == \"string\":\n yield \"%s%s%s\" % (str(item), concat_val, str(other_item))\n else:\n yield (item, other_item) \n\nclass TopList(list):\n def __init__(self, max_size):\n \"\"\"\n Just like a list, except the append method adds the new value to the \n list only if it is larger than the smallest value (or if the size of \n the list is less than max_size). If each element of the list is an int\n or float, uses that value for comparison. If the first element is a \n list or tuple, uses the first element of the list or tuple for the \n comparison.\n \"\"\"\n self.max_size = max_size\n \n def _get_key(self, x):\n return x[0] if isinstance(x, (list, tuple)) else x\n \n def append(self, val):\n key=lambda x: x[0] if isinstance(x, (list, tuple)) else x\n if len(self) < self.max_size:\n heapq.heappush(self, val)\n elif self._get_key(self[0]) < self._get_key(val):\n heapq.heapreplace(self, val)\n \n def final_sort(self):\n return sorted(self, key=self._get_key, reverse=True)\n \n \nclass PairsRecommender(MRJob):\n def mapper_init(self):\n self.total_baskets = 0\n \n def mapper(self, _, lines):\n self.total_baskets += 1\n products = lines.split()\n self.increment_counter(\"job stats\", \"number of items\", len(products))\n for itemset in all_itemsets_of_size_two(products):\n self.increment_counter(\"job stats\", \"number of item combos\")\n yield (itemset, 1)\n \n def mapper_final(self):\n self.increment_counter(\"job stats\", \"number of baskets\", self.total_baskets)\n yield (\"*** Total\", self.total_baskets)\n \n def combiner(self, key, values):\n self.increment_counter(\"job stats\", \"number of keys fed to combiner\")\n yield key, sum(values)\n \n def reducer_init(self):\n self.top_values = TopList(50)\n self.total_baskets = 0\n \n def reducer(self, key, values):\n values_sum = sum(values)\n if key == \"*** Total\":\n self.total_baskets = values_sum\n elif values_sum >= 100:\n self.increment_counter(\"job stats\", \"number of unique itemsets >= 100\")\n basket_percent = values_sum/self.total_baskets\n self.top_values.append([values_sum, round(basket_percent,3), key])\n else:\n self.increment_counter(\"job stats\", \"number of unique itemsets < 100\")\n \n def reducer_final(self):\n for values_sum, basket_percent, key in self.top_values.final_sort():\n yield key, (values_sum, basket_percent)\n \nif __name__ == \"__main__\":\n PairsRecommender.run()\n\n%%time\n!cat Temp_data/ProductPurchaseData.txt | python PairsRecommender.py --jobconf mapred.reduce.tasks=1\n\n!system_profiler SPHardwareDataType",
"HW3.5: Stripes\nRepeat 3.4 using the stripes design pattern for finding cooccuring pairs.\nReport the compute times for stripes job versus the Pairs job. Describe the computational setup used (E.g., single computer; dual core; linux, number of mappers, number of reducers)\nInstrument your mapper, combiner, and reducer to count how many times each is called using Counters and report these counts. Discuss the differences in these counts between the Pairs and Stripes jobs\nOPTIONAL: all HW below this are optional",
"%%writefile StripesRecommender.py\n\nfrom mrjob.job import MRJob\nfrom collections import Counter\nimport sys\nimport heapq\n\ndef all_itemsets_of_size_two_stripes(array, key=None):\n \"\"\"\n Generator that yields all valid itemsets of size two\n where each combo is as a stripe.\n \n key = None defaults to standard sorting.\n \"\"\"\n array = sorted(array, key=key)\n for index, item in enumerate(array[:-1]):\n yield (item, {val:1 for val in array[index+1:]})\n\nclass TopList(list):\n def __init__(self, max_size):\n \"\"\"\n Just like a list, except the append method adds the new value to the \n list only if it is larger than the smallest value (or if the size of \n the list is less than max_size). If each element of the list is an int\n or float, uses that value for comparison. If the first element is a \n list or tuple, uses the first element of the list or tuple for the \n comparison.\n \"\"\"\n self.max_size = max_size\n \n def _get_key(self, x):\n return x[0] if isinstance(x, (list, tuple)) else x\n \n def append(self, val):\n key=lambda x: x[0] if isinstance(x, (list, tuple)) else x\n if len(self) < self.max_size:\n heapq.heappush(self, val)\n elif self._get_key(self[0]) < self._get_key(val):\n heapq.heapreplace(self, val)\n \n def final_sort(self):\n return sorted(self, key=self._get_key, reverse=True)\n \n \nclass StripesRecommender(MRJob):\n \n def mapper_init(self):\n self.basket_count = 0\n \n def mapper(self, _, lines):\n self.basket_count += 1\n products = lines.split()\n for item, value in all_itemsets_of_size_two_stripes(products):\n yield item, value\n \n def mapper_final(self):\n yield (\"*** Total\", {\"total\": self.basket_count})\n \n def combiner(self, keys, values):\n values_sum = Counter()\n for val in values:\n values_sum += Counter(val)\n yield keys, dict(values_sum)\n \n def reducer_init(self):\n self.top = TopList(50)\n \n def reducer(self, keys, values):\n values_sum = Counter()\n for val in values:\n values_sum += Counter(val)\n\n if keys == \"*** Total\": \n self.total = values_sum[\"total\"]\n else:\n for k, v in values_sum.items():\n if v >= 100:\n self.top.append([v, round(v/self.total,3), keys+\" \"+k])\n\n def reducer_final(self):\n for count, perc, key in self.top.final_sort():\n yield key, (count, perc)\n \nif __name__ == \"__main__\":\n StripesRecommender.run()\n\n%%time\n!cat Temp_data/ProductPurchaseData.txt | python StripesRecommender.py --jobconf mapred.reduce.tasks=1",
"The pairs operation took 1 minute 30 seconds. The stripes operation took 24 seconds, which is about a quarter of the time for pairs.\nHW3.6 Computing Relative Frequencies on 100K WikiPedia pages (93Meg)\nDataset description\nFor this assignment you will explore a set of 100,000 Wikipedia documents:\nhttps://www.dropbox.com/s/n5lfbnztclo93ej/wikitext_100k.txt?dl=0\ns3://cs9223/wikitext_100k.txt, or\nhttps://s3.amazonaws.com/cs9223/wikitext_100k.txt\nEach line in this file consists of the plain text extracted from a Wikipedia document.\nTask\nCompute the relative frequencies of each word that occurs in the documents in wikitext_100k.txt and output the top 100 word pairs sorted by decreasing order of relative frequency.\nRecall that the relative frequency (RF) of word B given word A is defined as follows:\nf(B|A) = Count(A, B) / Count (A) = Count(A, B) / sum_B'(Count (A, B')\nwhere count(A,B) is the number of times A and B co-occur within a window of two words (co-occurrence window size of two) in a document and count(A) the number of times A occurs with anything else. Intuitively, given a document collection, the relative frequency captures the proportion of time the word B appears in the same document as A. (See Section 3.3, in Data-Intensive Text Processing with MapReduce).\nIn the async lecture you learned different approaches to do this, and in this assignment, you will implement them:\na. Write a mapreduce program which uses the Stripes approach and writes its output in a file named rfstripes.txt \nb. Write a mapreduce program which uses the Pairs approach and writes its output in a file named rfpairs.txt\nc. Compare the performance of the two approaches and output the relative performance to a file named rfcomp.txt. Compute the relative performance as follows: (running time for Pairs/ running time for Stripes). Also include an analysis comparing the communication costs for the two approaches. Instrument your mapper and reduces for counters where necessary to aid with your analysis.\nNOTE: please limit your analysis to the top 100 word pairs sorted by decreasing order of relative frequency for each word (tokens with all alphabetical letters).\nPlease include markdown cell named rf.txt that describes the following:\nthe input/output format in each Hadoop task, i.e., the keys for the mappers and reducers\nthe Hadoop cluster settings you used, i.e., number of mappers and reducers\nthe running time for each approach: pairs and stripes\nYou can write your program using Python or MrJob (with Hadoop streaming) and you should run it on AWS. It is a good idea to develop and test your program on a local machine before deploying on AWS. Remember your notebook, needs to have all the commands you used to run each Mapreduce job (i.e., pairs and stripes) -- include the Hadoop streaming commands you used to run your jobs.\nIn addition the All the following files should be compressed in one ZIP file and submitted. The ZIP file should contain:\nA. The result files: rfstripes.txt, rfpairs.txt, rfcomp.txt\nPrior to working with Hadoop, the corpus should first be preprocessed as follows:\nperform tokenization (whitespace and all non-alphabetic characters) and stopword removal using standard tools from the Lucene search engine. All tokens should then be replaced\nwith unique integers for a more efficient encoding. 
\n== Preliminary information for the remaing HW problems===\nMuch of this homework beyond this point will focus on the Apriori algorithm for frequent itemset mining and the additional step for extracting association rules from these frequent itemsets.\nPlease acquaint yourself with the background information (below)\nbefore approaching the remaining assignments.\n=== Apriori background information ===\nSome background material for the Apriori algorithm is located at:\n\nSlides in Live Session #3\nhttps://en.wikipedia.org/wiki/Apriori_algorithm\nhttps://www.dropbox.com/s/k2zm4otych279z2/Apriori-good-slides.pdf?dl=0\nhttp://snap.stanford.edu/class/cs246-2014/slides/02-assocrules.pdf\n\nAssociation Rules are frequently used for Market Basket Analysis (MBA) by retailers to\nunderstand the purchase behavior of their customers. This information can be then used for\nmany different purposes such as cross-selling and up-selling of products, sales promotions,\nloyalty programs, store design, discount plans and many others.\nEvaluation of item sets: Once you have found the frequent itemsets of a dataset, you need\nto choose a subset of them as your recommendations. Commonly used metrics for measuring\nsignificance and interest for selecting rules for recommendations are: confidence; lift; and conviction.\nHW3.7 Apriori Algorithm\nWhat is the Apriori algorithm? Describe an example use in your domain of expertise and what kind of . Define confidence and lift.\nNOTE:\nFor the remaining homework use the online browsing behavior dataset located at (same dataset as used above): \n https://www.dropbox.com/s/zlfyiwa70poqg74/ProductPurchaseData.txt?dl=0\n\nEach line in this dataset represents a browsing session of a customer. \nOn each line, each string of 8 characters represents the id of an item browsed during that session. \nThe items are separated by spaces.\nHere are the first few lines of the ProductPurchaseData \nFRO11987 ELE17451 ELE89019 SNA90258 GRO99222 \nGRO99222 GRO12298 FRO12685 ELE91550 SNA11465 ELE26917 ELE52966 FRO90334 SNA30755 ELE17451 FRO84225 SNA80192 \nELE17451 GRO73461 DAI22896 SNA99873 FRO86643 \nELE17451 ELE37798 FRO86643 GRO56989 ELE23393 SNA11465 \nELE17451 SNA69641 FRO86643 FRO78087 SNA11465 GRO39357 ELE28573 ELE11375 DAI54444 \nHW3.8. Shopping Cart Analysis\nProduct Recommendations: The action or practice of selling additional products or services \nto existing customers is called cross-selling. Giving product recommendation is \none of the examples of cross-selling that are frequently used by online retailers. \nOne simple method to give product recommendations is to recommend products that are frequently\nbrowsed together by the customers.\nSuppose we want to recommend new products to the customer based on the products they\nhave already browsed on the online website. Write a program using the A-priori algorithm\nto find products which are frequently browsed together. Fix the support to s = 100 \n(i.e. product sets need to occur together at least 100 times to be considered frequent) \nand find itemsets of size 2 and 3.\nThen extract association rules from these frequent items. \nA rule is of the form: \n(item1, item5) ⇒ item2.\nList the top 10 discovered rules in descreasing order of confidence in the following format\n(item1, item5) ⇒ item2, supportCount ,support, confidence\nHW3.8\nBenchmark your results using the pyFIM implementation of the Apriori algorithm\n(Apriori - Association Rule Induction / Frequent Item Set Mining implemented by Christian Borgelt). 
\nYou can download pyFIM from here: \nhttp://www.borgelt.net/pyfim.html\nComment on the results from both implementations (your Hadoop MapReduce of apriori versus pyFIM) \nin terms of results and execution times.\nEND OF HOMEWORK",
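Before the MapReduce implementations, here is a small pure-Python sketch of the standard association-rule metrics that HW3.7/3.8 refer to, on made-up toy baskets (not the ProductPurchaseData file): support is the fraction of baskets containing an itemset, confidence(A ⇒ B) = support(A ∪ B) / support(A), and lift(A ⇒ B) = confidence(A ⇒ B) / support(B).

```python
baskets = [
    {"FRO11987", "ELE17451", "SNA90258"},
    {"ELE17451", "SNA90258"},
    {"ELE17451", "GRO73461"},
    {"ELE17451", "SNA90258", "GRO73461"},
]
n = len(baskets)

def support(itemset):
    """Fraction of baskets that contain every item of the itemset."""
    return sum(itemset <= basket for basket in baskets) / n

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    return confidence(antecedent, consequent) / support(consequent)

A, B = {"ELE17451"}, {"SNA90258"}
print(support(A | B))    # 0.75 -> 3 of the 4 baskets contain both items
print(confidence(A, B))  # 0.75 -> how often B shows up when A is in the basket
print(lift(A, B))        # 1.0  -> no association beyond what chance would give here
```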
"set([1,2,3])\n\n[1,2,3][:-1]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
slac207/cs207project
|
cs207rbtree/demo_RedBlackTree.ipynb
|
mit
|
[
"Package Installation\nAt the console, within the cs207project directory, you can use\npython \npip install -e .\nor equivalently\npython\npython setup.py install\nAt this point, four subpackages will be available to you:\n1. timeseries\n2. TimeseriesDB\n3. Similarity\n4. cs207rbtree",
"import timeseries, TimeseriesDB, Similarity\nimport cs207rbtree.RedBlackTree as Database\n\ndir(Database)",
"Using the Red-Black Tree\nBelow is a function that will allow us to visualize our tree (copied from CS207 Fall 2016 lecture notes). \nNext, we create a tree, add a few nodes to it, and retrieve their contents.",
"demoDB = Database.connect(\"/tmp/test1.dbdb\")\n\ndemoDB.set(\"rahul\", 81)\ndemoDB.set(\"pavlos\", 20)\ndemoDB.set(\"sarah\", 29)\ndemoDB.set(\"courtney\", 11)\ndemoDB.set(\"andrew\", 12)\ndemoDB.set(\"laura\", 81)\n\ndemoDB.get(\"sarah\")\n\ndemoDB.get(\"laura\")",
"Multithreadedness",
"from cs207rbtree import RedBlackTree\nfrom threading import Thread\nfrom pytest import raises\nimport portalocker\nimport os \n\n\ndef thread_function(num):\n print(\"FIRST FUN\")\n db = RedBlackTree.connect(\"/tmp/test6.dbdb\")\n db.set(\"kobe\", \"baby\"+str(num))\n print(\"1\")\n db.set(\"rahul\", \"veryyoung\"+str(num))\n print(\"2\")\n db.set(\"pavlos\", \"stillyoung\"+str(num))\n print(\"3\")\n db.set(\"andy\", \"old\"+str(num))\n print(\"4\")\n db.set(\"lisa\", \"ancient\"+str(num)) \n print(\"5\")\n db.commit()\n print(\"6\")\n\ndef thread_function2():\n db = RedBlackTree.connect(\"/tmp/test6.dbdb\")\n for i in [\"kobe\",\"rahul\",\"pavlos\",\"andy\",\"lisa\"]:\n print(\"SECOND FUNC\")\n with raises(KeyError):\n print(\"FAILED\")\n print(db.get(i))\n \nos.remove('/tmp/test6.dbdb')\nt1 = Thread(target=thread_function, args=([1]))\nt2 = Thread(target=thread_function2)#, args=(2)) \nt1.start()\nt2.start()\nprint(\"DONE\")\n\n\n \n\nimport time\n\ndef thread_function():\n print(\"THREAD 1\")\n db = RedBlackTree.connect(\"/tmp/test6.dbdb\")\n db.set(\"Laura\", \"Ware\")\n time.sleep(200)\n print(\"THREAD ONE DONE SLEEPING\")\n db.commit()\n print(\"COMMITED RESULTS\")\n \ndef thread_function2():\n print(\"THREAD 2\")\n db2 = RedBlackTree.connect(\"/tmp/test6.dbdb\")\n with raises(KeyError):\n print(db2.get('Laura'))\n print(\"THERE\")\n time.sleep(10)\n print(\"THREAD TWO DONE SLEEPING\")\n print(db2.get('Laura'))\n \n\nos.remove('/tmp/test6.dbdb')\n#t1 = Thread(target=thread_function)\n#t2 = Thread(target=thread_function2)\n#t1.start()\n#t2.start()\n\nimport multiprocessing\np = multiprocessing.Process(target=thread_function) \np2 = multiprocessing.Process(target=thread_function2) \np.start()\np2.start()\nprint(\"I AM DONE\")\n\n\n#db = RedBlackTree.connect(\"/tmp/test6.dbdb\")\n#db.set(\"Laura\", \"Ware\")\n#print(\"HERE\")\n#db2 = RedBlackTree.connect(\"/tmp/test6.dbdb\")\n#print(db2.get(\"Laura\"))\n#print(\"HERE\")\n\nos.remove('/tmp/test6.dbdb')\ndb = RedBlackTree.connect(\"/tmp/test6.dbdb\")\ndb.set(\"Laura\", \"Ware\")\nprint(\"HERE\")\n\n\n\ndb2 = RedBlackTree.connect(\"/tmp/test6.dbdb\")\n#print(\"CONNECTED\")\nwith raises(KeyError):\n print(db2.get(\"Laura\"))\ndb.commit()\nprint(db2.get(\"Laura\"))\nprint(\"HERE\")\n\ndb.close()\ndb2.close()\n\nfrom portalocker.utils import Lock\nfrom portalocker import *\nalock = Lock(\"/tmp/test6.dbdb\", timeout=5)\n#with assertRaises(Exception): #LockException\n #print(\"HERE\")\nalock.acquire()\nprint(\"DONE\")\n\n\nfrom TimeseriesDB.MessageFormatting import *\nimport importlib\nimport unittest\nfrom pytest import raises\nimport numpy as np\nfrom TimeseriesDB.tsdb_error import *\nfrom TimeseriesDB import DatabaseServer\nfrom TimeseriesDB.MessageFormatting import * #Deserializer\nfrom Similarity.find_most_similar import find_most_similiar, sanity_check\nfrom TimeseriesDB.simsearch_init import initialize_simsearch_parameters\nfrom socketserver import BaseRequestHandler, ThreadingTCPServer, TCPServer\nfrom timeseries.ArrayTimeSeries import ArrayTimeSeries as ts\nimport threading\nfrom socket import socket, AF_INET, SOCK_STREAM\nimport sys\nfrom scipy.stats import norm\nimport multiprocessing\n\n\ndef query_1():\n #function to compute simsearch\n print(\"QUERY1\")\n s = socket(AF_INET, SOCK_STREAM)\n s.connect(('localhost', 20000))\n d2 = {'op':'simsearch_id','id':12,'n_closest':2,'courtesy':'please'}\n s2 = serialize(json.dumps(d2)) \n s.send(s2)\n msg = s.recv(8192)\n ds = Deserializer()\n ds.append(msg)\n ds.ready()\n response = 
ds.deserialize()\n print(response)\n s.close()\n \ndef query_2():\n #function to return timeseries from id\n print(\"QUERY2\")\n s = socket(AF_INET, SOCK_STREAM)\n s.connect(('localhost', 20000))\n d2 = {'op':'TSfromID','id':12,'courtesy':'please'}\n s2 = serialize(json.dumps(d2)) \n s.send(s2)\n msg = s.recv(8192)\n ds = Deserializer()\n ds.append(msg)\n ds.ready()\n response = ds.deserialize()\n print(response)\n s.close()\n\nTCPServer.allow_reuse_address = True\nserv = TCPServer(('', 20000), DatabaseServer)\nserv.data = initialize_simsearch_parameters()\nserv.deserializer = Deserializer() \nserv_thread = threading.Thread(target=serv.serve_forever)\nserv_thread.setDaemon(True)\nserv_thread.start() \n\n\np = multiprocessing.Process(target=query_1) \np2 = multiprocessing.Process(target=query_2) \np.start()\np2.start() \n\n\nserv.socket.close()\nserv.server_close()\nprint(\"DONE\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MTG/sms-tools
|
notebooks/E2-Sinusoids-and-DFT.ipynb
|
agpl-3.0
|
[
"Exercise 2: Sinusoids and the DFT\nDoing this exercise you will get a better understanding of the basic elements and operations that take place in the Discrete Fourier Transform (DFT). There are five parts: 1) Generate a sinusoid, 2) Generate a complex sinusoid, 3) Implement the DFT, 4) Implement the IDFT, and 5) Compute the magnitude spectrum of an input sequence.\nRelevant Concepts\nA real sinusoid in discrete time domain can be expressed by:\n\\begin{equation}\nx[n] = A\\cos(2 \\pi fnT + \\varphi)\n\\end{equation}\nwhere, $x$ is the array of real values of the sinusoid, $n$ is an integer value expressing the time index, $A$ is the amplitude value of the sinusoid, $f$ is the frequency value of the sinusoid in Hz, $T$ is the sampling period equal to $1/fs$, fs is the sampling frequency in Hz, and $\\varphi$ is the initial phase of the sinusoid in radians.\nA complex sinusoid in discrete time domain can be expressed by:\n\\begin{equation}\n\\bar{x}[n] = Ae^{j(\\omega nT + \\varphi)} = A\\cos(\\omega nT + \\varphi)+ j A\\sin(\\omega nT + \\varphi)\n\\end{equation}\nwhere, $\\bar{x}$ is the array of complex values of the sinusoid, $n$ is an integer value expressing the time index, $A$ is the amplitude value of the sinusoid, $e$ is the complex exponential number, $\\omega$ is the frequency of the sinusoid in radians per second (equal to $2 \\pi f$), $T$ is the sampling period equal $1/fs$, fs is the sampling frequency in Hz and $\\varphi$ is the initial phase of the sinusoid in radians.\nThe $N$ point DFT of a sequence of real values $x$ (a sound) can be expressed by:\n\\begin{equation}\nX[k] = \\sum_{n=0}^{N-1} x[n]e^{-j2 \\pi kn/N} \\hspace{1cm} k=0,...,N-1\n\\end{equation}\nwhere $n$ is an integer value expressing the discrete time index, $k$ is an integer value expressing the discrete frequency index, and $N$ is the length of the DFT.\nThe IDFT of a spectrum $X$ of length $N$ can be expressed by:\n\\begin{equation}\nx[n] = \\frac{1}{N} \\sum_{k=0}^{N-1} X[k]e^{j2 \\pi kn/N} \\hspace{1cm} n=0,...,N-1\n\\end{equation}\nwhere, $n$ is an integer value expressing the discrete time index, $k$ is an integer value expressing the discrete frequency index, and $N$ is the length of the spectrum $X$.\nThe magnitude of a complex spectrum $X$ is obtained by taking its absolute value: $|X[k]| $\nPart 1 - Generate a sinusoid\nThe function gen_sine() should generate a real sinusoid (use np.cos()) given its amplitude A, frequency f (Hz), initial phase phi (radians), sampling rate fs (Hz) and duration t (seconds). \nAll the input arguments to this function (A, f, phi, fs and t) are real numbers such that A, t and fs are positive, and fs > 2*f to avoid aliasing. The function should return a numpy array x of the generated sinusoid. \nUse the function cos of the numpy package to compute the sinusoidal values.",
"import numpy as np\n\n# E2 - 1.1: Complete function gen_sine()\n\ndef gen_sine(A, f, phi, fs, t):\n \"\"\"Generate a real sinusoid given its amplitude, frequency, initial phase, sampling rate, and duration.\n \n Args:\n A (float): amplitude of the sinusoid\n f (float): frequency of the sinusoid in Hz\n phi (float): initial phase of the sinusoid in radians\n fs (float): sampling frequency of the sinusoid in Hz\n t (float): duration of the sinusoid (is second)\n \n Returns:\n np.array: array containing generated sinusoid\n \n \"\"\"\n ### your code here\n\n ",
"If you use A=1.0, f = 10.0, phi = 1.0, fs = 50 and t = 0.1 as input to the function gen_sine() the output numpy array should be:\narray([ 0.54030231, -0.63332387, -0.93171798, 0.05749049, 0.96724906])\nTo generate a sinewave that you can hear, it should be longer and with a higher sampling rate. For example you can use A=1.0, f = 440.0, phi = 1.0, fs = 5000 and t = 0.5. To play it import the Ipython.display package and use ipd.display(ipd.Audio(data=x, rate=fs)).",
"# E2 - 1.2: Call the function gen_sine() with the values proposed above, plot and play the output sinusoid\n\nimport IPython.display as ipd\n\n### your code here\n",
"Part 2 - Generate a complex sinusoid\nThe gen_complex_sine() function should generate the complex sinusoid that is used in DFT computation of length N (samples), corresponding to the frequency index k. [Note that in the DFT we use the conjugate of this complex sinusoid.]\nThe amplitude of such a complex sinusoid is 1, the length is N, and the frequency in radians is 2*pi*k/N.\nThe input arguments to the function are two positive integers, k and N, such that k < N-1. The function should return c_sine, a numpy array of the complex sinusoid. Use the function exp() of the numpy package to compute the complex sinusoidal values.",
"# E2 - 2.1: Complete function the function gen_complex_sine()\n\ndef gen_complex_sine(k, N):\n \"\"\"Generate one of the complex sinusoids used in the DFT from its frequency index and the DFT lenght.\n \n Args:\n k (integer): frequency index of the complex sinusoid of the DFT\n N (integer) = length of complex sinusoid, DFT length, in samples\n \n Returns:\n np.array: array with generated complex sinusoid (length N)\n \n \"\"\"\n ### your code here\n",
"If you run the function gen_complex_sine() using k=1 and N=5, it should return the following numpy array:\narray([ 1. + 0.j, 0.30901699 + 0.95105652j, -0.80901699 + 0.58778525j, -0.80901699 - 0.58778525j, 0.30901699 - 0.95105652j])",
"# E2 - 2.2: Call gen_complex_sine() with the values suggested above and plot the real and imaginary parts of the \n# output complex sinusoid\n\n### your code here\n",
"Part 3 - Implement the discrete Fourier transform (DFT)\nThe function dft() should implement the discrete Fourier transform (DFT) equation given above. Given a sequence x of length N, the function should return its spectrum of length N with the frequency indexes ranging from 0 to N-1.\nThe input argument to the function is a numpy array x and the function should return a numpy array X, the DFT of x.",
"# E2 - 3.1: Complete the function dft()\n\ndef dft(x):\n \"\"\"Compute the DFT of a signal.\n \n Args:\n x (numpy array): input sequence of length N\n \n Returns:\n np.array: N point DFT of the input sequence x\n \"\"\"\n ## Your code here\n",
"If you run dft() using as input x = np.array([1, 2, 3, 4]), the function shoulds return the following numpy array:\narray([10.0 + 0.0j, -2. +2.0j, -2.0 - 9.79717439e-16j, -2.0 - 2.0j])\nNote that you might not get an exact 0 in the output because of the small numerical errors due to the limited precision of the data in your computer. Usually these errors are of the order 1e-15 depending on your machine.",
"# E2 - 3.2: Call dft() with the values suggested above and plot the real and imaginary parts of output spectrum\n\n### your code here\n",
"Part 4 - Implement the inverse discrete Fourier transform (IDFT)\nThe function idft() should implement the inverse discrete Fourier transform (IDFT) equation given above. Given a frequency spectrum X of length N, the function should return its IDFT x, also of length N. Assume that the frequency index of the input spectrum ranges from 0 to N-1.\nThe input argument to the function is a numpy array X of the frequency spectrum and the function should return a numpy array of the IDFT of X.\nRemember to scale the output appropriately.",
"# E2 - 4.1: Complete the function idft()\n\ndef idft(X):\n \"\"\"Compute the inverse-DFT of a spectrum.\n \n Args:\n X (np.array): frequency spectrum (length N)\n \n Returns:\n np.array: N point IDFT of the frequency spectrum X\n \n \"\"\"\n ### Your code here\n",
"If you run idft() with the input X = np.array([1, 1, 1, 1]), the function should return the following numpy array: \narray([ 1.00000000e+00 +0.00000000e+00j, -4.59242550e-17 +5.55111512e-17j, 0.00000000e+00 +6.12323400e-17j, 8.22616137e-17 +8.32667268e-17j])\nNotice that the output numpy array is essentially [1, 0, 0, 0]. Instead of exact 0 we get very small numerical values of the order of 1e-15, which can be ignored. Also, these small numerical errors are machine dependent and might be different in your case.\nIn addition, an interesting test of the IDFT function can be done by providing the output of the DFT of a sequence as the input to the IDFT. See if you get back the original time domain sequence.",
"# E2 - 4.2: Plot input spectrum (real and imaginary parts) suggested above, call idft(), and plot output signal \n# (real and imaginary parts)\n\n### Your code here\n",
"Part 5 - Compute the magnitude spectrum\nThe function gen_mag_spectrum() should compute the magnitude spectrum of an input sequence x of length N. The function should return an N point magnitude spectrum with frequency index ranging from 0 to N-1.\nThe input argument to the function is a numpy array x and the function should return a numpy array of the magnitude spectrum of x.",
"# E2 - 5.1: Complete the function gen_mag_spec()\n\ndef gen_mag_spec(x):\n \"\"\"Compute magnitude spectrum of a signal.\n \n Args:\n x (np.array): input sequence of length N\n \n Returns:\n np.array: magnitude spectrum of the input sequence x (length N)\n \n \"\"\"\n ### your code here\n",
"If you run gen_mag_spec() using as input x = np.array([1, 2, 3, 4]), it should return the following numpy array:\narray([10.0, 2.82842712, 2.0, 2.82842712])\nFor a more realistic use of gen_mag_spec() use as input a longer signal, such as x = np.cos(2*np.pi*200.0*np.arange(512)/1000), and to get a visual representation of the input and output, import the matplotlib.pyplot package and use plt.plot(x) and plt.plot(X).",
"import IPython.display as ipd\nimport matplotlib.pyplot as plt\n\n# E2 - 5.2: Plot input cosine signal suggested above, call gen_mag_spec(), and plot the output result\n\n### Your code here\n\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bwinkel/cygrid
|
notebooks/04_sightline_gridding.ipynb
|
gpl-3.0
|
[
"Sightline gridding\nWe demonstrate the gridding of selected sightlines with cygrid. This can be particularly useful if you have some high-resolution data such as QSO absorption spectra and want to get accurate foreground values from a dataset with lower angular resolution.\nWe start by adjusting the notebook settings.",
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'",
"We attempt to limit our dependencies as much as possible, but astropy and healpy needs to be available on your machine if you want to re-run the calculations. We can highly recommend anaconda as a scientific python platform.",
"from __future__ import print_function\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport healpy as hp\nfrom astropy.io import fits\nfrom astropy.utils.misc import NumpyRNGContext\n\nimport cygrid",
"Create dummy data\nThe properties of the map are given by the ordering and the nside of the map. For more details, check the paper by Gorski et al. (2005).",
"NSIDE = 128\nNPIX = hp.nside2npix(NSIDE)",
"The data are just random draws from the standard normal distribution. For the weights, we choose uniform weighting. The coordinates can be easily calculated with healpy.",
"# data and weights\nwith NumpyRNGContext(0):\n # make sure to have \"predictable\" random numbers\n input_data = np.random.randn(NPIX)\n\n# coordinates\ntheta, phi = hp.pix2ang(NSIDE, np.arange(NPIX))\nlons = np.rad2deg(phi)\nlats = 90. - np.rad2deg(theta)",
"The pixel size for this NPIX is:",
"print('pixel size: {:.1f}\"'.format(3600 * hp.nside2resol(NSIDE)))",
"A quick look confirms that our data looks just as expected.",
"hp.mollview(input_data, xsize=300)",
"Gridding\nWe are now interested in the values of this map at a couple of given positions. It wouldn't make sense to use cygrid at all, if we were just interested in the values of the map at the given positions. Even when the positions are not exactly aligned with the HEALPix pixel centers, employing some interpolation routine would do a good job.\nBut let's assume that we would want to compare the values with another data set, whose angular resolution is much worse. Then it is reasonable to down-sample (i.e., lower the angular resolution by smoothing with a Gaussian kernel) our HEALPix map before extracting the sight-line values. With cygrid's sight-line gridder, this is done only for the vicinity of the requested positions, which can save a lot of computing time (only for large NSIDE, because healpy's smoothing function is very fast for small and moderate NSIDE due to the use of FFTs). cygrid would be at true advantage for most other projections, though.\nIn order to compare the results with healpy's smoothing routine (see below), we will use HEALPix pixel center coordinates without loss of generality.",
"with NumpyRNGContext(0):\n target_hpx_indices = np.random.randint(0, NPIX, 5)\n\ntheta, phi = hp.pix2ang(NSIDE,target_hpx_indices)\ntarget_lons = np.rad2deg(phi)\ntarget_lats = 90. - np.rad2deg(theta)\n\nprint('{:>8s} {:>8s}'.format('glon', 'glat'))\nfor glon, glat in zip(target_lons, target_lats):\n print('{:8.4f} {:8.4f}'.format(glon, glat))",
"We initiate the gridder by specifying the target sightlines.",
"gridder = cygrid.SlGrid(target_lons, target_lats)",
"The gridding kernel is of key importance for the entire gridding process. cygrid allows you to specify the shape of the kernel (e.g. elliptical Gaussian or tapered sinc) and its size.\nIn our example, we use a symmetrical Gaussian (i.e. the major and minor axis of the kernel are identical). In that case, we need to furthermore specify kernelsize_sigma, the sphere_radius up to which the kernel will be computed, and the maximum acceptable healpix resolution for which we recommend kernelsize_sigma/2.\nWe refer to section 3.5 of the paper ('a minimal example') for a short discussion of the kernel parameters.",
"kernelsize_fwhm = 1. # 1 degree\n# see https://en.wikipedia.org/wiki/Full_width_at_half_maximum\nkernelsize_sigma = kernelsize_fwhm / np.sqrt(8 * np.log(2))\nsphere_radius = 4. * kernelsize_sigma\n\ngridder.set_kernel(\n 'gauss1d',\n (kernelsize_sigma,),\n sphere_radius,\n kernelsize_sigma / 2.\n )",
"After the kernel has been set, we perform the actual gridding by calling grid() with the coordinates and the data.",
"gridder.grid(lons, lats, input_data)",
"To get the gridded data, we simply call get_datacube().",
"sightlines = gridder.get_datacube()",
"Finally, we get a list of our gridded sightlines within the chosen aperture.\nWe can compare this with the healpy smoothing operation:",
"smoothed_map = hp.sphtfunc.smoothing(\n input_data,\n fwhm=np.radians(kernelsize_fwhm),\n )\nsmoothed_data = smoothed_map[target_hpx_indices]\n\nprint('{:>8s} {:>8s} {:>10s} {:>10s}'.format(\n 'glon', 'glat', 'cygrid', 'healpy')\n )\nfor t in zip(\n target_lons, target_lats,\n sightlines, smoothed_data,\n ):\n print('{:8.4f} {:8.4f} {:10.6f} {:10.6f}'.format(*t))",
"Note, that it is expected that the two methods differ somewhat, because they are based on very different techniques (healpy transforms the map to harmonic space and convolves the harmonic coefficients $a_{lm}$ with the beam, which is also transformed to harmonic space)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dacr26/CompPhys
|
01_00_numerical_differentiation.ipynb
|
mit
|
[
"A primer on numerical differentiation\nIn order to numerically evaluate a derivative $y'(x)=dy/dx$ at point $x_0$, we approximate is by using finite differences:\nTherefore we find: $$\\begin{eqnarray}\n&& dx \\approx \\Delta x &=&x_1-x_0, \\\n&& dy \\approx \\Delta y &=&y_1-y_0 = y(x_1)-y(x_0) = y(x_0+\\Delta_x)-y(x_0),\\end{eqnarray}$$\nThen we re-write the derivative in terms of discrete differences as:\n$$\\frac{dy}{dx} \\approx \\frac{\\Delta y}{\\Delta x}$$\nExample\nLet's look at the accuracy of this approximation in terms of the interval $\\Delta x$. In our first example we will evaluate the derivative of $y=x^2$ at $x=1$.",
"dx = 1.\nx = 1.\nwhile(dx > 1.e-10):\n dy = (x+dx)*(x+dx)-x*x\n d = dy / dx\n print(\"%6.0e %20.16f %20.16f\" % (dx, d, d-2.))\n dx = dx / 10.\n ",
"Why is it that the sequence does not converge? This is due to the round-off errors in the representation of the floating point numbers. To see this, we can simply type:",
"((1.+0.0001)*(1+0.0001)-1)",
"Let's try using powers of 1/2",
"dx = 1.\nx = 1.\nwhile(dx > 1.e-10):\n dy = (x+dx)*(x+dx)-x*x\n d = dy / dx\n print(\"%6.0e %20.16f %20.16f\" % (dx, d, d-2.))\n dx = dx / 2.",
"In addition, one could consider the midpoint difference, defined as:\n$$ dy \\approx \\Delta y = y(x_0+\\frac{\\Delta_x}{2})-y(x_0-\\frac{\\Delta_x}{2}).$$\nFor a more complex function we need to import it from math. For instance, let's calculate the derivative of $sin(x)$ at $x=\\pi/4$, including both the forward and midpoint differences.",
"from math import sin, sqrt, pi\ndx = 1.\nwhile(dx > 1.e-10):\n x = pi/4.\n d1 = sin(x+dx) - sin(x); #forward\n d2 = sin(x+dx*0.5) - sin(x-dx*0.5); # midpoint\n d1 = d1 / dx;\n d2 = d2 / dx;\n print(\"%6.0e %20.16f %20.16f %20.16f %20.16f\" % (dx, d1, d1-sqrt(2.)/2., d2, d2-sqrt(2.)/2.) )\n dx = dx / 2.",
"A more in-depth discussion about round-off erros in numerical differentiation can be found <a href=\"http://www.uio.no/studier/emner/matnat/math/MAT-INF1100/h10/kompendiet/kap11.pdf\">here</a>\nSpecial functions in numpy\nnumpy provides a simple method diff() to calculate the numerical derivatives of a dataset stored in an array by forward differences. The function gradient() will calculate the derivatives by midpoint (or central) difference, that provides a more accurate result.",
"%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot\n\ny = lambda x: x*x\n\nx1 = np.arange(0,10,1)\nx2 = np.arange(0,10,0.1)\n\ny1 = np.gradient(y(x1), 1.)\nprint y1\n\npyplot.plot(x1,np.gradient(y(x1),1.),'r--o');\npyplot.plot(x1[:x1.size-1],np.diff(y(x1))/np.diff(x1),'b--x'); ",
"Notice above that gradient() uses forward and backward differences at the two ends.",
"pyplot.plot(x2,np.gradient(y(x2),0.1),'b--o');",
"More discussion about numerical differenciation, including higher order methods with error extrapolation can be found <a href=\"http://young.physics.ucsc.edu/115/diff.pdf\">here</a>. \nThe module scipy also includes methods to accurately calculate derivatives:",
"from scipy.misc import derivative\n\ny = lambda x: x**2\n\ndx = 1.\nx = 1.\n\nwhile(dx > 1.e-10):\n d = derivative(f, x, dx, n=1, order=3)\n print(\"%6.0e %20.16f %20.16f\" % (dx, d, d-2.))\n dx = dx / 10.",
"One way to improve the roundoff errors is by simply using the decimal package",
"from decimal import Decimal\n\ndx = Decimal(\"1.\")\nwhile(dx >= Decimal(\"1.e-10\")):\n x = Decimal(\"1.\")\n dy = (x+dx)*(x+dx)-x*x\n d = dy / dx\n print(\"%6.0e %20.16f %20.16f\" % (dx, d, d-Decimal(\"2.\")))\n dx = dx / Decimal(\"10.\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/teachpyx
|
_doc/notebooks/python/serialisation_protobuf.ipynb
|
mit
|
[
"Sérialisation avec protobuf\nprotobuf optimise la sérialisation de deux façons. Elle accélère l'écriture et la lecture des données et permet aussi un accès rapide à une information précise dans désérialiser les autres. Elle réalise cela en imposant un schéma strict de données.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Schéma\nOn récupère l'exemple du tutorial.",
"schema = \"\"\"\nsyntax = \"proto2\";\n\npackage tutorial;\n\nmessage Person {\n required string name = 1;\n required int32 id = 2;\n optional string email = 3;\n\n enum PhoneType {\n MOBILE = 0;\n HOME = 1;\n WORK = 2;\n }\n\n message PhoneNumber {\n required string number = 1;\n optional PhoneType type = 2 [default = HOME];\n }\n\n repeated PhoneNumber phones = 4;\n}\n\nmessage AddressBook {\n repeated Person people = 1;\n}\n\"\"\"",
"Compilation\nIl faut d'abord récupérer le compilateur. Cela peut se faire depuis le site de protobuf ou sur Linux (Ubuntu/Debian) apt-get install protobuf-compiler pour obtenir le programme protoc.",
"import google.protobuf as gp\nversion = gp.__version__\nif version == \"3.5.2.post1\":\n version = \"3.5.1\"\nversion\n\nimport sys, os\n\nif sys.platform.startswith(\"win\"):\n url = \"https://github.com/google/protobuf/releases/download/v{0}/protoc-{0}-win32.zip\".format(version)\n name = \"protoc-{0}-win32.zip\".format(version)\n exe = 'protoc.exe'\nelse:\n url = \"https://github.com/google/protobuf/releases/download/v{0}/protoc-{0}-linux-x86_64.zip\".format(version)\n exe = 'protoc'\n name = \"protoc-{0}-linux-x86_64.zip\".format(version)\n\nprotoc = os.path.join(\"bin\", exe)\nif not os.path.exists(name):\n from pyquickhelper.filehelper import download\n try:\n download(url)\n except Exception as e:\n raise Exception(\"Unable to download '{0}'\\nERROR\\n{1}\".format(url, e))\nelse:\n print(name)\n\nif not os.path.exists(protoc):\n from pyquickhelper.filehelper import unzip_files\n unzip_files(name,where_to='.')\n\nif not os.path.exists(protoc):\n raise FileNotFoundError(protoc)",
"On écrit le format sur disque.",
"with open('schema.proto', 'w') as f:\n f.write(schema)",
"Et on peut compiler.",
"from pyquickhelper.loghelper import run_cmd\ncmd = '{0} --python_out=. schema.proto'.format(protoc)\ntry:\n out, err = run_cmd(cmd=cmd, wait=True)\nexcept PermissionError as e:\n # Sous Linux si ne marche pas avec bin/protoc, on utilise\n # protoc directement à supposer que le package\n # protobuf-compiler a été installé.\n if not sys.platform.startswith(\"win\"):\n protoc = \"protoc\"\n cmd = '{0} --python_out=. schema.proto'.format(protoc)\n try:\n out, err = run_cmd(cmd=cmd, wait=True)\n except Exception as e:\n mes = \"CMD: {0}\".format(cmd)\n raise Exception(\"Unable to use {0}\\n{1}\".format(protoc, mes)) from e\n else:\n mes = \"CMD: {0}\".format(cmd)\n raise Exception(\"Unable to use {0}\\n{1}\".format(protoc, mes)) from e\nprint(\"\\n----\\n\".join([out, err]))",
"Un fichier a été généré.",
"[_ for _ in os.listdir(\".\") if '.py' in _]\n\nwith open('schema_pb2.py', 'r') as f:\n content = f.read()\nprint(content[:1000])",
"Import du module créé\nPour utliser protobuf, il faut importer le module créé.",
"import schema_pb2",
"On créé un enregistrement.",
"person = schema_pb2.Person()\nperson.id = 1234\nperson.name = \"John Doe\"\nperson.email = \"jdoe@example.com\"\nphone = person.phones.add()\nphone.number = \"555-4321\"\nphone.type = schema_pb2.Person.HOME\n\nperson",
"Sérialisation en chaîne de caractères",
"res = person.SerializeToString()\ntype(res), res\n\n%timeit person.SerializeToString()\n\npers = schema_pb2.Person.FromString(res)\npers\n\npers = schema_pb2.Person()\npers.ParseFromString(res)\npers\n\n%timeit schema_pb2.Person.FromString(res)\n\n%timeit pers.ParseFromString(res)",
"Plusieurs chaînes de caractères",
"db = []\n\nperson = schema_pb2.Person()\nperson.id = 1234\nperson.name = \"John Doe\"\nperson.email = \"jdoe@example.com\"\nphone = person.phones.add()\nphone.number = \"555-4321\"\nphone.type = schema_pb2.Person.HOME\ndb.append(person)\n\nperson = schema_pb2.Person()\nperson.id = 5678\nperson.name = \"Johnette Doette\"\nperson.email = \"jtdoet@example2.com\"\nphone = person.phones.add()\nphone.number = \"777-1234\"\nphone.type = schema_pb2.Person.MOBILE\ndb.append(person)\n\nimport struct\nfrom io import BytesIO\nbuffer = BytesIO()\nfor p in db:\n size = p.ByteSize()\n buffer.write(struct.pack('i', size))\n buffer.write(p.SerializeToString())\nres = buffer.getvalue()\nres\n\nfrom google.protobuf.internal.decoder import _DecodeVarint32\ndb2 = []\nbuffer = BytesIO(res)\nn = 0\nwhile True:\n bsize = buffer.read(4)\n if len(bsize) == 0:\n # C'est fini.\n break\n size = struct.unpack('i', bsize)[0]\n data = buffer.read(size)\n p = schema_pb2.Person.FromString(data)\n db2.append(p) \n\ndb2[0], db2[1]",
"Sérialisation JSON",
"from google.protobuf.json_format import MessageToJson\n\nprint(MessageToJson(pers))\n\n%timeit MessageToJson(pers)\n\nfrom google.protobuf.json_format import Parse as ParseJson\njs = MessageToJson(pers)\nres = ParseJson(js, message=schema_pb2.Person())\nres\n\n%timeit ParseJson(js, message=schema_pb2.Person())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jrmontag/Data-Science-45min-Intros
|
pandas-201/functional_ish_pandas.ipynb
|
unlicense
|
[
"from IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport seaborn as sns\nimport pandas as pd\nimport requests\n\nfrom tweet_parser.tweet import Tweet\nfrom gapi import gnipapi\nfrom gapi.gnipapi import *",
"This will lean heavily on Tom Augspurger's excellent series on Modern Pandas.\nQuote:\nMethod chaining, where you call methods on an object one after another, is in vogue at the moment. It's always been a style of programming that's been possible with pandas, and over the past several releases, we've added methods that enable even more chaining.\n\n- assign (0.16.0): For adding new columns to a DataFrame in a chain (inspired by dplyr's mutate)\n- pipe (0.16.2): For including user-defined methods in method chains.\n- rename (0.18.0): For altering axis names (in additional to changing the actual labels as before).\n\n- Window methods (0.18): Took the top-level pd.rolling\\_\\* and pd.expanding\\_\\* functions and made them NDFrame methods with a groupby-like API.\n- Resample (0.18.0) Added a new groupby-like API\n- .where/mask/Indexers accept Callables (0.18.1): In the next release you'll be able to pass a callable to the indexing methods, to be evaluated within the DataFrame's context (like .query, but with code instead of strings).\n\nMy scripts will typically start off with large-ish chain at the start getting things into a manageable state. It's good to have the bulk of your munging done with right away so you can start to do Science™:\n\nPart of the goal will be to develop different coding styles with Pandas, moving from a script-ish, verbose approach to a piped style that flows well with discrete cleaning operations grouped into single functions. This flows very well into using pyspark's dataframe as well, as pyspark requires that kind of style and there is a great deal of overlap with pandas' dataframe methods in pyspark. \nMethod chains are a popular method in programming these days, with the rise of functional languages that can change function composition to be more readable. Examples of this in various languages:\n```.scala\n def fooNotIndent : List[Int] = (1 to 100).view.map { _ + 3 }.filter { _ > 10 }.flatMap { table.get }.take(3).toList\ndef fooIndent: List[Int] =\n (1 to 100)\n .view\n .map { _ + 3 }\n .filter { _ > 10 }\n .flatMap { table.get }\n .take(3)\n .toList\n```\nor comparing (from TA's post)\ntumble_after(\n broke(\n fell_down(\n fetch(went_up(jack_jill, \"hill\"), \"water\"),\n jack),\n \"crown\"),\n \"jill\"\n)\nwith (from TA's post)\n```\njack_jill %>%\n went_up(\"hill\") %>%\n fetch(\"water\") %>%\n fell_down(\"jack\") %>%\n broke(\"crown\") %>%\n tumble_after(\"jill\")\n```\nor\njack_jill \n .pipe(went(\"hill\", \"up\"))\n .pipe(fetch(\"water\"))\n .pipe(fell_down(\"jack\"))\n .pipe(broke(\"crown\"))\n .pipe(tumble_after(\"jill\")) \nThere are several cases I'd like to address in this session -\n\nEffective pandas usage\nInteractive development strategies\nBalancing exploration and reproducability\nJoining heterogenous datatypes\n\nThis might be a lot for a single session, but hey. 
\nLet's start off with a problem that we might have that we can try to answer:\nWhat airports or flights have issues with delays?\nI am purposefully choosing a dataset that we will have difficulties in joining with twitter data, and also to illustrate a point...\nLet's follow Tom's use of the BTS airline delay dataset, which requires a bit of work to obtain and parse through.\nIn data work, I have a loose set of semantics to describe stages in a workflow:\nResearch and understanding\n\nunderstand question of interest\nunderstand the data sources available\nevaluate requirements from stakeholders\nthink about available methods and timeframes for implementation\njudge final output (serialized data, production model, figures / slides / report, etc)\n\ndata gathering / pull\n\na single stage of collecting data from some source, be it scraping a website, some form of database, reading from a csv or other binary file, etc.\n\ntrivial cleaning\n\ndealing with various data sources returns data in formats that you may not want, be it with weird variable names, transformations from CamelCase to camel_case, or other very early-stages ops that you define to ease the rest of the process. E.g., renaming columns with spaces in them to be _ delineated. This can often be integrated into the basic data pull step, but should be explicit for reproducibility purposes.\n\nNon-trivial cleaning / preprocessing\n\nThere might be missing data, or multiple or non-standard representations of NULL values (99s, strings 'nan', etc), and finding and handling them is crucial\nfor JSON or similar formats, nested data structures might be present\ndatetime parsing and conversion if important to underlying analysis\nreshaping data to facilitate analysis (wide to long, stacked to unstacked, etc).\nfrequency normalization for time-series data\ntext cleaning or tokenization\njoining additional data sources with current data (which has already been gathered and pulled)\n\nexploratory analysis\n\nwith \"clean\" data, you can begin to poke at questions, from basic summarization and counting to faceted charts if it makes sense for the data\nmay include branching by reformatting your data into a different set (single operations to aggregated buckets, denormalized time-series stats, etc)\nlots of plotting and descriptions\noften can loop back to previous stages to gather more data or stabilize workflow when finding good features or ways of processing data\n\nmodeling / prediction\n\npotential input to ML functions, which includes sampling and so forth\noutput could be predicted values for a database or downstream operation, figures and text for a report, etc.\n\nIt's important to note that these stages are NEVER LINEAR, even though it almost always looks like it to the end consumer of posts like this. Each stage can be non-trivial for a host of reasons, and choices made in the early stages have strong effects in the rest of the process. Lots of iteration might be needed in each stage, and managing technical debt here can make each iteration faster.\nGiven these tasks, it seems logical to define our code in similar stages, though I have no precise guides for how to do this. In our example for today, we can start by grabbing some data from the web. We'll follow TA's flights data grab and later add some tweet data via the Gnip api.\nLet's play\nWhat type of questions do we want to answer? Let's say we have a project that will investigate customer sentiment around airports / airlines. 
Perhaps some questions of interest are:\n\nAre customer moods affected heavily by flight delays?\nAre customers more likely to tweet due to flight delays?\nDo these effects vary by airport, airlines, or other factors?\n\nWhat type of data will we need? Probably some detailed data about flight delays, preferably flight-level data that includes information about the carrier, airport, destination, etc. We'll also need tweet data that reasonably matches these criteria, of course. In this context, we'll probably be satisfied with simple exploration.\nI chose these questions partially due to the great series of posts by Augspurger that illustrate working with complex data, such as flight delays. :) \nData Gathering\nFor most notebook purposes, I like to include all imports at the top of a notebook, even if the code they enable will not be introduced until a later point. I like to keep this cell apart from others in the notebook for easy maintenance.",
"import os\nimport zipfile\n\nimport requests\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"In the original example from Tom, the code is written out as such:\n```.python\nheaders = {\n 'Referer': 'https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time',\n 'Origin': 'https://www.transtats.bts.gov',\n 'Content-Type': 'application/x-www-form-urlencoded',\n}\nparams = (\n ('Table_ID', '236'),\n ('Has_Group', '3'),\n ('Is_Zipped', '0'),\n)\ndata = <TRUNCATED>\nos.makedirs('data', exist_ok=True)\ndest = \"data/flights.csv.zip\"\nif not os.path.exists(dest):\n r = requests.post('https://www.transtats.bts.gov/DownLoad_Table.asp',\n headers=headers, params=params, data=data, stream=True)\nwith open(\"data/flights.csv.zip\", 'wb') as f:\n for chunk in r.iter_content(chunk_size=102400): \n if chunk:\n f.write(chunk)\n\n```\nGiven out focus today, let's wrap all initial data pulling into a function for logical separation.",
"def maybe_pull_airport_data():\n \"\"\"\n lightly modified from TA's post.\n \n \"\"\"\n headers = {\n 'Referer': 'https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time',\n 'Origin': 'https://www.transtats.bts.gov',\n 'Content-Type': 'application/x-www-form-urlencoded',\n }\n\n params = (\n ('Table_ID', '236'),\n ('Has_Group', '3'),\n ('Is_Zipped', '0'),\n )\n \n # query string to be sent. can modify the 'where' dates to change the size of data returned.\n\n data = \"UserTableName=On_Time_Performance&DBShortName=On_Time&RawDataTable=T_ONTIME&sqlstr=+SELECT+FL_DATE%2CUNIQUE_CARRIER%2CAIRLINE_ID%2CTAIL_NUM%2CFL_NUM%2CORIGIN_AIRPORT_ID%2CORIGIN_AIRPORT_SEQ_ID%2CORIGIN_CITY_MARKET_ID%2CORIGIN%2CORIGIN_CITY_NAME%2CDEST_AIRPORT_ID%2CDEST_AIRPORT_SEQ_ID%2CDEST_CITY_MARKET_ID%2CDEST%2CDEST_CITY_NAME%2CCRS_DEP_TIME%2CDEP_TIME%2CDEP_DELAY%2CTAXI_OUT%2CWHEELS_OFF%2CWHEELS_ON%2CTAXI_IN%2CCRS_ARR_TIME%2CARR_TIME%2CARR_DELAY%2CCANCELLED%2CCANCELLATION_CODE%2CCARRIER_DELAY%2CWEATHER_DELAY%2CNAS_DELAY%2CSECURITY_DELAY%2CLATE_AIRCRAFT_DELAY+FROM++T_ONTIME+WHERE+YEAR%3D2017&varlist=FL_DATE%2CUNIQUE_CARRIER%2CAIRLINE_ID%2CTAIL_NUM%2CFL_NUM%2CORIGIN_AIRPORT_ID%2CORIGIN_AIRPORT_SEQ_ID%2CORIGIN_CITY_MARKET_ID%2CORIGIN%2CORIGIN_CITY_NAME%2CDEST_AIRPORT_ID%2CDEST_AIRPORT_SEQ_ID%2CDEST_CITY_MARKET_ID%2CDEST%2CDEST_CITY_NAME%2CCRS_DEP_TIME%2CDEP_TIME%2CDEP_DELAY%2CTAXI_OUT%2CWHEELS_OFF%2CWHEELS_ON%2CTAXI_IN%2CCRS_ARR_TIME%2CARR_TIME%2CARR_DELAY%2CCANCELLED%2CCANCELLATION_CODE%2CCARRIER_DELAY%2CWEATHER_DELAY%2CNAS_DELAY%2CSECURITY_DELAY%2CLATE_AIRCRAFT_DELAY&grouplist=&suml=&sumRegion=&filter1=title%3D&filter2=title%3D&geo=All%A0&time=January&timename=Month&GEOGRAPHY=All&XYEAR=2017&FREQUENCY=1&VarDesc=Year&VarType=Num&VarDesc=Quarter&VarType=Num&VarDesc=Month&VarType=Num&VarDesc=DayofMonth&VarType=Num&VarDesc=DayOfWeek&VarType=Num&VarName=FL_DATE&VarDesc=FlightDate&VarType=Char&VarName=UNIQUE_CARRIER&VarDesc=UniqueCarrier&VarType=Char&VarName=AIRLINE_ID&VarDesc=AirlineID&VarType=Num&VarDesc=Carrier&VarType=Char&VarName=TAIL_NUM&VarDesc=TailNum&VarType=Char&VarName=FL_NUM&VarDesc=FlightNum&VarType=Char&VarName=ORIGIN_AIRPORT_ID&VarDesc=OriginAirportID&VarType=Num&VarName=ORIGIN_AIRPORT_SEQ_ID&VarDesc=OriginAirportSeqID&VarType=Num&VarName=ORIGIN_CITY_MARKET_ID&VarDesc=OriginCityMarketID&VarType=Num&VarName=ORIGIN&VarDesc=Origin&VarType=Char&VarName=ORIGIN_CITY_NAME&VarDesc=OriginCityName&VarType=Char&VarDesc=OriginState&VarType=Char&VarDesc=OriginStateFips&VarType=Char&VarDesc=OriginStateName&VarType=Char&VarDesc=OriginWac&VarType=Num&VarName=DEST_AIRPORT_ID&VarDesc=DestAirportID&VarType=Num&VarName=DEST_AIRPORT_SEQ_ID&VarDesc=DestAirportSeqID&VarType=Num&VarName=DEST_CITY_MARKET_ID&VarDesc=DestCityMarketID&VarType=Num&VarName=DEST&VarDesc=Dest&VarType=Char&VarName=DEST_CITY_NAME&VarDesc=DestCityName&VarType=Char&VarDesc=DestState&VarType=Char&VarDesc=DestStateFips&VarType=Char&VarDesc=DestStateName&VarType=Char&VarDesc=DestWac&VarType=Num&VarName=CRS_DEP_TIME&VarDesc=CRSDepTime&VarType=Char&VarName=DEP_TIME&VarDesc=DepTime&VarType=Char&VarName=DEP_DELAY&VarDesc=DepDelay&VarType=Num&VarDesc=DepDelayMinutes&VarType=Num&VarDesc=DepDel15&VarType=Num&VarDesc=DepartureDelayGroups&VarType=Num&VarDesc=DepTimeBlk&VarType=Char&VarName=TAXI_OUT&VarDesc=TaxiOut&VarType=Num&VarName=WHEELS_OFF&VarDesc=WheelsOff&VarType=Char&VarName=WHEELS_ON&VarDesc=WheelsOn&VarType=Char&VarName=TAXI_IN&VarDesc=TaxiIn&VarType=Num&VarName=CRS_ARR_TIME&VarDesc=CRSArrTime&VarType=Char&VarName=ARR_TIME&VarDesc=ArrTime&V
arType=Char&VarName=ARR_DELAY&VarDesc=ArrDelay&VarType=Num&VarDesc=ArrDelayMinutes&VarType=Num&VarDesc=ArrDel15&VarType=Num&VarDesc=ArrivalDelayGroups&VarType=Num&VarDesc=ArrTimeBlk&VarType=Char&VarName=CANCELLED&VarDesc=Cancelled&VarType=Num&VarName=CANCELLATION_CODE&VarDesc=CancellationCode&VarType=Char&VarDesc=Diverted&VarType=Num&VarDesc=CRSElapsedTime&VarType=Num&VarDesc=ActualElapsedTime&VarType=Num&VarDesc=AirTime&VarType=Num&VarDesc=Flights&VarType=Num&VarDesc=Distance&VarType=Num&VarDesc=DistanceGroup&VarType=Num&VarName=CARRIER_DELAY&VarDesc=CarrierDelay&VarType=Num&VarName=WEATHER_DELAY&VarDesc=WeatherDelay&VarType=Num&VarName=NAS_DELAY&VarDesc=NASDelay&VarType=Num&VarName=SECURITY_DELAY&VarDesc=SecurityDelay&VarType=Num&VarName=LATE_AIRCRAFT_DELAY&VarDesc=LateAircraftDelay&VarType=Num&VarDesc=FirstDepTime&VarType=Char&VarDesc=TotalAddGTime&VarType=Num&VarDesc=LongestAddGTime&VarType=Num&VarDesc=DivAirportLandings&VarType=Num&VarDesc=DivReachedDest&VarType=Num&VarDesc=DivActualElapsedTime&VarType=Num&VarDesc=DivArrDelay&VarType=Num&VarDesc=DivDistance&VarType=Num&VarDesc=Div1Airport&VarType=Char&VarDesc=Div1AirportID&VarType=Num&VarDesc=Div1AirportSeqID&VarType=Num&VarDesc=Div1WheelsOn&VarType=Char&VarDesc=Div1TotalGTime&VarType=Num&VarDesc=Div1LongestGTime&VarType=Num&VarDesc=Div1WheelsOff&VarType=Char&VarDesc=Div1TailNum&VarType=Char&VarDesc=Div2Airport&VarType=Char&VarDesc=Div2AirportID&VarType=Num&VarDesc=Div2AirportSeqID&VarType=Num&VarDesc=Div2WheelsOn&VarType=Char&VarDesc=Div2TotalGTime&VarType=Num&VarDesc=Div2LongestGTime&VarType=Num&VarDesc=Div2WheelsOff&VarType=Char&VarDesc=Div2TailNum&VarType=Char&VarDesc=Div3Airport&VarType=Char&VarDesc=Div3AirportID&VarType=Num&VarDesc=Div3AirportSeqID&VarType=Num&VarDesc=Div3WheelsOn&VarType=Char&VarDesc=Div3TotalGTime&VarType=Num&VarDesc=Div3LongestGTime&VarType=Num&VarDesc=Div3WheelsOff&VarType=Char&VarDesc=Div3TailNum&VarType=Char&VarDesc=Div4Airport&VarType=Char&VarDesc=Div4AirportID&VarType=Num&VarDesc=Div4AirportSeqID&VarType=Num&VarDesc=Div4WheelsOn&VarType=Char&VarDesc=Div4TotalGTime&VarType=Num&VarDesc=Div4LongestGTime&VarType=Num&VarDesc=Div4WheelsOff&VarType=Char&VarDesc=Div4TailNum&VarType=Char&VarDesc=Div5Airport&VarType=Char&VarDesc=Div5AirportID&VarType=Num&VarDesc=Div5AirportSeqID&VarType=Num&VarDesc=Div5WheelsOn&VarType=Char&VarDesc=Div5TotalGTime&VarType=Num&VarDesc=Div5LongestGTime&VarType=Num&VarDesc=Div5WheelsOff&VarType=Char&VarDesc=Div5TailNum&VarType=Char\"\n\n os.makedirs('data', exist_ok=True)\n dest = \"data/flights.csv.zip\"\n\n if not os.path.exists(dest):\n r = requests.post('https://www.transtats.bts.gov/DownLoad_Table.asp',\n headers=headers, params=params, data=data, stream=True)\n\n with open(\"data/flights.csv.zip\", 'wb') as f:\n for chunk in r.iter_content(chunk_size=102400): \n if chunk:\n f.write(chunk)\n \n\n zf = zipfile.ZipFile(\"data/flights.csv.zip\")\n fp = zf.extract(zf.filelist[0].filename, path='data/')\n df = (pd\n .read_csv(fp, parse_dates=[\"FL_DATE\"])\n .rename(columns=str.lower) #note this takes a callable\n )\n return df\n ",
"Our function may be a bit sloppy from a DRY standpoint, but let's be serious: there is no need for arguments in this function and no other piece of our analysis will ever touch the fields inside of here. You could argue that it could take a flexibile filename option, but again, for the purposes of this demo, that might be overkill, but refactoring the single function to take a filename argument would take a minutue or two, now that the core logic is stable. This gives us a high-level intro point for our demo, a single call to the function.\nImagine this was going to go into a much larger set of functions or a library of some sort -- the function can be moved to a python file and work out of the box, which can simply your notebook or code at the risk of making more dependencies for users and disrupting the flow of analysis for a technical consumer.\nThat was a lot of crap - let's get back to the data.",
"flights = maybe_pull_airport_data()\n\nflights.head()\nflights.shape\nflights.info()",
"Digression on pandas\nWoo, we have a moderate-sized dataframe with a lot of columns, many which are NAN or non-intuitive. Let's define a few indices on the data, which assign metadata to each row and allow for fancy selection and operations along the way.",
"hdf = flights.set_index([\"unique_carrier\", \"origin\", \"dest\", \"tail_num\", \"fl_date\"]).sort_index()\nhdf[hdf.columns[:4]].head()",
"I think this clears up some thinking about the rows -- indexing by the flight operator, the airport origin, airport destination, the plan id, and the date of the flight make it clear what each row is.\nSelecting the data out in useful ways is somewhat straightforward, using the .loc semantics, which allow for label-oriented indexing in a dataframe.",
"hdf.loc[[\"AA\"], [\"dep_delay\"]].head()",
"What if we wanted to get ANY flight from denver to albuquerque? Pandas IndexSlice is a brilliant help here.\nThe semantics work as follows:\n: is \"include all labels from this level of the index\nhdf.loc[pd.IndexSlice[:, [\"DEN\"], [\"ABQ\"]], [\"dep_delay\"]]\ntranslates to\nhdf.loc[pd.IndexSlice[ALL CARRIERS, origin=[\"DEN\"], dest=[\"ABQ\"], ALL_TAILS, ALL_DATES], [\"dep_delay\"]]",
"(hdf.loc[pd.IndexSlice[:, [\"DEN\"], [\"ABQ\"]],\n [\"dep_delay\"]]\n .sort_values(\"dep_delay\", ascending=False)\n .head()\n)",
"we can also use the powerful query function, which allows a limited vocabulary to be executed on a dataframe and is wildly useful for slightly more clear operations.",
"(hdf\n .query(\"origin == 'DEN' and dest == 'ABQ'\")\n .loc[:, \"dep_delay\"]\n .to_frame()\n .sort_values(\"dep_delay\", ascending=False)\n .head()\n)",
"These days, I prefer query most of the time, particularly for exploration, but .loc with explicit indices can be far faster in many cases.\nback to gathering and inspection\nSo, at this stage, it seems reasonable to examine some of the data and see what might be problematic or needs further work.\nWe can see that several columns that should be datetimes are not \"dep_time\", etc. and that the City names are not really city names but City, State pairs. In Toms' series, the cleaning operations are done in a different post, but I will copy them here to fit in our framework. I think these functions can be safelycounted as advanced preprocessing and cleaning.",
"flights.head()\nflights.dtypes\n\ndef extract_city_name(df):\n '''\n Chicago, IL -> Chicago for origin_city_name and dest_city_name. From Augsperger.\n '''\n cols = ['origin_city_name', 'dest_city_name']\n city = df[cols].apply(lambda x: x.str.extract(\"(.*), \\w{2}\", expand=False))\n df = df.copy()\n df[['origin_city_name', 'dest_city_name']] = city\n return df\n\n\ndef time_to_datetime(df, columns):\n '''\n Combine all time items into datetimes. From Augsperger.\n\n 2014-01-01,0914 -> 2014-01-01 09:14:00\n '''\n df = df.copy()\n def converter(col):\n timepart = (col.astype(str)\n .str.replace('\\.0$', '') # NaNs force float dtype\n .str.pad(4, fillchar='0'))\n return pd.to_datetime(df['fl_date'].astype(\"str\") + ' ' +\n timepart.str.slice(0, 2) + ':' +\n timepart.str.slice(2, 4),\n errors='coerce')\n df[columns] = df[columns].apply(converter)\n return df",
"Note that both methods accept a pandas.DataFrame and return a pandas.DataFrame . This is critical to our upcoming methodology, and for portability to spark.\nIt seems obvious, but writing code that operates on immutable data structures is wildly useful for data processing. DataFrames are not immutable, but can be treated as such, as many operations either implicitly return a copy or methods can be written as such. With our methods, we can now create a new top-level function that handles our preprocessing.\nIt's not too often that your major performance bottleneck in pandas is copying dataframes.\nAnyway, we can now integrate our simple gathering method with some of the cleaning methods for a new top-level entry for our exploration.",
"def read_and_process_flights_data():\n drop_cols = [\"unnamed: 32\", \"security_delay\", \"late_aircraft_delay\",\n \"nas_delay\", \"origin_airport_id\", \"origin_city_market_id\",\n \"taxi_out\", \"wheels_off\", \"wheels_on\", \"crs_arr_time\", \"crs_dep_time\",\n \"carrier_delay\"]\n df = (maybe_pull_airport_data()\n .rename(columns=str.lower)\n .drop(drop_cols, axis=1)\n .pipe(extract_city_name)\n .pipe(time_to_datetime, ['dep_time', 'arr_time'])\n .assign(fl_date=lambda x: pd.to_datetime(x['fl_date']),\n dest=lambda x: pd.Categorical(x['dest']),\n origin=lambda x: pd.Categorical(x['origin']),\n tail_num=lambda x: pd.Categorical(x['tail_num']),\n unique_carrier=lambda x: pd.Categorical(x['unique_carrier']),\n cancellation_code=lambda x: pd.Categorical(x['cancellation_code'])))\n return df\n\n\n## this will take a few minutues with the full 2017 data; far faster with a month's sample\nflights = read_and_process_flights_data()\n\nflights.tail()\nflights.dtypes\nflights.shape",
"Exploring data\nFrom Tom's post, here's a long method chain that does an awful lot of work to generate a plot of flights per day for the top carriers. I'll break this down a bit after.",
"(flights\n .dropna(subset=['dep_time', 'unique_carrier'])\n .loc[flights['unique_carrier'].isin(flights['unique_carrier']\n .value_counts()\n .index[:5])]\n .set_index('dep_time')\n .groupby(['unique_carrier', pd.TimeGrouper(\"D\")])\n [\"fl_num\"]\n .count()\n .unstack(0)\n .fillna(0)\n .rename_axis(\"Flights per Day\", axis=1)\n .plot()\n)\n",
"If we broke this out like many people do, we might end up with code like this, where each step is broken into a variable.",
"# gets the carriers with the most traffic, hacking with the index. We use this for other ops. \ndf_clean = flights.dropna(subset=[\"dep_time\", \"unique_carrier\"])\ntop_carriers = flights[\"unique_carrier\"].value_counts().index[:5]\ndf_clean = df_clean.query(\"unique_carrier in @top_carriers\")\ndf_clean = df_clean.set_index(\"dep_time\")\n\ncarriers_by_hour = (df_clean\n .groupby(['unique_carrier',\n pd.TimeGrouper(\"H\")])[\"fl_num\"]\n .count())\ncarriers_df = carriers_by_hour.unstack(0)\ncarriers_df = carriers_df.fillna(0)\ncarriers_flights_per_day = (carriers_df\n .rolling(24)\n .sum()\n .rename_axis(\"Flights per Day\", axis=1))\n\ncarriers_flights_per_day.plot()",
"Naming things is hard. Given that pandas has exteremely expressive semantics and nearly all analytic methods return a fresh dataframe or series, it makes it straightforward to chain many ops together. This style will lend itself well to spark and should be familiar to those of you who have worked with Scala or other functional languages.\nIf the chains get very verbose or hard to follow, break them up and put them in a function, where you can keep it all in one place. Try to be very specific about naming your functions (remember, naming things is hard, functions are no different).\nIn an exploratory context, you might continue adding methods onto your chain until you can expand and continue until you get to your chart or end stage goal. In some cases, saving some exploratory work to varibles is great. \nLet's briefly talk about the .assign operator. This operation returns a new column for a dataframe, where the new column can be a constant, some like-indexed numpy array or series, a callable that references the dataframe in question, etc. It's very powerful in method chains and also very useful for keeping your namespace clean.\nthe semantics of \ndf.assign(NEW_COLUMN_NAME=lambda df: df[\"column\"] + df[\"column2\"]\ncan be read as\nassign a column named \"NEW_COLUMN_NAME\" to my referenced dataframe that is the sum of \"column\" and \"column2\". In the below example , the lambda references the datetime object of the departure time column to extract the hour, which gives us a convenient categorical value for examination.\nThis is similar to R's mutate function in the dyplr world.\nNote -- the top_carriers variable above is a good example of something we might want to keep around, and I'll use it several times in the post.",
"#taken from Augsperger\n(flights[['fl_date', 'unique_carrier', 'tail_num', 'dep_time', 'dep_delay']]\n .dropna()\n .query(\"unique_carrier in @top_carriers\")\n .assign(hour=lambda x: x['dep_time'].dt.hour)\n #.query('5 <= dep_delay < 600')\n .pipe((sns.boxplot, 'data'), 'hour', 'dep_delay')\n)",
"This enables rapid exploration, and within the interactive context, allows you to copy a cell and change single lines to modify your results. \nA heatmap might be a nice way to visualize categories in this data, and the assign syntax allows creating those categoricals seamless.",
"(flights[['fl_date', \"unique_carrier\", 'dep_time', 'dep_delay']]\n .dropna()\n .query(\"unique_carrier in @top_carriers\")\n .assign(hour=lambda x: x.dep_time.dt.hour)\n .assign(day=lambda x: x.dep_time.dt.dayofweek)\n .query('-1 < dep_delay < 600')\n .groupby([\"day\", \"hour\"])[\"dep_delay\"]\n .median()\n .unstack()\n .pipe((sns.heatmap, 'data'))\n)\n\n(flights[['fl_date', 'unique_carrier', 'dep_time', 'dep_delay']]\n .query(\"unique_carrier in @top_carriers\")\n .dropna()\n .assign(hour=lambda x: x.dep_time.dt.hour)\n .assign(day=lambda x: x.dep_time.dt.dayofweek)\n #.query('0 <= dep_delay < 600')\n .groupby([\"unique_carrier\", \"day\"])[\"dep_delay\"]\n .mean()\n .unstack()\n .sort_values(by=0)\n .pipe((sns.heatmap, 'data'))\n)",
"What about some other exploration? Pandas alows for some nifty ways of slicing up data to flexibly apply basic operations.\nWhat if we want to \"center\" the carrier's delay time at an airport by the mean airport delay? This is a case where we assigning variables might be useful. We'll limit our analysis to the top carrriers / airports, and save some variables for further interactive use.",
"top_airport_codes = flights[\"origin\"].value_counts().to_frame().head(5).index\ntop_airport_cities = flights[\"origin_city_name\"].value_counts().head(5).index\ntop_airport_cities\ntop_airport_codes\n\ngrand_airport_delay = (flights\n .query(\"unique_carrier in @top_carriers\")\n .query(\"origin in @top_airport_codes\")\n .groupby(\"origin\")[\"dep_delay\"] \n .mean()\n .dropna()\n .to_frame()\n)\n\nairport_delay = (flights\n.query(\"unique_carrier in @top_carriers\")\n.query(\"origin in @top_airport_codes\")\n .set_index(\"fl_date\")\n .groupby([pd.TimeGrouper(\"H\"), \"origin\"])[\"dep_delay\"] \n .mean()\n .to_frame()\n)\n\ncarrier_delay = (flights\n.query(\"unique_carrier in @top_carriers\")\n.query(\"origin in @top_airport_codes\")\n .set_index(\"fl_date\")\n .groupby([pd.TimeGrouper(\"H\"), \"origin\", \"unique_carrier\"])[\"dep_delay\"] \n .mean()\n .to_frame()\n)\n\nairport_delay.head()\ncarrier_delay.head()\n\ngrand_airport_delay\nairport_delay.unstack().head()\ncarrier_delay.unstack(1).head()",
"Pandas handles alignment along axes, so we can do an operation along an axis with another dataframe with similar index labels.",
"(carrier_delay\n .unstack(1)\n .div(grand_airport_delay.unstack())\n .head()\n \n)\n(carrier_delay\n .unstack(1)\n .div(airport_delay.unstack())\n .head()\n \n)",
"Putting that together, we can then get ratios of flight delays to the overall airport delay (grand mean or daily delays).",
"(carrier_delay\n .unstack(1)\n .div(airport_delay.unstack())\n .stack()\n .reset_index()\n .assign(day=lambda x: x[\"fl_date\"].dt.dayofweek)\n .set_index(\"fl_date\")\n .groupby([\"unique_carrier\", \"day\"])\n .mean()\n .dropna()\n .unstack()\n [\"dep_delay\"]\n .pipe((sns.heatmap, 'data'))\n)\n\n(carrier_delay\n .unstack(1)\n .div(grand_airport_delay.unstack())\n .stack()\n .reset_index()\n .assign(day=lambda x: x[\"fl_date\"].dt.dayofweek)\n .set_index(\"fl_date\")\n .groupby([\"unique_carrier\", \"day\"])\n .mean()\n .dropna()\n .unstack()\n [\"dep_delay\"]\n .pipe((sns.heatmap, 'data'))\n)\n\n(carrier_delay\n .unstack(1)\n .subtract(airport_delay.unstack())\n .stack()\n .reset_index()\n .assign(day=lambda x: x[\"fl_date\"].dt.dayofweek)\n .groupby([\"unique_carrier\", \"day\"])\n .mean()\n .dropna()\n .unstack()\n [\"dep_delay\"]\n .pipe((sns.heatmap, 'data'))\n)",
"So, now that we have some working flight data, let's poke at getting some tweets.\nTweets\nI've recently refactored the python gnip search api to be a bit more flexible, including making each search return a lazy stream. There are also some tools for programatically generated\nthe 'city name' column and the airport abbreviation are likely sources of help for finding tweets related to flights / airport data. We'll use those and define a small function to help quickly generate our rules, which are somewhat simplistic but should serve as a reasonable start.",
"def generate_rules(codes, cities):\n base_rule = \"\"\"\n ({code} OR \"{city} airport\") (flying OR flight OR plane OR jet)\n -(football OR\n basketball OR\n baseball OR\n party)\n -is:retweet\n \"\"\"\n rules = []\n for code, city in zip(list(codes), list(cities)):\n _rule = base_rule.format(code=code, city=city.lower())\n rule = gen_rule_payload(_rule,\n from_date=\"2017-01-01\",\n to_date=\"2017-07-31\",\n max_results=500)\n rules.append(rule)\n return rules\n\ngnip_rules = generate_rules(top_airport_codes, top_airport_cities)\ngnip_rules[0]",
"the gnip api has some functions to handle our connection information. Please ensure that the environment variable GNIP_PW is set with your password. If it isn't already set, you can set it here.",
"# os.environ[\"GNIP_PW\"] = \"\"\n\nusername = \"agonzales@twitter.com\"\nsearch_api = \"fullarchive\"\naccount_name = \"shendrickson\"\nendpoint_label = \"ogformat.json\"\nog_search_endpoint = gen_endpoint(search_api,\n account_name,\n endpoint_label,\n count_endpoint=False)\nog_args = {\"username\": username,\n \"password\": os.environ[\"GNIP_PW\"],\n \"url\": og_search_endpoint}",
"In our get_tweets function, we wrap some of the functionality of our result stream to collect specific data from tweets into a dataframe.",
"def get_tweets(result_stream, label):\n fields = [\"id\", \"created_at_datetime\",\n \"all_text\", \"hashtags\", \"user_id\",\n \"user_mentions\", \"screen_name\"]\n \n tweet_extracts = []\n for tweet in result_stream.start_stream():\n attrs = [tweet.__getattribute__(field) for field in fields]\n tweet_extracts.append(attrs)\n \n result_stream.end_stream()\n df = pd.DataFrame(tweet_extracts, columns=fields).assign(airport=label)\n return df\n",
"We can test this with a single rule.",
"rs = ResultStream(**og_args, rule_payload=gnip_rules[0], max_results=1000)\n\ntweets = get_tweets(result_stream=rs, label=top_airport_codes[0])\n\ntweets.head()\ntweets.shape",
"Now let's collect tweets for each airport. It might be a hair overkill, but I'll wrap the process up in a function, so we have a similar high point for grabbing our inital data. It will take a minute to grab this data, and for the time being, i'm not going to save it to disk.",
"def pull_tweet_data(gnip_rules, results_per_rule=25000):\n streams = [ResultStream(**og_args,\n rule_payload=rp,\n max_results=results_per_rule)\n for rp in gnip_rules]\n\n tweets = [get_tweets(rs, airport)\n for rs, airport\n in zip(streams, top_airport_codes)]\n \n return pd.concat(tweets)\n\ntweets = pull_tweet_data(gnip_rules)",
"Given our new data, let's do some quick exploration and cleaning.",
"tweets.shape\ntweets.head()\n\n(tweets\n .set_index(\"created_at_datetime\")\n .groupby([pd.TimeGrouper(\"D\")])\n .size()\n .sort_values()\n .tail()\n)\n\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n(tweets\n .set_index(\"created_at_datetime\")\n .groupby([pd.TimeGrouper(\"D\")])\n .size()\n .plot()\n)\n \nax.annotate(\"united senselessly\\nbeating a passenger\",\n xytext=(\"2017-02-01\", 1200),\n xy=(\"2017-04-04\", 900),\n arrowprops=dict(facecolor=\"black\", shrink=0.05))",
"The number of tweets per day by airport rule is a bit odd:",
"(tweets\n .drop([\"id\", \"all_text\"], axis=1)\n .set_index(\"created_at_datetime\")\n .groupby([pd.TimeGrouper(\"H\"), \"airport\"])\n [\"user_id\"]\n .count()\n .unstack()\n .fillna(0)\n .rolling(24).sum()\n .plot()\n)\n ",
"So lets look at what is going on with LAX:",
"tweets[\"airport\"].value_counts()\ntweets.groupby(\"airport\")[\"created_at_datetime\"].min().sort_values().tail(1)\nmin_lax_time = tweets.groupby(\"airport\")[\"created_at_datetime\"].min().sort_values().tail(1)[0]",
"Far, far more people tweeting from LAX than from other airports or the number of extra tweets were dominated by the spikes in the data. Given it's size, this makes some sense, but i would question my rules a bit. Let's even out these samples a hair, by selecting tweets only from when LAX exisited",
"(tweets\n.drop([\"id\", \"all_text\"], axis=1)\n.query(\"created_at_datetime >= @min_lax_time\")\n .set_index(\"created_at_datetime\")\n .groupby([pd.TimeGrouper(\"D\"), \"airport\"])\n [\"user_id\"]\n .count()\n .unstack()\n .fillna(0)\n #.rolling(7).mbean()\n .plot()\n)\n ",
"Moving on, let's do some more things with our tweets, like parse out the mentions from the dict structure to something more useful.\nWe'll be making a function that takes a dataframe and returns one, so we can use it in the .pipe method.",
"def parse_mentions(df):\n extract_mentions = lambda x: [d[\"name\"] for d in x]\n mentions = (pd.DataFrame([x for x in df[\"user_mentions\"]\n .apply(extract_mentions)])\n .loc[:, [0, 1]]\n .rename(columns={0: \"mention_1\", 1: \"mention_2\"})\n )\n \n return (pd.merge(df,\n mentions,\n left_index=True,\n right_index=True)\n .drop(\"user_mentions\", axis=1)\n )\n \n\nairline_name_code_dict = {\n \"Southwest Airlines\": \"WN\",\n \"Delta\": \"DL\",\n \"American Airlines\": \"AA\",\n \"United Airlines\": \"UA\",\n \"Sky\": \"Sk\"\n}",
"now, what about labeling a row with strictly American Airlines mentions? We could do this a few ways...",
"(tweets\n .pipe(parse_mentions)\n .assign(AA=lambda df: (df[\"mention_1\"] == \"American Airlines\") |\n (df[\"mention_2\"] == \"American Airlines\"))\n .query(\"AA == True\")\n .head()\n \n\n)\n\n(tweets\n .pipe(parse_mentions)\n .query(\"mention_1 == 'American Airlines' or mention_2 == 'American Airlines'\")\n .shape\n)\n\n(tweets\n .pipe(parse_mentions)\n .query(\"mention_1 == 'American Airlines' or mention_2 == 'American Airlines'\")\n .query(\"created_at_datetime >= @min_lax_time\")\n .set_index(\"created_at_datetime\")\n .groupby([pd.TimeGrouper(\"D\"), \"airport\"])\n [\"user_id\"]\n .count()\n .unstack()\n .fillna(0)\n .rolling(7).mean()\n .plot()\n)\n ",
"Moving on a bit, what about a simple sentiment model? We'll grab a word database that simply matches words to a value and use it as a simple baseline.",
"from nltk.tokenize import TweetTokenizer\n\ndef get_affin_dict():\n url = \"https://raw.githubusercontent.com/fnielsen/afinn/master/afinn/data/AFINN-111.txt\"\n affin_words = (pd\n .read_table(url,\n sep='\\t',\n header=None)\n .rename(columns={0: \"word\", 1: \"score\"})\n .to_dict(orient=\"list\")\n )\n affin_words = {k: v for k, v in\n zip(affin_words[\"word\"],\n affin_words[\"score\"])}\n return affin_words\n\n\n\ntknizer = TweetTokenizer()\n\ndef score_sentiment(words):\n words = set(words) \n union = words & affin_words.keys()\n return sum([affin_words[w] for w in union])\n \ndef score_tweet(tweet_text):\n return score_sentiment(tknizer.tokenize(tweet_text))\n\naffin_words = get_affin_dict()\n\n(tweets\n .assign(sentiment=lambda df: df[\"all_text\"].apply(score_tweet))\n [\"sentiment\"]\n .plot.hist(bins=20))\n\n(tweets\n .assign(sentiment=lambda df: df[\"all_text\"].apply(score_tweet))\n .pipe(lambda df: pd.concat([df.query(\"sentiment <= -5\").head(),\n df.query(\"sentiment >= 5\").head()]))\n)",
"seems semi-reasonable to me!\nLet's look at a timeseries of sentiment overall:",
"(tweets\n .assign(sentiment=lambda df: df[\"all_text\"].apply(score_tweet))\n .set_index(\"created_at_datetime\")\n .groupby([pd.TimeGrouper(\"D\")])\n [\"sentiment\"]\n .mean()\n .rolling(2).mean()\n .plot()\n)\n ",
"Since we have our reasonable sentiment and mentions data, let's assign it to a fresh dataframe and continue looking. \nNote that a full data pull step at this point might look like\n.python\ntweets = (pull_tweet_data(gnip_rules)\n .pipe(parse_mentions)\n .assign(sentiment=lambda df: df[\"all_text\"].apply(score_tweet))\n )",
"tweets = (tweets\n .pipe(parse_mentions)\n .assign(sentiment=lambda df: df[\"all_text\"].apply(score_tweet))\n )",
"And let's do some basic exploration of our tweet data.",
"(tweets\n .groupby([\"airport\"])\n [\"sentiment\"]\n .mean()\n .sort_values()\n .plot.barh()\n)\n\n(tweets\n .assign(day=lambda x: x.created_at_datetime.dt.dayofweek)\n .assign(hour=lambda x: x.created_at_datetime.dt.hour)\n .groupby([\"day\", \"hour\"])\n [\"sentiment\"]\n .mean()\n .unstack()\n .pipe((sns.heatmap, 'data') )\n)\n\n(tweets\n .assign(hour=lambda x: x.created_at_datetime.dt.hour)\n .assign(day=lambda x: x.created_at_datetime.dt.dayofweek)\n .groupby([\"airport\", \"day\"])\n [\"sentiment\"]\n .mean()\n .unstack()\n .sort_values(by=0)\n .pipe((sns.heatmap, 'data') )\n)\n\n(tweets\n .assign(hour=lambda x: x.created_at_datetime.dt.hour)\n .groupby([\"airport\", \"hour\"])\n [\"sentiment\"]\n .mean()\n .unstack()\n .sort_values(by=0)\n .pipe((sns.heatmap, 'data'))\n)\n\n(tweets\n .assign(day=lambda x: x.created_at_datetime.dt.dayofweek)\n .query(\"airport == 'ATL' and day == 2\")\n .sample(10)\n .all_text\n )\n\n(tweets\n .assign(day=lambda x: x.created_at_datetime.dt.dayofweek)\n .query(\"airport == 'ATL' and day == 5\")\n .sample(10)\n .all_text\n )\n\ntweet_sent_airport = (tweets\n .set_index(\"created_at_datetime\")\n .groupby([pd.TimeGrouper(\"D\"), \"airport\"])[\"sentiment\"]\n .mean()\n)\n\ndelay_sent = (pd.concat([airport_delay, tweet_sent_airport],\n axis=1,\n names=(\"day\", \"airport\"))\n .sort_index())\n\nfor code in top_airport_codes:\n (delay_sent\n .loc[pd.IndexSlice[:, code], :]\n .plot(subplots=True, title=\"Sentiment and delay time at {}\".format(code)))\n\ndelay_sent.loc[pd.IndexSlice[:, \"ATL\"], :].corr()\n\ndelay_sent.groupby(level=1).corr().T.loc[\"sentiment\"].unstack()[\"dep_delay\"].sort_values()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Amarchuk/2FInstability
|
notebooks/2f/.ipynb_checkpoints/instabilities-checkpoint.ipynb
|
gpl-3.0
|
[
"Неустойчивости\nТут будут все функции, имеющие отношение к неустойчивостям, чтобы не копировать их каждый раз.\nТесты на эти функции в соседнем ноутбуке.",
"from IPython.display import HTML\nfrom IPython.display import Image\nfrom PIL import Image as ImagePIL\n\n%pylab\n%matplotlib inline",
"Одножидкостный критерий\nУстойчиво, когда > 1:\n$$Q_g = \\frac{\\Sigma_g^{cr}}{\\Sigma_g}=\\frac{\\kappa c_g}{\\pi G \\Sigma_g}$$\n$$Q_s = \\frac{\\Sigma_s^{cr}}{\\Sigma_s}=\\frac{\\sigma_R}{\\sigma_R^{min}}=\\frac{\\kappa \\sigma_R}{3.36 G \\Sigma_s}$$",
"G = 4.32 #гравитационная постоянная в нужных единицах\n\ndef Qs(epicycl=None, sigma=None, star_density=None):\n '''Вычисление безразмерного параметра Тумре для звездного диска. \n Зависит от плотности звезд, дисперсии скоростей и эпициклической частоты.'''\n return epicycl * sigma / (3.36 * G * star_density)\n\n\ndef Qg(epicycl=None, sound_vel=None, gas_density=None):\n '''Вычисление безразмерного параметра Тумре для газового диска. \n Зависит от плотности газа и эпициклической частоты, скорости звука в газе.'''\n return epicycl * sound_vel / (math.pi * G * gas_density)",
"Двухжидкостный критерий\nКинетическое приближение:\n$$\\frac{1}{Q_{\\mathrm{eff}}}=\\frac{2}{Q_{\\mathrm{s}}}\\frac{1}{\\bar{k}}\\left[1-e^{-\\bar{k}^{2}}I_{0}(\\bar{k}^{2})\\right]+\\frac{2}{Q_{\\mathrm{g}}}s\\frac{\\bar{k}}{1+\\bar{k}^{2}s^{2}}>1\\,$$\nГидродинамическое приближение:\n$$\\frac{2\\,\\pi\\, G\\, k\\,\\Sigma_{\\mathrm{s}}}{\\kappa+k^{2}\\sigma_{\\mathrm{s}}}+\\frac{2\\,\\pi\\, G\\, k\\,\\Sigma_{\\mathrm{g}}}{\\kappa+k^{2}c_{\\mathrm{g}}}>1$$ или $$\\frac{1}{Q_{\\mathrm{eff}}}=\\frac{2}{Q_{\\mathrm{s}}}\\frac{\\bar{k}}{1+\\bar{k}^{2}}+\\frac{2}{Q_{\\mathrm{g}}}s\\frac{\\bar{k}}{1+\\bar{k}^{2}s^{2}}>1$$ для безразмерного волнового числа ${\\displaystyle \\bar{k}\\equiv\\frac{k\\,\\sigma_{\\mathrm{s}}}{\\kappa}},\\, s=c/\\sigma$",
"from scipy.special import i0e, i1e\n\ndef inverse_hydro_Qeff_from_k(dimlK, Qg=None, Qs=None, s=None):\n return 2.*dimlK / Qs / (1 + dimlK**2) + 2*s*dimlK / Qg / (1 + dimlK**2 * s**2)\n\ndef inverse_kinem_Qeff_from_k(dimlK, Qg=None, Qs=None, s=None):\n return 2. / dimlK / Qs * (1 - i0e(dimlK ** 2)) + 2*s*dimlK / Qg / (1 + dimlK**2 * s**2)",
"Нахождение максимума:\nНайти максимум функции в гидродинамическом приближении вообще просто - это многочлен\n$$\\frac{2}{Q_{\\mathrm{s}}}\\frac{\\bar{k}}{1+\\bar{k}^{2}}+\\frac{2}{Q_{\\mathrm{g}}}s\\frac{\\bar{k}}{1+\\bar{k}^{2}s^{2}}>1$$\nи у него можно найти максимум методами Sympy, взяв производную:",
"from sympy import Symbol, solve\n\ndef findInvHydroQeffSympy(Qs, Qg, s):\n '''Решаем уравнение deriv()=0 чтобы найти максимум функции в гидродинамическом приближении.'''\n k = Symbol('k') #solve for complex because it may returns roots as 1.03957287978471 + 0.e-20*I \n foo = 2./Qs*k/(1+k**2) + 2/Qg*s*k/(1+k**2 * s**2)\n foo2 = 2./Qs * (1-k)*(1+k * s**2)**2 + 2/Qg*s*(1-k*s**2)*(1+k)**2\n roots = solve(foo2.simplify(), k)\n roots = [np.sqrt(float(abs(re(r)))) for r in roots]\n _tmp = [foo.evalf(subs={k:r}) for r in roots]\n max_val = max(_tmp)\n return (roots[_tmp.index(max_val)], max_val)\n\ndef findInvHydroQeffBrute(Qs, Qg, s, krange):\n '''Находим максимум функции в гидродинамическом приближении перебором по сетке.'''\n _tmp = [inverse_hydro_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in krange]\n max_val = max(_tmp)\n root_for_max = krange[_tmp.index(max_val)]\n if abs(root_for_max-krange[-1]) < 0.5:\n print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)\n return (root_for_max, max_val)\n\nfrom scipy.optimize import brentq\n\ndef findInvHydroQeffBrentq(Qs, Qg, s, krange):\n '''Решение уравнения deriv(9) = 0 для нахождения максимума исходной функции. Запускается brentq на исходной сетке,\n в случае если на концах сетки разные знаки функции (промежуток содержит корень),\n затем выбираются лучшие корни, после чего ищется, какой их них дает максимум. Возвращается только этот корень.'''\n grid = krange\n args = [Qs, Qg, s]\n signs = [derivTwoFluidHydroQeff(x, *args) for x in grid]\n signs = map(lambda x: x / abs(x), signs)\n roots = []\n for i in range(0, signs.__len__() - 1):\n if signs[i] * signs[i + 1] < 0:\n roots.append(brentq(lambda x: derivTwoFluidHydroQeff(x, *args), grid[i], grid[i + 1]))\n original = [inverse_hydro_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in roots]\n root_for_max = roots[original.index(max(original))]\n if abs(root_for_max-krange[-1]) < 0.5:\n print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)\n return (root_for_max, max(original))\n\ndef derivTwoFluidHydroQeff(dimlK, Qs, Qg, s):\n '''Производная по \\bar{k} от левой части (9) для того, чтобы найти максимум.'''\n part1 = (1 - dimlK ** 2) / (1 + dimlK ** 2) ** 2\n part3 = (1 - (dimlK * s) ** 2) / (1 + (dimlK * s) ** 2) ** 2\n return (2 * part1 / Qs) + (2 * s * part3 / Qg)",
"Теперь кинематическое приближение:\n$$\\frac{2}{Q_{\\mathrm{s}}}\\frac{1}{\\bar{k}}\\left[1-e^{-\\bar{k}^{2}}I_{0}(\\bar{k}^{2})\\right]+\\frac{2}{Q_{\\mathrm{g}}}s\\frac{\\bar{k}}{1+\\bar{k}^{2}s^{2}}>1\\,$$\nТут сложнее, честно уже не решить. остается два способа - брутфорсом и brentq, производная известна.",
"def findInvKinemQeffBrute(Qs, Qg, s, krange):\n '''Находим максимум функции в кинематическом приближении перебором по сетке.'''\n _tmp = [inverse_kinem_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in krange]\n max_val = max(_tmp)\n root_for_max = krange[_tmp.index(max_val)]\n if abs(root_for_max-krange[-1]) < 0.5:\n print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)\n return (root_for_max, max_val)\n\ndef findInvKinemQeffBrentq(Qs, Qg, s, krange):\n '''Решение уравнения deriv(13) = 0 для нахождения максимума исходной функции. Запускается brentq на исходной сетке,\n в случае если на концах сетки разные знаки функции (промежуток содержит корень),\n затем выбираются лучшие корни, после чего ищется, какой их них дает максимум. Возвращается только этот корень.'''\n grid = krange\n args = [Qs, Qg, s]\n signs = [derivTwoFluidKinemQeff(x, *args) for x in grid]\n signs = map(lambda x: x / abs(x), signs)\n roots = []\n for i in range(0, signs.__len__() - 1):\n if signs[i] * signs[i + 1] < 0:\n roots.append(brentq(lambda x: derivTwoFluidKinemQeff(x, *args), grid[i], grid[i + 1]))\n original = [inverse_kinem_Qeff_from_k(l, Qg=Qg, Qs=Qs, s=s) for l in roots]\n root_for_max = roots[original.index(max(original))]\n if abs(root_for_max-krange[-1]) < 0.5:\n print 'WARNING! For Qs={} Qg={} s={} root of max near the max of k-range'.format(Qs, Qg, s)\n return (root_for_max, max(original))\n\n\ndef derivTwoFluidKinemQeff(dimlK, Qs, Qg, s):\n '''Производная по \\bar{k} от левой части (13) для того, чтобы найти максимум. Коррекция за ассимптотику производится\n с помощью встроенных функций бесселя, нормированных на exp.'''\n part1 = (1 - i0e(dimlK ** 2)) / (-dimlK ** 2)\n part2 = (2 * dimlK * i0e(dimlK ** 2) - 2 * dimlK * i1e(dimlK ** 2)) / dimlK\n part3 = (1 - (dimlK * s) ** 2) / (1 + (dimlK * s) ** 2) ** 2\n return 2 * (part1 + part2) / Qs + 2 * s * part3 / Qg\n\ndef calc_Qeffs_(Qss=None, Qgs=None, s_params=None, verbose=False):\n '''считает сразу все Qeff в кинематическом'''\n invQeff = []\n for Qs, Qg, s in zip(Qss, Qgs, s_params):\n qeff = findInvKinemQeffBrentq(Qs, Qg, s, np.arange(0.01, 60000., 1.))\n if verbose:\n print 'Qs = {:2.2f}; Qg = {:2.2f}; s = {:2.2f}; Qeff = {:2.2f}'.format(Qs, Qg, s, 1./qeff[1])\n invQeff.append(qeff[1])\n return invQeff\n\ndef calc_Qeffs(r_g_dens=None, gas_dens=None, epicycl=None, \n sound_vel=None, star_density=None, sigma=None, verbose=False):\n '''считаем модельное Qeff в кинематическом'''\n Qgs = []\n Qss = []\n s_params = []\n for r, gd, sd in zip(r_g_dens, gas_dens, star_density):\n Qgs.append(Qg(epicycl=epicycl(r), sound_vel=sound_vel, gas_density=gd))\n Qss.append(Qs(epicycl=epicycl(r), sigma=sigma(r), star_density=sd))\n s_params.append(sound_vel/sigma(r))\n return calc_Qeffs_(Qss=Qss, Qgs=Qgs, s_params=s_params, verbose=verbose)\n\ndef plot_k_dependency(Qs=None, Qg=None, s=None, krange=None, ax=None, label=None, color=None):\n '''рисуется зависимость между волновыми числами и двухжидкостной неустойчивостью, показан максимум'''\n TFcriteria = []\n _tmp = [inverse_kinem_Qeff_from_k(dimlK, Qg=Qg, Qs=Qs, s=s) for dimlK in krange]\n root_for_max, max_val = findInvKinemQeffBrentq(Qs, Qg, s, krange)\n ax.plot(krange, _tmp, '-', label=label, color=color)\n ax.plot(root_for_max, max_val, 'o', color=color)\n\ndef plot_k_dependencies(r_g_dens=None, gas_dens=None, epicycl=None, \n sound_vel=None, star_density=None, sigma=None, krange=None, show=False):\n '''рисуем много зависимостей сразу'''\n Qgs, Qss, s_params = [], [], []\n maxk = 30.\n 
fig = plt.figure(figsize=[16,8])\n ax = plt.gca()\n colors = cm.rainbow(np.linspace(0, 1, len(r_g_dens)))\n for r, gd, sd, color in zip(r_g_dens, gas_dens, star_density, colors):\n Qgs.append(Qg(epicycl=epicycl(r), sound_vel=sound_vel, gas_density=gd))\n Qss.append(Qs(epicycl=epicycl(r), sigma=sigma(r), star_density=sd))\n s_params.append(sound_vel/sigma(r))\n if show:\n print 'r={:7.3f} Qg={:7.3f} Qs={:7.3f} Qg^-1={:7.3f} Qs^-1={:7.3f} s={:7.3f}'.format(r, Qgs[-1], Qss[-1], 1./Qgs[-1], 1./Qss[-1], s_params[-1])\n plot_k_dependency(Qs=Qss[-1], Qg=Qgs[-1], s=s_params[-1], krange=krange, ax=ax, label=str(r), color=color)\n maxk = max(maxk, findInvKinemQeffBrentq(Qss[-1], Qgs[-1], s_params[-1], krange)[0]) #not optimal\n plt.legend()\n plt.xlim(0, maxk+100.)",
"У Rafikov написано следующее\n\"For the local analysis we employ the WKB approximation (or tight-winding approximation) which requires that kr ≫ 1 and allows us to neglect terms proportional to 1/r compared to the terms proportional to k.\"\nТ.е. применялось ВКБ и действительно придется смотреть, какие там длины волн в максимуме\n$$kr \\gg 1$$\n$$k=\\frac{2\\pi}{\\lambda}$$\n$${\\displaystyle \\bar{k}\\equiv\\frac{k\\,\\sigma_{\\mathrm{s}}}{\\kappa}}$$\n$$k\\times r = \\frac{\\bar{k}\\varkappa}{\\sigma}\\times(r\\times scale)$$",
"def plot_WKB_dependency(Qs=None, Qg=None, s=None, krange=None, ax=None, label=None, color=None, r=None, scale=None, epicycl=None, sound_vel=None):\n '''рисуется зависимость между (k x r) и двухжидкостной неустойчивостью, показан максимум (см. уравнение выше), чтобы проверить справедливость WKB'''\n TFcriteria = []\n sigma = sound_vel/s\n _tmp = [inverse_kinem_Qeff_from_k(dimlK, Qg=Qg, Qs=Qs, s=s) for dimlK in krange]\n root_for_max, max_val = findInvKinemQeffBrentq(Qs, Qg, s, krange)\n factor = epicycl*r*scale/sigma\n ax.plot(np.array(krange)*factor, _tmp, '-', label=label, color=color)\n ax.plot(root_for_max*factor, max_val, 'o', color=color)\n \ndef plot_WKB_dependencies(r_g_dens=None, gas_dens=None, epicycl=None, \n sound_vel=None, star_density=None, sigma=None, krange=None, scale=None):\n '''рисуем много зависимостей сразу для проверки WKB'''\n Qgs, Qss, s_params = [], [], []\n maxk = 30.\n fig = plt.figure(figsize=[16,8])\n ax = plt.gca()\n colors = cm.rainbow(np.linspace(0, 1, len(r_g_dens)))\n for r, gd, sd, color in zip(r_g_dens, gas_dens, star_density, colors):\n Qgs.append(Qg(epicycl=epicycl(r), sound_vel=sound_vel, gas_density=gd))\n Qss.append(Qs(epicycl=epicycl(r), sigma=sigma(r), star_density=sd))\n s_params.append(sound_vel/sigma(r))\n plot_WKB_dependency(Qs=Qss[-1], Qg=Qgs[-1], s=s_params[-1], krange=krange, ax=ax, label=str(r), \n color=color, r=r, scale=scale, epicycl=epicycl(r), sound_vel=sound_vel)\n maxk = max(maxk, findInvKinemQeffBrentq(Qss[-1], Qgs[-1], s_params[-1], krange)[0]) #not optimal\n plt.legend()\n plt.xlim(0, maxk+100.)\n\ndef get_invQeff_from_data(gas_data=None, epicycl=None, gas_approx=None, sound_vel=None, scale=None, sigma=None, star_density=None, verbose=False):\n '''рассчитывает из наблюдательных данных сразу много значений Qeff^-1 и возвращает набор (Qg, Qs, Qeff)'''\n Qgs = []\n Qss = []\n invQeff = []\n for ind, (r, gd) in enumerate(gas_data):\n if type(sound_vel) == tuple or type(sound_vel) == list: #учет случая разных скоростей звука \n s_vel = sound_vel[ind]\n else:\n s_vel = sound_vel\n Qgs.append(Qg(epicycl=epicycl(gas_approx, r, scale), sound_vel=s_vel, gas_density=gd))\n Qss.append(Qs(epicycl=epicycl(gas_approx, r, scale), sigma=sigma(r), star_density=star_density(r)))\n qeff = findInvKinemQeffBrentq(Qss[-1], Qgs[-1], s_vel/sigma(r), np.arange(0.01, 60000., 1.))\n if verbose:\n print 'r = {:2.2f}; gas_d = {:2.2f}; epicycl = {:2.2f}; sig = {:2.2f}; star_d = {:2.2f}'.format(r, gd, epicycl(gas_approx, r, scale), \n sigma(r), star_density(r))\n print '\\tQs = {:2.2f}; Qg = {:2.2f}; Qeff = {:2.2f}'.format(Qss[-1], Qgs[-1], 1./qeff[1])\n invQeff.append(qeff[1])\n return zip(map(lambda l: 1./l, Qgs), map(lambda l: 1./l, Qss), invQeff)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adrn/TwoFace
|
notebooks/figures/Sample K cuts.ipynb
|
mit
|
[
"import os\nfrom os import path\n\n# Third-party\nfrom astropy.table import Table\nimport astropy.units as u\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\nimport h5py\nimport pandas as pd\nimport tqdm\n\nfrom thejoker import JokerSamples\n\nfrom twoface.config import TWOFACE_CACHE_PATH\nfrom twoface.samples_analysis import unimodal_P\nfrom twoface.db import (db_connect, AllStar, AllVisit, AllVisitToAllStar,\n StarResult, Status, JokerRun, initialize_db)\n\nplot_path = '../../paper/1-catalog/figures/'\ntable_path = '../../paper/1-catalog/tables/'\n\nSession, _ = db_connect(path.join(TWOFACE_CACHE_PATH, 'apogee.sqlite'))\nsession = Session()\n\nsamples_file = path.join(TWOFACE_CACHE_PATH, 'apogee-jitter.hdf5')\ncontrol_samples_file = path.join(TWOFACE_CACHE_PATH, 'apogee-jitter-control.hdf5')",
"First, look at the control sample K percentiles",
"with h5py.File(control_samples_file) as f:\n control_K = np.full((len(f.keys()), 256), np.nan)\n for i, key in enumerate(f):\n n_samples = len(f[key]['K'])\n control_K[i, :n_samples] = f[key]['K'][:]\n \nln_control_K = np.log(control_K)\n\nn_samples = np.sum(np.logical_not(np.isnan(control_K)), axis=1)\nplt.hist(n_samples, bins=np.linspace(0, 256, 64));\nplt.yscale('log')\nplt.xlabel('$N$ samples returned')",
"How many are \"needs mcmc\" vs. \"needs more prior samples\":",
"needs_mcmc = 0\nneeds_more_prior = 0\nwith h5py.File(control_samples_file) as f:\n keys = list(f.keys())\n for k in tqdm.tqdm(np.where(n_samples < 256)[0]):\n key = keys[k]\n data = AllStar.get_apogee_id(session, key).apogeervdata()\n samples = JokerSamples.from_hdf5(f[key])\n uni = unimodal_P(samples, data)\n \n if uni:\n needs_mcmc += 1\n else:\n needs_more_prior += 1\n\nneeds_mcmc, needs_more_prior",
"Plot percentiles:",
"fig, axes = plt.subplots(1, 2, figsize=(10, 5))\n\nax = axes[0]\nfor perc in [1, 5, 15]:\n control_perc = np.nanpercentile(ln_control_K, perc, axis=1)\n ax.hist(control_perc, bins=np.linspace(-12, 10, 64), \n alpha=0.5, label='{0} percentile'.format(perc));\n\nax = axes[1]\nfor perc in [85, 95, 99]:\n control_perc = np.nanpercentile(ln_control_K, perc, axis=1)\n ax.hist(control_perc, bins=np.linspace(-12, 10, 64), \n alpha=0.5, label='{0} percentile'.format(perc));\n \nfor ax in axes:\n ax.legend(loc='best', fontsize=14)\n ax.set_xlabel(r'$\\ln \\left(\\frac{K}{{\\rm km}\\,{s}^{-1}} \\right)$')\n ax.set_yscale('log')\n \naxes[0].set_title('control sample')\nfig.tight_layout()\n\n# cut = -0.88 # 5% FPR\ncut = -0.12 # 1% FPR\nnp.sum(np.nanpercentile(ln_control_K, 1., axis=1) > cut) / control_K.shape[0]",
"Summary: if we cut at $\\ln K > -0.12$, 1% false-positive rate\n\nCompute percentiles in lnK for all stars\nWrite a table with APOGEE_ID, percentile value:",
"df = pd.read_hdf('../../cache/apogee-jitter-tbl.hdf5')\ngrouped = df.groupby('APOGEE_ID')\ndf.columns\n\nK_per = grouped.agg(lambda x: np.percentile(np.log(x['K']), 1))['K']\n(K_per > cut).sum()\n\nhigh_K_tbl = Table()\nhigh_K_tbl['APOGEE_ID'] = np.asarray(K_per.index).astype('U20')\nhigh_K_tbl['lnK_per_1'] = np.asarray(K_per)\nhigh_K_tbl.write(path.join(table_path, 'lnK-percentiles.fits'), overwrite=True)\n\nhigh_K_tbl[:8].write(path.join(table_path, 'lnK-percentiles.tex'), overwrite=True)",
"Now define the High-$K$ sample:",
"high_K = np.asarray(K_per[K_per > cut].index).astype('U20')\nlen(high_K)\n\nfor apogee_id in tqdm.tqdm(high_K):\n star = AllStar.get_apogee_id(session, apogee_id)\n res = star.results[0] # only one result...\n res.high_K = True\nsession.commit()\n\n_N = session.query(AllStar).join(StarResult).filter(StarResult.high_K).count()\nprint(_N)\nassert _N == len(high_K)\n\nfig, ax = plt.subplots(1, 1, figsize=(6, 5))\n\ncontrol_perc = np.nanpercentile(ln_control_K, 1, axis=1)\nbins = np.linspace(-12, 10, 64)\nax.hist(K_per, bins=bins, \n alpha=1, label='APOGEE sample', normed=True,\n histtype='stepfilled', rasterized=True)\n\nax.hist(control_perc, bins=bins, \n alpha=1, label='Control sample', normed=True, \n histtype='step', linewidth=2, color='#333333')\n\nax.legend(loc='best', fontsize=13)\n\nax.set_yscale('log')\nax.set_xlabel(r'$\\ln \\left(\\frac{K}{{\\rm km}\\,{s}^{-1}} \\right)$')\nax.set_ylabel('density')\n\nax.axvline(cut, linestyle='--', zorder=10, alpha=1., color='tab:orange')\n\nfig.tight_layout()\nfig.savefig(path.join(plot_path, 'lnK-percentiles.pdf'), dpi=250)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ceos-seo/data_cube_notebooks
|
notebooks/UN_SDG/UN_SDG_11_3_1.ipynb
|
apache-2.0
|
[
"<a id=\"top\"></a>\nUN SDG Indicator 11.3.1:<br> Ratio of Land Consumption Rate to Population Growth Rate\n<hr>\n\nNotebook Summary\nThe United Nations have prescribed 17 \"Sustainable Development Goals\" (SDGs). This notebook attempts to monitor SDG Indicator 11.3.1 - ratio of land consumption rate to population growth rate.\nUN SDG Indicator 11.3.1 provides a metric for determining wether or not land consumption is scaling responsibly with the growth of the population in a given region. \nCase Study\nThis notebook conducts analysis in the Dar es Salaam, Tanzania with reference years of 2000 and 2015.\nIndex\n\nDefine Formulas for Calculating the Indicator\nImport Dependencies and Connect to the Data Cube\nShow the Area\nDetermine Population Growth Rate\nDetermine Land Consumption Rate\nBuild Composites for the First and Last Years\nFilter Out Everything Except the Survey Region\nDetermine Urban Extent\nSDG Indicator 11.3.1\n\n<a id=\"define_formulas\"></a>Define Formulas for Calculating the Indicator ▴\n\nSDG Indicator 11.3.1\nThe ratio between land consumption and population growth rate. \n\n$$ SDG_{11.1.3} = \\frac{LandConsumptionRate}{PopulationGrowthRate} $$",
"def sdg_11_3_1(land_consumption, population_growth_rate): \n return land_consumption/population_growth_rate",
"Population Growth Rate\n\nFor calculating the indicator value for this SDG, the formula is the simple average yearly change in population.\nFor calculating the average yearly population growth rate as a percent (e.g. to show on maps), the following formula\nis used:\n$$ PopulationGrowthRate = 10 ^ {LOG( Pop_{t_2} \\space / \\space Pop_{t_1}) \\space / \\space {y}} - 1 $$\nWhere: \n\n$Pop_{t_2}$ - Total population within the area in the current/final year\n$Pop_{t_1}$ - Total population within the area in the past/initial year \n$y$ - The number of years between the two measurement periods $t = Year_{t_2} - Year_{t_1}$",
"import numpy as np \n\ndef population_growth_rate_pct(pop_t1 = None, pop_t2 = None, y = None):\n \"\"\"\n Calculates the average percent population growth rate per year.\n \n Parameters\n ----------\n pop_t1: numeric\n The population of the first year.\n pop_t2: numberic\n The population of the last year.\n y: int\n The numbers of years between t1 and t2.\n \n Returns\n -------\n pop_growth_rate: float\n The average percent population growth rate per year.\n \"\"\"\n return 10**(np.log10(pop_t2/pop_t1)/y) - 1\n \ndef population_growth_rate(pop_t1 = None, pop_t2 = None, y = None):\n \"\"\"\n Calculates the average increase in population per year.\n \n Parameters\n ----------\n pop_t1: numeric\n The population of the first year.\n pop_t2: numberic\n The population of the last year.\n y: int\n The numbers of years between t1 and t2.\n \n Returns\n -------\n pop_growth_rate: float\n The average increase in population per year.\n \"\"\"\n return (pop_t2 - pop_t1) / y",
"Land Consumption Rate\n\nFor calculating the indicator value for this SDG, the formula is the simple average yearly change in land consumption.",
"def land_consumption_rate(area_t1 = None, area_t2 = None, y = None):\n \"\"\"\n Calculates the average increase in land consumption per year.\n \n Parameters\n ----------\n area_t1: numeric\n The number of urbanized pixels for the first year.\n area_t2: numberic\n The number of urbanized pixels for the last year.\n y: int\n The numbers of years between t1 and t2.\n \n Returns\n -------\n pop_growth_rate: float\n The average increase in land consumption per year.\n \"\"\"\n return (area_t2 - area_t1) / y",
"<a id=\"import\"></a>Import Dependencies and Connect to the Data Cube ▴",
"# Supress Some Warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n# Allow importing of our utilities.\nimport sys\nimport os\nsys.path.append(os.environ.get('NOTEBOOK_ROOT'))\n\n# Prepare for plotting.\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport datacube\ndc = datacube.Datacube()",
"<a id=\"show_area\"></a>Show the Area ▴",
"# Dar es Salaam, Tanzania\nlatitude_extents = (-6.95, -6.70) \nlongitude_extents = (39.05, 39.45)\n\nfrom utils.data_cube_utilities.dc_display_map import display_map\ndisplay_map(latitude = latitude_extents, longitude = longitude_extents)",
"<a id=\"pop_rate\"></a>Determine Population Growth Rate ▴\n\nLoad Population Data\n<br>Shape files are based on GPW estimates. You can derive similar population figures from AidData GeoQuery at \n- http://geo.aiddata.org/query",
"CSV_FILE_PATH = \"../data/Tanzania/population_shape/ADM2_GPWV4_population.csv\"\nSHAPE_FILE_PATH = \"../data/Tanzania/population_shape/TZA_ADM2.geojson\"\n\nimport geopandas as gpd\nimport pandas as pd\n\nfirst_year, last_year = 2000, 2015\nfirst_year_pop_col = 'gpw_v4_count.{}.sum'.format(first_year)\nlast_year_pop_col = 'gpw_v4_count.{}.sum'.format(last_year)\n\nshape_data = gpd.read_file(SHAPE_FILE_PATH)\nshape_data = shape_data[['Name', 'geometry']]\npop_data = pd.read_csv(CSV_FILE_PATH)\npop_data = pop_data[[first_year_pop_col, last_year_pop_col, 'Name']]\npop_data = pop_data.rename({first_year_pop_col: 'pop_t1', \n last_year_pop_col: 'pop_t2'}, axis='columns')\ncountry_data = shape_data.merge(pop_data, on='Name')\n\ndef shapely_geom_intersects_rect(geom, x, y):\n \"\"\"\n Determines whether the bounding box of a Shapely polygon intesects \n a rectangle defined by `x` and `y` extents.\n \n Parameters\n ----------\n geom: shapely.geometry.polygon.Polygon\n The object to determine intersection with the region defined by `x` and `y`.\n x, y: list-like\n The x and y extents, expressed as 2-tuples.\n \n Returns\n -------\n intersects: bool\n Whether the bounding box of `geom` intersects the rectangle.\n \"\"\"\n geom_bounds = np.array(list(geom.bounds))\n x_shp, y_shp = geom_bounds[[0,2]], geom_bounds[[1,3]]\n x_in_range = (x_shp[0] < x[1]) & (x[0] < x_shp[1])\n y_in_range = (y_shp[0] < y[1]) & (y[0] < y_shp[1])\n return x_in_range & y_in_range\n\n# `intersecting_shapes` can be examined to determine which districts to ultimately keep.\nintersecting_shapes = country_data[country_data.apply(\n lambda row: shapely_geom_intersects_rect(row.geometry, longitude_extents, latitude_extents), \n axis=1).values]",
"Show the Survey Region in the Context of the Country",
"districts = ['Kinondoni', 'Ilala', 'Temeke']\ndistricts_mask = country_data.Name.isin(districts)\ncountry_data.plot(column=districts_mask, cmap='jet', figsize=(10,10))\nsurvey_region = country_data[districts_mask]\nplt.show()",
"Show the Survey Region Alone",
"survey_region.plot( figsize = (10,10))\nplt.show()",
"Determine the Shape that Masks the Survey Region",
"from shapely.ops import cascaded_union\ndisjoint_areas = cascaded_union([*survey_region.geometry]) ## Top Right is 'disjoint' from bottom left. ",
"Calculate Population Growth Rate\n\nCalcuate Population Growth Rate for All Regions Individually",
"time_range = last_year - first_year\ncountry_data = country_data.assign(population_growth_rate = \\\n population_growth_rate_pct(country_data[\"pop_t1\"], country_data[\"pop_t2\"], time_range))",
"Visualize Population Growth Rate",
"fig, ax = plt.subplots(figsize = (10, 10))\nax.set_title(\"Population Growth Rate {}-{}\".format(first_year, last_year))\nax1 = country_data.plot(column = \"population_growth_rate\", ax = ax, legend=True)\n\nsurvey_region_total_pop_t1 = survey_region[\"pop_t1\"].sum()\nsurvey_region_total_pop_t2 = survey_region[\"pop_t2\"].sum()\n\npop_growth = population_growth_rate(pop_t1 = survey_region_total_pop_t1,\n pop_t2 = survey_region_total_pop_t2,\n y = time_range)\n\nprint(\"Annual Population Growth Rate of the Survey Region: {:.2f} People per Year\".format(pop_growth))",
"<a id=\"land_consumption_rate\"></a>Determine Land Consumption Rate ▴\nSpecify Load Parameters",
"measurements = [\"red\", \"green\", \"blue\", \"nir\", \"swir1\", \"swir2\", \"pixel_qa\"]\n\n# Determine the bounding box of the survey region to load data for.\nmin_lon, min_lat, max_lon, max_lat = disjoint_areas.bounds\nlat = (min_lat, max_lat)\nlon = (min_lon, max_lon)\n\nproduct_1 = 'ls7_usgs_sr_scene' \nplatform_1 = 'LANDSAT_7'\ncollection_1 = 'c1'\nlevel_1 = 'l2'\n\nproduct_2 = 'ls8_usgs_sr_scene' \nplatform_2 = 'LANDSAT_8'\ncollection_2 = 'c1'\nlevel_2 = 'l2'\n\n# For a full test, each time extent should be 1 full year.\ntime_extents_t1 = ('2000-01-01', '2000-01-31')\ntime_extents_t2 = ('2017-01-01', '2017-01-31')\n\nload_params = dict(measurements = measurements, \n latitude = lat, longitude = lon, \\\n dask_chunks={'time':1, 'latitude':1000, 'longitude':1000})",
"<a id=\"false_color_composites\"></a>Build Composites for the First and Last Years ▴",
"from utils.data_cube_utilities.aggregate import xr_scale_res\nfrom utils.data_cube_utilities.clean_mask import landsat_clean_mask_full\nfrom utils.data_cube_utilities.dc_mosaic import create_median_mosaic\n\n# The fraction of the original resolution to use to reduce memory consumption.\nfrac_res = 0.25\n\ndataset_t1 = dc.load(**load_params, product=product_1, time=time_extents_t1)\nclean_mask_t1 = landsat_clean_mask_full(dc, dataset_t1, product=product_1, platform=platform_1,\n collection=collection_1, level=level_1)\ncomposite_t1 = create_median_mosaic(dataset_t1, clean_mask_t1.data).compute()\ncomposite_t1 = xr_scale_res(composite_t1, frac_res=frac_res)\ncomposite_t1.attrs = dataset_t1.attrs\ndel dataset_t1, clean_mask_t1\n\ndataset_t2 = dc.load(**load_params, product=product_2, time=time_extents_t2)\nclean_mask_t2 = landsat_clean_mask_full(dc, dataset_t2, product=product_2, platform=platform_2,\n collection=collection_2, level=level_2)\ncomposite_t2 = create_median_mosaic(dataset_t2, clean_mask_t2.data).compute()\ncomposite_t2 = xr_scale_res(composite_t2, frac_res=frac_res)\ncomposite_t2.attrs = dataset_t2.attrs\ndel dataset_t2, clean_mask_t2",
"First Year\nFalse Color Composite [nir, swir1, blue]",
"from utils.data_cube_utilities.dc_rgb import rgb\nrgb(composite_t1, bands = [\"nir\",\"swir1\",\"blue\"], width = 15)\nplt.title('Year {}'.format(first_year))\nplt.show()",
"Last Year\nFalse Color Composite [nir, swir1, blue]",
"rgb(composite_t2, bands = [\"nir\",\"swir1\",\"blue\"], width = 15)\nplt.title('Year {}'.format(last_year))\nplt.show()",
"<a id=\"filter_survey_region\"></a>Filter Out Everything Except the Survey Region ▴",
"import rasterio.features\nfrom datacube.utils import geometry\nimport xarray as xr\n\ndef generate_mask(loaded_dataset:xr.Dataset,\n geo_polygon: datacube.utils.geometry ):\n \n return rasterio.features.geometry_mask(\n [geo_polygon],\n out_shape = loaded_dataset.geobox.shape,\n transform = loaded_dataset.geobox.affine,\n all_touched = False,\n invert = True)\n\nmask = generate_mask(composite_t1, disjoint_areas)\n\nfiltered_composite_t1 = composite_t1.where(mask)\ndel composite_t1\nfiltered_composite_t2 = composite_t2.where(mask)\ndel composite_t2",
"First Year Survey Region\nFalse Color Composite [nir, swir1, blue]",
"rgb(filtered_composite_t1, bands = [\"nir\",\"swir1\",\"blue\"],width = 15)\nplt.show()",
"Last Year Survey Region\nFalse Color Composite [nir, swir1, blue]",
"rgb(filtered_composite_t2, bands = [\"nir\",\"swir1\",\"blue\"],width = 15)\nplt.show()",
"<a id=\"urban_extent\"></a>Determine Urban Extent ▴\n\nUrbanization Index Option 1: NDBI\nThe Normalized Difference Built-up Index (NDBI) is quick to calculate, but is sometimes inaccurate (e.g. in very arid regions).",
"def NDBI(dataset):\n return (dataset.swir1 - dataset.nir)/(dataset.swir1 + dataset.nir)",
"Urbanization Index Option 2: Fractional Cover Bare Soil\nThe fractional cover bare soil index is very slow to calculate in its current implementation, but is often more accurate than NDBI.",
"from utils.data_cube_utilities.dc_fractional_coverage_classifier import frac_coverage_classify",
"Choose the Urbanization Index to Use",
"# Can be 'NDBI' or 'Fractional Cover Bare Soil'.\nurbanization_index = 'Fractional Cover Bare Soil'\n\nurban_index_func = None\nurban_index_range = None\nif urbanization_index == 'NDBI':\n urban_index_func = NDBI\n urban_index_range = [-1, 1]\nif urbanization_index == 'Fractional Cover Bare Soil':\n urban_index_func = lambda dataset: frac_coverage_classify(dataset).bs\n urban_index_range = [0, 100] \nplot_kwargs = dict(vmin=urban_index_range[0], vmax=urban_index_range[1])",
"First Year Urban Composite",
"urban_composite_t1 = urban_index_func(filtered_composite_t1)\n\nplt.figure(figsize = (19.5, 14))\nurban_composite_t1.plot(**plot_kwargs)\nplt.show()",
"Last Year Urban Composite",
"urban_composite_t2 = urban_index_func(filtered_composite_t2)\n\nplt.figure(figsize = (19.5, 14))\nurban_composite_t2.plot(**plot_kwargs)\nplt.show()",
"Defining Binary Urbanization",
"def urbanizaton(urban_index: xr.Dataset, urbanization_index) -> xr.DataArray:\n bounds = None\n if urbanization_index == 'NDBI':\n bounds = (0,0.3)\n if urbanization_index == 'Fractional Cover Bare Soil':\n bounds = (20, 100)\n \n urban = np.logical_and(urban_index > min(bounds), urban_index < max(bounds))\n \n is_clean = np.isfinite(urban_index)\n urban = urban.where(is_clean)\n \n return urban\n\nurban_product_t1 = urbanizaton(urban_composite_t1, urbanization_index)\nurban_product_t2 = urbanizaton(urban_composite_t2, urbanization_index)",
"First Year\nUrbanization product overlayed on false color composite",
"rgb(filtered_composite_t1, \n bands = [\"nir\",\"swir1\",\"blue\"], \n paint_on_mask = [(np.logical_and(urban_product_t1.astype(bool), mask), [255,0,0])],\n width = 15)\nplt.show()",
"Last Year\nUrbanization Product overlayed on false color composite",
"rgb(filtered_composite_t2,\n bands = [\"nir\",\"swir1\",\"blue\"],\n paint_on_mask = [(np.logical_and(urban_product_t2.astype(bool), mask),[255,0,0])],\n width = 15)\nplt.show()",
"Urbanization Change",
"fig = plt.figure(figsize = (15,5))\n\n#T1 (LEFT)\nax1 = fig.add_subplot(121)\nurban_product_t1.plot(cmap = \"Reds\")\nax1.set_title(\"Urbanization Extent {}\".format(first_year))\n\n#T2 (RIGHT)\nax2 = fig.add_subplot(122)\nurban_product_t2.plot(cmap = \"Reds\")\nax2.set_title(\"Urbanization Extent {}\".format(last_year))\n\nplt.show()\n\ncomp_lat = filtered_composite_t1.latitude\nmeters_per_deg_lat = 111000 # 111 km per degree latitude\ndeg_lat = np.abs(np.diff(comp_lat[[0, -1]])[0])\nmeters_lat = meters_per_deg_lat * deg_lat\nsq_meters_per_px = (meters_lat / len(comp_lat))**2\n\n# Calculation the square meters of urbanized area.\nurbanized_area_t1 = float( urban_product_t1.sum() * sq_meters_per_px )\nurbanized_area_t2 = float( urban_product_t2.sum() * sq_meters_per_px )\n\nconsumption_rate = land_consumption_rate(area_t1 = urbanized_area_t1, area_t2 = urbanized_area_t2, y = time_range)\n\nprint(\"Land Consumption Rate of the Survey Region: {:.2f} Square Meters per Year\".format(consumption_rate))",
"<a id=\"indicator\"></a>SDG Indicator 11.3.1 ▴",
"indicator_val = sdg_11_3_1(consumption_rate,pop_growth)\nprint(\"The UN SDG 11.3.1 Indicator value (ratio of land consumption rate to population growth rate) \"\\\n \"for this survey region for the specified parameters \"\\\n \"is {:.2f} square meters per person.\".format(indicator_val))\nprint(\"\")\nprint(\"In other words, on average, according to this analysis, every new person is consuming {:.2f} square meters of land in total.\".format(indicator_val))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Hash--/documents
|
notebooks/TP Master Fusion/Mode Converter - Fast Fourier Transform.ipynb
|
mit
|
[
"In this notebook, we investigate the possibility to use the (fast) fourier transform to detect the mode content inside the mode converter.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.constants import c, pi",
"Spatial signal\nThe spatial signal is of the form:\n$$\nE(x) = \\sum_{m=0}^M \\sin\\left(\\frac{m\\pi}{a}x \\right)\n$$",
"a = 0.192 # m\ndx = 1e-4\nx = np.arange(0, a+dx, step=dx, )\nE = np.zeros_like(x)\n\n# weights of the modes (example)\nEms = np.r_[0.2, 0, 1, 0.3, 0.1] \n# total electric field is the sum of the modes\nfor m, Em in enumerate(Ems, start=1):\n E += Em*np.sin(pi*m/a * x)\n\n # display the \"measured field\"\nfig, ax = plt.subplots()\nax.plot(x, E)\n\n# pad the spatial dimensions to improve the spectral resolution and display the results\nx = np.pad(x, (20*1024,), 'reflect', reflect_type='odd')\nE = np.pad(E, (20*1024,), 'reflect', reflect_type='odd') # reflect the signal as in infinite large wg\n\nax.plot(x, E, alpha=0.5)",
"Fourier transform",
"from numpy.fft import fft, fftshift, fftfreq\n\nU = fftshift(fft(E))\nkx= fftshift(fftfreq(len(x), d=dx)*2*pi)\n\nfig, ax = plt.subplots()\nax.plot(kx, np.abs(U), marker='.')\nax.set_xlim(-.9*pi/a, 7*pi/a)\n\n# shows where the modes 1,2,... are\nfor mode_index in range(8):\n ax.axvline(mode_index*pi/a, color='#888888', linestyle='--')\nax.set_xticks(np.arange(0,8)*pi/a)\nax.set_xticklabels([0] + [f'${m}\\pi/a$' for m in range(1,7)])\nax.set_xlabel('$k_x$', size=16)",
"From the latter figure, we can clearly spot the various modes and their relative weights."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
syednasar/datascience
|
deeplearning/language-translation/translation with rnn.ipynb
|
mit
|
[
"Language Translation with RNN using Tensorflow\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.\nYou can get the <EOS> word id by doing:\npython\ntarget_vocab_to_int['<EOS>']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.",
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n source_sentences = source_text.split('\\n')\n target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\\n')]\n \n source_ids = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]\n target_ids = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_sentences]\n \n return (source_ids, target_ids)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()",
"Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)",
"def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n # TODO: Implement Function\n\n inputs = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n learning_rate = tf.placeholder(tf.float32, name='learning_rate')\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n return inputs, targets, learning_rate, keep_prob\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)",
"Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch.",
"def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for dencoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n \n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n processed_target = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) \n \n return processed_target\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)",
"Encoding\nImplement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().",
"def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n # TODO: Implement Function\n \n lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, state_is_tuple=True)\n # Dropout\n drop_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)\n # Encoder\n enc_cell = tf.contrib.rnn.MultiRNNCell([drop_cell] * num_layers, state_is_tuple=True)\n _, rnn_state = tf.nn.dynamic_rnn(cell = enc_cell, inputs = rnn_inputs, dtype=tf.float32)\n \n \n return rnn_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)",
"Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.",
"def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n # TODO: Implement Function\n with tf.variable_scope(\"decoding\") as decoding_scope:\n # Training Decoder\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)\n\n # Apply output function\n train_logits = output_fn(train_pred) \n return train_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)",
"Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().",
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: The maximum allowed time steps to decode\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n # TODO: Implement Function\n #tf.variable_scope(\"decoder\") as varscope\n with tf.variable_scope(\"decoding\") as decoding_scope:\n # Inference Decoder\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, \n maximum_length, vocab_size) \n\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope) \n return inference_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)",
"Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate RNN cell for decoding using rnn_size and num_layers.\nCreate the output fuction using lambda to transform it's input, logits, to class logits.\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.",
"def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n \n with tf.variable_scope('decoding') as decoding_scope:\n #Output Function\n output_fn= lambda x: tf.contrib.layers.fully_connected(x,vocab_size,None,scope=decoding_scope)\n \n #Train Logits\n train_logits=decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,output_fn, keep_prob)\n\n\n decoding_scope.reuse_variables()\n\n #Infer Logits\n infer_logits=decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],sequence_length-1, vocab_size, decoding_scope, output_fn, keep_prob)\n\n return train_logits, infer_logits\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)",
"Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).",
"def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n \n #Apply embedding to the input data for the encoder.\n enc_input = tf.contrib.layers.embed_sequence(\n input_data,\n source_vocab_size,\n enc_embedding_size\n )\n \n #embed_target = tf.nn.embedding_lookup(dec_embed, dec_input)\n #Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). \n enc_layer = encoding_layer(\n enc_input,\n rnn_size,\n num_layers,\n keep_prob\n )\n \n #Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. \n dec_input = process_decoding_input(\n target_data, \n target_vocab_to_int, \n batch_size\n )\n #Apply embedding to the target data for the decoder. \n #embed_target = tf.contrib.layers.embed_sequence(dec_input,target_vocab_size,dec_embedding_size)\n dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) \n embed_target = tf.nn.embedding_lookup(dec_embed, dec_input) \n\n\n\n #Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). \n train_logits, inf_logits = decoding_layer(\n embed_target,\n dec_embed,\n enc_layer,\n target_vocab_size,\n sequence_length,\n rnn_size,\n num_layers,\n target_vocab_to_int,\n keep_prob\n )\n \n return train_logits, inf_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability",
"# Number of Epochs\nepochs = None\n# Batch Size\nbatch_size = None\n# RNN Size\nrnn_size = None\n# Number of Layers\nnum_layers = None\n# Embedding Size\nencoding_embedding_size = None\ndecoding_embedding_size = None\n# Learning Rate\nlearning_rate = None\n# Dropout Keep Probability\nkeep_probability = None\n\n#Number of Epochs\nepochs = 5\n\n#Batch Size\nbatch_size = 256\n\n#RNN Size\nrnn_size = 512 #25\n\n#Number of Layers\nnum_layers = 2\n\n#Embedding Size\nencoding_embedding_size = 256 #13\ndecoding_embedding_size = 256 #13\n\n#Learning Rate\nlearning_rate = 0.01\n\n#Dropout Keep Probability\nkeep_probability = 0.5",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')",
"Save Parameters\nSave the batch_size and save_path parameters for inference.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()",
"Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the <UNK> word id.",
"def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n input_sentence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]\n \n return input_sentence\n\n #return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)",
"Translate\nThis will translate translate_sentence from English to French.",
"translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))",
"Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rhenanbartels/hrv
|
notebooks/Heart Rate Variability analyses using RRi series.ipynb
|
bsd-3-clause
|
[
"Analysis of an RRi series registered during REST condition and RECOVERY from maximal effort exercise",
"import matplotlib.pyplot as plt\n\nplt.rcParams['figure.figsize'] = (10, 6)",
"Reading the RRi file",
"from hrv.io import read_from_text\n\nrri = read_from_text(\"data/08012805.txt\")",
"Getting some information about the file",
"rri.info()",
"The RRi series has 2380 values and approximately 20 minutes.\nVisual Inspection",
"fig, ax = rri.plot()",
"This is an RRi series recorded during a maximal effort exercise. The first 180 seconds (3 minutes) the subject is resting, after this period the exercise test started and the workload was incremented each minute until the subject's fatigue. Following the exercise, there is a recovery period of approximately 600s (10 minutes).\nFiltering\nLooks like there are some noise in the RRi signal. Let's try to filter the time series:",
"from hrv.filters import quotient, moving_median",
"Quotient Filter",
"fq_rri = quotient(rri)\nfig, ax = fq_rri.plot()",
"Moving Median",
"fmm_rri = moving_median(rri, order=5)\nfig, ax = fmm_rri.plot()",
"Both filters removed the spikes, but seems that the quotient filter preserved the signal, only removing the noise, while the moving_median filtered the whole tachogram. Let's keep the quotient filter results.\nCalculate HRV indices during rest\nTo extract information about the RRi fluctuations during rest, first we need to slice the time series on the first 180 seconds.",
"rest_rri = fq_rri.time_range(start=0, end=180)\nfig, ax = rest_rri.plot()",
"Time Domain and Frequency Domain during Rest",
"from hrv.classical import frequency_domain, time_domain\n\nrest_time_domain = time_domain(rest_rri)\nrest_time_domain",
"Before extracting Frequency Domain features lets first remove the slow trend from the RRi signal:",
"from hrv.detrend import polynomial_detrend\n\ndetrended_rest_rri = polynomial_detrend(rest_rri, degree=3)\nfig, ax = detrended_rest_rri.plot()",
"Note how the Y-axis is now centered on zero.",
"detrended_rest_rri.info()",
"Once our rest signal has only 167 points, lets reduce the segment size and the overlap size of Welch's method to 64 and 32, respectively.",
"rest_freq_domain = frequency_domain(\n detrended_rest_rri,\n method=\"welch\",\n nperseg=64,\n noverlap=32,\n interp_method=\"cubic\",\n window=\"hanning\",\n fs=4.0\n)\n\nrest_freq_domain",
"Comparing the HRV during Rest and at the last three minutes of Recovery",
"recovery_rri = rri.time_range(start=rri.time[-1] - 180, end=rri.time[-1]).reset_time()\nfig, ax = recovery_rri.plot()\n\nrecovery_rri.info()",
"Time Domain and Frequency Domain during Recovery",
"recovery_time_domain = time_domain(recovery_rri)\nrecovery_time_domain\n\ndetrended_recovery_rri = polynomial_detrend(recovery_rri, degree=3)\n\nfig, ax = detrended_recovery_rri.plot()\n\nrecovery_freq_domain = frequency_domain(\n detrended_recovery_rri,\n method=\"welch\",\n nperseg=64,\n noverlap=32,\n interp_method=\"cubic\",\n window=\"hanning\",\n fs=4.0\n)\n\nrecovery_freq_domain\n\ndef compare_indices(ax, cond_1, cond_2, index_name, title, y_label):\n ax.bar([0, 1], [cond_1[index_name], cond_2[index_name]], color=[\"b\", \"r\"])\n ax.set_xticks([0, 1])\n ax.set_xticklabels([\"Rest\", \"Recovery\"])\n ax.set(ylabel=y_label)\n ax.set(title=title)\n\nfig, ax = plt.subplots(2, 2)\nfig.set_size_inches(15, 12)\n\ncompare_indices(\n ax[0][0],\n rest_time_domain,\n recovery_time_domain,\n \"rmssd\",\n title=\"Time Domain\",\n y_label=\"RMSSD (ms)\"\n)\ncompare_indices(\n ax[0][1],\n rest_time_domain,\n recovery_time_domain,\n \"pnn50\",\n title=\"Time Domain\",\n y_label=\"pNN50 (%)\"\n)\n\ncompare_indices(\n ax[1][0],\n rest_freq_domain,\n recovery_freq_domain,\n \"hf\",\n title=\"Frequency Domain\",\n y_label=\"HF (ms²)\"\n)\ncompare_indices(\n ax[1][1],\n rest_freq_domain,\n recovery_freq_domain,\n \"lf\",\n title=\"Frequency Domain\",\n y_label=\"LF (ms²)\"\n)",
"The figure above depicts the comparison between RMSSD, pNN50, HF, and LF extracted on the Rest (blue) and Recovery (red) periods. The reduced values of these indices in the recovery period might indicate that the vagal activity is, at least, partially suppressed after the maximal effort exercise. \nThe reduced LF (ms²) measure indicates that the RRi series at the recovery period has fewer overall fluctuations compared to the Rest period.\nMethods of assessment of the post-exercise cardiac autonomic recovery: A methodological review\nAbsence of parasympathetic reactivation after maximal exercise \nAnalysis of the dynamics of non-stationary RRi series\nOne of the reasons for selecting the Rest and the Recovery periods is due to its stationary behavior. Classical HRV indices expect that the statistical properties of the RRi signal are stable as a function of time. Therefore, extracting classical indices (Time and Frequency domain) in non-stationary segments might bring misleading results.\nLet's take a look at the RRi series at the peak of the maximal effort exercise:",
"peak_exercise_rri = rri.time_range(start=400, end=600)\nfig, ax = peak_exercise_rri.plot()",
"As shown in the above picture, the RRi series during exercise is non-stationary and for this reason, classical analyses are not recommended.\nTo overcome the non-stationary behavior and also extract information about the dynamics of the HRV in experiments involving physical exercise, Tilt maneuver it is possible to use time-varying method, which consists of splitting the RRi signal into smaller segments (ex: 30s) and calculate the time domain indices of each adjacent segment.\nThere are also Frequency domain analyses in adjacent smaller segments of the RRi signal like Short Time Fourier Transform, but it is still a work in progress in the hrv module.",
"from hrv.nonstationary import time_varying\n\ntv_results = time_varying(fq_rri, seg_size=30, overlap=0)\n\nfig, ax = tv_results.plot(index=\"rmssd\", marker=\"o\", color=\"k\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eggie5/UCSD-MAS-DSE220
|
hmwk3/boston.ipynb
|
mit
|
[
"Ensambles w/ Stacking",
"%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets import load_boston\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import Ridge\nfrom sklearn.linear_model import Lasso\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import f_regression\nfrom sklearn.tree import ExtraTreeRegressor\nfrom sklearn import preprocessing\n\nfrom sklearn.svm import SVR\nfrom sklearn import ensemble\nfrom sklearn.linear_model import LinearRegression\n\n# Load the boston housing data\nboston_house_data = load_boston()\n\n# Create a data frame of samples and feature values.\ndata_X_df = pd.DataFrame(boston_house_data.data, columns=boston_house_data.feature_names)\ndata_X_df.head()\n",
"Preprocessing\nStandardization\nNote: we will remove teh mean and scale the variance to help speed up the training.",
"data_scaler = preprocessing.MinMaxScaler()\ntarget_scaler = preprocessing.MinMaxScaler()\n\ndata = data_scaler.fit_transform(boston_house_data.data)\ntarget = target_scaler.fit_transform(boston_house_data.target)\n# data = data_X_df.values\n# target = boston_house_data.target\n\n# Print the dimensions of train and test data.\nX_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.25, random_state=4)\nprint \"Dimension of X_train =\", X_train.shape\nprint \"Dimension of y_train =\", y_train.shape\nprint \"Dimension of X_test =\", X_test.shape\nprint \"Dimension of y_test =\", y_test.shape",
"Stacking Testbench\nStacker Helper\nI have a simple stacking routine:",
"class StackedRegressor():\n\n def __init__(self, base_regressors, meta_regressor):\n \"\"\"Constructor for StackedRegressor. Takes list of base_regressors and a meta_regressor\"\"\"\n self.__base_regressors = base_regressors\n self.__meta_regressor = meta_regressor\n self.__kbest = None\n\n def fit(self, X, y, split=True, kbest=None):\n\n if kbest:\n kb = SelectKBest(f_regression, k=kbest).fit(X, y)\n self.__kbest = kb.scores_.argsort()[-kbest:]\n X = X[:, self.__kbest]\n if split:\n # Split the data so that it will not over fit.\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=4)\n else:\n X_train, X_test, y_train, y_test = X, X, y, y\n \n # Fit and predict the train data on the level0 regressors.\n meta_input = [regressor.fit(X_train, y_train).predict(X_test) for regressor in self.__base_regressors]\n \n # Fit the predicted values of above level 0 classifiers into meta regressor\n X_meta = np.array(meta_input).transpose()\n self.__meta_regressor.fit(X_meta, y_test)\n return self\n\n def predict(self, X):\n if not self.__kbest is None:\n X = X[:, self.__kbest]\n \n # Predict the test data on level0 regressors\n self.base_regressors_predict_ = [regressor.predict(X) for regressor in self.__base_regressors]\n \n # Predict the final values.\n X_meta = np.array(self.base_regressors_predict_).transpose()\n \n return self.__meta_regressor.predict(X_meta)\n\n def scores(self, X, y):\n if not self.__kbest is None:\n X = X[:, self.__kbest]\n\n X_meta = np.array([regressor.predict(X) for regressor in self.__base_regressors]).transpose()\n self.score_base_regressors_ = [regressor.score(X, y) for regressor in self.__base_regressors]\n self.score_meta_regressor_ = self.__meta_regressor.score(X_meta, y)\n\n self.mse_base_regressors_ = [mean_squared_error(y, X_meta[:, i]) for i in range(X_meta.shape[1])]\n self.mse_meta_regressor_ = mean_squared_error(y, self.__meta_regressor.predict(X_meta))",
"Evaluation\nI also have a simple routine to test my ensemable:",
"def evaluate_model(base_regressors, meta_regressor, names, split=True, kbest=None):\n stacked_regressor = StackedRegressor(base_regressors=base_regressors, meta_regressor=meta_regressor)\n stacked_regressor.fit(X_train, y_train, split, kbest)\n stacked_regressor.scores(X_test, y_test)\n print \"Scores of base regressors on test data =\", stacked_regressor.score_base_regressors_\n print \"Score of meta regressor on test data =\", stacked_regressor.score_meta_regressor_\n\n print \"Mean squared error of base regressors on test data =\", stacked_regressor.mse_base_regressors_\n print \"Mean squared error of meta regressor on test data =\", stacked_regressor.mse_meta_regressor_\n\n predicted_y = stacked_regressor.predict(X_test)\n df = pd.DataFrame(stacked_regressor.base_regressors_predict_ + [predicted_y, y_test],index=names + [\"Original\"]).T\n df2 = pd.DataFrame(\n {\"MSE\": stacked_regressor.mse_base_regressors_ + [stacked_regressor.mse_meta_regressor_],\n \"SCORE\" : stacked_regressor.score_base_regressors_ + [stacked_regressor.score_meta_regressor_]}, \n index=names)\n df2.plot(kind='bar', alpha=0.5, grid=True, rot=45, subplots=True, layout=(1,2), legend=False, figsize=(12, 4))\n return df",
"Experiments\nBelow I try various combinations of classifiers to get get the best stack:\n1\nBase regressors:\n\n\nExtraTreeRegressor with max_depth=2\n\n\nLinearRegression\n\n\nMeta regressor\n\nRidge\n\nBest features\n\n5 best features are choosen based on f_regression.",
"base_regressors=[ExtraTreeRegressor(max_depth=2), LinearRegression()]\nmeta_regressor = Ridge(alpha=0.5)\nnames = [\"Extra Tree (max_depth=2)\", \"Linear Regression\", \"Stacked Regressor\"]\nevaluate_model(base_regressors, meta_regressor, names, split=True, kbest=5).head()",
"2\nBase regressors:\n\n\nDecisionTreeRegressor with max_depth=2\n\n\nDecisionTreeRegressor with max_depth=3\n\n\nMeta regressor\n\nDecisionTreeRegressor with max_depth=3\n\nDetails\n\n\nWe split the train data into sets so that decision tree regressor dont perfectly predict.\n\n\nDefault parameters of DecisionTreeRegressor perfectly predicts the train data so we must be careful.\n\n\nUse max_depth=2 to reduce over fitting.\n\n\nStacking in this case improves but not a lot.",
"base_regressors=[DecisionTreeRegressor(max_depth=2), DecisionTreeRegressor(max_depth=3)]\nmeta_regressor = DecisionTreeRegressor(max_depth=3)\nnames = [\"Decision Tree (max_depth=2)\", \"Decision Tree (max_depth=3)\", \"Stacked Regressor\"]\nevaluate_model(base_regressors, meta_regressor, names).head()",
"3\nBase regressors:\n\n\nDecisionTreeRegressor with max_depth=2\n\n\nDecisionTreeRegressor with max_depth=3\n\n\nMeta regressor\n\nLinearRegression\n\nDetails\n\nStacked model improves a bit but not a lot. This could be because the the DecisionTreeRegressor with depth=3 is a good classifier.",
"base_regressors=[DecisionTreeRegressor(max_depth=2), DecisionTreeRegressor(max_depth=3)]\nmeta_regressor = LinearRegression()\nnames = [\"Decision Tree (max_depth=2)\", \"Decision Tree (max_depth=3)\", \"Stacked Regressor\"]\nevaluate_model(base_regressors, meta_regressor, names).head()",
"4\nBase regressors:\n\n\nDecisionTreeRegressor with max_depth=2\n\n\nLinearRegression\n\n\nMeta regressor\n\nRidge\n\nDetails\n\n\nStacked model improves a lot in this case. Here we chose 2 weak regressors in level 0.\n\n\nTrain data is not split in training phase.\n\n\nThe predicted",
"base_regressors=[DecisionTreeRegressor(max_depth=2), LinearRegression()]\nmeta_regressor = Ridge()\nnames = [\"Decision Tree (max_depth=2)\", \"Linear Regression\", \"Stacked Regressor\"]\nevaluate_model(base_regressors, meta_regressor, names).head()",
"5\nBase regressors:\n\n\nRidge\n\n\nLinearRegression\n\n\nMeta regressor\n\nLinearRegression\n\nDetails\n\nStacked model improves very less in this case. This is expected because there is no additional refinement on intermediate predicted values.",
"base_regressors=[Ridge(), LinearRegression()]\nmeta_regressor = LinearRegression()\nnames = [\"Ridge\", \"Linear\", \"Stacked\"]\nevaluate_model(base_regressors, meta_regressor, names).head()",
"6\nBase regressors:\n\n\nSVR\n\n\nLinearRegression\n\n\nMeta regressor\n\nGradientBoostingRegressor\n\nDetails\n\n\nStacked model improves a lot in this case. Here we chose 2 weak regressors in level 0.\n\n\nTrain data is not split in training phase.\n\n\nThe predicted",
"svr_poly = SVR(kernel='poly', C=1e3, degree=2)\nlm_model = LinearRegression()\nbase_regressors=[svr_poly, lm_model]\n\nparams = {'n_estimators': 500, 'max_depth': 4, 'min_samples_split': 1, 'learning_rate': 0.01, 'loss': 'ls'}\ngb_clf = ensemble.GradientBoostingRegressor(**params)\nmeta_regressor = gb_clf\n\nnames = [\"SVR\", \"LR\", \"Stacked Regressor (GBR)\"]\nevaluate_model(base_regressors, meta_regressor, names, split=True, kbest=None).head()",
"7\nBase regressors:\n\n\nRidge\n\n\nRandom Forest\n\n\nGradientBoostingClassifier\n\n\nMeta regressor\n\nLinear Regression",
"base_regressors = [Ridge(fit_intercept=True, normalize=True),\n ensemble.RandomForestRegressor(n_estimators=20, warm_start=True),\n ensemble.GradientBoostingRegressor(n_estimators=100, warm_start=True)]\n\nmeta_regressor = LinearRegression()\n \nnames = [\"Ridge\", \"RF\", \"GBR\", \"LR (meta)\"]\nevaluate_model(base_regressors, meta_regressor, names, split=True, kbest=None).head()",
"Conclusion\nSometimes the stacking ensemeble effect achives a higher accutacy than the individial classifiers."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pylada/pylada-light
|
notebooks/Creating a Job Folder.ipynb
|
gpl-3.0
|
[
"Organized high-throughput calculations: job-folders\nPylada provides tools to organize high-throughput calculations in a systematic\nmanner. The whole high-throughput experience revolves around job-folders.\nThese are convenient ways of organizing actual calculations. They can be though\nof as folders on a file system, or directories in unix parlance, each one\ndedicated to running a single actual calculation (eg launching :ref:VASP\n<vasp_ug> once). The added benefits beyond creating the same file-structure\nwith bash are:\n\nthe ability to create a tree of folders/calculations using the power of the\n python programming language. No more copy-pasting files and unintelligible\n bash scripts!\nthe ability to launch all folders simultaneously\nthe ability to collect the results across all folders simultaneously, all\n within python, and with all of python's goodies. E.g. no more copy-pasting\n into excel by hand. Just do the summing, and multiplying, and graphing\n there and then.\n\nActually, there are a lot more benefits. Having everything - from input to\noutput - within the same modern and efficient programming language means there\nis no limit to what can be achieved.\nThe following describes how job-folders are created. The fun bits, \nlaunching jobs, collecting results, manipulating all job-folders\nsimultaneously, can be found in the next section. Indeed, all of these are\nintrinsically linked to the Pylada's IPython interface.\nPrep: creating a dummy functional\nFirst off, we will need a functional. Rather that use something heavy, like VASP, we will use a dummy functional which does pretty much nothing... We will write it to a file, so that it can be imported later on.",
"%%writefile dummy.py\ndef functional(structure, outdir=None, value=False, **kwargs):\n \"\"\" A dummy functional \"\"\"\n from copy import deepcopy\n from pickle import dump\n from random import random\n from py.path import local\n\n structure = deepcopy(structure)\n structure.value = value\n outdir = local(outdir)\n outdir.ensure(dir=True)\n dump((random(), structure, value, functional), outdir.join('OUTCAR').open('wb'))\n\n return Extract(outdir)",
"This functional takes a few arguments, amongst which an output directory, and writes a file to disk. That's pretty much it.\nHowever, you'll notice that it returns an object of class Extract. We'll create this class in a second. This class is capable of checking whether the functional did run correctly or not (Extract.success attribute is True or False). For VASP or Espresso, it is also capable of parsing output files to recover quantities, like the total energy or the eigenvalues.\nThis class is not completely necessary to create the Job Folder, but knowing when a job a successful and being able to easily process it's ouput are really nice features to have.\nThe following is a dummy Extraction classs for the dummy functional. It knows to check for the existence of an OUTCAR file (a dummy OUTCAR, not a real one) and how to parse it.",
"%%writefile -a dummy.py\n\ndef Extract(outdir=None):\n \"\"\" An extraction function for a dummy functional \"\"\"\n from os import getcwd\n from collections import namedtuple\n from pickle import load\n from py.path import local\n\n if outdir == None:\n outdir = local()()\n Extract = namedtuple('Extract', ['success', 'directory',\n 'energy', 'structure', 'value', 'functional'])\n outdir = local(outdir)\n if not outdir.check():\n return Extract(False, str(outdir), None, None, None, None)\n if not outdir.join('OUTCAR').check(file=True):\n return Extract(False, str(outdir), None, None, None, None)\n with outdir.join('OUTCAR').open('rb') as file:\n structure, energy, value, functional = load(file)\n return Extract(True, outdir, energy, structure, value, functional)\nfunctional.Extract = Extract",
"Creating and accessing job-folders\nJob-folders can be created with two simple lines of codes:",
"from pylada.jobfolder import JobFolder\nroot = JobFolder()",
"To add further job-folders, one can do:",
"jobA = root / 'jobA'\njobB = root / 'another' / 'jobB'\njobBprime = root / 'another' / 'jobB' / 'prime'",
"As you can, see job-folders can be given any structure that on-disk directories can. What is more, a job-folder can access other job-folders with the same kind of syntax that one would use (on unices) to access other directories:",
"assert jobA['/'] is root\nassert jobA['../another/jobB'] is jobB\nassert jobB['prime'] is jobBprime\nassert jobBprime['../../'] is not jobB",
"And trying to access non-existing folders will get you in trouble:",
"try:\n root['..']\nexcept KeyError:\n pass\nelse:\n raise Exception(\"I expected an error\")",
"Furthermore, job-folders know what they are:",
"jobA.name",
"Who they're parents are:",
"jobB.parent.name",
"They know about their sub-folders:",
"assert 'prime' in jobB\nassert '/jobA' in jobBprime",
"As well as their ancestral lineage all the way to the first matriarch:",
"assert jobB.root is root",
"A Job-folder that executes code\nThe whole point of a job-folder is to create an architecture for calculations. Each job-folder can contain at most a single calculation. A calculation is setup by passing to the job-folder a function and the parameters for calling it.",
"from pylada.crystal.binary import zinc_blende\nfrom dummy import functional\n\njobA.functional = functional\njobA.params['structure'] = zinc_blende()\njobA.params['value'] = 5",
"In the above, the function functional from the dummy module created previously is imported into the namespace. The special attribute job.functional is set to functional. Two arguments, structure and value, are specified by adding the to the dictionary job.params. Please note that the third line does not contain parenthesis: this is not a function call, it merely saves a reference to the function with the object of calling it later. 'C' aficionados should think a saving a pointer to a function.\nWarning: The reference to functional is deepcopied: the instance that is saved to jod-folder is not the one that was passed to it. On the other hand, the parameters (jobA.params) are held by reference rather than by value.\nTip: To force a job-folder to hold a functional by reference rather than by value, do:\nPython\njobA._functional = functional\nThe parameters in job.params should be pickleable so that the folder can be saved to disk later. Jobfolder.functional must be a\npickleable and callable. Setting Jobfolder.functional to\nsomething else will immediately fail. In practice, this means it can be a\nfunction or a callable class, as long as that function or class is imported from a module. It cannot be defined in __main__, e.g. the script that you run to create the job-folders. And that's why the dummy functional in this example is written to it's own dummy.py file.\nThat said, we can now execute each jobA by calling the function compute:",
"directory = \"tmp/\" + jobA.name[1:]\nresult = jobA.compute(outdir=directory)\nassert result.success",
"Assuming that you the unix program tree, the following will show that an OUTCAR file was created in the right directory:",
"%%bash\n[ ! -e tree ] || tree tmp/",
"Running the job-folder jobA is exactly equivalent to calling the functional directly: \npython\nfunctional(structure=zinc_blende(), value=5, outdir='tmp/jobA')\nIn practice, what we have done is created an interface where any program can be called in the same way. This will be extremly useful when launching many jobs simultaneously.\nCreating multiple executable jobs\nThe crux of this setup is the ability to create jobs programmatically:\nFinally, let's not that executable job-folders (i.e. for which jofolder.functional is set) can be easily iterated over with jobfolder.keys(), jobfolder.values(), and jobfolder.items().",
"from pylada.jobfolder import JobFolder\nfrom pylada.crystal.binary import zinc_blende\n\nroot = JobFolder()\n\nstructures = ['diamond', 'diamond/alloy', 'GaAs']\nstuff = [0, 1, 2]\nspecies = [('Si', 'Si'), ('Si', 'Ge'), ('Ga', 'As')]\n\nfor name, value, species in zip(structures, stuff, species):\n job = root / name\n job.functional = functional\n job.params['value'] = value\n job.params['structure'] = zinc_blende()\n \n for atom, specie in zip(job.structure, species):\n atom.type = specie\n\nprint(root)",
"We can now iterate over executable subfolders:",
"print(list(root.keys()))",
"Or subsets of executable folders:",
"for jobname, job in root['diamond'].items():\n print(\"diamond/\", jobname, \" with \", len(job.params['structure']), \" atoms\")",
"Saving to disk using the python API\nJobfolders can be saved to and loaded from disk using python functions:",
"from pylada.jobfolder import load, save\nsave(root, 'root.dict', overwrite=True) # saves to file\nroot = load('root.dict') # loads from file\nprint(root)",
"But pylada also provides an ipython interface for dealing with jobfolders. It is described elsewhere. The difference between the python and the ipython interfaces are a matter of convenience."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mtchem/Twitter-Politics
|
Data_Wrangle.ipynb
|
mit
|
[
"import pandas as pd\nimport numpy as np\nfrom datetime import datetime\nimport requests\nimport re\nimport io\nimport urllib\nfrom pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter\nfrom pdfminer.converter import TextConverter\nfrom pdfminer.layout import LAParams\nfrom pdfminer.pdfpage import PDFPage\nfrom collections import defaultdict\nimport pickle\n\n\n# text cleaning imports\nimport nltk\nnltk.download('punkt')\nfrom nltk.tokenize import word_tokenize",
"Collect and Clean Twitter Data\nThe twitter data was obtained using the Trump Twitter Archive, the data is from 01/20/2017 - 03/02/2018 2:38 PM MST. I used the Federal Register's website to obtain all of the actions published by the Executive Office for the same time frame.",
"# load json twitter data\ntwitter_json = r'data/twitter_01_20_17_to_3-2-18.json'\n# Convert to pandas dataframe\ntweet_data = pd.read_json(twitter_json)",
"Using Pandas I will read the twitter json file, convert it to a dataframe, set the index to 'created at' as datetime objects, then write it to a csv",
"# read the json data into a pandas dataframe\ntweet_data = pd.read_json(twitter_json)\n# set column 'created_at' to the index\ntweet_data.set_index('created_at', drop=True, inplace= True)\n# convert timestamp index to a datetime index\npd.to_datetime(tweet_data.index)",
"The next step is to add columns with tokenized text and identify twitter specific puncutiations like hashtags and @ mentions",
"# function to identify hash tags\ndef hash_tag(text):\n return re.findall(r'(#[^\\s]+)', text) \n# function to identify @mentions\ndef at_tag(text):\n return re.findall(r'(@[A-Za-z_]+)[^s]', text)\n\n# tokenize all the tweet's text\ntweet_data['text_tokenized'] = tweet_data['text'].apply(lambda x: word_tokenize(x.lower()))\n# apply hash tag function to text column\ntweet_data['hash_tags'] = tweet_data['text'].apply(lambda x: hash_tag(x))\n# apply at_tag function to text column\ntweet_data['@_tags'] = tweet_data['text'].apply(lambda x: at_tag(x))\n\n# pickle data\ntweet_pickle_path = r'data/twitter_01_20_17_to_3-2-18.pickle'\ntweet_data.to_pickle(tweet_pickle_path)",
"Scrape Data from the Federal Register\nThis has already been done, and all of the pdfs published by the Executive Office of the U.S.A are in the data folder from 2017/01/20 - 2018/03/02\nDon't execute this code unless you need more up-to-date information",
"# Define the 2017 and 2018 url that contains all of the Executive Office of the President's published documents\nexecutive_office_url_2017 = r'https://www.federalregister.gov/index/2017/executive-office-of-the-president' \nexecutive_office_url_2018 = r'https://www.federalregister.gov/index/2018/executive-office-of-the-president' \n# scrape all urls for pdf documents published in 2017 and 2018 by the U.S.A. Executive Office\npdf_urls= []\nfor url in [executive_office_url_2017,executive_office_url_2018]:\n response = requests.get(url)\n pattern = re.compile(r'https:.*\\.pdf')\n pdfs = re.findall(pattern, response.text)\n pdf_urls.append(pdfs)\n \n\n# writes all of the pdfs to the data folder\nstart = 'data/'\nend = '.pdf'\nnum = 0\nfor i in range(0,(len(pdf_urls))):\n for url in pdf_urls[i]:\n ver = str(num)\n pdf_path = start + ver + end\n r = requests.get(url)\n file = open(pdf_path, 'wb')\n file.write(r.content)\n file.close()\n num = num + 1",
"Create dataframe with the date the pdf was published and the text of each pdf",
"# function to convert pdf to text from stack overflow (https://stackoverflow.com/questions/26494211/extracting-text-from-a-pdf-file-using-pdfminer-in-python/44476759#44476759)\ndef convert_pdf_to_txt(path):\n rsrcmgr = PDFResourceManager()\n retstr = io.StringIO()\n codec = 'utf-8'\n laparams = LAParams()\n device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)\n fp = open(path, 'rb')\n interpreter = PDFPageInterpreter(rsrcmgr, device)\n password = \"\"\n maxpages = 0\n caching = True\n pagenos = set()\n\n for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,\n password=password,\n caching=caching,\n check_extractable=True):\n interpreter.process_page(page)\n\n text = retstr.getvalue()\n\n fp.close()\n device.close()\n retstr.close()\n return text\n# finds the first time the name of a day appears in the txt, and returns that name\n\ndef find_day(word_generator):\n day_list = ['Monday,', 'Tuesday,', 'Wednesday,', 'Thursday,', 'Friday,', 'Saturday,', 'Sunday,']\n day_name_dict = {'Mon':'Monday,', 'Tue':'Tuesday,','Wed':'Wednesday,','Thu':'Thursday,','Fri':'Friday,','Sat':'Saturday,','Sun':'Sunday,'}\n day_name = []\n for val in word_generator:\n if val in day_list:\n num_position = txt.index(val)\n day_name.append(txt[num_position] + txt[num_position + 1] + txt[num_position +2])\n break\n \n return day_name_dict[day_name[0]]\n# takes text and returns the first date in the document\ndef extract_date(txt):\n word_generator = (word for word in txt.split())\n day_name = find_day(word_generator)\n txt_start = int(txt.index(day_name))\n txt_end = txt_start + 40\n date_txt = txt[txt_start:txt_end].replace('\\n','')\n cleaned_txt = re.findall('.* \\d{4}', date_txt)\n date_list = cleaned_txt[0].split()\n clean_date_list = map(lambda x:x.strip(\",\"), date_list)\n clean_date_string = \", \".join(clean_date_list)\n date_obj = datetime.strptime(clean_date_string, '%A, %B, %d, %Y')\n return date_obj\n",
"Create a dictionary using DefaultDict where the date of publication is the key, and the text of the pdf is the value.",
"start_path = r'data/'\nend_path = '.pdf'\ndata_dict = defaultdict(list)\nfor i in range(0,270):\n file_path = start_path + str(i) + end_path\n txt = convert_pdf_to_txt(file_path)\n date_obj = extract_date(txt)\n data_dict[date_obj].append(txt)",
"Create a list of tuples, where the date is the first entry and the text of a pdf is the second entry, skipping over any values of None",
"tuple_lst = []\nfor k, v in data_dict.items():\n if v != None:\n for text in v:\n tuple_lst.append((k, text))\n \n\n# create dataframe from list of tuples\nfed_reg_dataframe = pd.DataFrame.from_records(tuple_lst, columns=['date','str_text'], index = 'date')\n\n# tokenize all the pdf text\nfed_reg_dataframe['token_text'] = fed_reg_dataframe['str_text'].apply(lambda x: word_tokenize(x.lower()))\n\n# final dataframe\nfed_reg_dataframe[fed_reg_dataframe.index > '2017-01-20']",
"Pickle the dataframe, so that you only need to process the text once",
"# pickle final data\nfed_reg_data = r'data/fed_reg_data.pickle'\nfinal_df.to_pickle(fed_reg_data)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
atlas-outreach-data-tools/notebooks
|
november_2017_v-1.0/ATLAS_OpenData_notebook_04.ipynb
|
gpl-3.0
|
[
"<CENTER>\n <a href=\"http://opendata.atlas.cern\" class=\"icons\"><img src=\"http://opendata.atlas.cern/DataAndTools/pictures/opendata-top-transblack.png\" style=\"width:40%\"></a>\n</CENTER>\nA more difficult notebook in python\nIn this notebook you can find a more difficult program that shows further high energy physics (HEP) analysis techniques.\nThe following analysis is searching for events where Z bosons decay to two leptons of same flavour and opposite charge (to be seen for example in the Feynman diagram).\n<CENTER><img src=\"Z_ElectronPositron.png\" style=\"width:40%\"></CENTER>\nFirst of all - like we did it in the first notebook - ROOT is imported to read the files in the .root data format.",
"import ROOT",
"In order to activate the interactive visualisation of the histogram that is later created we can use the JSROOT magic:",
"%jsroot on",
"Next we have to open the data that we want to analyze. As described above the data is stored in a *.root file.",
"f = ROOT.TFile.Open(\"mc_105986.ZZ.root\")\n#f = ROOT.TFile.Open(\"mc_147770.Zee.root\")\n#f = ROOT.TFile.Open(\"http://opendata.atlas.cern/release/samples/MC/mc_147770.Zee.root\")",
"After the data is opened we create a canvas on which we can draw a histogram. If we do not have a canvas we cannot see our histogram at the end. Its name is Canvas and its header is c. The two following arguments define the width and the height of the canvas.",
"canvas = ROOT.TCanvas(\"Canvas\",\"c\",800,600)",
"The next step is to define a tree named t to get the data out of the .root file.",
"tree = f.Get(\"mini\")",
"Now we define a histogram that will later be placed on this canvas. Its name is variable, the header of the histogram is Mass of the Z boson, the x axis is named mass [GeV] and the y axis is named events. The three following arguments indicate that this histogram contains 30 bins which have a range from 40 to 140.",
"hist = ROOT.TH1F(\"variable\",\"Mass of the Z boson; mass [GeV]; events\",30,40,140)",
"Time to fill our above defined histogram. At first we define some variables and then we loop over the data. We also make some cuts as you can see in the # comments.",
"leadLepton = ROOT.TLorentzVector()\ntrailLepton = ROOT.TLorentzVector()\n\nfor event in tree:\n \n # Cut #1: At least 2 leptons\n if tree.lep_n == 2:\n \n # Cut #2: Leptons with opposite charge\n if (tree.lep_charge[0] != tree.lep_charge[1]):\n \n # Cut #3: Leptons of the same family (2 electrons or 2 muons)\n if (tree.lep_type[0] == tree.lep_type[1]):\n \n # Let's define one TLorentz vector for each, e.i. two vectors!\n leadLepton.SetPtEtaPhiE(tree.lep_pt[0]/1000., tree.lep_eta[0], tree.lep_phi[0], tree.lep_E[0]/1000.)\n trailLepton.SetPtEtaPhiE(tree.lep_pt[1]/1000., tree.lep_eta[1], tree.lep_phi[1], tree.lep_E[1]/1000.)\n # Next line: addition of two TLorentz vectors above --> ask mass very easy (devide by 1000 to get value in GeV)\n invmass = leadLepton + trailLepton\n \n hist.Fill(invmass.M())",
"After filling the histogram we want to see the results of the analysis. First we draw the histogram on the canvas and then the canvas on which the histogram lies.",
"hist.Draw()\n\ncanvas.Draw()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
itoledoc/python_coffee
|
.ipynb_checkpoints/itoledoc_coffee-checkpoint.ipynb
|
mit
|
[
"Python Coffee, November 5, 2015\nImport required libraries",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n%matplotlib inline",
"The previous import code requires that you have pandas, numpy and matplotlib installed. If you are using conda\nyou already have all of this libraries installed. Otherwise, use pip to install them. The magic command %matplotlib inline loads the required variables and tools needed to embed matplotlib figures in a ipython notebook.\nImport optional libraries to use plotly.\nPlot.ly is a cloud based visualization tool, which has a mature python API. It is very useful to create profesional looking and interactive plots, that are\nshared publicly on the cloud; so be careful on publishing only data that you want (and can) share.\nInstalling plot.ly is done easily with pip or conda, but it requires you to create an account and then require a API token. If you don't want to install it, you can jump this section.",
"import plotly.tools as tls\nimport plotly.plotly as py\nimport cufflinks as cf\nimport plotly\nplotly.offline.init_notebook_mode()\ncf.offline.go_offline()",
"Import data file with pandas",
"df = pd.read_csv('data_files/baseline_channels_phase.txt', sep=' ')",
"df is an instance of the pandas object (data structure) pandas.DataFrame. A DataFrame instance has several methods (functions) to operate over the object. For example, is easy to display the data for a first exploration of what it contains using .head()",
"df.head()",
"A DataFrame can be converted into a numpy array by using the method .values:",
"df.values",
"For numpy expert, you have also methods to access the data using the numpy standards. If you want to extract the data at the coordinate (0,1) you can do:",
"df.iloc[0,1]",
"But also you can use the column names and index keys, to substract, for example, the name of the first antenna in a baseline pair from row 3:",
"df.ix[3, 'ant1name']",
"DataFrame are objects containgin tabular data, that can be grouped by columns and then used to aggreate data. Let's say you want to obtaing the mean frequency for the baselines and the number of channels used:",
"data_group = df.groupby(['ant1name', 'ant2name'])\ndf2 = data_group.agg({'freq': np.mean, 'chan': np.count_nonzero}).reset_index()\ndf2.head()\n\ndata_raw = df.groupby(['ant1name', 'ant2name', 'chan']).y.mean()\ndata_raw.head(30)\n\ndata_raw.unstack().head(20)\n\npd.options.display.max_columns = 200\ndata_raw.unstack().head(20)\n\ndata_raw = data_raw.unstack().reset_index()\ndata_raw.head()\n\ndata_raw.to_excel('test.xls', index=False)\n\ntodegclean = np.degrees(np.arcsin(np.sin(np.radians(data_raw.iloc[:,2:]))))\n\ntodegclean.head()\n\ntodegclean['mean'] = todegclean.mean(axis=1)\n\ntodegclean.head()\n\ndata_clean = todegclean.iloc[:,:-1].apply(lambda x: x - todegclean.iloc[:,-1])\ndata_clean.head(20)\n\ndata_ready = pd.merge(data_raw[['ant1name', 'ant2name']], todegclean, left_index=True, right_index=True)\ndata_ready.head()",
"Plot.ly",
"data_clean2 = data_clean.unstack().reset_index().copy()\n\ndata_clean2.query('100 < level_1 < 200')\n\ndata_clean2.query('100 < level_1 < 200').iplot(kind='scatter3d', x='chan', y='level_1', mode='markers', z=0, size=6, \n title='Phase BL', filename='phase_test', width=1, opacity=0.8, colors='blue', symbol='circle',\n layout={'scene': {'aspectratio': {'x': 1, 'y': 3, 'z': 0.7}}})\n\nploting = data_clean2.query('100 < level_1 < 200').figure(kind='scatter3d', x='chan', y='level_1', mode='markers', z=0, size=6, \n title='Phase BL', filename='phase_test', width=1, opacity=0.8, colors='blue', symbol='circle',\n layout={'scene': {'aspectratio': {'x': 1, 'y': 3, 'z': 0.7}}})\n\n# ploting\n\nploting.data[0]['marker']['color'] = 'blue'\nploting.data[0]['marker']['line'] = {'color': 'blue', 'width': 0.5}\nploting.data[0]['marker']['opacity'] = 0.5\n\nplotly.offline.iplot(ploting)",
"Matplotlib",
"fig=plt.figure()\nax=fig.gca(projection='3d')\n\nX = np.arange(0, data_clean.shape[1],1)\nY = np.arange(0, data_clean.shape[0],1)\n\nX, Y = np.meshgrid(X,Y)\n\nsurf = ax.scatter(X, Y, data_clean, '.', c=data_clean,s=2,lw=0,cmap='winter')\n\n%matplotlib notebook\n\nfig=plt.figure()\nax=fig.gca(projection='3d')\n\nX = np.arange(0, data_clean.shape[1],1)\nY = np.arange(0, data_clean.shape[0],1)\n\nX, Y = np.meshgrid(X,Y)\n\nsurf = ax.scatter(X, Y, data_clean, '.', c=data_clean,s=2,lw=0,cmap='winter')\n\ndata_clean2.plot(kind='scatter', x='chan', y=0)\n\nimport seaborn as sns\n\ndata_clean2.plot(kind='scatter', x='level_1', y=0)\n\ndata_ready['noise'] = todegclean.iloc[:,2:].std(axis=1)\n\ndata_ready[['ant1name', 'ant2name', 'noise']].head(10)\n\ncorr = data_ready[['ant1name', 'ant2name', 'noise']].pivot_table(index=['ant1name'], columns=['ant2name'])\n\ncorr.columns.levels[1]\n\ncorr2 = pd.DataFrame(corr.values, index=corr.index.values, columns=corr.columns.levels[1].values)\n\ncorr2.head(10)\n\nf, ax = plt.subplots(figsize=(11, 9))\ncmap = sns.diverging_palette(220, 10, as_cmap=True)\nsns.heatmap(corr2, cmap=cmap,\n square=True, xticklabels=5, yticklabels=5,\n linewidths=.5, cbar_kws={\"shrink\": .5}, ax=ax)\n\n?sns.heatmap"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session03/Day4/Parallel.ipynb
|
mit
|
[
"Parallelization and Algorithm Development\n\nBy C Hummels (Caltech)",
"import random\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"It can be hard to guess which code is going to operate faster just by looking at it because the interactions between software and computers can be extremely complex. The best way to optimize code is through using profilers to identify bottlenecks in your code and then attempt to address these problems through optimization. Let's give it a whirl.\nProblem 1) Searching lists by bisecting them\nProblem 1a\nLet's say you have a sorted list of random elements, and you want to see where a new number fits into them to remain ordered. The simplest way to search through an ordered list to find the location for your desired element is to just step through it one-by-one, but this is not algorithmically ideal. Please write out such a function, and comment on the complexity of this algorithm (seen below) in big O notation. Then use timeit to test how fast it is.",
"# Create sorted random array and random element of that array; this just sets up the problem.\ndef rand_arr(n_elements=100000):\n \n random_list = [random.random() for i in np.arange(n_elements)]\n random_list.sort()\n return random_list\n\nrandom_list = rand_arr()\nnew_number = random.random()\n\n%%timeit\n# complete",
"Problem 1b\nWe can cut corners all we want to optimize that code, but at the end of the day, it's still going to not be very good. So let's try to make a better algorithm for searching the ordered array to find where to put our new number. One such way is by continually bisecting the array in what is called a binary search, and checking if the special_number is greater than, or less than the value at the bisection, then shifting to the remaining values to continue the search. Please code this up, and comment on its complexity in big O notation, and use timeit to get its speed. Did you get the result you expected to get? If not, why do you think that happened? You may get different results if you change the input parameters. Think about it.",
"%%timeit\n# complete",
"Problem 1c\nOK, now let's say we have to do a lot of these operations. Like do this binary search on a thousand different random lists. This is what is known as an embarassingly parallel operation, because you're repeating a single function/algorithm on a bunch of independent objects. We can use our good friend, the map function to map a function to a list of objects! Do this with your binary_search function on 100 random lists and time it.",
"%%timeit\nlist_of_list_of_ran = [rand_arr() for i in range(100)]\n# complete",
"Problem 1d\nBut it gets better! We can actually use multiprocessing.Pool.map() to do the same thing, but this way it uses every processor available to it! Load this up and see what you get, or if you run into any problems.",
"%%timeit\n# complete",
"Problem 1e\nDid it succeed? What sort of speed bonus did you see? Or if it failed, what do you think you could do in order to make it not fail?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
WomensCodingCircle/CodingCirclePython
|
Lesson05_Strings/Strings.ipynb
|
mit
|
[
"Strings\nA sting is a sequence of characters. A string's characters can be treated similarly to a list. Each character has an index starting at 0 and can be accessed just like a list\nmy_str = \"M y S t r i n g\"\n#index 0 1 2 3 4 5 6 7 8\n\nprint my_str[0]",
"chinese_zodiac = \"Rat Ox Tiger Rabbit Dragon Snake Horse Goat Monkey Rooster Dog Pig\"\nprint(chinese_zodiac[0])\nprint(chinese_zodiac[1])",
"TRY IT\nCreate a string with the 5 elements of the Chinese zodiac (wood fire earth metal water) and store it in a variable called elements. Get the 4th letter of elements.\nYou can get the length (number of characters) of a string by using the len operator\nlen(my_string)",
"print(len(chinese_zodiac))",
"Be careful with length, it is the number of characters, not the last index.\nThe last index is len(string) - 1",
"zlen = len(chinese_zodiac)\n# WRONG\nprint(chinese_zodiac[zlen])\n# RIGHT\nprint(chinese_zodiac[zlen - 1])",
"But actually you can use negative indexing to get the last character in a string. -1 is the last character, -2 is the second to last and so on. \nWondering why negative indexing starts with -1 and not 0? It's because -0 and 0 are the same thing, so you would just get the first character.",
"print(chinese_zodiac[-1])",
"TRY IT\nCreate a string with your name and store it in a variable called name. Print the last character of your name using both indexing methods (positive and negative).\nString Slices\nYou can take more than a single character; you can take a whole slice. To take a slice, give the first index and then the last index + 1. (The first index is inclusive, second index is exclusive)\nstring[0:5]",
"second_animal = chinese_zodiac[4:6]\nprint(second_animal)",
"You can omit the first index and it will start at the beginning, you can omit the last index and it will go to the end.",
"first_six = chinese_zodiac[:32]\nprint(first_six)\n\nlast_six = chinese_zodiac[33:]\nprint(last_six)",
"TRY IT\nWhat happens when you omit both indices? Try it on the chinese_zodiac.\nImmutable strings\nStrings are immutable. You cannot change their contents; you must make a new string.",
"# This will fail\nchinese_zodiac[0] = 'A'\n\n# This is better\nprint(chinese_zodiac)\nchinese_zodiac_minus_r = chinese_zodiac[1:]\nprint(chinese_zodiac_minus_r)\n\nate_the_rat = \"C\" + chinese_zodiac_minus_r\nprint(ate_the_rat)",
"Strings and in\nThe in operator checks if a substring is in a string. It returns a boolean.\n'substring' in 'string'\n\nHint case matters",
"print('Cat in zodiac:') \nprint('Cat' in chinese_zodiac)\n\nprint('Dragon in zodiac:')\nprint('Dragon' in chinese_zodiac)",
"Looping through strings\nYou can loop through each character in a string using a for loop (or a while loop, but that is more straightforward)\nfor character in string:\n print character",
"for character in 'Rat':\n print(character)\n\n# Print only vowels\nfor letter in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':\n if letter in 'AEIOUY':\n print(letter)",
"TRY IT\nLoop through the chinese zodiac characters, printing only letters that are in you name\nString Comparison\nYou can compare strings using the ==, >, >=, <, and <= operators. \nNumbers come first, then capital letters and then lowercase letters. They are actually sorted based on their ascii value http://ascii.cl/",
"print('A' > 'a')\nprint('A' < 'a')\nprint('A' > 'B')\nprint('0' > 'A')\n\n# Case matters in equality\nprint(('cat' == 'Cat'))\nprint(('cat' == 'cat'))",
"String Methods\nThere are several built in methods you can use on your strings. To find them all use dir(str). \n'my string'.method_name(params)",
"dir(chinese_zodiac)",
"lower, upper, title, capitalize, and swapcase all change the case of the string\nHINT: use one of these methods to transform input to functions so that you don't have to worry about what case the user's input is in",
"print(chinese_zodiac.lower())\nprint(chinese_zodiac.upper())",
"The split method slices the string into a list\nIt takes one parameter, the character(s) to split on\nmy_str.split(slice_string)\n\nAnd the join method merges a list into a string\nIt operates on the 'glue' string, and the list is the parameter\nglue_string.join(my_list)",
"cz_list = chinese_zodiac.split(' ')\nprint(cz_list)\n\nprint(', '.join(cz_list))",
"The find method finds the index of a substring (and -1 if it doesn't exist) (Why -1 and not 0?)\ncount counts the occurrence of a substring\nstartswith and endswith checks if a string starts or ends with a given substring and returns a boolean",
"print(chinese_zodiac.find('Snake'))\n\nprint(chinese_zodiac.count('at'))\n\nprint(chinese_zodiac.startswith('Ra'))",
"You can chain some string methods if the method also returns a string",
"print(''.join(cz_list).lower().startswith('ra'))",
"TRY IT\nChange the case of your elements string to upper case and then split the result on spaces (' ')\n Challenge do this by chaining string methods\nParsing Strings\nYou can use string methods and indexing to get exactly the substring you want.",
"# Lets find the zodiac animals between Ox and Monkey\nox_idx = chinese_zodiac.find('Ox')\nmonkey_idx = chinese_zodiac.find('Monkey')\n\nprint(chinese_zodiac[ox_idx:monkey_idx])\n\n# Wait, I wanted to exclude Ox\nox_end = ox_idx + len('Ox ')\nprint(chinese_zodiac[ox_end:monkey_idx])",
"TRY IT\nUse string parsing strategies to extract the host name (in this case gmail) from the email address. (And no, just counting, doesn't count)",
"email = 'my.name@gmail.com'",
"Formatting\nString concatenation gets old really fast, and casting numbers and booleans as strings does too. Luckily, there is a better option.\nString formatting allows you to include variables directly in you string.\n'string {var} '.format(var1)\n\nEach parameter passed to format has an index and can be accessed in the string using {idx}. They don't have to be in order and can be repeated",
"print('The {0} has {1} toes per limb and thus is considered {2}'.format('ox', 4, 'yin'))\nprint('The {0} has {1} toes per limb and thus is considered {2}'.format('tiger', 5, 'yang'))\n\n# I learned something when creating this notebook",
"You don't have to put the elements in the correct order",
"print('The {1} has {0} toes per limb and thus is considered {2}'.format('ox', 4, 'yin'))\nprint('The {2} has {2} toes per limb and thus is considered {2}'.format('tiger', 5, 'yang'))",
"You can also use variable names. In the parameters use a dictionary (you'll learn about these soon) or key=value syntax.",
"\"The {animal}'s attribute is {attribute}\".format(animal='snake', attribute='flexibility')",
"You can even format the variables in various ways. Reference the docs for everything, there is just too much you can do and I only have so much time to show you.\nhttps://docs.python.org/3.4/library/string.html",
"print('{:<30}'.format('left aligned'))\nprint('{:>30}'.format('right aligned'))\nprint('{:0.2f}; {:0.7f}'.format(3.14, -3.14)) ",
"TRY IT\nUse string formatting to make a sentence \"[Your name] finds string formatting [difficulty level]\"\nPROJECT: DNA EXTRAVAGANZA\nYou are going to create a program that does some very simple bioinformatics functions on a DNA input.\nBackground\nA little bit of molecular biology. Codons are non-overlapping triplets of nucleotides. \nATG CCC CTG GTA ... - this corresponds to four codons; spaces added for emphasis\n\nThe start codon is 'ATG'\nStop codons can be 'TGA' , 'TAA', or 'TAG', but they must be 'in frame' with the start codon. The first stop codon usually determines the end of the gene. \nIn other words:\n'ATGCCTGA...' - here TGA is not a stop codon, because the T is part of CCT\n'ATGCCTTGA...' - here TGA is a stop codon because it is in frame (i.e. a multiple of 3 nucleic acids from ATG)\n\nThe gene is start codon to stop codon, inclusive \nExample:\ndna - GGCATGAAAGTCAGGGCAGAGCCATCTATTTGAGCTTAC\ngene - ATGAAAGTCAGGGCAGAGCCATCTATTTGA\n\nInstructions\n\nWrite a function called numCodons that takes a dna string and returns to you how many codons are in it (a codon is a group of 3 DNA bases). Examples: AAACCC -> 2 GT -> 0\nWrite a function called startCodonIndex which finds the index of the first start codon 'ATG' and returns -1 if none are found.\nWrite a function called stopCodonIndex which finds the index of the first stop codon 'TAA' or 'TAG' or 'TGA' in frame with the start codon (found from startCodonIndex) and returns -1 if none are found.\nWrite a function called codingDNA which returns the substring of the DNA from the beginning of the start codon to the end of the stop codon (please for the love of all things, use the functions you already wrote to calculate start and stop)\nWrite a function called transcription that takes the DNA and translates it to RNA. Each letter should be translated using these mappings (A->U), (T->A), (C->G), (G->C).\n\nWrite a function called DNAExtravaganza that calls your functions and prints out (using string formatting)\nDNA: [DNA]\nCODONS: [Number of codons]\nSTART: [start index]\nSTOP: [stop index]\nCODING DNA: [coding DNA string]\nTRANSCRIBED RNA: [transcribed DNA]\n\n\nYou can use these as test DNA string:\n dna='GGCATGAAAGTCAGGGCAGAGCCATCTATTGCTTACATTTGCTTCTGACACAACTGTGTTCACTAGCAACCTCAAACAGACACCATGGTGCACCTGACTCCTGAGGAGAAGTCTGCCGTTACTGCCCTGTGGGGCAAGGTGAACGTGGATGAAGTTGGTGGTGAGGCCCTGGGCAGGTTGGTATCAAGGTTACAAGACAGGTTTAAGGAGACCAATAGAAACTGGGCATGTGGAGACAGAGAAGACTCTTGGGTTTCTGATAGGCACTGACTCTCTCTGCCTATTGGTCTATTTTCCCACCCTTAGGCTGCTGGTGGTCTACCCTTGGACCCAGAGGTTCTTTGAGTCCTTTGGGGATCTGTCCACTCCTGATGCTGTTATGGGCAACCCTAAGGTGAAGGCTCATGGCAAGAAAGTGCTCGGTGCCTTTAGTGATGGCCTGGCTCACCTGGACAACCTCAAGGGCACCTTTGCCACACTGAGTGAGCTGCACTGTGACAAGCTGCACGTGGATCCTGAGAACTTCAGGGTGAGTCTATGGGACCCTTGATGTTTTCTTTCCCCTTCTTTTCTATGGTTAAGTTCATGTCATAGGAAGGGGAGAAGTAACAGGGTACAGTTTAGAATGGGAAACAGACGAATGATT'\n\n dna = 'GGGATGTTTGGGCCCTACGGGCCCTGATCGGCT'"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
compsocialscience/summer-institute
|
2018/materials/boulder/day2-digital-trace-data/BoulderSICSS.ipynb
|
mit
|
[
"<h1>Constructing Ego Networks from Retweets</h1>\n\nYotam Shmargad<br>\nUniversity of Arizona<br>\nEmail: yotam@email.arizona.edu<br>\nWeb: www.yotamshmargad.com\n<h2>Introduction</h2>\n<br>\nTwitter has become a prominent online social network, playing a major role in how people all over the world share and consume information. Moreover, while some social networks have made it difficult for researchers to extract data from their servers, Twitter remains relatively open for now. This tutorial will go through the details of how to construct a Twitter user’s ego network from retweets they have received on their tweets. Instead of focusing on who follows who on Twitter, the method instead conceptualizes edges as existing between users if they have recently retweeted each other.<br><br>\nConceptualizing edges as retweets has two primary benefits. First, it captures recent interactions between users rather than decisions that they may have made long ago (i.e. following each other) that may not translate into meaningful interaction today. Second, users often have many more followers than they do retweeters. The method proposed can thus be used to analyze even relatively popular users. The code goes through obtaining authorization from Twitter, taking into account the limits that Twitter imposes on data extraction, and handling errors generated from deleted tweets or users.\n<h2>1. Importing libraries and getting Twitter's approval</h2>",
"# Install tweepy\n# !pip install tweepy\n\n# Import the libraries we need\nimport tweepy\nimport json\nimport time\nimport networkx\nimport os\nimport matplotlib.pyplot as plt\nfrom collections import Counter\n\n# Authenticate!\nauth = tweepy.OAuthHandler(\"Consumer Key\", \"Consumer Secret\")\nauth.set_access_token(\"Access Token\", \"Access Token Secret\")\n\napi = tweepy.API(auth)\n\n# Check working directory\nos.getcwd()\n\n# Set working directory\nos.chdir('FOLDER FOR SAVING FILES')\n\n# Check working directory\nos.getcwd()",
"<h2>2. Pulling ego tweets</h2>",
"# Keep track of API calls\n# User timeline\ncallsUT = 0\n\n# Retweeters\ncallsRT = 0\n\n# Number of tweets to be pulled\n# Ego\nE = 10\n\n# Alter\nA = 10\n\n# Existing user with tweets\nego = api.user_timeline(screen_name = \"CUBoulder\", count = E, include_rts = False, exclude_replies = True)\ncallsUT += 1\n\nlen(ego)\n\n# Existing user with no tweets\nego = api.user_timeline(screen_name = \"DeveloperYotam\", count = E, include_rts = False, exclude_replies = True)\ncallsUT += 1\n\nlen(ego)\n\n# Non-existing user\nego = api.user_timeline(screen_name = \"fakeuserq4587937045\", count = E, include_rts = False, exclude_replies = True)\ncallsUT += 1\n\n# Handling errors\nego = []\negosn = \"CUBoulder\"\n\ntry:\n ego_raw = api.user_timeline(screen_name = egosn, count = E, include_rts = False, exclude_replies = True)\nexcept tweepy.TweepError:\n print(\"fail!\")\n\ncallsUT += 1\n\n# Converting results to a list of json objects\nego = [egotweet._json for egotweet in ego_raw]\n\n# Writing ego tweets to a json file\nwith open('egotweet.json', 'w') as file:\n json.dump(ego, file)\n\ncallsUT\n\n# Looking at a json object\nego[0]\n\n# Accessing an element of ego tweets\nego[0][\"id_str\"]\n\n# Storing one of ego's tweet id\negoid = ego[0][\"id_str\"]\n\n# Storing and printing ego tweet ids and retweet counts\ntweetids = []\nretweets = []\n\nif len(ego) != 0:\n for egotweet in ego:\n tweetids.append(egotweet[\"id_str\"])\n retweets.append(egotweet[\"retweet_count\"])\n print(egotweet[\"id_str\"],egotweet[\"retweet_count\"])",
"<h2>3. Pulling retweeters</h2>",
"# Collecting Retweets\negort = api.retweets(ego[0][\"id_str\"])\ncallsRT += 1\n\nlen(egort)\n\ncallsRT\n\n# Non-existing tweet\negort = api.retweets(\"garblegarble\")\ncallsRT += 1\n\n# Note: callsRT did not increase in the last command\ncallsRT\n\ncallsRT += 1\n\n# Sleep for 10 seconds\ntime.sleep(10)\n\n# Collecting retweeters of ego tweets\nallretweeters = []\nself = []\ncheck = []\n\nfor egotweet in ego:\n retweeters = []\n try:\n selftweet = 0\n if callsRT >= 75:\n time.sleep(900)\n egort_raw = api.retweets(egotweet[\"id_str\"])\n egort = [egoretweet._json for egoretweet in egort_raw]\n for retweet in egort:\n if retweet[\"user\"][\"id_str\"]!=egoid:\n allretweeters.append((egoid,retweet[\"user\"][\"id_str\"]))\n retweeters.append(retweet[\"user\"][\"id_str\"])\n else:\n selftweet = 1\n check.append(len(retweeters))\n self.append(selftweet) \n except tweepy.TweepError:\n check.append(0)\n self.append(0)\n \n callsRT += 1\n\n# Writing results to files\nwith open('check.json', 'w') as file:\n json.dump(check, file)\n\nwith open('self.json', 'w') as file:\n json.dump(self, file)\n \nwith open('allretweeters.json', 'w') as file:\n json.dump(allretweeters, file)\n\n# Printing tweet ids, retweet counts, \n# retweeters obtained, and whether a self tweet is included\nfor a, b, c, d in zip(tweetids,retweets,check,self):\n print(a, b, c, d)\n\nlen(allretweeters)\n\nallretweeters",
"<h2>4. Visualizing the network of retweeters</h2>",
"# Assigning edge weight to be number of tweets retweeted\nweight = Counter()\nfor (i, j) in allretweeters:\n weight[(i, j)] +=1\n\nweight\n\n# Defining weighted edges\nweighted_edges = list(weight.items())\n\nweighted_edges\n\n# Defining the network object\nG = networkx.Graph()\nG.add_edges_from([x[0] for x in weighted_edges])\n\n# Visualizing the network\nnetworkx.draw(G, width=[x[1] for x in weighted_edges])",
"<h2>5. Pulling retweeter tweets</h2>",
"# Defining the set of unique retweeters\nunique = [x[0][1] for x in weighted_edges]\n\nlen(unique)\n\nunique\n\ncallsUT\n\n# Collecting and storing the tweets of retweeters\nalter = []\nalters = []\n\nfor retweeter in unique:\n try:\n if callsUT >= 900:\n time.sleep(900)\n alter_raw = api.user_timeline(retweeter, count = A, include_rts = False, exclude_replies = True)\n alter = [altertweet._json for altertweet in alter_raw]\n alters.append(alter)\n except tweepy.TweepError:\n print(\"fail!\")\n callsUT += 1\n\nwith open('alters.json', 'w') as file:\n json.dump(alters, file)\n\ncallsUT\n\nlen(alters)\n\n# Printing the number of tweets pulled for each retweeter\nfor alt in alters:\n print(len(alt))\n\n# Storing and printing alter ids, tweet ids, and retweet counts\naltids = []\nalttweetids = []\naltretweets = []\n\nfor alt in alters:\n for alttweet in alt:\n altids.append(alttweet[\"user\"][\"id_str\"])\n alttweetids.append(alttweet[\"id_str\"])\n altretweets.append(alttweet[\"retweet_count\"])\n print(alttweet[\"user\"][\"id_str\"],alttweet[\"id_str\"],alttweet[\"retweet_count\"]) ",
"<h2>6. Pulling retweeters of retweeters</h2>",
"# Collecting retweeters of alter tweets\nallalt = []\naltself = []\naltcheck = []\n\nfor alt in alters:\n for alttweet in alt:\n altid = alttweet[\"user\"][\"id_str\"]\n altretweeters = []\n try:\n selftweet = 0\n if callsRT >= 75:\n time.sleep(900)\n altrt_raw = api.retweets(alttweet[\"id_str\"])\n altrt = [altretweet._json for altretweet in altrt_raw]\n for retweet in altrt:\n if retweet[\"user\"][\"id_str\"]!=altid:\n allalt.append((altid,retweet[\"user\"][\"id_str\"]))\n altretweeters.append(retweet[\"user\"][\"id_str\"])\n else:\n selftweet = 1\n altcheck.append(len(altretweeters))\n altself.append(selftweet) \n except tweepy.TweepError:\n altcheck.append(0)\n altself.append(0)\n \n callsRT += 1\n\n# Writing results to files\nwith open('altcheck.json', 'w') as file:\n json.dump(altcheck, file)\n\nwith open('altself.json', 'w') as file:\n json.dump(altself, file)\n \nwith open('altretweeters.json', 'w') as file:\n json.dump(altretweeters, file)\n\nwith open('allalt.json', 'w') as file:\n json.dump(allalt, file)\n\n# Printing alter user ids, tweet ids, retweet counts, \n# retweeters obtained, and whether a self tweet is included\nfor a, b, c, d, e in zip(altids,alttweetids,altretweets,altcheck,altself):\n print(a, b, c, d, e)\n\nlen(allalt)\n\nallalt",
"<h2>7. Visualizing the full network of retweeters</h2>",
"weight = Counter()\nfor (i, j) in allalt:\n weight[(i, j)] +=1\n\nweight\n\nall_edges = weighted_edges + list(weight.items())\n\nall_edges\n\n# Defining the full network object\nG = networkx.Graph()\nG.add_edges_from([x[0] for x in all_edges])\n\n# Visualizing the full network\nnetworkx.draw(G, width=[x[1] for x in all_edges])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LucaCanali/Miscellaneous
|
PLSQL_Neural_Network/MNIST_tensorflow_exp_to_oracle.ipynb
|
apache-2.0
|
[
"TensorFlow training of an artificial neural network to recognize handwritten digits in the MNIST dataset and export it to Oracle RDBMS\nThis notebook contains the preparation steps for the notebook MNIST_oracle_plsql.ipynb where you can find the steps for deploying a neural network serving engine in Oracle using PL/SQL \nAuthor: Luca.Canali@cern.ch - July 2016\nInitialize the environment and load the training set\nCredits: the code for defining and training the neural network is adapted (with extensions) from the Google TensorFlow tutorial https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_softmax.py",
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n# Import data\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\n\nflags = tf.app.flags\nFLAGS = flags.FLAGS\nflags.DEFINE_string('data_dir', '/tmp/data/', 'Directory for storing data')\n\n# Load training and test data sets with labels\nmnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)\n",
"Definition of the neural network:\n\nThe following defines a basic feed forward neural network with one hidden layer\nOther standard techniques used are the definition of cross entropy as loss function and the use of gradient descent as optimizer",
"# define and initialize the tensors\n\nx = tf.placeholder(tf.float32, shape=[None, 784])\ny_ = tf.placeholder(tf.float32, shape=[None, 10])\n\nW0 = tf.Variable(tf.truncated_normal([784, 100], stddev=0.1))\nb0 = tf.Variable(tf.zeros([100])) \n\nW1 = tf.Variable(tf.truncated_normal([100, 10], stddev=0.1))\nb1 = tf.Variable(tf.zeros([10])) \n\n# Feed forward neural network with one hidden layer\n\n# y0 is the hidden layer with sigmoid activation\ny0 = tf.sigmoid(tf.matmul(x, W0) + b0)\n\n# y1 is the output layer (softmax)\n# y1[n] is the predicted probability that the input image depicts number 'n'\ny1 = tf.nn.softmax(tf.matmul(y0, W1) + b1)\n\n# The the loss function is defined as cross_entropy\ncross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y1), reduction_indices=[1]))\n\n# train the network using gradient descent\ntrain_step = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cross_entropy)\n\n\n# start a TensorFlow interactive session\nsess = tf.InteractiveSession()\nsess.run(tf.initialize_all_variables())\n",
"Train the network\n\nThe training uses 55000 images with labels\nIt is performed over 30000 iterations using mini batch size of 100 images",
"batch_size = 100\ntrain_iterations = 30000\n\n# There are mnist.train.num_examples=55000 images in the train sample\n# train in batches of 'batch_size' images at a time\n# Repeat for 'train_iterations' number of iterations\n# Training batches are randomly calculated as each new epoch starts\n\nfor i in range(train_iterations):\n batch = mnist.train.next_batch(100)\n train_data = feed_dict={x: batch[0], y_: batch[1]}\n train_step.run(train_data)\n\n# Test the accuracy of the trained network\ncorrect_prediction = tf.equal(tf.argmax(y1, 1), tf.argmax(y_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nprint(\"Accuracy of the trained network over the test images: %s\" % \n accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))",
"Learning exercise: extract the tensors as 'manually' run the neural network scoring\nIn the following you can find an example of how to manually run the neural network scoring in Python using numpy. This is intended as an example to further the understanding of how the scoring engine works and opens the way for the next steps, that is the implementation of the scoring engine for Oracle using PL/SQL (see also the notebook MNIST_oracle_plsql.ipynb)",
"# There are 2 matrices and 2 vectors used in this neural network:\nW0_matrix=W0.eval()\nb0_array=b0.eval()\nW1_matrix=W1.eval()\nb1_array=b1.eval()\n\nprint (\"W0 is matrix of size: %s \" % (W0_matrix.shape,) )\nprint (\"b0 is array of size: %s \" % (b0_array.shape,) )\nprint (\"W1 is matrix of size: %s \" % (W1_matrix.shape,) )\nprint (\"b1 is array of size: %s \" % (b1_array.shape,) )\n",
"Extracting the test images and labels as numpy arrays",
"testlabels=tf.argmax(mnist.test.labels,1).eval()\ntestimages=mnist.test.images\n\nprint (\"testimages is matrix of size: %s \" % (testimages.shape,) )\nprint (\"testlabels is array of size: %s \" % (testlabels.shape,) )\n",
"Example of how to run the neural network \"manually\" using the tensor values extracted into numpy arrays",
"import numpy as np\n\ndef softmax(x):\n \"\"\"Compute the softmax function on a numpy array\"\"\"\n return np.exp(x) / np.sum(np.exp(x), axis=0)\n\ndef sigmoid(x):\n \"\"\"Compute the sigmoid function on a numpy array\"\"\"\n return (1 / (1 + np.exp(-x)))\n\ntestimage=testimages[0]\ntestlabel=testlabels[0]\n\nhidden_layer = sigmoid(np.dot(testimage, W0_matrix) + b0_array)\npredicted = np.argmax(softmax(np.dot(hidden_layer, W1_matrix) + b1_array))\n\nprint (\"image label %d, predicted value by the neural network: %d\" % (testlabel, predicted))",
"Visual test that the predicted value is indeed correct",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.imshow(testimage.reshape(28,28), cmap='Greys')",
"Transfer of the tensors and test data into Oracle tables\nFor the following you should have access to a (test) Oracle database. This procedure has been tested with Oracle 11.2.0.4 and 12.1.0.2 on Linux.\nTo keep the test isolated you can create a dedicated user (suggested name, mnist) for the data transfer, as follows:\n<code>\nFrom a DBA account (for example the user system) execute:\nSQL> create user mnist identified by mnist default tablespace users quota unlimited on users;\nSQL> grant connect, create table, create procedure to mnist;\nSQL> grant read, write on directory DATA_PUMP_DIR to mnist;\n</code>\nThese are the tables that will be used in the following code to transfer the tensors and testdata:\n<code>\nSQL> connect mnist/mnist@ORCL\nSQL> create table tensors(name varchar2(20), val_id number, val binary_float, primary key(name, val_id));\nSQL> create table testdata(image_id number, label number, val_id number, val binary_float, primary key(image_id, val_id));\n</code>\nOpen the connection to the database using cx_Oracle:\n(for tips on how to install and use of cx_Oracle see also https://github.com/LucaCanali/Miscellaneous/tree/master/Oracle_Jupyter)",
"import cx_Oracle\nora_conn = cx_Oracle.connect('mnist/mnist@dbserver:1521/orcl.cern.ch')\ncursor = ora_conn.cursor()",
"Transfer the matrixes W0 and W1 into the table tensors (which must be precreated as described above)",
"i = 0\nsql=\"insert into tensors values ('W0', :val_id, :val)\"\nfor column in W0_matrix:\n array_values = []\n for element in column:\n array_values.append((i, float(element)))\n i += 1\n cursor.executemany(sql, array_values)\n\nora_conn.commit()\n\ni = 0\nsql=\"insert into tensors values ('W1', :val_id, :val)\"\nfor column in W1_matrix:\n array_values = []\n for element in column:\n array_values.append((i, float(element)))\n i += 1\n cursor.executemany(sql, array_values)\n\nora_conn.commit()",
"Transfer the vectors b0 and b1 into the table \"tensors\" (the table is expected to exist on the DB, create it using the SQL described above)",
"i = 0\nsql=\"insert into tensors values ('b0', :val_id, :val)\"\narray_values = []\nfor element in b0_array:\n array_values.append((i, float(element)))\n i += 1\ncursor.executemany(sql, array_values)\n\ni = 0\nsql=\"insert into tensors values ('b1', :val_id, :val)\"\narray_values = []\nfor element in b1_array:\n array_values.append((i, float(element)))\n i += 1\ncursor.executemany(sql, array_values)\n\nora_conn.commit()",
"Transfer the test data with images and labels into the table \"testdata\" (the table is expected to exist on the DB, create it using the SQL described above)",
"image_id = 0\narray_values = []\nsql=\"insert into testdata values (:image_id, :label, :val_id, :val)\"\nfor image in testimages:\n val_id = 0\n array_values = []\n for element in image:\n array_values.append((image_id, testlabels[image_id], val_id, float(element)))\n val_id += 1\n cursor.executemany(sql, array_values)\n image_id += 1\n\nora_conn.commit()",
"Export the neural network tensors and test data\nThis will create a datapump export of the testdata_array and tensors_array tables into the destination DATA_PUMP_DIR \n(by default on $ORACLE_HOME/rdbms/log). Note if note executed earlier, run:\n<code>\nSQL> grant read, write on directory DATA_PUMP_DIR to mnist;\n</code>\nFrom the command line as Oracle export the tables in a datapump dump file:\n<code><b>\nexpdp mnist/mnist tables=testdata,tensors directory=DATA_PUMP_DIR dumpfile=MNIST_tables.dmp\n</b></code>\nThis ends the preparation of the network\nPlease move on to the instructions for deploying the scoring engine in Oracle, see MNIST_oracle_plsql.ipynb"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
strandbygaard/deep-learning
|
tensorboard/Anna_KaRNNa.ipynb
|
mit
|
[
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use.",
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]",
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the virst split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]",
"I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n \n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n\n\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n \n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)\n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n \n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN putputs to a softmax layer and calculate the cost\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n\n preds = tf.nn.softmax(logits, name='predictions')\n \n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n\n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.",
"batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001",
"Write out the graph for TensorBoard",
"model = build_rnn(len(vocab),\n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n file_writer = tf.summary.FileWriter('./logs/1', sess.graph)",
"Training\nTime for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.",
"!mkdir -p checkpoints/anna\n\nepochs = 1\nsave_every_n = 200\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')",
"Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n prime = \"Far\"\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ChrCoello/PythonClub
|
2017_05_19__Work_with_images.ipynb
|
mit
|
[
"Working with images in Python\nDealing with images in a command line fashion can be approach: \n - using Python to do the image processing / manipulation itself\n - using Python to batch process the call of a third-party command (i.e. Image Magick) that will do the image manipulation\n - other?\nPython Imaging Library (PIL or Pillow)\nThe PIL package or the more recent fork called Pillow seems to be a robust environement to work with images.\nStart by installing Pillow package from your Anaconda prompt\nconda install pillow\nOnce this is successfull import Pillow",
"from PIL import Image",
"If you are downloading it from Internet, use the following command by modifying the local_loc to the folder you are interested in saving the image.",
"import urllib.request\nlocal_loc = 'C:\\data\\local_image.tif'\nurllib.request.urlretrieve('http://folk.uio.no/sebastcc/images/Mtg01_bl1_MOAB2_s023_scaled20perc.tif', local_loc)\nim = Image.open(local_loc)\n\np=1\n\ntype(p)\n\ntype(im)\n\nprint(im.height,im.width)\n\nim.mode?\n\nim.show()\n\n# split the image into individual bands\nsource = im.split()\n\nR, G, B = 0, 1, 2\n\ntype(source)\n\ntype(source[R])\n\ntype(source[G])\n\nsource[R]",
"The point() method can be used to translate the pixel values of an image (e.g. image contrast manipulation). In most cases, a function object expecting one argument can be passed to this method. Each pixel is processed according to that function:",
"def inv_pix(pix):\n inv = 255-pix\n return inv\n\nsource[R].point(inv_pix);\n\nsource[G].point(inv_pix);\n\nsource[B].point(inv_pix);\n\nsource[G].paste\n\nsource[G].paste\n\nsource[R].paste(source[R].point(inv_pix))\n\nsource[G].paste(source[G].point(inv_pix))\n\nsource[B].paste(source[B].point(inv_pix))\n\nImage.merge??\n\nim.mode\n\nimnew = Image.merge(im.mode, source)\n\nimnew.show()\n\nimnew\n\nimnew.save('C:\\data\\GitHub\\PythonClub\\Mtg01_bl1_MOAB2_s023_scaled20perc_inverted.tif')",
"Much faster solution using eval method from Image module (thanks to Gergely). No need to split the channels.",
"im = Image.open(t)\nimfast = Image.eval(im,inv_pix)",
"Calling Image Magick from Python\nTBD\nImage Magick : http://www.imagemagick.org/script/index.php",
"convert"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JoaoFelipe/snowballing
|
snowballing/example/Progress.ipynb
|
mit
|
[
"Index\n\nWork\nWorkOk\nWorkSnowball\nForward Snowballing\nOther\nWorkUnrelated\nWorkNoFile\nWorkLang",
"import database\nfrom datetime import datetime\nfrom snowballing.operations import load_work, reload\nfrom snowballing.jupyter_utils import work_button, idisplay\nreload()",
"Work\nSince this is the default class, it is used as a safety check.\nNo work should be stored with this class at the end of the snowballing.\nIf the work has this class, we did not decide yet whether the work is related or not.",
"reload()\nquery = [idisplay(work_button(w)) for w in load_work() if w.category == \"work\"]\nlen(query)",
"WorkOk\nThis class is used for related work that has not been explored with a backward snowballing yet.",
"reload()\nquery = [idisplay(work_button(w)) for w in load_work() if w.category == \"ok\"]\nlen(query)",
"WorkSnowball\nThis class is used for related work that has been explored with backward snowballing.",
"reload()\nquery = [idisplay(work_button(w)) for w in load_work() if w.category == \"snowball\"]\nlen(query)",
"Forward Snowballing\nUse the attribute .snowball to indicate when were performed the last snowball.\nThe query search WorkOk and WorkSnowball with outdated snowball attributes.",
"current_snowball = datetime(2017, 7, 26)\n\nreload()\nquery = [\n idisplay(work_button(w))\n for w in load_work()\n if w.category in (\"ok\", \"snowball\")\n if not hasattr(w, \"snowball\")\n or w.snowball < current_snowball\n]\nlen(query)",
"Other\nWorkUnrelated\nNumber of work unrelated to the snowballing",
"sum(\n 1\n for w in load_work()\n if w.category == \"unrelated\"\n)",
"WorkNoFile\nNumber of work without files",
"sum(\n 1\n for w in load_work()\n if w.category == \"nofile\"\n)",
"WorkLang\nNumber of work in foreign languages",
"sum(\n 1\n for w in load_work()\n if w.category == \"lang\"\n)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
CSB-book/CSB
|
good_code/solutions/Jiang2013_solution.ipynb
|
gpl-3.0
|
[
"Solution of 4.10.1, Jiang et al. 2013\nWrite a function that takes as input the desired Taxon, and returns the mean value of r.\nFirst, we're going to import the csv module, and read the data. We store the taxon name in the list Taxa, and the corresponding r value in the list r_values. Note that we need to convert the values to float (we need numbers, and they are read as strings).",
"import csv\n\nwith open('../data/Jiang2013_data.csv') as csvfile:\n # set up csv reader and specify correct delimiter\n reader = csv.DictReader(csvfile, delimiter = '\\t')\n taxa = []\n r_values = []\n for row in reader:\n taxa.append(row['Taxon'])\n r_values.append(float(row['r']))",
"We check the first five entries to make sure that everything went well:",
"taxa[:5]\n\nr_values[:5]",
"Now we write a function that, given a list of taxa names and corresponding r values, calculates the mean r for a given category of taxa:",
"def get_mean_r(names, values, target_taxon = 'Fish'):\n n = len(names)\n mean_r = 0.0\n sample_size = 0\n for i in range(n):\n if names[i] == target_taxon:\n mean_r = mean_r + values[i]\n sample_size = sample_size + 1\n return mean_r / sample_size",
"Test the function using Fish as target taxon:",
"get_mean_r(taxa, r_values, target_taxon = 'Fish')",
"Let's try to run this on all taxa. We can write a separate function that returns the set of unique taxa in the database:",
"def get_taxa_list(names):\n return(set(names))\n\nget_taxa_list(taxa)",
"Calculate the mean r for each taxon:",
"for t in get_taxa_list(taxa):\n print(t, get_mean_r(taxa, r_values, target_taxon = t))",
"You should see that fish have a positive value of r, but that this is also true for other taxa. Is the mean value of r especially high for fish? To test this, compute a p-value by repeatedly sampling 37 values of r at random (37 experiments on fish are reported in the database), and calculating the probability of observing a higher mean value of r. To get an accurate estimate of the p-value, use 50,000 randomizations.\nAre these values of assortative mating high, compared to what is expected by chance? We can try associating a p-value to each r value by repeatedly computing the mean r of randomized taxa and observing how often we obtain a mean r larger than the observed value. There are many other ways of obtaining such an emperical p-value, for example counting how many times a certain taxon is represented, and sampling the values at random.",
"import scipy # scipy for random shuffle\n\ndef get_p_value_for_mean_r(names, \n values, \n target_taxon = 'Fish', \n num_simulations = 1000):\n # compute the (observed) mean_r\n obs_mean_r = get_mean_r(names, values, target_taxon)\n # create a copy of the names, to be randomized\n rnd_names = names[:]\n # create counter for observations that are higher than obs_mean_r\n count_mean_r = 0.0\n for i in range(num_simulations):\n # shuffle the taxa names\n scipy.random.shuffle(rnd_names)\n # calculate mean r value of randomized data\n rnd_mean_r = get_mean_r(rnd_names, values, target_taxon)\n # count number of rdn_mean_r that are larger or equal to obs_mean_r\n if rnd_mean_r >= obs_mean_r:\n count_mean_r = count_mean_r + 1.0\n # calculate p_value: chance of observing rnd_r_mean larger than r_mean\n p_value = count_mean_r / num_simulations\n return [target_taxon, round(obs_mean_r, 3), round(p_value, 5)]",
"Let's try the function on Fish:",
"get_p_value_for_mean_r(taxa, r_values, 'Fish', 50000)",
"A very small p-value: this means that the observed mean r value (0.397) is larger than what we would expect by chance. Note that your calculated p-value might deviate slightly from ours given the randomness in a simulation.\nRepeat the procedure for all taxa.",
"for t in get_taxa_list(taxa):\n print(get_p_value_for_mean_r(taxa, r_values, t, 50000))",
"Fish, Protists and Crustaceans have higher mean r values than expected by chance (p-value $\\leq$ 0.01). Insects, Amphibians and Birds have lower values than expected by chance (p-value $\\geq$ 0.99)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
stevetjoa/stanford-mir
|
dtw.ipynb
|
mit
|
[
"%matplotlib inline\nimport seaborn\nimport numpy, scipy, scipy.spatial, matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (14, 3)",
"← Back to Index\nDynamic Time Warping\nIn MIR, we often want to compare two sequences of different lengths. For example, we may want to compute a similarity measure between two versions of the same song. These two signals, $x$ and $y$, may have similar sequences of chord progressions and instrumentations, but there may be timing deviations between the two. Even if we were to express the two audio signals using the same feature space (e.g. chroma or MFCCs), we cannot simply sum their pairwise distances because the signals have different lengths.\nAs another example, you might want to align two different performances of the same musical work, e.g. so you can hop from one performance to another at any moment in the work. This problem is known as music synchronization (FMP, p. 115).\nDynamic time warping (DTW) (Wikipedia; FMP, p. 131) is an algorithm used to align two sequences of similar content but possibly different lengths. \nGiven two sequences, $x[n], n \\in {0, ..., N_x - 1}$, and $y[n], n \\in {0, ..., N_y - 1}$, DTW produces a set of index coordinate pairs ${ (i, j) ... }$ such that $x[i]$ and $y[j]$ are similar.\nWe will use the same dynamic programming approach described in the notebooks Dynamic Programming and Longest Common Subsequence.\nExample\nCreate two arrays, $x$ and $y$, of lengths $N_x$ and $N_y$, respectively.",
"x = [0, 4, 4, 0, -4, -4, 0]\ny = [1, 3, 4, 3, 1, -1, -2, -1, 0]\nnx = len(x)\nny = len(y)\n\nplt.plot(x)\nplt.plot(y, c='r')\nplt.legend(('x', 'y'))",
"In this simple example, there is only one value or \"feature\" at each time index. However, in practice, you can use sequences of vectors, e.g. spectrograms, chromagrams, or MFCC-grams.\nDistance Metric\nDTW requires the use of a distance metric between corresponding observations of x and y. One common choice is the Euclidean distance (Wikipedia; FMP, p. 454):",
"scipy.spatial.distance.euclidean(0, [3, 4])\n\nscipy.spatial.distance.euclidean([0, 0], [5, 12])",
"Another choice is the Manhattan or cityblock distance:",
"scipy.spatial.distance.cityblock(0, [3, 4])\n\nscipy.spatial.distance.cityblock([0, 0], [5, 12])",
"Another choice might be the cosine distance (Wikipedia; FMP, p. 376) which can be interpreted as the (normalized) angle between two vectors:",
"scipy.spatial.distance.cosine([1, 0], [100, 0])\n\nscipy.spatial.distance.cosine([1, 0, 0], [0, 0, -1])\n\nscipy.spatial.distance.cosine([1, 0], [-1, 0])",
"For more distance metrics, see scipy.spatial.distance.\nStep 1: Cost Table Construction\nAs described in the notebooks Dynamic Programming and Longest Common Subsequence, we will use dynamic programming to solve this problem. First, we create a table which stores the solutions to all subproblems. Then, we will use this table to solve each larger subproblem until the problem is solved for the full original inputs.\nThe basic idea of DTW is to find a path of index coordinate pairs the sum of distances along the path $P$ is minimized:\n$$ \\min \\sum_{(i, j) \\in P} d(x[i], y[j]) $$\nThe path constraint is that, at $(i, j)$, the valid steps are $(i+1, j)$, $(i, j+1)$, and $(i+1, j+1)$. In other words, the alignment always moves forward in time for at least one of the signals. It never goes forward in time for one signal and backward in time for the other signal.\nHere is the optimal substructure. Suppose that the best alignment contains index pair (i, j), i.e., x[i] and y[j] are part of the optimal DTW path. Then, we prepend to the optimal path \n$$ \\mathrm{argmin} \\ { d(x[i-1], y[j]), d(x[i], y[j-1]), d(x[i-1], j-1]) } $$\nWe create a table where cell (i, j) stores the optimum cost of dtw(x[:i], y[:j]), i.e. the optimum cost from (0, 0) to (i, j). First, we solve for the boundary cases, i.e. when either one of the two sequences is empty. Then we populate the table from the top left to the bottom right.",
"def dtw_table(x, y):\n nx = len(x)\n ny = len(y)\n table = numpy.zeros((nx+1, ny+1))\n \n # Compute left column separately, i.e. j=0.\n table[1:, 0] = numpy.inf\n \n # Compute top row separately, i.e. i=0.\n table[0, 1:] = numpy.inf\n \n # Fill in the rest.\n for i in range(1, nx+1):\n for j in range(1, ny+1):\n d = scipy.spatial.distance.euclidean(x[i-1], y[j-1])\n table[i, j] = d + min(table[i-1, j], table[i, j-1], table[i-1, j-1])\n return table\n\ntable = dtw_table(x, y)",
"Let's visualize this table:",
"print(' ', ''.join('%4d' % n for n in y))\nprint(' +' + '----' * (ny+1))\nfor i, row in enumerate(table):\n if i == 0:\n z0 = ''\n else:\n z0 = x[i-1]\n print(('%4s |' % z0) + ''.join('%4.0f' % z for z in row))",
"The time complexity of this operation is $O(N_x N_y)$. The space complexity is $O(N_x N_y)$.\nStep 2: Backtracking\nTo assemble the best path, we use backtracking (FMP, p. 139). We will start at the end, $(N_x - 1, N_y - 1)$, and backtrack to the beginning, $(0, 0)$.\nFinally, just read off the sequences of time index pairs starting at the end.",
"def dtw(x, y, table):\n i = len(x)\n j = len(y)\n path = [(i, j)]\n while i > 0 or j > 0:\n minval = numpy.inf\n if table[i-1, j] < minval:\n minval = table[i-1, j]\n step = (i-1, j)\n if table[i][j-1] < minval:\n minval = table[i, j-1]\n step = (i, j-1)\n if table[i-1][j-1] < minval:\n minval = table[i-1, j-1]\n step = (i-1, j-1)\n path.insert(0, step)\n i, j = step\n return path\n\npath = dtw(x, y, table)\npath",
"The time complexity of this operation is $O(N_x + N_y)$.\nAs a sanity check, compute the total distance of this alignment:",
"sum(abs(x[i-1] - y[j-1]) for (i, j) in path if i >= 0 and j >= 0)",
"Indeed, that is the same as the cumulative distance of the optimal path computed earlier:",
"table[-1, -1]",
"← Back to Index"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mcc-petrinets/formulas
|
spot/tests/python/word.ipynb
|
mit
|
[
"import spot\nspot.setup()",
"Let's build a small automaton to use as example.",
"aut = spot.translate('!a & G(Fa <-> XXb)'); aut",
"Build an accepting run:",
"run = aut.accepting_run(); run",
"Accessing the contents of the run can be done via the prefix and cycle lists.",
"print(spot.bdd_format_formula(aut.get_dict(), run.prefix[0].label))\nprint(run.cycle[0].acc)",
"To convert the run into a word, using spot.twa_word(). Note that our runs are labeled by Boolean formulas that are not necessarily a conjunction of all involved litterals. The word is just the projection of the run on its labels.",
"word = spot.twa_word(run); word",
"A word can be represented as a collection of signals (one for each atomic proposition). The cycle part is shown twice.",
"word.show()",
"Accessing the different formulas (stored as BDDs) can be done again via the prefix and cycle lists.",
"print(spot.bdd_format_formula(aut.get_dict(), word.prefix[0]))\nprint(spot.bdd_format_formula(aut.get_dict(), word.prefix[1]))\nprint(spot.bdd_format_formula(aut.get_dict(), word.cycle[0]))",
"Calling simplifify() will produce a shorter word that is compatible with the original word. For instance in the above word, the initial a is compatible with both a & b and a & !b. The word obtained by restricting a to a & b is therefore still accepted, allowing us to remove the prefix.",
"word.simplify()\nword",
"Such a simplified word can be created directly from the automaton:",
"aut.accepting_word()",
"Words can be created using the parse_word function:",
"print(spot.parse_word('a; b; cycle{a&b}'))\nprint(spot.parse_word('cycle{a&bb|bac&(aaa|bbb)}'))\nprint(spot.parse_word('a; b;b; qiwuei;\"a;b&c;a\" ;cycle{a}'))\n\n# make sure that we can parse a word back after it has been printed\nw = spot.parse_word(str(spot.parse_word('a;b&a;cycle{!a&!b;!a&b}'))); w\n\nw.show()",
"Words can be easily converted as automata",
"w.as_automaton()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
windj007/tablex-dataset
|
dataset_from_latex.ipynb
|
apache-2.0
|
[
"# !pip install git+https://github.com/windj007/TexSoup timeout-decorator\n# !apt-get install -y latexmk\n# !pip install ngram\n\n%load_ext autoreload\n%autoreload 2\n\nimport matplotlib.pyplot as plt\nimport tqdm\n%pylab inline\n\nfrom table_gen import *\n\n# # pdf2samples('./data/arxiv/1/1312.6989.tar.gz', './data/arxiv/buf/', get_table_info, aggregate_object_bboxes, display_demo=True)\n# pdf2samples('./data/arxiv/1/44/1601.04208.tar.gz', './data/arxiv/buf/', get_table_info, aggregate_object_bboxes, display_demo=True)\n# pdf2samples('./data/arxiv/sources/1006.1798.tar.gz', './data/arxiv/buf/', get_table_info, aggregate_object_bboxes, display_demo=True)\n# # pdf2samples('./data/arxiv/1/5/1201.2088.tar.gz', './data/arxiv/buf/', get_table_info, aggregate_object_bboxes, display_demo=True)\n# pdf2samples('./data/arxiv/1/8/0708.1672.tar.gz', './data/arxiv/buf/', get_table_info, aggregate_object_bboxes, display_demo=True)",
"Analyze error logs",
"# frequent_errors = collections.Counter(err\n# for f in glob.glob('./data/arxiv/err_logs/*.log')\n# for err in {line\n# for line in open(f, 'r', errors='replace')\n# if \"error:\" in line})\n# frequent_errors.most_common(10)",
"Debug",
"# preprocess_latex_file('./data/arxiv/1/44/The_Chiral_Anomaly_Final_Posting.tex')\ncompile_latex('./111/tex-playground/')\n# !mkdir ./data/arxiv/1/44/pages/\npages = pdf_to_pages('./111/tex-playground/playground.pdf', './111/tex-playground/pages/')\nwith open('./111/tex-playground/playground.tex') as f:\n soup = TexSoup.TexSoup(f.read())\n\n# test_latex = r'''\n# \\documentclass{llncs}\n# \\usepackage{graphicx}\n# \\usepackage{multirow}\n# \\usepackage{hyperref}\n# \\usepackage[a4paper, landscape, margin={0.1in, 0.1in}]{geometry}\n# \\usepackage{tabularx}\n# \\usepackage{makecell}\n\n# \\begin{document}\n\n\n# \\begin{table}\n# \\renewcommand{\\arraystretch}{0.42}\n# \\setlength{\\tabcolsep}{1.52pt}\n# \\begin{tabular}{ c c r|c|r|l|c|}\n# & .Myrnvnl & \\multicolumn{5}{ c }{Bd iH VXDy -aL} \\\\\n# & & \\multicolumn{2}{|c|}{AlUBLk.cv} & \\multicolumn{2}{ c }{ \\makecell{ nUd qLoieco jVsmTLRAf \\\\ UPS TJL xGIH } } & qe.V.. \\\\\n# & & \\makecell{ MG MTBSgR, \\\\ ,lHm Ihmd \\\\ lbrT } & -OfQuxW & MeY XR & kSG,dEFX & \\\\\n# \\hline \\makecell{ LuekQjL NSs TVq \\\\ NDC } & 8.80 Mv & osw & K*Dgc & 53.16 Tr & 8.92 & 44.18 j- \\\\\n# \\hline oL & 55.67 UueS & vGkGl & -MUJhqduw & 67.86 sxRy- & 63.51 & 10.85 A*,hKg \\\\\n# nA & 7.46 ll & yVw,P & vuege & 96.36 FuEa & 80.27 & 40.46 NeWuNVi \\\\\n# fA & 0.47 j,Gg.Gv & TrwtXRS & yfhyTWJ & 42.20 sWdg & 8.76 & 98.68 ND \\\\\n# \\hline \\makecell{ hD XXOl dMCTp Yib \\\\ p.IE TcBn } & 7.90 Pm & CbyWQtUTY, & FPFh.M & 22.38 Hs & 16.03 & 33.20 hU \\\\\n# \\hline \\makecell{ LAxtFM cmBvrJj hCRx, \\\\ LiQYh } & 97.15 *a & ..pb & ejNtniag & 84.67 F.xHN & 10.31 & 23.57 R,rdK \\\\\n# x*d afKGwJw & 82.46 REuwGLME & cIQv & iCLkFNY & 95.92 iHL & 79.26 & 80.85 L-NR \\\\\n\n# \\end{tabular}\n# \\end{table}\n\n\n# \\end{document}\n# '''\n# soup = TexSoup.TexSoup(test_latex)\n\n# !cat -n ./data/arxiv/1/44/The_Chiral_Anomaly_Final_Posting.tex\n\ntables = list(soup.find_all('table'))\n\nt = tables[0]\n\nt.tabular\n\nqq = structurize_tabular_contents(t.tabular)\nqq\n\nlist(get_all_tokens(qq.rows[8][2]))\n\nww = next(iter(get_all_tokens(qq.rows[6][0])))\nprint(ww)\nprint(type(ww))\nsrc_pos = soup.char_pos_to_line(ww.position + len(ww.text) // 2)\nsrc_pos\n\no = subprocess.check_output(['synctex', 'view',\n '-i', '{}:{}:{}'.format(src_pos[0] + 1,\n src_pos[1] + 1,\n 'playground.tex'),\n '-o', 'playground.pdf'],\n cwd='./111/tex-playground/').decode('ascii')\np = parse_synctex_output(o)\n\npage_i, boxes = list(p.items())[0]\nbox = boxes[2]\nprint(page_i, boxes)\n\npdf = PdfMinerWrapper('./111/tex-playground/playground.pdf')\npdf.load()\n\npage_info = pdf.get_page(page_i-1)\nfound_boxes = list(pdf.get_boxes(page_i-1, [convert_coords_to_pq(b, page_info[1].cropbox)\n for b in boxes]))\nprint('; '.join(pdf.get_text(page_i-1,\n [convert_coords_to_pq(b, page_info[1].cropbox)])\n for b in boxes))\n\ntable_info = list(get_table_info(soup))[1]\n\npage_img = load_image_opaque(pages[page_i - 1])\nmake_demo_mask(page_img,\n [(1,\n (convert_coords_from_pq(fb.bbox, page_info[1].cropbox) * POINTS_TO_PIXELS_FACTOR).astype('int'))\n for fb in found_boxes] +\n [(1, (numpy.array(b) * POINTS_TO_PIXELS_FACTOR).astype('int')) for b in boxes])\n\npdf_latex_to_samples('1',\n '.',\n './111/tex-playground/playground.tex',\n './111/tex-playground/playground.pdf',\n './111/tex-playground/',\n get_table_info,\n boxes_aggregator=aggregate_object_bboxes,\n display_demo=True)\n\n# print('\\n*********\\n'.join(map(str, get_all_tokens(t.tabular))))",
"Generate tables",
"# table_def = gen_table_contents()\n# print('columns', len(table_def[2][0]), 'rows', len(table_def[2]))\n\n# # %%prun\n# render_table(table_def, '/notebook/templates/springer/', '/notebook/data/generated/1.pdf',\n# print_latex_content=True,\n# display_demo=True,\n# on_wrong_parse='ignore')\n\ndef gen_and_save_table(i, seed):\n numpy.random.seed(seed)\n table_def = gen_table_contents()\n render_table(table_def, '/notebook/templates/springer/', '/notebook/data/generated_with_char_info/big_simple_lined/src/{}'.format(i))\n\nseeds = numpy.random.randint(0, 2000, size=2000)\njoblib.Parallel(n_jobs=6)(joblib.delayed(gen_and_save_table)(i, s) for i, s in enumerate(seeds))\n\n# for dirname in ['complex_clean', 'dense', 'lined', 'multiline_lined', 'no_lined', 'big_simple_lined', 'big_simple_no_lined']:\n# print(dirname)\n# for subdir in ['demo', 'src']:\n# print(subdir)\n# src_full_dirname = os.path.join('./data/generated', dirname, subdir)\n# target_full_dirname = os.path.join('./data/generated/full', subdir)\n# for fname in tqdm.tqdm(os.listdir(src_full_dirname)):\n# shutil.copy2(os.path.join(src_full_dirname, fname),\n# os.path.join(target_full_dirname, dirname + '_' + fname))",
"Get some statistics",
"archive_files = list(glob.glob('./data/arxiv/sources/*.tar.gz'))\nprint('Total downloaded', len(archive_files))\n\n# def _get_archive_content_type(fname):\n# return read_metadata(fname)['content_type']\n# print('Types:\\n', collections.Counter(joblib.Parallel(n_jobs=-1)(joblib.delayed(_get_archive_content_type)(archive)\n# for archive in archive_files)).most_common())\n# print()",
"Total downloaded 208559\nTypes:\n[('application/x-eprint-tar', 149642), ('application/x-eprint', 40360), ('application/pdf', 18292), ('application/vnd.openxmlformats-officedocument.wordprocessingml.document', 218), ('application/postscript', 47)]",
"good_papers = set()\nbad_papers = set()\n\nif os.path.exists('./good_papers.lst'):\n with open('./good_papers.lst', 'r') as f:\n good_papers = set(line.strip() for line in f)\nif os.path.exists('./bad_papers.lst'):\n with open('./bad_papers.lst', 'r') as f:\n bad_papers = set(line.strip() for line in f)\n\nprint('Good papers', len(good_papers))\nprint('Bad papers', len(bad_papers))\n\n# def check_archive_func(fname):\n# return (fname,\n# contains_something_interesting(fname, get_table_info))\n\n# archive_files_with_check_res = joblib.Parallel(n_jobs=12)(joblib.delayed(check_archive_func)(fname)\n# for fname in archive_files\n# if not (fname in bad_papers or fname in good_papers))\n# for fname, is_good in archive_files_with_check_res:\n# if is_good:\n# good_papers.add(fname)\n# else:\n# bad_papers.add(fname)\n\n# with open('./good_papers.lst', 'w') as f:\n# f.write('\\n'.join(sorted(good_papers)))\n# with open('./bad_papers.lst', 'w') as f:\n# f.write('\\n'.join(sorted(bad_papers)))",
"Apply pipeline to some papers",
"ARXIV_INOUT_PAIRS_DIR = './data/arxiv/inout_pairs/'\n\ndef _pdf2samples_mp(archive):\n try:\n pdf2samples(archive,\n ARXIV_INOUT_PAIRS_DIR,\n lambda s: get_table_info(s, extract_cells=False),\n aggregate_object_bboxes)\n except Exception as ex:\n with open(os.path.join(ARXIV_INOUT_PAIRS_DIR, os.path.basename(archive) + '.log'), 'w') as f:\n f.write(str(ex) + '\\n')\n f.write(traceback.format_exc())\n\n_ = joblib.Parallel(n_jobs=10)(joblib.delayed(_pdf2samples_mp)(arc)\n for arc in good_papers)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xmnlab/skdata
|
notebooks/SkData.ipynb
|
mit
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\" style=\"margin-top: 1em;\"><ul class=\"toc-item\"><li><span><a href=\"#SkData---Data-Specification\" data-toc-modified-id=\"SkData---Data-Specification-1\"><span class=\"toc-item-num\">1 </span>SkData - Data Specification</a></span><ul class=\"toc-item\"><li><span><a href=\"#Importing-data\" data-toc-modified-id=\"Importing-data-1.1\"><span class=\"toc-item-num\">1.1 </span>Importing data</a></span></li><li><span><a href=\"#Data-preparing-and-cleaning\" data-toc-modified-id=\"Data-preparing-and-cleaning-1.2\"><span class=\"toc-item-num\">1.2 </span>Data preparing and cleaning</a></span></li></ul></li></ul></div>\n\nSkData - Data Specification\nSkData provide a data class to structure and organize the preprocessing data.",
"from skdata.data import (\n SkDataFrame as DataFrame,\n SkDataSeries as Series\n)\n\nimport pandas as pd",
"Importing data",
"df_train = DataFrame(\n pd.read_csv('../data/train.csv', index_col='PassengerId')\n)\n\ndf_train.head()\n\ndf_train.summary()",
"Data preparing and cleaning",
"df_train['Sex'].replace({\n 'male': 'Male', 'female': 'Female'\n}, inplace=True)\n\ndf_train['Embarked'].replace({\n 'C': 'Cherbourg', 'Q': 'Queenstown', 'S': 'Southampton'\n}, inplace=True)\n\ndf_train.summary()\n\ndf_train['Sex'] = df_train['Sex'].astype('category')\ndf_train['Embarked'] = df_train['Embarked'].astype('category')\n\ndf_train.summary()\n\nsurvived_dict = {0: 'Died', 1: 'Survived'}\npclass_dict = {1: 'Upper Class', 2: 'Middle Class', 3: 'Lower Class'}\n\n# df_train['Pclass'].categorize(categories=pclass_dict)\n# df_train['Survived'].categorize(categories=survived_dict)\n\nprint('STEPS:')\ndf_train.steps"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dnc1994/MachineLearning-UW
|
ml-classification/module-4-linear-classifier-regularization-solution.ipynb
|
mit
|
[
"Logistic Regression with L2 regularization\nThe goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:\n\nExtract features from Amazon product reviews.\nConvert an SFrame into a NumPy array.\nWrite a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.\nImplement gradient ascent with an L2 penalty.\nEmpirically explore how the L2 penalty can ameliorate overfitting.\n\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create. Upgrade by\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.",
"from __future__ import division\nimport graphlab",
"Load and process review dataset\nFor this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.",
"products = graphlab.SFrame('amazon_baby_subset.gl/')",
"Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nCompute word counts (only for the important_words)\n\nRefer to Module 3 assignment for more details.",
"# The same feature processing (same as the previous assignments)\n# ---------------------------------------------------------------\nimport json\nwith open('important_words.json', 'r') as f: # Reads the list of most frequent words\n important_words = json.load(f)\nimportant_words = [str(s) for s in important_words]\n\n\ndef remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\n# Remove punctuation.\nproducts['review_clean'] = products['review'].apply(remove_punctuation)\n\n# Split out the words into individual columns\nfor word in important_words:\n products[word] = products['review_clean'].apply(lambda s : s.split().count(word))",
"Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).",
"products",
"Train-Validation split\nWe split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.\nNote: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.",
"train_data, validation_data = products.random_split(.8, seed=2)\n\nprint 'Training set : %d data points' % len(train_data)\nprint 'Validation set : %d data points' % len(validation_data)",
"Convert SFrame to NumPy array\nJust like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. \nNote: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.",
"import numpy as np\n\ndef get_numpy_data(data_sframe, features, label):\n data_sframe['intercept'] = 1\n features = ['intercept'] + features\n features_sframe = data_sframe[features]\n feature_matrix = features_sframe.to_numpy()\n label_sarray = data_sframe[label]\n label_array = label_sarray.to_numpy()\n return(feature_matrix, label_array)",
"We convert both the training and validation sets into NumPy arrays.\nWarning: This may take a few minutes.",
"feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')\nfeature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment') ",
"Building on logistic regression with no L2 penalty assignment\nLet us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nwhere the feature vector $h(\\mathbf{x}_i)$ is given by the word counts of important_words in the review $\\mathbf{x}_i$. \nWe will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)",
"'''\nproduces probablistic estimate for P(y_i = +1 | x_i, w).\nestimate ranges between 0 and 1.\n'''\ndef predict_probability(feature_matrix, coefficients):\n # Take dot product of feature_matrix and coefficients \n ## YOUR CODE HERE\n scores = np.dot(feature_matrix, coefficients)\n \n # Compute P(y_i = +1 | x_i, w) using the link function\n ## YOUR CODE HERE\n predictions = 1. / (1. + np.exp(- scores))\n \n return predictions",
"Adding L2 penalty\nLet us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.\nRecall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\n Adding L2 penalty to the derivative \nIt takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.\n\nRecall from the lecture that the link function is still the sigmoid:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nWe add the L2 penalty term to the per-coefficient derivative of log likelihood:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right) \\color{red}{-2\\lambda w_j }\n$$\n\nThe per-coefficient derivative for logistic regression with an L2 penalty is as follows:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right) \\color{red}{-2\\lambda w_j }\n$$\nand for the intercept term, we have\n$$\n\\frac{\\partial\\ell}{\\partial w_0} = \\sum{i=1}^N h_0(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\nNote: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.\nWrite a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:\n * errors vector containing $(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w}))$ for all $i$\n * feature vector containing $h_j(\\mathbf{x}_i)$ for all $i$\n * coefficient containing the current value of coefficient $w_j$.\n * l2_penalty representing the L2 penalty constant $\\lambda$\n * feature_is_constant telling whether the $j$-th feature is constant or not.",
"def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant): \n \n # Compute the dot product of errors and feature\n ## YOUR CODE HERE\n derivative = np.dot(errors, feature)\n\n # add L2 penalty term for any feature that isn't the intercept.\n if not feature_is_constant: \n ## YOUR CODE HERE\n derivative -= 2 * l2_penalty * coefficient\n \n return derivative",
"Quiz question: In the code above, was the intercept term regularized?\nTo verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).\n$$\\ell\\ell(\\mathbf{w}) = \\sum_{i=1}^N \\Big( (\\mathbf{1}[y_i = +1] - 1)\\mathbf{w}^T h(\\mathbf{x}_i) - \\ln\\left(1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))\\right) \\Big) \\color{red}{-\\lambda\\|\\mathbf{w}\\|_2^2} $$",
"def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):\n indicator = (sentiment==+1)\n scores = np.dot(feature_matrix, coefficients)\n \n lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)\n \n return lp",
"Quiz question: Does the term with L2 regularization increase or decrease $\\ell\\ell(\\mathbf{w})$?\nThe logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.",
"def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):\n coefficients = np.array(initial_coefficients) # make sure it's a numpy array\n for itr in xrange(max_iter):\n # Predict P(y_i = +1|x_i,w) using your predict_probability() function\n ## YOUR CODE HERE\n predictions = predict_probability(feature_matrix, coefficients)\n \n # Compute indicator value for (y_i = +1)\n indicator = (sentiment==+1)\n \n # Compute the errors as indicator - predictions\n errors = indicator - predictions\n for j in xrange(len(coefficients)): # loop over each coefficient\n is_intercept = (j == 0)\n # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].\n # Compute the derivative for coefficients[j]. Save it in a variable called derivative\n ## YOUR CODE HERE\n derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)\n \n # add the step size times the derivative to the current coefficient\n ## YOUR CODE HERE\n coefficients[j] += step_size * derivative\n \n # Checking whether log likelihood is increasing\n if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \\\n or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:\n lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)\n print 'iteration %*d: log likelihood of observed labels = %.8f' % \\\n (int(np.ceil(np.log10(max_iter))), itr, lp)\n return coefficients",
"Explore effects of L2 regularization\nNow that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.\nBelow, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.",
"# run with L2 = 0\ncoefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=0, max_iter=501)\n\n# run with L2 = 4\ncoefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=4, max_iter=501)\n\n# run with L2 = 10\ncoefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=10, max_iter=501)\n\n# run with L2 = 1e2\ncoefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e2, max_iter=501)\n\n# run with L2 = 1e3\ncoefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e3, max_iter=501)\n\n# run with L2 = 1e5\ncoefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e5, max_iter=501)",
"Compare coefficients\nWe now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.\nBelow is a simple helper function that will help us create this table.",
"table = graphlab.SFrame({'word': ['(intercept)'] + important_words})\ndef add_coefficients_to_table(coefficients, column_name):\n table[column_name] = coefficients\n return table",
"Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.",
"add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')\nadd_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')\nadd_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')\nadd_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')\nadd_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')\nadd_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')",
"Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.\nQuiz Question. Which of the following is not listed in either positive_words or negative_words?",
"subtable = table[['word', 'coefficients [L2=0]']]\n\nptable = sorted(subtable, key=lambda x: x['coefficients [L2=0]'], reverse=True)[:5]\n\nntable = sorted(subtable, key=lambda x: x['coefficients [L2=0]'], reverse=False)[:5]\n\npositive_words = [w['word'] for w in ptable]\nprint positive_words\n\nnegative_words = [w['word'] for w in ntable]\nprint negative_words",
"Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 10, 6\n\ndef make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):\n cmap_positive = plt.get_cmap('Reds')\n cmap_negative = plt.get_cmap('Blues')\n \n xx = l2_penalty_list\n plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')\n \n table_positive_words = table.filter_by(column_name='word', values=positive_words)\n table_negative_words = table.filter_by(column_name='word', values=negative_words)\n del table_positive_words['word']\n del table_negative_words['word']\n \n for i in xrange(len(positive_words)):\n color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))\n plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),\n '-', label=positive_words[i], linewidth=4.0, color=color)\n \n for i in xrange(len(negative_words)):\n color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))\n plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),\n '-', label=negative_words[i], linewidth=4.0, color=color)\n \n plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)\n plt.axis([1, 1e5, -1, 2])\n plt.title('Coefficient path')\n plt.xlabel('L2 penalty ($\\lambda$)')\n plt.ylabel('Coefficient value')\n plt.xscale('log')\n plt.rcParams.update({'font.size': 18})\n plt.tight_layout()",
"Run the following cell to generate the plot. Use the plot to answer the following quiz question.",
"make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])",
"Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.\nQuiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)\nMeasuring accuracy\nNow, let us compute the accuracy of the classifier model. Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified data points}}{\\mbox{# total data points}}\n$$\nRecall from lecture that that the class prediction is calculated using\n$$\n\\hat{y}_i = \n\\left{\n\\begin{array}{ll}\n +1 & h(\\mathbf{x}_i)^T\\mathbf{w} > 0 \\\n -1 & h(\\mathbf{x}_i)^T\\mathbf{w} \\leq 0 \\\n\\end{array} \n\\right.\n$$\nNote: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.\nBased on the above, we will use the same code that was used in Module 3 assignment.",
"def get_classification_accuracy(feature_matrix, sentiment, coefficients):\n scores = np.dot(feature_matrix, coefficients)\n apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)\n predictions = apply_threshold(scores)\n \n num_correct = (predictions == sentiment).sum()\n accuracy = num_correct / len(feature_matrix) \n return accuracy",
"Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.",
"train_accuracy = {}\ntrain_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)\ntrain_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)\ntrain_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)\ntrain_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)\ntrain_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)\ntrain_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)\n\nvalidation_accuracy = {}\nvalidation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)\nvalidation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)\nvalidation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)\nvalidation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)\nvalidation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)\nvalidation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)\n\n# Build a simple report\nfor key in sorted(validation_accuracy.keys()):\n print \"L2 penalty = %g\" % key\n print \"train accuracy = %s, validation_accuracy = %s\" % (train_accuracy[key], validation_accuracy[key])\n print \"--------------------------------------------------------------------------------\"",
"Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the training data?\nQuiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the validation data?\nQuiz question: Does the highest accuracy on the training data imply that the model is the best one?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mclaughlin6464/pearce
|
notebooks/Make MCMC Cfgs for Aemulus.ipynb
|
mit
|
[
"Out of date, please use Make MCMC Cfgs for Aemulus SLAC (update)\nNow that I've streamlined the MCMC process, I am going to submit multiple chains simultaneously. This notebook will make multiple, similar config files, for broad comparison. \nThis may be rolled into pearce as a helper function, I haven't decided.\nFor rmin 0, 0.5, 1.0:\nFor no ab, HSAB and CorrAB emu:\n\n Vpeak sham\n Mpeak sham\n HOD\n HSAB HOD",
"import yaml\nimport copy\nfrom os import path\nimport numpy as np\n\norig_cfg_fname = '/u/ki/swmclau2/Git/pearce/bin/mcmc/nh_gg_sham_hsab_mcmc_config.yaml'\nwith open(orig_cfg_fname, 'r') as yamlfile:\n orig_cfg = yaml.load(yamlfile)\n\nbsub_template=\"\"\"#BSUB -q long \n#BSUB -W 72:00\n#BSUB -J {jobname}\n#BSUB -oo /u/ki/swmclau2/Git/pearce/bin/mcmc/config/{jobname}.out \n#BSUB -n 8 \n#BSUB -R \"span[ptile=8]\"\n\npython /u/ki/swmclau2/Git/pearce/pearce/inference/initialize_mcmc.py {jobname}.yaml\npython /u/ki/swmclau2/Git/pearce/pearce/inference/run_mcmc.py {jobname}.yaml\n\"\"\"\n\nr_bins = np.logspace(-1, 1.6, 19)\n\nemu_names = ['HOD','HSAB','CorrAB']\nemu_fnames = [['/u/ki/swmclau2/des/wp_zheng07/PearceWpCosmo.hdf5', '/u/ki/swmclau2/des/ds_zheng07/PearceDsCosmo.hdf5'],\\\n ['/u/ki/swmclau2/des/wp_hsab/PearceWpHSABCosmo.hdf5', '/u/ki/swmclau2/des/ds_hsab/PearceDsHSABCosmo.hdf5'],\n ['/u/ki/swmclau2/des/wp_corrab/PearceWpCorrABCosmo.hdf5', '/u/ki/swmclau2/des/ds_corrab/PearceDsCorrABCosmo.hdf5']]\n\n\nmeas_cov_fname = '/u/ki/swmclau2/Git/pearce/bin/covmat/wp_ds_full_covmat.npy'\n\n# TODO replace with actual ones onace test boxes are done \nemu_cov_fnames = [['/u/ki/swmclau2/Git/pearce/bin/optimization/wp_hod_emu_cov.npy', \n '/u/ki/swmclau2/Git/pearce/bin/optimization/ds_hod_emu_cov.npy'],\n ['/u/ki/swmclau2/Git/pearce/bin/optimization/wp_hsab_emu_cov.npy', \n '/u/ki/swmclau2/Git/pearce/bin/optimization/ds_hsab_emu_cov.npy'],\n ['/u/ki/swmclau2/Git/pearce/bin/optimization/wp_corrab_emu_cov.npy',\n '/u/ki/swmclau2/Git/pearce/bin/optimization/ds_corrab_emu_cov.npy']]\n\nn_walkers = 200\nn_steps = 50000",
"Vpeak SHAM",
"tmp_cfg = copy.deepcopy(orig_cfg)\ndirectory = \"/afs/slac.stanford.edu/u/ki/swmclau2/Git/pearce/bin/mcmc/config/\"\noutput_dir = \"/nfs/slac/g/ki/ki18/des/swmclau2/PearceMCMC/\"\njobname_template = \"VpeakSHAM_wp_ds_rmin_{rmin}_{emu_name}\"\n\nfor rmin in [None, 0.5, 1.0, 2.0]:\n for emu_fname, emu_name, emu_cov in zip(emu_fnames, emu_names, emu_cov_fnames):\n \n if rmin is not None:\n tmp_cfg['emu']['fixed_params'] = {'z': 0.0, 'rmin':rmin}\n \n tmp_cfg['emu']['training_file'] = emu_fname\n tmp_cfg['emu']['emu_type'] = ['NashvilleHot' for i in xrange(len(emu_fname))]\n tmp_cfg['emu']['emu_cov_fname'] = emu_cov\n \n tmp_cfg['data']['true_data_fname']= ['/u/ki/swmclau2/Git/pearce/bin/mock_measurements/SHAMmock_wp.npy',\n '/u/ki/swmclau2/Git/pearce/bin/mock_measurements/SHAMmock_ds.npy']\n tmp_cfg['data']['true_cov_fname'] = meas_cov_fname\n \n tmp_cfg['data']['obs']['obs'] = ['wp','ds']\n tmp_cfg['data']['obs']['rbins'] = list(r_bins)\n \n\n tmp_cfg['chain']['nsteps'] = n_steps\n tmp_cfg['chain']['nwalkers'] = n_walkers\n tmp_cfg['chain']['mcmc_type'] = 'normal'\n\n \n tmp_cfg['data']['sim']['sim_hps']['system'] = 'ki-ls'\n tmp_cfg['data']['cov']['emu_cov_fname'] = tmp_cfg['emu']['emu_cov_fname'] \n tmp_cfg['data']['cov']['meas_cov_fname'] = tmp_cfg['data']['true_cov_fname']\n \n jobname = jobname_template.format(rmin=rmin, emu_name=emu_name)\n tmp_cfg['fname'] = path.join(output_dir, jobname+'.hdf5')\n\n with open(path.join(directory, jobname +'.yaml'), 'w') as f:\n yaml.dump(tmp_cfg, f)\n\n with open(path.join(directory, jobname + '.bsub'), 'w') as f:\n f.write(bsub_template.format(jobname=jobname))",
"Shuffled SHAM",
"tmp_cfg = copy.deepcopy(orig_cfg)\ndirectory = \"/afs/slac.stanford.edu/u/ki/swmclau2/Git/pearce/bin/mcmc/config/\"\noutput_dir = \"/nfs/slac/g/ki/ki18/des/swmclau2/PearceMCMC/\"\njobname_template = \"ShuffledSHAM_wp_ds_rmin_{rmin}_{emu_name}\"\n\nfor rmin in [None, 0.5, 1.0, 2.0]:\n for emu_fname, emu_name, emu_cov in zip(emu_fnames, emu_names, emu_cov_fnames):\n \n if rmin is not None:\n tmp_cfg['emu']['fixed_params'] = {'z': 0.0, 'rmin':rmin}\n \n tmp_cfg['emu']['training_file'] = emu_fname\n tmp_cfg['emu']['emu_type'] = ['NashvilleHot' for i in xrange(len(emu_fname))]\n tmp_cfg['emu']['emu_cov_fname'] = emu_cov\n \n tmp_cfg['data']['true_data_fname']= ['/u/ki/swmclau2/Git/pearce/bin/mock_measurements/SHUFFLED_SHAMmock_wp.npy',\n '/u/ki/swmclau2/Git/pearce/bin/mock_measurements/SHUFFLED_SHAMmock_ds.npy']\n tmp_cfg['data']['true_cov_fname'] = meas_cov_fname\n \n tmp_cfg['data']['obs']['obs'] = ['wp','ds']\n tmp_cfg['data']['obs']['rbins'] = list(r_bins)\n \n\n tmp_cfg['chain']['nsteps'] = n_steps\n tmp_cfg['chain']['nwalkers'] = n_walkers\n tmp_cfg['chain']['mcmc_type'] = 'normal'\n\n \n tmp_cfg['data']['sim']['sim_hps']['system'] = 'ki-ls'\n tmp_cfg['data']['cov']['emu_cov_fname'] = tmp_cfg['emu']['emu_cov_fname'] \n tmp_cfg['data']['cov']['meas_cov_fname'] = tmp_cfg['data']['true_cov_fname']\n \n jobname = jobname_template.format(rmin=rmin, emu_name=emu_name)\n tmp_cfg['fname'] = path.join(output_dir, jobname+'.hdf5')\n\n with open(path.join(directory, jobname +'.yaml'), 'w') as f:\n yaml.dump(tmp_cfg, f)\n\n with open(path.join(directory, jobname + '.bsub'), 'w') as f:\n f.write(bsub_template.format(jobname=jobname))",
"Universe Machine",
"tmp_cfg = copy.deepcopy(orig_cfg)\ndirectory = \"/afs/slac.stanford.edu/u/ki/swmclau2/Git/pearce/bin/mcmc/config/\"\noutput_dir = \"/nfs/slac/g/ki/ki18/des/swmclau2/PearceMCMC/\"\njobname_template = \"UniverseMachine_wp_ds_rmin_{rmin}_{emu_name}\"\n\nfor rmin in [None, 0.5, 1.0, 2.0]:\n for emu_fname, emu_name, emu_cov in zip(emu_fnames, emu_names, emu_cov_fnames):\n \n if rmin is not None:\n tmp_cfg['emu']['fixed_params'] = {'z': 0.0, 'rmin':rmin}\n \n tmp_cfg['emu']['training_file'] = emu_fname\n tmp_cfg['emu']['emu_type'] = ['NashvilleHot' for i in xrange(len(emu_fname))]\n tmp_cfg['emu']['emu_cov_fname'] = emu_cov\n \n tmp_cfg['data']['true_data_fname']= ['/u/ki/swmclau2/Git/pearce/bin/mock_measurements/UMmock_wp.npy',\n '/u/ki/swmclau2/Git/pearce/bin/mock_measurements/UMmock_ds.npy']\n tmp_cfg['data']['true_cov_fname'] = meas_cov_fname\n \n tmp_cfg['data']['obs']['obs'] = ['wp','ds']\n tmp_cfg['data']['obs']['rbins'] = list(r_bins)\n \n\n tmp_cfg['chain']['nsteps'] = n_steps\n tmp_cfg['chain']['nwalkers'] = n_walkers\n tmp_cfg['chain']['mcmc_type'] = 'normal'\n\n \n tmp_cfg['data']['sim']['sim_hps']['system'] = 'ki-ls'\n tmp_cfg['data']['cov']['emu_cov_fname'] = tmp_cfg['emu']['emu_cov_fname'] \n tmp_cfg['data']['cov']['meas_cov_fname'] = tmp_cfg['data']['true_cov_fname']\n \n jobname = jobname_template.format(rmin=rmin, emu_name=emu_name)\n tmp_cfg['fname'] = path.join(output_dir, jobname+'.hdf5')\n\n with open(path.join(directory, jobname +'.yaml'), 'w') as f:\n yaml.dump(tmp_cfg, f)\n\n with open(path.join(directory, jobname + '.bsub'), 'w') as f:\n f.write(bsub_template.format(jobname=jobname))",
"HOD",
"#orig_cfg_fname = '/u/ki/swmclau2//Git/pearce/bin/mcmc/nh_gg_sham_hsab_mcmc_config.yaml'\nwith open(orig_cfg_fname, 'r') as yamlfile:\n orig_cfg = yaml.load(yamlfile)\n\ntmp_cfg = copy.deepcopy(orig_cfg)\ndirectory = \"/u/ki/swmclau2/Git/pearce/bin/mcmc/config/\"\noutput_dir = \"/nfs/slac/g/ki/ki18/des/swmclau2/PearceMCMC/\"\n#output_dir = \"/afs/slac.stanford.edu/u/ki/swmclau2\"\n\njobname_template = \"HOD_wp_ds_rmin_{rmin}_{emu_name}\"#_fixed_HOD\"\n\nfor rmin in [None, 0.5, 1.0, 2.0]:\n for emu_fname, emu_name, emu_cov in zip(emu_fnames, emu_names, emu_cov_fnames):\n \n if rmin is not None:\n tmp_cfg['emu']['fixed_params'] = {'z': 0.0, 'rmin':rmin}\n \n tmp_cfg['emu']['training_file'] = emu_fname\n tmp_cfg['emu']['emu_type'] = ['NashvilleHot' for i in xrange(len(emu_fname))]\n tmp_cfg['emu']['emu_cov_fname'] = emu_cov\n \n tmp_cfg['data']['obs']['obs'] = ['wp','ds']\n tmp_cfg['data']['obs']['rbins'] = list(r_bins)\n \n tmp_cfg['data']['cov']['meas_cov_fname'] = meas_cov_fname\n tmp_cfg['data']['cov']['emu_cov_fname'] = tmp_cfg['emu']['emu_cov_fname'] # TODO make this not be redundant\n \n jobname = jobname_template.format(rmin=rmin, emu_name=emu_name)\n tmp_cfg['fname'] = path.join(output_dir, jobname+'.hdf5')\n \n tmp_cfg['sim']= {'gal_type': 'HOD',\n 'hod_name': 'zheng07',\n 'hod_params': {'alpha': 1.083,\n 'logM0': 13.2,\n 'logM1': 14.2,\n 'sigma_logM': 0.2,\n 'conc_gal_bias': 1.0},\n 'nd': '5e-4',\n 'scale_factor': 1.0,\n 'min_ptcl': 100, \n 'sim_hps': {'boxno': 1,\n 'downsample_factor': 1e-2,\n 'particles': True,\n 'realization': 0,\n 'system': 'ki-ls'},\n 'simname': 'testbox'}\n \n # TODO i shouldnt have to specify this this way\n tmp_cfg['data']['sim'] = tmp_cfg['sim']\n \n tmp_cfg['chain']['nwalkers'] = n_walkers\n tmp_cfg['chain']['nsteps'] = n_steps\n tmp_cfg['chain']['mcmc_type'] = 'normal'\n \n # fix params during MCMC \n #tmp_cfg['chain']['fixed_params'].update(tmp_cfg['sim']['hod_params'])\n \n try:\n del tmp_cfg['data']['true_data_fname']\n del tmp_cfg['data']['true_cov_fname']\n except KeyError:\n pass\n\n with open(path.join(directory, jobname +'.yaml'), 'w') as f:\n yaml.dump(tmp_cfg, f)\n\n with open(path.join(directory, jobname + '.bsub'), 'w') as f:\n f.write(bsub_template.format(jobname=jobname))",
"HSAB HOD"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mayank-johri/LearnSeleniumUsingPython
|
Section 1 - Core Python/Chapter 02 - Basics/2.2. Python Identifiers.ipynb
|
gpl-3.0
|
[
"Python Identifiers aka Variables\n\nIn Python, variable names are kind of tags/pointers to the memory location which hosts the data. We can also think of it as a labeled container that can store a single value. That single value can be of practically any data type.\nStoring Values in Variables:\nIn Python, the declaration & assignation of value to the variable are done at the same time, i.e. as soon as we assign a value to a non-existing or existing variable, the required memory location is assigned to it and proper data is populated in it.\n\nNOTE: Storing Values in Python is one of the most important concepts and should be understood with great care.",
"current_month = \"MAY\"\nprint(current_month)",
"In the above example, current_month is the variable name and \"MAY\" is the value associated with it. Operation performed in the first line is called assignment and such statements are called assignment statements. Lets discuss them in details.\nAssignment Statements\nYou’ll store values in variables with an assignment statement. An assignment statement consists of a variable name, an equal sign (called the assignment operator), and the value to be stored. If you enter the assignment statement current_month = \"MAY\", then a variable named current_month will be pointing to a memory location which has the string value \"MAY\" stored in it.\n\nIn Python, we do not need to declare variable explicitly. They are declared automatically when any value is assigned. The assignment is done using the equal (=) operator as shown in the below example:",
"current_month = \"MAY\" # A comment.\ndate = 10",
"The pictorial representation of variables from above example.\n<img src=\"files/variables.png\">\nNow lets perform some actions on the variable current_month and observe the changes happening on it. \nIn the example shown below, we will reassign a new value JUNE to the variable current_month and observe the effects of it. \nImage below shows the process of re-assignation. You will note that a new memory is assigned to the variable instead of using the existing one.",
"current_month = \"JUNE\"",
"current_month was initially pointing to memory location containing value MAY and after reassination, it was pointing to a new memory location containing value JUNE and if no other referencing the previous value, then automatically Python GC will clean it at some future time.",
"current_month = \"JUNE\"\nprint(id(current_month))\n\nnext_month = \"JUNE\"\nprint(id(next_month))\n\nnext_month = \"June\"\nprint(id(next_month))",
"Note: That value of MAY has not updated but a new memory was allocated for value JUNE and varialbe now points to it.\n\nLater in the chapter, we will show the above senario with more examples.\nHow to find the reference count of a value",
"########## Reference count ###################\n# NOTE: Please test the below code by saving \n# it as a file and executing it instead\n# of running it here.\n#############################################\nimport sys\n\nnew_var = 10101010101000\nprint(sys.getrefcount(new_var))",
"NOTE:\nThe value of refcount will almost always be more than you think. It is done internally by python to optimize the code. I will be adding more details about it in \"Section 2 -> Chapter: GC & Cleanup\"\n\nMultiple Assignment:\nIn multiple assignment, multiple variables are assigned values in a single line. There are two ways multiple assignment can be done in python. In first format all the variables point to the same value and in next all variables point to individual values. \n1. Assigning single value to multiple variables:",
"x=y=z=1000 \nprint(x, y, z)",
"In the above example, all x, y and z are pointing to same memory location which contains 1000, which we are able to identify by checking the id of the variables. They are pointing to the same memory location, thus value of id for all three are same.",
"print(id(x))\nprint(id(y))\nprint(id(z))",
"Now, lets change value of one varialbe and again check respective ides.",
"x = 200\nprint(x) \nprint(y) \nprint(z) \nprint(id(x))\nprint(id(y))\nprint(id(z))",
"Now, lets test something else. Can different data types impact the behavior of python memory optimization. We will first test it with integer, string and then with list.",
"### INTEGER \nx=1000\ny=1000\nz=1000 \nprint(x) \nprint(y) \nprint(z) \nprint(id(x))\nprint(id(y))\nprint(id(z))\n\n### String\nx=\"1000\"\ny=1000\nz=\"1000\" \nprint(x) \nprint(y) \nprint(z) \nprint(id(x))\nprint(id(y))\nprint(id(z))",
"check the id of both x and z, they are same but y is not same.",
"### list\nx = [\"1000\"]\ny = [1000]\nz = [\"1000\"] \na = [1000]\nprint(x) \nprint(y) \nprint(z) \nprint(a) \nprint(id(x))\nprint(id(y))\nprint(id(z))\nprint(id(a))",
"2. Assigning multiple values to multiple variables:",
"x, y, z = 10, 20, 30\nprint(x) \nprint(y) \nprint(z) \nprint(id(x))\nprint(id(y))\nprint(id(z))\n\nx, y, z = 10, 120, 10\nprint(x) \nprint(y) \nprint(z) \nprint(id(x))\nprint(id(y))\nprint(id(z))",
"Variable Names & Naming Conventions\nThere are a couple of naming conventions in use in Python:\n- lower_with_underscores: Uses only lower case letters and connects multiple words with underscores.\n- UPPER_WITH_UNDERSCORES: Uses only upper case letters and connects multiple words with underscores.\n- CapitalWords: Capitalize the beginning of each letter in a word; no underscores. With these conventions in mind, here are the naming conventions in use.\n\nVariable Names: lower_with_underscores\nConstants: UPPER_WITH_UNDERSCORES\nFunction Names: lower_with_underscores\nFunction Parameters: lower_with_underscores\nClass Names: CapitalWords\nMethod Names: lower_with_underscores\nMethod Parameters and Variables: lower_with_underscores\nAlways use self as the first parameter to a method\nTo indicate privacy, precede name with a single underscore.",
"pm_name = \"Narendra Modi\"\nprime_minister = \"Narendra Modi\"\ncong_p_name = \"Rahul Gandhi\"\ncorrent_name_of_cong_president = \"Rahul Gandhi\"\ncong_president = \"Rahul Gandhi\"\ncname = \"RG\"",
"Options can be used to override the default regular expression associated to each type. The table below lists the types, their associated options, and their default regular expressions.\n| Type | Default Expression |\n|:-----------------:|:-----------------------------------------:|\n| Argument | [a-z_][a-z0-9_] |\n| Attribute | [a-z_][a-z0-9_] |\n| Class | [A-Z_][a-zA-Z0-9] |\n| Constant | (([A-Z_][A-Z0-9_] |\n| Function | [a-z_][a-z0-9_] |\n| Method | [a-z_][a-z0-9_] |\n| Module | (([a-z_][a-z0-9_]), ([A-Z][a-zA-Z0-9])) |\n| Variable | [a-z_][a-z0-9_] |\n| Variable, inline1 | [A-Za-z_][A-Za-z0-9_] |\nPlease find the invalid variables name from the below list",
"this_is_my_number \nTHIS_IS_MY_NUMBER \nThisIsMyNumber\nthis_is_number \nanotherVarible\nThis1\nthis1home\n1This\n__sd__\n_sd",
"Good Variable Name\n\nChoose meaningful name instead of short name. roll_no is better than rn.\nMaintain the length of a variable name. Roll_no_of_a_student is too long?\nBe consistent; roll_no or RollNo\nBegin a variable name with an underscore(_) character for a special case.\n\nExercises\nQ 1. Find the valid and in-valid variable names from the followings:\n\nbalance\ncurrent-balance \ncurrent balance \ncurrent_balance \n4account \n_spam \n42 \nSPAM \ntotal_$um \naccount4 \n'hello' \n\nQ 2. Multiple Choice Questions & Answers\n\n\nIs Python case sensitive when dealing with identifiers?\na) yes\nb) no\nc) machine dependent\nd) none of the mentioned\n\n\nWhat is the maximum possible length of an identifier?\na) 31 characters\nb) 63 characters\nc) 79 characters\nd) none of the mentioned\n\n\nWhat does local variable names beginning with an underscore mean?\na) they are used to indicate a private variables of a class\nb) they confuse the interpreter\nc) they are used to indicate global variables\nd) None of the \n\n\nWhich of the following is true for variable names in Python?\na) unlimited length\nb) Only _ and $ special characters allowed in variable name\nc) private members should have leading & trailing underscores\nd) None of the above",
"_ is used \n* To use as ‘Internationalization(i18n)’ or ‘Localization(l10n)’ functions.",
"Q 3: Good Code / Bad Code: Find if the code in question will run or not ( with error message)\n\npython\ntest1 = 101\ntest2 = \"Arya Sharma\"\ntest3 = test1 + test2"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tesera/pygypsy
|
notebooks/#32-address-testing-findings/#32-isolated-profiling-1.ipynb
|
mit
|
[
"Initial attemps at profiling had very confusing results; possibly because of module loading and i/o\nHere, gypsy will be run and profiled on one plot, with no module loading/io recorded in profiling\nCharacterize what is happening\nIn several places, we append data to a data frame",
"%%bash\ngrep --colour -nr append ../gypsy/*.py",
"Either in the way we do it, or by its nature, it is a slow operation.",
"import pandas as pd\n\nhelp(pd.DataFrame.append)",
"There is nothing very clear about performance from the documentation. It may be worth examining the source, and of course googling append performance.\npython - Improve Row Append Performance On Pandas DataFrames - Stack Overflow\nhttp://stackoverflow.com/questions/27929472/improve-row-append-performance-on-pandas-dataframes\npython - Pandas: Why should appending to a dataframe of floats and ints be slower than if its full of NaN - Stack Overflow\nhttp://stackoverflow.com/questions/17141828/pandas-why-should-appending-to-a-dataframe-of-floats-and-ints-be-slower-than-if\npython - Creating large Pandas DataFrames: preallocation vs append vs concat - Stack Overflow\nhttp://stackoverflow.com/questions/31690076/creating-large-pandas-dataframes-preallocation-vs-append-vs-concat\npython - efficient appending to pandas dataframes - Stack Overflow\nhttp://stackoverflow.com/questions/32746248/efficient-appending-to-pandas-dataframes\npython - Pandas append perfomance concat/append using \"larger\" DataFrames - Stack Overflow\nhttp://stackoverflow.com/questions/31860671/pandas-append-perfomance-concat-append-using-larger-dataframes\npandas.DataFrame.append — pandas 0.18.1 documentation\nhttp://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html\nDecide on the action\nDo not append in a loop. It makes a copy each time and the memory allocation is poor. Should have known; it's interesting to see it demonstrated in the wild!\nPre-allocate the dataframe length by giving it an index and assigning to the index\nMWE",
"%%timeit\nd = pd.DataFrame(columns=['A'])\nfor i in xrange(1000):\n d.append({'A': i}, ignore_index=True)\n\n%%timeit\nd = pd.DataFrame(columns=['A'], index=xrange(1000))\nfor i in xrange(1000):\n d.loc[i,'A'] = i\n\n1.39/.150",
"Speedup of nearly 1 order of magnitude\nRevise the code\nGo on. Do it.\nReview code changes",
"%%bash\ngit log --since 2016-11-07 --oneline | head -n 8\n\n! git diff HEAD~7 ../gypsy",
"Tests\nThere are some issues with the tests - the data does not match the old output data to within 3 or even 2 decimal places. The mismatch is always:\n(mismatch 0.221052631579%)\nIt was resolved in fe82864:",
"%%bash \ngit log --since '2016-11-08' --oneline | grep tests",
"Run profiling",
"from gypsy.forward_simulation import simulate_forwards_df\n\ndata = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10)\n\n%%prun -D forward-sim-1.prof -T forward-sim-1.txt -q\nresult = simulate_forwards_df(data)\n\n!head forward-sim-1.txt\n\n!diff -y forward-sim-1.txt forward-sim.txt",
"Compare performance visualizations\nNow use either of these commands to visualize the profiling\n```\npyprof2calltree -k -i forward-sim-1.prof forward-sim-1.txt\nor\ndc run --service-ports snakeviz notebooks/forward-sim-1.prof\n```\nOld\n\nNew\n\nSummary of performance improvements\nforward_simulation is now 4x faster due to the changes outlined in the code review section above\non my hardware, this takes 1000 plots to ~8 minutes\non carol's hardware, this takes 1000 plots to ~25 minutes\nFor 1 million plots, we're looking at 5 to 17 days on desktop hardware\nCaveat\n\nthis isn't dealing with i/o. reading the plot table in is not a huge problem, especially if we declare the field types, but writing the growth curves for each plot will be time consuming. threads may be necessary\n\nIdentify new areas to optimize\n\nneed to find another order of magnitude improvement to get to 2.4-15 hours \npandas indexing .ix (get and set item) is taking 6 and 19% respectively\ncollectively, the lambdas being applied to output data frame are taking 19%\n\nBAFromZeroToDataAw is slow (50% of total time) because of (in order):\n\npandas init (dict)\nbaincrementnonspatial\npandas setting\n\n\n\nparallel (3 cores) gets us to 2 - 6 days - save for last\n\nAWS with 36 cores gets us to 4 - 12 hours ($6.70 - $20.10 USD on a c4.8xlarge instance in US West Region)",
"!cat forward-sim-1.txt | grep -i fromzero",
"Identify some means of optimization\nIn order of priority/time taken\n\npandas init dict\nbasal_area_aw_df = pd.DataFrame(columns=['BA_Aw'], index=xrange(max_age))\nfind a faster way to create this data frame\nrelax the tolerance for aspen\n\n\npandas set item\nuse at method \nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#fast-scalar-value-getting-and-setting\n\n\nlambdas\nuse cython for the gross tot vol and merch vol functions\nmight be wise to refactor these first to have conventional names, keyword arguments, and a base implementation to get rid of the boilerplate\ndon't be deceived - the callable is a miniscule portion; series.getitem is taking most of the time\nagain, using .at here would probably be a significant improvement\n\n\nbasalareaincremementnonspatialaw\nthis is actually slow because of the number of times the BAFromZeroToDataAw function is called as shown above\nrelaxing the tolerance may help\nindeed the tolerance is 0.01 * some value while the other factor finder functions have 0.1 tolerance i think\ncan also use cython for the increment functions\n\n\n\ndo a profiling run with IO (of reading input data and writing the plot curves to files) in next run"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ernestyalumni/CompPhys
|
crack/searchsort.ipynb
|
apache-2.0
|
[
"Binary Search\ncf. Binary Search \nGiven a sorted array arr[], of n elements, write a function to search a given element x in arr[]",
"def binarySearch(arr,l,r,x):\n \"\"\" \n @details denote L = len(arr)\n @param indices l,r=0,1...L-1\n @note l <= r expected\n base case is if l>r, then stop, nothing was found \n \"\"\"\n if (r<l):\n return -1 # x was not found\n\n else:\n mid = l + (r-l)/2\n \n if arr[mid] == x: # found it, success! \n return mid\n # if arr[mid] > x, then we know x is \"to the left\" of mid\n elif arr[mid]> x:\n r = mid-1\n return binarySearch(arr,l,r,x)\n else: # then we know x is \"to the right\" of mid\n l=mid+1\n return binarySearch(arr,l,r,x)\n\n\narr=[2,3,4,10,40]\n\nx=10\n\nresult=binarySearch(arr,0,len(arr)-1,x)\nprint(result)\n\n# Returns index of x in arr if present, else -1\ndef binarySearch (arr, l, r, x):\n \n # Check base case\n if r >= l:\n \n mid = l + (r - l)/2\n \n # If element is present at the middle itself\n if arr[mid] == x:\n return mid\n \n # If element is smaller than mid, then it can only\n # be present in left subarray\n elif arr[mid] > x:\n return binarySearch(arr, l, mid-1, x)\n \n # Else the element can only be present in right subarray\n else:\n return binarySearch(arr, mid+1, r, x)\n \n else:\n # Element is not present in the array\n return -1\n\ndef binarySearch_iter(arr,l,r,x):\n while (l<=r):\n mid = l + (r-l)/2\n if arr[mid] == x:\n return mid\n elif arr[mid] > x: # x is \"to the left\" of mid and it's not at mid\n r = mid-1\n else: # x is \"to the right\" of mid and it's not mid\n l=mid+1\n \n return -1 # x was not found at all\n\nprint(binarySearch_iter(arr,0,len(arr)-1,x))",
"Time complexity, time complexity of Binary Search is $T(n) = T(n/2)+c$, above recurrence can be solved using Recurrence, \n$O(\\log{n})$. \nMerge Sort\nhttp://www.geeksforgeeks.org/merge-sort/ \nhttp://interactivepython.org/runestone/static/pythonds/SortSearch/TheMergeSort.html \nbase case is sublist contains only 1 item or is empty. \nIf list has more than 1 item, split list and recursively invoke a merge sort on both halves. \nMerging is the process of taking 2 smaller sorted lists and combining them together into a single sorted, new list.",
"def mergeSort(arr):\n print(\"splitting\",arr)\n if len(arr) > 1: # then we continue to split\n mid = len(arr) // 2\n l_arr=arr[:mid] # left half of arr\n r_arr = arr[mid:] # right half of arr\n \n mergeSort(l_arr)\n mergeSort(r_arr)\n \n # now difficult part; how do we \"blend\" or merge together 2 sorted halves into 1 sorted list?\n i=0 # index for l_arr, i=0,1...len(l_arr)-1\n j=0 # index for r_arr, j=0,1...len(r_arr)-1\n k=0 # index for arr, k=0,1...len(arr)-1\n while i<len(l_arr) and j < len(r_arr):\n if l_arr[i] < r_arr[j]: # so l_arr into arr, and increment l_arr's index, i \n arr[k] = l_arr[i]\n i+=1 \n else:\n arr[k] = r_arr[j]\n j+=1\n k=k+1\n # then deal with \"leftovers\"\n while i < len(l_arr):\n arr[k] = l_arr[i]\n i+=1\n k+=1\n \n while j < len(r_arr):\n arr[k] = r_arr[j]\n j+=1\n k+=1\n print(\"Merging\",arr)\n# return arr\n\n \n \n\nalist = [54,26,93,17,77,31,44,55,20]\n#alist_1 = mergeSort(alist)\nmergeSort(alist)\n\nalist\n\ndef mergeSort(alist):\n print(\"Splitting \",alist)\n if len(alist)>1:\n mid = len(alist)//2\n lefthalf = alist[:mid]\n righthalf = alist[mid:]\n\n mergeSort(lefthalf)\n mergeSort(righthalf)\n\n i=0\n j=0\n k=0\n while i < len(lefthalf) and j < len(righthalf):\n if lefthalf[i] < righthalf[j]:\n alist[k]=lefthalf[i]\n i=i+1\n else:\n alist[k]=righthalf[j]\n j=j+1\n k=k+1\n\n while i < len(lefthalf):\n alist[k]=lefthalf[i]\n i=i+1\n k=k+1\n\n while j < len(righthalf):\n alist[k]=righthalf[j]\n j=j+1\n k=k+1\n print(\"Merging \",alist)\n\nalist = [54,26,93,17,77,31,44,55,20]\nmergeSort(alist)\nprint(alist)",
"Quick Sort",
"def quickSort(arr,l,r):\n if l < r:\n ip = partition(arr,l,r)\n \n quickSort(arr,l,ip-1)\n quickSort(arr,ip+1,r)\n \ndef partition(arr,l,r):\n # the goal of the partition process is to move items that are on the wrong side \n # with respect to the pivot value, while also converging on the split point\n i = (l-1) # index of smaller element\n pivot = arr[r]\n \n for j in range(l,r):\n if arr[j]<= pivot:\n i +=1 # increment smaller element\n arr[i],arr[j] = arr[j],arr[i] # swap\n arr[i+1],arr[r] = arr[r],arr[i+1] # position of i+1 is now the split point, \n # pivot value can be exchanged with contents of split point\n return (i+1)\n\narr = [10,7,8,9,1,5]\nn=len(arr)\nquickSort(arr,0,n-1)\nprint(arr)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sot/aimpoint_mon
|
fit_aimpoint_drift-2021-01.ipynb
|
bsd-2-clause
|
[
"Model for aimpoint drift (aka ACA alignment drift) 2021-01\nThis notebook documents and computes fit coefficients for a simple model that\ngives the relative ACA alignment as a linear function of the ACA CCD temperature.\nIt also includes validation of the implementation of the new model in the\nchandra_aca.drift package.\nNOTE (Jan 2, 2021): no action required\nTA re-ran this notebook and found that the best-fit jumps in DY, DZ are less\nthan one arcsec and therefore no action is required. There was about 2 arcsec\nfrom the 2020:145 safe mode, but after the IRU gyro swap it was reduced.\nSummary (not updated from previous version)\nThis is based on the origin fit_aimpoint_drift notebook but updated through\n2020:190 to include data from after the 2020:145 safe mode normal sun dwell.\nOne jump was added to the model corresponding to the 2020:145 safe mode.\nThe ACA alignment is measured accurately for each science observation via the apparent\npositions of the fid lights. These are referred to by their CXC aspect solution\ndesignation as the SIM DY and DZ offsets. This is actually a misnomer based on\nthe pre-launch understanding of what physical mechanism would generate such offsets.\nWe now know via HRMA optical axis measurements that a temperature-dependent change \nin the ACA boresight alignment is responsible. The HRMA to SIM alignment is quite\nstable.\nThe ACA alignment relates directly to the X-ray detector aimpoint that is used in\nobservation planning and analysis. With this model it will be possible to improve\nthe aimpoint accuracy by introducing a dynamic pointing offset based on the\npredicted ACA CCD temperature for each observation.\nThe model is\nDY/Z = (t_ccd - offset) * scale + (year - 2016.0) * trend + JUMPS\nwhere\nt_ccd : ACA CCD temperature (degF)\n scale : scaling in arcsec / degF\n offset : ACA CCD temperature corresponding to DY/Z = 0.0 arcsec\n trend : Trend in DY/Z (arcsec / year)\n year : decimal year\n jumpYYYYDDD : step function from 0.0 to jumpYYYYDDD (arcsec) for date > YYYY:DDD\nThe jumps are persistent step function changes in alignment that have been observed following\nextended dwells at normal sun where the ACA gets substantially hotter than during\nnormal operations. The exact mechanism is not understood, but could be due to\na non-linear stiction release of a stress point that impacts alignment.\nNote that the ACA alignment has a direct linear correlation to the ACA housing temperature (AACH1T).\nHowever, in this model we use the ACA CCD temperature as the model dependent variable because it\nis linearly related to housing temperature (AACCDPT = m * AACH1T + b) as long as the TEC is at\nmax drive current. Since there is already\nan existing Xija model to predict ACA CCD temperature this reduces duplication.\nThis model was fitted to data from 2012:180 to 2020:190 using Sherpa. The key fit results are:\n```\nDY\n\nscale = 2.1 arcsec / degF = 3.9 arcsec / degC\ntrend = -0.95 arcsec / year\njumps ~ -2 to -13 arcsec\nmodel error = +/- 1.9 arcsec (1st to 99th percentile range)\nDZ\nscale = 1.0 arcsec / degF = 1.8 arcsec / degC\ntrend = -0.09 arcsec / year\njumps ~ -0.4 to -6.1 arcsec\nmodel error = +/- 2.6 arcsec (1st to 99th percentile range)\n```\nThe model accuracy will be degraded somewhat when ACA CCD temperature\nis taken from a predictive Xija model instead of from telemetry.\nThis notebook lives in the aimpoint_mon project repository\nCode",
"import re\nimport os\n# See https://stackoverflow.com/questions/59119396/\n# how-to-use-django-3-0-orm-in-a-jupyter-notebook-without-triggering-the-async-con\n# os.environ[\"DJANGO_ALLOW_ASYNC_UNSAFE\"] = \"true\"\nimport sys\n\nimport tables\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom astropy.time import Time\nfrom astropy.table import Table\nimport Ska.engarchive.fetch_eng as fetch\nfrom Ska.engarchive import fetch_sci\nfrom Chandra.Time import DateTime\nfrom Ska.Numpy import interpolate\nfrom kadi import events\nfrom sherpa import ui\nfrom Ska.Matplotlib import plot_cxctime\n\nsys.version\n\n%matplotlib inline\n\nSIM_MM_TO_ARCSEC = 20.493\n\n# Discrete jumps after 2012:001. Note also jumps at:\n# '2008:293', # IU-reset\n# '2010:151', # IU-reset\n# '2011:190', # Safe mode\nJUMPS = ['2015:006', # IU-reset\n '2015:265', # Safe mode 6\n '2016:064', # Safe mode 7\n '2017:066', # NSM\n '2018:285', # Safe mode 8\n '2020:146', # Safe mode 9\n ]\n\nltt_bads = events.ltt_bads(pad=(0, 200000))\nnormal_suns = events.normal_suns(pad=(0, 100000))\nsafe_suns = events.safe_suns(pad=(0, 86400 * 7))\n\n# Aspect camera CCD temperature trend since 2010\nt_ccd = fetch.Msid('aacccdpt', start='2010:001', stat='5min')\nt_ccd.remove_intervals(ltt_bads | normal_suns | safe_suns)\nplt.figure(figsize=(12, 4.5))\nt_ccd.plot()\nplt.ylabel('T_ccd (degF)')\nplt.title('ACA CCD temperature')\nplt.ylim(None, 20)\nplt.grid()\n\n# Get aspect solution DY and DZ (apparent SIM offsets via fid light positions)\n# which are sampled at 1 ksec intervals and updated daily.\nif 'adat' not in globals():\n h5 = tables.open_file(f'{os.environ[\"SKA\"]}/data/aimpoint_mon/aimpoint_asol_values.h5')\n adat = h5.root.data[:]\n h5.close()\n\n adat.sort(order=['time'])\n\n # Filter bad data when asol DY and DZ are both exactly 0.0 (doesn't happen normally)\n bad = (adat['dy'] == 0.0) & (adat['dz'] == 0.0)\n adat = adat[~bad]\n\nclass AcaDriftModel(object):\n \"\"\"\n Class to encapsulate necessary data and compute the model of ACA\n alignment drift. The object created from this class is called\n by Sherpa as a function during fitting. This gets directed to\n the __call__() method.\n \"\"\"\n YEAR0 = 2016.0 # Reference year for linear offset\n \n def __init__(self, adat, start='2012:001', stop=None):\n \"\"\"\n adat is the raw data array containing aspect solution data\n sampled at 1 ksec intervals.\n \"\"\"\n # Get the ACA CCD temperature telemetry\n t_ccd = fetch.Msid('aacccdpt', stat='5min', start=start, stop=stop)\n \n # Slice the ASOL data corresponding to available ACA CCD temps\n i0, i1 = np.searchsorted(adat['time'], [t_ccd.times[0], t_ccd.times[-1]])\n self.asol = adat[i0:i1].copy()\n \n # Convert from mm to arcsec for convenience\n self.asol['dy'] *= SIM_MM_TO_ARCSEC\n self.asol['dz'] *= SIM_MM_TO_ARCSEC\n \n self.times = self.asol['time']\n self.years = Time(self.times, format='cxcsec').decimalyear\n self.years_0 = self.years - self.YEAR0\n \n # Resample CCD temp. data to the 1 ksec ASOL time stamps\n self.t_ccd = interpolate(t_ccd.vals, t_ccd.times, self.asol['time'], method='linear')\n \n # Get indices corresponding to jump times for later model computation\n self.jump_times = Time(JUMPS).cxcsec\n self.jump_idxs = np.searchsorted(self.times, self.jump_times)\n\n def __call__(self, pars, years=None, t_ccd=None):\n \"\"\"\n Calculate model prediction for DY or DZ. 
Params are:\n \n scale : scaling in arcsec / degF\n offset : ACA CCD temperature corresponding to DY/Z = 0.0 arcsec\n trend : Trend in DY/Z (arcsec / year)\n jumpYYYYDDD : discrete jump in arcsec at date YYYY:DDD\n \"\"\"\n # Sherpa passes the parameters as a list\n scale, offset, trend = pars[0:3]\n jumps = pars[3:]\n \n # Allow for passing in a different value for ACA CCD temperature\n if t_ccd is None:\n t_ccd = self.t_ccd\n\n # Compute linear part of model\n out = (t_ccd - offset) * scale + self.years_0 * trend\n\n # Put in the step function jumps\n for jump_idx, jump in zip(self.jump_idxs, jumps):\n if jump_idx > 10 and jump_idx < len(out) - 10:\n out[jump_idx:] += jump\n\n return out\n\ndef fit_aimpoint_aca_temp(axis='dy', start='2012:180', stop=None):\n \"\"\"\n Use Sherpa to fit the model parameters\n \"\"\"\n # Create the object used to define the Sherpa user model, then\n # load as a model and create parameters\n aca_drift = AcaDriftModel(adat, start, stop)\n ui.load_user_model(aca_drift, 'aca_drift_model')\n parnames = ['scale', 'offset', 'trend']\n parnames += ['jump{}'.format(re.sub(':', '', x)) for x in JUMPS]\n ui.add_user_pars('aca_drift_model', parnames)\n \n # Sherpa automatically puts 'aca_drift_model' into globals, but\n # make this explicit so code linters don't complain.\n aca_drift_model = globals()['aca_drift_model']\n\n # Get the DY or DZ values and load as Sherpa data\n dyz = aca_drift.asol[axis]\n ui.load_arrays(1, aca_drift.years, dyz)\n\n # Set the model and fit using Simplex (Nelder-Mead) minimization\n ui.set_model(1, aca_drift_model)\n ui.set_method('simplex')\n ui.fit(1)\n \n return aca_drift, ui.get_fit_results() \n\ndef plot_aimpoint_drift(axis, aca_drift, fit_results, start='2010:001', stop=None, plot_t_ccd=False):\n \"\"\"\n Plot our results\n \"\"\"\n y_start = DateTime(start).frac_year\n y_stop = DateTime(stop).frac_year\n years = aca_drift.years\n ok = (years > y_start) & (years < y_stop)\n years = aca_drift.years[ok]\n times = aca_drift.times[ok]\n\n # Call model directly with best-fit parameters to get model values\n dyz_fit = aca_drift(fit_results.parvals)[ok]\n\n # DY or DZ values from aspect solution\n dyz = aca_drift.asol[axis][ok]\n dyz_resid = dyz - dyz_fit\n \n if plot_t_ccd:\n plt.figure(figsize=(12, 4.5))\n plt.subplot(1, 2, 1)\n\n plot_cxctime(times, dyz, label='Data')\n plot_cxctime(times, dyz_fit, 'r-', alpha=0.5, label='Fit')\n plot_cxctime(times, dyz_resid, 'r-', label='Residual')\n plt.title('Fit aspect solution {} to scaled ACA CCD temperature'\n .format(axis.upper()))\n plt.ylabel('{} (arcsec)'.format(axis.upper()))\n plt.grid()\n plt.legend(loc='upper left', framealpha=1.0)\n \n if plot_t_ccd:\n dat = fetch_sci.Msid('aacccdpt', start, stop, stat='5min')\n plt.subplot(1, 2, 2)\n dat.plot()\n plt.grid()\n plt.ylabel('AACCCDPT (degC)')\n if isinstance(plot_t_ccd, tuple):\n plt.ylim(*plot_t_ccd)\n \n std = dyz_resid.std()\n p1, p99 = np.percentile(dyz_resid, [1, 99])\n print('Fit residual stddev = {:.2f} arcsec'.format(std))\n print('Fit residual 99th - 1st percentile = {:.2f}'.format(p99 - p1))",
"Fit model coefficients for DY and plot results",
"aca_drift_dy, fit_dy = fit_aimpoint_aca_temp('dy')\n\nplot_aimpoint_drift('dy', aca_drift_dy, fit_dy)",
"Zoom in around the 2020:145 safe mode time",
"start = '2020:140'\nstop = '2020:160'\nplot_aimpoint_drift('dy', aca_drift_dy, fit_dy, start=start, stop=stop, plot_t_ccd=(-12, -4))",
"Illustrate model behavior by assuming a constant ACA CCD temperature",
"dyz_fit = aca_drift_dy(fit_dy.parvals, t_ccd=14) # degF = -10 C\nplot_cxctime(aca_drift_dy.times, dyz_fit)\nplt.title('DY drift model assuming constant ACA temperature')\nplt.grid();",
"Fit model coefficients for DZ and plot results",
"aca_drift_dz, fit_dz = fit_aimpoint_aca_temp('dz')\n\nplot_aimpoint_drift('dz', aca_drift_dz, fit_dz)\n\nstart = '2020:140'\nstop = '2020:160'\nplot_aimpoint_drift('dz', aca_drift_dz, fit_dz, start=start, stop=stop, plot_t_ccd=(-16, -8))",
"Illustrate model behavior by assuming a constant ACA CCD temperature",
"dyz_fit = aca_drift_dz(fit_dz.parvals, t_ccd=14) # degF = -10 C\nplot_cxctime(aca_drift_dz.times, dyz_fit)\nplt.title('DZ drift model assuming constant ACA temperature')\nplt.grid();",
"Comparison to current flight model for NOV0518B\nCompare the actual flight aca_offset_y/z from the *_dynamical_offsets.txt files\nto predictions with the new chandra_aca.drift module.\nA key point is to use the observed mean T_ccd with the new model to be able to reproduce\nthe observed aimpoint shift of about 8 arcsec. The jump was 13 arcsec but we did not\nsee that directly because of the ~1.4 C error in the temperatures being used to\npredict the aimpoint offset.",
"text = \"\"\"\nobsid detector chipx chipy chip_id aca_offset_y aca_offset_z mean_t_ccd mean_date \n----- -------- ------- ------- ------- ------------ ------------ ---------- ---------------------\n21152 ACIS-S 210.0 520.0 7 -0.9 -22.67 -11.72 2018:307:18:07:54.816\n20332 ACIS-I 970.0 975.0 3 -14.27 -21.89 -11.88 2018:308:04:03:46.816\n21718 HRC-I 7590.0 7745.0 0 -13.39 -22.8 -11.53 2018:313:03:14:10.816\n21955 HRC-S 2195.0 8915.0 2 -12.50 -22.57 -11.53 2018:305:16:28:34.816 \n\"\"\"\nobss = Table.read(text, format='ascii.fixed_width_two_line')\n\nimport sys\nimport os\nsys.path.insert(0, os.path.join(os.environ['HOME'], 'git', 'chandra_aca'))\nimport chandra_aca\nfrom chandra_aca import drift\nfrom kadi import events\n\nchandra_aca.test(get_version=True)\n\nfor obs in obss:\n dwell = events.dwells.filter(obsid=21152)[0]\n t_ccd = fetch_sci.Msid('aacccdpt', dwell.start, dwell.stop, stat='5min')\n mean_t_ccd = np.mean(t_ccd.vals)\n offsets = drift.get_aca_offsets(obs['detector'], chip_id=obs['chip_id'],\n chipx=obs['chipx'], chipy=obs['chipy'], \n time=obs['mean_date'], t_ccd=mean_t_ccd)\n print(obs)\n print('T_ccd:', mean_t_ccd, ' Delta offsets Y Z:',\n '%.2f' % (obs['aca_offset_y'] - offsets[0]), \n '%.2f' % (obs['aca_offset_z'] - offsets[1]))\n print()",
"Comparison of local model prediction to implementation in chandra_aca",
"from chandra_aca.tests.test_all import simple_test_aca_drift\n\ndy, dz, times = simple_test_aca_drift()\n\nplt.figure(figsize=(12, 4.5))\nplt.subplot(1, 2, 1)\ndy_fit = aca_drift_dy(fit_dy.parvals, t_ccd=14) # degF = -10 C\nplot_cxctime(aca_drift_dy.times, dy_fit)\nplt.title('DY drift model assuming constant ACA temperature')\nplt.grid();\n\nplt.subplot(1, 2, 2)\nplot_cxctime(times, dy);\nplt.grid()\nplt.ylabel('DY (arcsec)');\nplt.title('DY drift model from chandra_aca');\n\nplt.figure(figsize=(12, 4.5))\nplt.subplot(1, 2, 1)\ndz_fit = aca_drift_dz(fit_dz.parvals, t_ccd=14) # degF = -10 C\nplot_cxctime(aca_drift_dz.times, dz_fit)\nplt.title('DZ drift model assuming constant ACA temperature')\nplt.grid();\n\nplt.subplot(1, 2, 2)\nplot_cxctime(times, dz);\nplt.grid()\nplt.ylabel('DZ (arcsec)');\nplt.title('DZ drift model from chandra_aca');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/miroc/cmip6/models/nicam16-7s/aerosol.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: NICAM16-7S\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 70 (38 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'nicam16-7s', 'aerosol')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Meteorological Forcings\n5. Key Properties --> Resolution\n6. Key Properties --> Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --> Absorption\n12. Optical Radiative Properties --> Mixtures\n13. Optical Radiative Properties --> Impact Of H2o\n14. Optical Radiative Properties --> Radiative Scheme\n15. Optical Radiative Properties --> Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of aerosol model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrognostic variables in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of tracers in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre aerosol calculations generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the aerosol model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Variables 2D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Frequency\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of transport in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for aerosol transport modeling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n",
"7.3. Mass Conservation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to ensure mass conservation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.4. Convention\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTransport by convention",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prescribed Climatology\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nSpecify the climatology type for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n",
"8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Other Method Characteristics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCharacteristics of the "other method" used for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as mass mixing ratios.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Prescribed Fields Aod Plus Ccn\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of optical and radiative properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Optical Radiative Properties --> Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.2. Dust\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Organics\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12. Optical Radiative Properties --> Mixtures\n**\n12.1. External\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there external mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Internal\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.3. Mixing Rule\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Optical Radiative Properties --> Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact size?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.2. Internal Mixture\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact aerosol internal mixture?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.3. External Mixture\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact aerosol external mixture?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Optical Radiative Properties --> Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Shortwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of shortwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Optical Radiative Properties --> Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol-cloud interactions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Twomey\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the Twomey effect included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.3. Twomey Minimum Ccn\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Drizzle\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect drizzle?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.5. Cloud Lifetime\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect cloud lifetime?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the Aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n",
"16.3. Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther model components coupled to the Aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.4. Gas Phase Precursors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of gas phase aerosol precursors.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.5. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.6. Bulk Scheme Species\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of species covered by the bulk scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
snegirigens/DLND
|
sentiment-rnn/Sentiment_RNN.ipynb
|
mit
|
[
"Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.",
"import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment-network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment-network/labels.txt', 'r') as f:\n labels = f.read()\n\nprint (len(reviews))\nreviews[:2000]",
"Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combined all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.",
"from string import punctuation\n\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nprint (len(reviews))\nreviews[:1]\n\nwords[:10]",
"Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.",
"from collections import Counter\nword_counts = Counter(words).most_common()\n\n# Create your dictionary that maps vocab words to integers here\nvocab_to_int = { wc[0]:i+1 for i, wc in enumerate(word_counts)}\n\n#int_to_word = { i:wc[0] for i, wc in enumerate(word_counts)}\n#for k in list(sorted(int_to_word.keys(), reverse=False))[:10]:\n# print (k, int_to_word[k])\n\n# Convert the reviews to integers, same shape as reviews list, but with integers\nreviews_ints = []\n\nfor review in reviews:\n reviews_ints.append ([vocab_to_int[w] for w in review.split()])\n\nreviews_ints[1]",
"Encoding the labels\nOur labels are \"positive\" or \"negative\". To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.",
"# Convert labels to 1s and 0s for 'positive' and 'negative'\nlabels = np.array([1 if label == 'positive' else 0 for label in labels.split('\\n')])",
"If you built labels correctly, you should see the next output.",
"from collections import Counter\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))",
"Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.\n\nExercise: First, remove the review with zero length from the reviews_ints list.",
"# Filter out that review with 0 length\nempty_idx = [i for i, r in enumerate(reviews) if len(r) == 0]\n\nfor idx in sorted(empty_idx, reverse=True):\n print ('{}: {}'.format(idx, reviews_ints[idx]))\n del (reviews_ints[idx])\n \nlabels = np.delete (labels, empty_idx, axis=0)\n\nprint ('Labels: {}'.format (len(labels)))\nprint ('Reviews: {}'.format (len(reviews_ints)))",
"Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.",
"seq_len = 200\nfeatures = np.array([np.zeros(seq_len, dtype=int) for review in reviews_ints])\n\nfor i, review in enumerate(reviews_ints):\n review_len = min(len(review), seq_len)\n start = seq_len - review_len if seq_len > review_len else 0\n end = seq_len\n features[i][start:end] = review[0:review_len]",
"If you build features correctly, it should look like that cell output below.",
"features[:10,:100]",
"Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.",
"import random\n\nsplit_frac = 0.8\ntotal_samples = len(features)\ntest_samples = int(total_samples*split_frac)\n\nshuffled_idx = random.sample (range(total_samples), k=total_samples)\ntrain_idx = shuffled_idx[:test_samples]\ntest_idx = shuffled_idx[test_samples:]\n\nvalid_idx = test_idx[:int(len(test_idx)/2)]\ntest_idx = test_idx[int(len(test_idx)/2):]\n\nprint ('Train = {}. Valid = {}. Test = {}'.format (len(train_idx), len(valid_idx), len(test_idx)))\nprint ('Train = {}. Valid = {}. Test = {}'.format (train_idx[-1], valid_idx[-1], test_idx[0]))\n\n\ntrain_x, val_x, test_x = features[train_idx], features[valid_idx], features[test_idx]\ntrain_y, val_y, test_y = labels[train_idx], labels[valid_idx], labels[test_idx]\n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))\n\nprint(\"\\t\\t\\tLabel Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_y.shape), \n \"\\nValidation set: \\t{}\".format(val_y.shape),\n \"\\nTest set: \\t\\t{}\".format(test_y.shape))",
"With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate",
"lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001",
"For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.",
"# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = tf.placeholder (dtype=tf.int32, shape=[None, None], name='inputs')\n labels_ = tf.placeholder (dtype=tf.int32, shape=[None, None], name='labels')\n keep_prob = tf.placeholder (dtype=tf.float32, name='keep_prob')",
"Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].",
"n_words = len(vocab_to_int) + 1\n#print ('Words = {}'.format (n_words))\n\n# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = tf.Variable (tf.random_uniform ([n_words, embed_size], -1, 1, dtype=tf.float32), name='embedding')\n embed = tf.nn.embedding_lookup (embedding, inputs_)",
"LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.",
"with graph.as_default():\n # Your basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell (lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper (lstm, output_keep_prob=keep_prob)\n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell ([drop] * lstm_layers)\n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)",
"RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.",
"with graph.as_default():\n outputs, final_state = tf.nn.dynamic_rnn (cell, embed, initial_state=initial_state)",
"Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], the calculate the cost from that and labels_.",
"with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)",
"Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.",
"with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].",
"def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]",
"Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.",
"epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")",
"Testing",
"#with graph.as_default():\n# saver = tf.train.Saver()\n\ntest_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/ukesm1-0-mmh/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: UKESM1-0-MMH\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:27\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-mmh', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
marcinofulus/ProgramowanieRownolegle
|
MPI/PR_MPI_p2p.ipynb
|
gpl-3.0
|
[
"MPI - point to point operations\nWe will use mpi4py",
"import numpy as np\n\nimport ipyparallel as ipp\nc = ipp.Client(profile='mpi')\nprint(c.ids)\nview = c[:]\nview.activate()",
"Parallel eigenvalues:\npython\nimport numpy as np\n%time np.max(np.real(np.linalg.eigvals(np.random.randn(400,400))))\nA task: find a biggest entry in a random matrix:",
"%time np.max(np.random.randn(5000,5000))\n\n%%px --block\nfrom mpi4py import MPI\nimport time\nimport numpy as np \ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\nsize = comm.Get_size()\nt = MPI.Wtime()\nprint(\"result =\",np.max(np.random.randn(5000,5000//4)))\nt = MPI.Wtime() - t\nprint(rank,\":: execution time:\",t)",
"Send and receive\n\nWe use rank to differentiate code between processors.\nNote that mpi4py serializes arbitrary data before send.\n\nImportant!\n\nIn MPI for Python, the Send(), Recv() and Sendrecv() can communicate memory buffers. \nThe variants send(), recv() and sendrecv() can communicate generic Python objects.",
"%%px --block\n\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\ndata = None\n\nif rank == 0:\n data = {'a': 7, 'b': 3.14}\n comm.send(data, dest=1)\n\nelif rank == 1:\n data = comm.recv(source=0)\n\nprint(\"OK, rank= \",rank,\"dane: \",data)",
"Sending and receiving numpy arrays\n\nwe can send the whole array",
"%%px --block\nimport numpy as np\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\na = np.zeros((2,2))\n\nif rank == 0:\n a[:] = 2\n comm.send(a, dest=1)\nelif rank == 1:\n a = comm.recv(source=0)\n\nprint (\"OK,\",rank,np.sum(a))\n",
"we can send a slice !",
"%%px --block\nimport numpy as np\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\na = np.zeros((2,2))\n\nif rank == 0:\n a[:] = 2\n comm.send(a[0,:], dest=1)\nelif rank == 1:\n a[0,:] = comm.recv(source=0)\n\nprint(\"OK,\",rank,np.sum(a))\n\n\nview['rank']\n\nview['a'][5]\n\nnp.argsort(view['rank'])\n\nview['a'][np.argsort(view['rank'])[1]]\n\nprint(view['a'][view['rank'][0]])\nprint(view['a'][view['rank'][1]])\n\n%%px --block\n\nimport numpy as np\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\na = np.zeros((2,2))\nif rank == 0:\n a[:] = 2\n comm.send(a[:,0], dest=1)\nelif rank == 1:\n a[:,0] = comm.recv(source=0)\n\nprint (\"OK,\",rank,np.sum(a))\n\n\nprint(view['a'][view['rank'][0]])\nprint(view['a'][view['rank'][1]])",
"Communicating memorybuffers: Send and Recv",
"%%px --block\n\nimport numpy as np\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\na = np.zeros((2,2))\nif rank == 0:\n a[:] = 2\n comm.Send(a[0,:], dest=1)\nelif rank == 1:\n comm.Recv(a[0,:], source=0)\n\nprint (\"OK,\",rank,np.sum(a))\n\n\n%%px --block\n\nimport numpy as np\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\na = np.zeros((2,2))\nif rank == 0:\n a[:] = 2\n comm.Send(a[:,0], dest=1)\nelif rank == 1:\n comm.Recv(a[:,0], source=0)\n\nprint (\"OK,\",rank,np.sum(a))\n",
"Contiguous memory buffers",
"a = np.zeros((2,2))\na.flags\n\na[:,0].flags\n\na[0,:].flags\n\n%%px --block\n\nimport numpy as np\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\na = np.zeros((2,2))\nif rank == 0:\n a[:] = 2\n buf = a[:,0].copy()\n comm.Send(buf, dest=1) \nelif rank == 1:\n buf = np.empty(2)\n comm.Recv(buf, source=0)\n a[:,0] = buf \n print (\"OK,\",np.sum(a))\n\n\nimport ipyparallel as ipp\nc = ipp.Client(profile='mpi')\nprint(c.ids)\nview = c[:]\nview.activate()\n\n%%px --block --target :3\nprint(\"OK\")\n\n%%px --block\nprint(\"OK\")\n\n%%px --block\nimport numpy as np\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\n\na = np.zeros((2,2))\n\nif rank == 0:\n import os\n print(os.getcwd())\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mansweet/GaussianLDA
|
NIPS Topic Modeling.ipynb
|
apache-2.0
|
[
"Topic Modeling on NIPS Dataset Using Gaussian LDA w/ Word-Embeddings",
"import numpy as np\nimport os\nfrom operator import itemgetter\nfrom collections import Counter\nimport scipy.stats as stat\nfrom gensim.models import Word2Vec\nfrom nltk import corpus\nimport FastGaussianLDA2",
"Loading the word_vector model with GenSim",
"wvmodel = Word2Vec.load_word2vec_format(\n \"/Users/michael/Documents/Gaussian_LDA-master/data/glove.wiki/glove.6B.50d.txt\", binary=False)\nprint \"word-vector dimension: {}\".format(wvmodel.vector_size())",
"Sets of vocab to filter on: NLTK StopWords and Glove vocab",
"wv_vocab = set(wvmodel.vocab.keys())\nstops = set(corpus.stopwords.words(fileids=\"english\"))",
"Document cleaning\n\nTokenizing just on spaces\nno lemmatization or stemming\nremoving non-ascci characters\nremoving stop words\nremoving words not in Glove vocab\nremoving non-alpha words (e.g. Letter and symbols)\nremoving short words < 2 characters long\nlowercasing all words",
"corpus = []\nnips_path = \"/Users/michael/Documents/GaussianLDA/data/\"\nfor folder in os.listdir(nips_path)[1:]:\n for doc in os.listdir(nips_path + folder):\n with open(nips_path + folder + \"/\" + doc, 'r') as f:\n txt = f.read().split()\n txt = map(lambda x: x.lower(), txt) # Lowercasing each word\n txt = filter(lambda word: [letter for letter in word if ord(letter) < 128 ], txt) # Checking each word for ascci error\n txt = filter(lambda x: x not in stops, txt) # Removing stop words\n txt = filter(lambda x: x.isalpha(), txt) # Removing non-letter words (eg numbers and symbols)\n txt = filter(lambda x: len(x) > 2, txt) # removing super short words and single letters\n txt = filter(lambda x: x in wv_vocab, txt) \n txt = ' '.join(txt)\n corpus.append(txt)\n\nprint \"Number of documents in corpus: {}\".format(len(corpus))\n\nreload(FastGaussianLDA2)\ntopics = 50\ndim = 50\nrun_num = 1\noutputfile = \"/Users/michael/Documents/GaussianLDA/output/NIPS_{}_{}T_{}D_\".format(str(run_num),\n str(topics), \n str(dim))\nlda = FastGaussianLDA2.Gauss_LDA(topics, corpus, word_vector_model=wvmodel, alpha=.5, outputfile=outputfile)\nlda.fit(50) # Number of samples to run"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Ccaccia73/Intro2Aero_Edx
|
problems02.ipynb
|
artistic-2.0
|
[
"Exercises and Problems for Module 2",
"import numpy as np\nfrom pint import UnitRegistry\nimport matplotlib.pyplot as plt\nimport Utils16101\nimport sympy\nsympy.init_printing()\n%matplotlib inline\n\nureg = UnitRegistry()\nQ_ = ureg.Quantity",
"Exercise 2.4.2: compute lift coefficient\nFirst aircraft (Cessna like)",
"w1 = Q_(2400.,'lbf')\nSref1 = Q_(180.,'foot**2')\nv1 = Q_(140.,'mph')\nalt1 = Q_(12e3,'foot')\nρ1 = Q_(1.6e-3,'slug/foot**3')",
"Second aircraft (B777 like)",
"w2 = Q_(550e3,'lbf')\nSref2 = Q_(4.6e3,'foot**2')\nv2 = Q_(560.,'mph')\nalt2 = Q_(35e3,'foot')\nρ2 = Q_(7.4e-4,'slug/foot**3')",
"Results",
"print(\"First aircraft: \",Utils16101.computeLiftCoeff(w1,Sref1,v1,alt1,ρ1))\nprint(\"Second aircraft: \",Utils16101.computeLiftCoeff(w2,Sref2,v2,alt2,ρ2))",
"Exercise 2.4.3: drag comparison\nHypoteses:\n* $C_{Dcyl}\\approx1$ and $C_{Dfair}\\approx0.01$\n* $S_{ref\\ cyl} = d\\cdot h$, and $S_{ref\\ fair} = c\\cdot h$, with $c = 10d$\n* same $V_{\\infty}$\nExpression of Drag:\n$$D = \\frac{1}{2} \\cdot C_D \\rho V_{\\infty}^2 S_{ref}$$\nRatio of Drags\n$$\\frac{D_{cyl}}{D_{fair}} = \\frac{\\frac{1}{2} \\cdot C_{Dcyl} \\rho V_{\\infty}^2 S_{ref\\ cyl}}{\\frac{1}{2} \\cdot C_{Dfair} \\rho V_{\\infty}^2 S_{ref\\ fair}} = \\frac{C_{Dcyl} \\cdot dh}{C_{Dfair} \\cdot 10dh} $$\nExercise 2.4.7: Mach and Reynolds number comparisons\nFirst aircraft additional parameters:",
"c1 = Q_(5.0,'foot')\nμ1 = Q_(3.5e-7,'slug/foot/second')\na1 = Q_(1.1e3,'foot/second')",
"Second aircraft additional parameters",
"c2 = Q_(23.0,'foot')\nμ2 = Q_(3.0e-7,'slug/foot/second')\na2 = Q_(9.7e2,'foot/second')\n\nMa1, Re1 = Utils16101.computeMachRe(v1,a1,μ1,c1,ρ1)\nMa2, Re2 = Utils16101.computeMachRe(v2,a2,μ2,c2,ρ2)\nprint(\"First aircraft - Ma: {0:10.3e} Re: {1:10.3e}\".format(Ma1.magnitude,Re1.magnitude))\nprint(\"Second aircraft - Ma: {0:10.3e} Re: {1:10.3e}\".format(Ma2.magnitude,Re2.magnitude))",
"Exercise 2.4.10: dynamic similarity\nWind tunnel test conditions",
"ρ_inf = Q_(2.4e-3,'slug/ft**3')\na_inf = Q_(1.1e3,'ft/s')\nμ_inf = Q_(3.7e-7,'slug/ft/s')\n\nv = Q_(200.,'mph')\nc = c1/4\n\nMa_wt, Re_wt = Utils16101.computeMachRe(v,a_inf,μ_inf,c,ρ_inf)\nprint(\"Wind tunnel - Ma: {0:10.3e} Re: {1:10.3e}\".format(Ma_wt.magnitude,Re_wt.magnitude))",
"Exercise 2.5.2: minimum Takeoff velocity\nMinimum required lift: L = W as $V_{\\infty} \\perp \\vec{g} $\n$$L = W = \\frac{1}{2} \\cdot \\rho V_{\\infty}^2 C_L * S_{ref} $$",
"W = Q_(650e3,'lbf')\nSref = Q_(4.6e3,'ft**2')\nρ_inf = Q_(2.4e-3,'slug/ft**3')\nCL_max = 2.5\n\nV_inf = np.sqrt(2*W.to('slug*ft/s**2')/(ρ_inf*CL_max*Sref))\nprint(V_inf.to('mph'))",
"Exercise 2.6.2: Range estimate\nBreguet equation for determining range (level flight, no takeoff or landing):\n$$R = \\eta_0 \\cdot \\frac{L}{D} \\cdot \\frac{Q_R}{g} \\cdot \\ln \\left(1+\\frac{W_{fuel}}{W_{final}}\\right)$$",
"η0 = Q_(0.32,'dimensionless')\nLoverD = Q_(17.,'dimensionless')\nQR = Q_(42.,'MJ/kg')\ng = Q_(9.80665,'m/s**2')\nW_in = Q_(400e3,'kg')\nW_fuel = Q_(175e3,'kg')\nW_final = W_in - W_fuel\n\nR = η0 * LoverD * QR.to('m**2/s**2')/g*np.log(1+W_fuel/W_final)\nprint(\"Range = {0:10.3e}\".format(R.to('km')))",
"Sample Problems\nProblem 2.7.1: Lift and Drag for flat plate in supersonic flow\nHypoteses:\n* $\\Delta p = p_l - p_u > 0$\n* $p_l , p_u constant $\n* $\\alpha \\ small \\rightarrow \\cos(\\alpha) \\approx 1, \\sin(\\alpha) \\approx \\alpha$\nRelations:\n$$\n\\begin{align}\nL &= \\Delta p \\cdot S \\cos(\\alpha) \\\nD &= \\Delta p \\cdot S \\sin(\\alpha)\n\\end{align}\n$$\nLift and Drag coefficients:\n$$\n\\begin{align}\nC_L &= \\frac{L}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2S} &\\approx \\frac{\\Delta p}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2} \\\nC_D &= \\frac{D}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2S} &\\approx \\frac{\\Delta p \\alpha}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2}\n\\end{align}\n$$\n$\\Delta p \\propto \\alpha$ for supersonic flow and small angle\n$$\n\\begin{align}\nC_L &\\approx \\frac{\\Delta p}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2} &\\propto \\frac{\\alpha}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2}\\\nC_D &\\approx \\frac{\\Delta p \\alpha}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2} &\\propto \\frac{\\alpha^2}{\\frac{1}{2}\\rho_{\\infty} V_{\\infty}^2S} \n\\end{align}\n$$\nProblem 2.7.2: Aerodynamic performance\nAircraft parameters:",
"W = Q_(550e3,'lbf')\nSref = Q_(4.6e3,'ft**2')\nAR = Q_(9.,'dimensionless')",
"Air parameters at two different altitudes",
"ρ_inf1 = Q_(1.6e-3,'slug/ft**3') #1.2e4 ft\nρ_inf2 = Q_(7.3e-4,'slug/ft**3') #3.5e4 ft\na_inf1 = Q_(1069.,'ft/s')\na_inf2 = Q_(973.,'ft/s')",
"Aircraft speed",
"Ma = Q_(0.85,'dimensionless')",
"Parabolic drag model\n$$C_D = C_{D0} + \\frac{C_L^2}{\\pi e AR}$$\nwith:\n* AR: Aspect ratio\n* e: Oswald span efficiency",
"C_D0 = Q_(0.05,'dimensionless')\ne_osw = Q_(0.8,'dimensionless')\n\nV_inf1 = Ma*a_inf1\nV_inf2 = Ma*a_inf2\n\nC_L1 = W.to('slug*ft/s**2')/(0.5*ρ_inf1*V_inf1**2*Sref)\nC_L2 = W.to('slug*ft/s**2')/(0.5*ρ_inf2*V_inf2**2*Sref)\nprint(\"Lift coefficient at 12000ft: {0:10.3e}\".format(C_L1))\nprint(\"Lift coefficient at 35000ft: {0:10.3e}\".format(C_L2))",
"NB: Drag count $\\rightarrow C_D \\cdot 10^4$",
"C_D1 = C_D0 + C_L1**2/(np.pi*e_osw*AR)\nC_D2 = C_D0 + C_L2**2/(np.pi*e_osw*AR)\nprint(\"Drag count at 12000ft: {0:10.1f}\".format(C_D1*1e4))\nprint(\"Drag count at 35000ft: {0:10.1f}\".format(C_D2*1e4))",
"Lift to Drag ratio:",
"L_D1 = C_L1/C_D1\nL_D2 = C_L2/C_D2\nprint(\"Lift to Drag ratio at 12000ft: {0:10.3e}\".format(L_D1))\nprint(\"Lift to Drag ratio at 35000ft: {0:10.3e}\".format(L_D2))",
"Required Thrust: $T = D$",
"T1 = 0.5*C_D1*ρ_inf1*V_inf1**2*Sref\nT2 = 0.5*C_D2*ρ_inf2*V_inf2**2*Sref\nprint(\"Thrust required at 12000ft: {0:10.3e}\".format(T1.to('lbf')))\nprint(\"Thrust required at 35000ft: {0:10.3e}\".format(T2.to('lbf')))",
"Required Power: $P = T \\cdot V_{\\infty}$",
"P1 = T1.to('lbf')*V_inf1\nP2 = T2.to('lbf')*V_inf2\nprint(\"Power required at 12000ft: {0:10.3e}\".format(P1))\nprint(\"Power required at 35000ft: {0:10.3e}\".format(P2))",
"Problem 2.7.3: sensitivity of payload\nUsing Breguet equation and comparing terms to get the same range\n$$ 0.99 \\eta_0 \\frac{L}{D} \\cdot \\frac{Q_R}{g} \\ln \\left(\\frac{W_{in}-100n}{W_{fin}-100n}\\right) = \n\\eta_0 \\frac{L}{D} \\cdot \\frac{Q_R}{g} \\ln \\left(\\frac{W_{in}}{W_{fin}}\\right)$$\nwhich gives:\n$$ \\left(\\frac{W_{in}-100n}{W_{fin}-100n}\\right)^{0.99} = \\left(\\frac{W_{in}}{W_{fin}}\\right)$$",
"Win = 400e3\nWfin = 400e3-175e3\n\nn = np.arange(25.,35.)\ny = ((Win-100*n)/(Wfin-100*n))**0.99 - Win/Wfin\n\nplt.figure(figsize=(16,10), dpi=300)\nplt.plot(n, y, lw=3.)\nplt.grid();\n\nzero_crossing = np.where(np.diff(np.sign(y)))[0]+1\n\nprint(\"number of passengers: {0:d}\".format(int(n[zero_crossing])))",
"Problem 2.7.4: rate of climb\nRelations:\n- $\\dot{h} = V_{\\infty} \\sin(\\theta)$\n- $ T = D + W \\sin(\\theta)$\nso:\n$$ \\dot{h} = V_{\\infty} \\cdot \\frac{T-D}{W}$$\nProblem 2.7.5: maximum lift-to-drag ratio",
"Cd, Cd0, K = sympy.symbols('C_D C_D0 K')\n\nexpr = sympy.sqrt((Cd-Cd0)*K)/Cd\nexpr\n\nsympy.simplify(sympy.diff(expr,Cd))",
"Maximum lift to drag ratio for $C_D = 2D_{D0}$\n$$ \\left(\\frac{L}{D} \\right){max} = \\frac{1}{2}\\sqrt{\\frac{\\pi e AR}{C{D0}}}$$\nHomework\nProblem 2.8.1: cryogenic wind tunnel test\nSmall aircraft flying at following conditions:",
"V_full = Q_(10.,'m/s')\nρ_full = Q_(0.5,'kg/m**3')\nT_full = Q_(233.,'K')",
"Air supposed to be ideal gas:",
"R = Q_(287,'J/kg/K')\nγ = Q_(1.4,'dimensionless')",
"Temperature - viscosity dependance: $\\frac{\\mu_1}{\\mu_2} = \\sqrt{\\frac{T_1}{T_2}}$\nFreestream pressure",
"p_full = ρ_full*R*T_full\nprint(\"Freestream pressure: {0:10.3e}\".format(p_full.to('Pa')))",
"Mach number",
"a_full = np.sqrt(γ*R.to('m**2/s**2/K')*T_full)\nMa_full = V_full/a_full\nprint(\"Fullscale Mach number: {0:10.3e}\".format(Ma_full))\n\nscale = Q_(0.2,'dimensionless')\np_scale = Q_(1e5,'Pa')",
"Compare Reynolds and Mach numbers:\n$$\n\\begin{align}\nRe: & \\frac{\\rho_f V_f l_f }{\\mu_f} &=& \\frac{\\rho_s V_s l_s}{\\mu_s} &\\rightarrow & \\frac{\\rho_s}{\\rho_f} &=&\n\\frac{\\mu_s}{\\mu_f} \\cdot \\frac{V_f}{V_s} \\cdot \\frac{1}{scale} \\\nMach: & \\frac{V_f}{a_f} &=& \\frac{V_s}{a_s} &\\rightarrow & \\frac{}{} \\frac{V_s}{V_f} &=&\n\\sqrt{\\frac{T_f}{T_s}} \\\n\\end{align}\n$$\nUsing temperature - viscosity dependance:\n$$ \\frac{\\rho_s}{\\rho_f} = \\frac{1}{scale} $$\nKnowing $\\rho_s$ from relation above and $p_s$ and using $p = \\rho RT$ we find $T_s$\nFrom Mach number relation we find $V_s$",
"ρ_scale = ρ_full / scale\nT_scale = p_scale.to('kg/m/s**2')/R.to('m**2/s**2/K')/ρ_scale\nV_scale = np.sqrt(T_scale/T_full)*V_full\n\nprint(\"Scaled model density: {0:10.3f}\".format(ρ_scale))\nprint(\"Scaled model Temperature: {0:10.3f}\".format(T_scale))\nprint(\"Scaled model velocity: {0:10.3f}\".format(V_scale))",
"Drag comparison \n$$D = \\frac{1}{2}C_D\\rho V_{\\infty}^2S_{ref}$$\ncomparing drag:\n$$\\frac{D_f}{D_s} = \\frac{\\rho_f V_{\\infty f}^2}{\\rho_s V_{\\infty s}^2} \\cdot \\frac{1}{scale^2}$$",
"D_scale = Q_(100.,'N')\n\nD_full = D_scale*ρ_full/ρ_scale*(V_full/V_scale)**2/(scale**2)\n\nprint(\"Full model Drag: {0:10.3f}\".format(D_full))",
"Problem 2.8.2: impact of winglet on performance\nData:",
"η0 = Q_(0.34,'dimensionless')\nLD = Q_(16.,'dimensionless')\nWin = Q_(225e3,'kg')\nWfuel = Q_(105e3,'kg')\nWfinal = Win-Wfuel\nQr = Q_(42.,'MJ/kg')\ng = Q_(9.81,'m/s**2')\n\nrng0 = LD*η0*Qr.to('m**2/s**2')/g*np.log(Win/Wfinal)\nprint(\"Original range: {0:10.3f}\".format(rng0.to('km')))",
"Winglets give 5% of reduction of Drag:\nFuel consumption over the same range\n$$\n\\begin{align}\n\\eta_0 \\frac{L}{D} \\frac{Q_R}{g} \\ln \\left(1+\\frac{W_{fuel0}}{W_{final}}\\right) &= \\eta_0 \\frac{L}{0.95D} \\frac{Q_R}{g} \\ln \\left(1+\\frac{W_{fuel1}}{W_{final}}\\right) \\\n\\left(1+\\frac{W_{fuel0}}{W_{final}}\\right)^{0.95} &= \\left(1+\\frac{W_{fuel1}}{W_{final}}\\right)\n\\end{align}\n$$",
"Wfuel1 = Wfinal*( (1+Wfuel/Wfinal)**0.95 -1)\nprint(\"Improved fuel consumption: {0:10.3f}\".format(Wfuel1))\n\nFuel_dens = Q_(0.81,'kg/l')\nFuel_cost = Q_(0.75,'mol/l') # just joking... can we define new units?\n\nfuel_savings = (Wfuel-Wfuel1)*Q_(365,'1/year')/Fuel_dens*Fuel_cost\nprint(\"Annual savings: {0:10.3e}\".format(fuel_savings))",
"Winglets again give 5% of reduction of Drag:\nWeight increase over the same range given 1% of fuel reduction\n$$\n\\begin{align}\n\\eta_0 \\frac{L}{D} \\frac{Q_R}{g} \\ln \\left(1+\\frac{W_{fuel}}{W_{final}}\\right) &= \\eta_0 \\frac{L}{0.95D} \\frac{Q_R}{g} \\ln \\left(1+\\frac{0.99W_{fuel}}{W_{final1}}\\right) \\\n\\left(1+\\frac{W_{fuel}}{W_{final}}\\right)^{0.95} &= \\left(1+\\frac{0.99W_{fuel}}{W_{final1}}\\right)\n\\end{align}\n$$",
"Wfinal1 = 0.99*Wfuel/((1+Wfuel/Wfinal)**0.95-1)\nprint(\"Aircraft mass increment: {0:10.3f}\".format(Wfinal1-Wfinal))\n\nfuel_savings1 = 0.01*Wfuel*Q_(365,'1/year')/Fuel_dens*Fuel_cost\nprint(\"Annual savings: {0:10.3e}\".format(fuel_savings1))",
"Problem 2.8.3: Minimum power flight with parabolic Drag Model\nPower consumption $P = D \\cdot V_{\\infty}$\n$$\n\\begin{align}\nD &= \\frac{1}{2}C_D\\rho_{\\infty}V_{\\infty}^2S_{ref}\\\nL &= W \\\nL &= \\frac{1}{2}C_L\\rho_{\\infty}V_{\\infty}^2S_{ref}\n\\end{align}\n$$\nFrom the above relations:\n$$\n\\begin{align}\nP &= \\frac{1}{2}C_D\\rho_{\\infty}V_{\\infty}^3S_{ref}\\\nV_{\\infty} &= \\sqrt{\\frac{2W}{C_L \\rho_{\\infty} S_{ref}}}\\\nP &= W \\cdot \\sqrt{\\frac{2W}{\\rho_{\\infty}S_{ref}}} \\cdot C_D \\cdot C_L^{-\\frac{3}{2}}\n\\end{align}\n$$\n$C_L$ that minimizes power consumption",
"Cl, Cd0, K, e, AR, rho, Sr, W = sympy.symbols('C_L C_D0 K e AR rho S_r W')\n\nP_expr = sympy.sqrt(2*W/(rho*Sr))*W*(Cd0+Cl**2/(sympy.pi*e*AR))*sympy.sqrt(Cl**(-3))\nP_expr\n\nsympy.simplify(sympy.diff(P_expr,Cl))",
"Lift coefficient at minimum power consumption: $C_L = \\sqrt{3 \\pi e AR C_{D0}}$\nInduced Drag - Total Drag ratio: $C_D = C_{D0} + \\frac{C_L^2}{\\pi e AR} = C_{D0} + 3 C_{D0}$\n$$\\frac{C_{Di}}{C_D} = \\frac{3}{4}$$\nCase of autonomous aircraft",
"Splan = Q_(0.3,'m**2')\nW = Q_(3.5,'N').to('kg*m/s**2')\nρ = Q_(1.225,'kg/m**3')\nAR = Q_(10,'dimensionless')\ne = Q_(0.95,'dimensionless')\nCd0 = Q_(0.02,'dimensionless')\n\nCl_min = np.sqrt(3*np.pi*e*AR*Cd0)\nprint(\"Lift Coefficient at minimum power consumption: {0:10.3f}\".format(Cl_min))\n\nCd_min = 4*Cd0\nprint(\"Drag Coefficient at minimum power consumption: {0:10.3f}\".format(Cd_min))\n\nVinf = np.sqrt(2*W/(Cl_min*ρ*Splan))\nprint(\"Velocity at minimum power consumption: {0:10.3f}\".format(Vinf))\n\nT = (0.5*Cd_min*ρ*Vinf**2*Splan).to('N')\nprint(\"Thrust required at minimum power consumption: {0:10.3f}\".format(T))\n\nP = (T*Vinf).to('W')\nprint(\"Power required at minimum power consumption: {0:10.3f}\".format(P))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DAInamite/programming-humanoid-robot-in-python
|
kinematics/inverse_kinematics_2d_jacobian.ipynb
|
gpl-2.0
|
[
"Inverse Kinematics (2D)",
"%matplotlib notebook\nfrom matplotlib import pylab as plt\nfrom numpy import sin, cos, pi, matrix, random, linalg, asarray\nfrom scipy.linalg import pinv\nfrom __future__ import division\nfrom math import atan2\nfrom IPython import display\nfrom ipywidgets import interact, fixed",
"Coordinate Transformation",
"def trans(x, y, a):\n '''create a 2D transformation'''\n s = sin(a)\n c = cos(a)\n return matrix([[c, -s, x],\n [s, c, y],\n [0, 0, 1]])\n\ndef from_trans(m):\n '''get x, y, theta from transform matrix'''\n return [m[0, -1], m[1, -1], atan2(m[1, 0], m[0, 0])]\n\ntrans(0, 0, 0)",
"Parameters of robot arm",
"l = [0, 3, 2, 1]\n#l = [0, 3, 2, 1, 1]\n#l = [0, 3, 2, 1, 1, 1]\n#l = [1] * 30\nN = len(l) - 1 # number of links\nmax_len = sum(l)\na = random.random_sample(N) # angles of joints\nT0 = trans(0, 0, 0) # base",
"Forward Kinematics",
"def forward_kinematics(T0, l, a):\n T = [T0]\n for i in range(len(a)):\n Ti = T[-1] * trans(l[i], 0, a[i])\n T.append(Ti)\n Te = T[-1] * trans(l[-1], 0, 0) # end effector\n T.append(Te)\n return T\n\ndef show_robot_arm(T):\n plt.cla()\n x = [Ti[0,-1] for Ti in T]\n y = [Ti[1,-1] for Ti in T]\n plt.plot(x, y, '-or', linewidth=5, markersize=10)\n plt.plot(x[-1], y[-1], 'og', linewidth=5, markersize=10)\n plt.xlim([-max_len, max_len])\n plt.ylim([-max_len, max_len]) \n ax = plt.axes()\n ax.set_aspect('equal')\n t = atan2(T[-1][1, 0], T[-1][0,0])\n ax.annotate('[%.2f,%.2f,%.2f]' % (x[-1], y[-1], t), xy=(x[-1], y[-1]), xytext=(x[-1], y[-1] + 0.5))\n plt.show()\n return ax",
"Inverse Kinematics\nNumerical Solution with Jacobian\nNOTE: while numerical inverse kinematics is easy to implemente, two issues have to be keep in mind:\n* stablility: the correction step (lambda_) has to be small, but it will take longer time to converage\n* singularity: there are singularity poses (all 0, for example), the correction will be 0, so the algorithm won't work. That's why many robots bends its leg when walking",
"theta = random.random(N) * 1e-5\nlambda_ = 1\nmax_step = 0.1\ndef inverse_kinematics(x_e, y_e, theta_e, theta):\n target = matrix([[x_e, y_e, theta_e]]).T\n for i in range(1000):\n Ts = forward_kinematics(T0, l, theta)\n Te = matrix([from_trans(Ts[-1])]).T\n e = target - Te\n e[e > max_step] = max_step\n e[e < -max_step] = -max_step\n T = matrix([from_trans(i) for i in Ts[1:-1]]).T\n J = Te - T\n dT = Te - T\n J[0, :] = -dT[1, :] # x\n J[1, :] = dT[0, :] # y\n J[-1, :] = 1 # angular\n d_theta = lambda_ * pinv(J) * e\n theta += asarray(d_theta.T)[0]\n if linalg.norm(d_theta) < 1e-4:\n break\n return theta\n\nT = forward_kinematics(T0, l, theta)\nshow_robot_arm(T)\nTe = matrix([from_trans(T[-1])])\n\n@interact(x_e=(0, max_len, 0.01), y_e=(-max_len, max_len, 0.01), theta_e=(-pi, pi, 0.01), theta=fixed(theta))\ndef set_end_effector(x_e=Te[0,0], y_e=Te[0,1], theta_e=Te[0,2], theta=theta):\n theta = inverse_kinematics(x_e, y_e, theta_e, theta)\n T = forward_kinematics(T0, l, theta)\n show_robot_arm(T)\n return theta"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tomfaulkenberry/courses
|
summer2019/mathpsychREU/lecture2.ipynb
|
mit
|
[
"<a href=\"https://colab.research.google.com/github/tomfaulkenberry/courses/blob/master/spring2019/mathpsychREU/lecture2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nLecture 2 - Fitting a \"forgetting curve\"\nIt is well-known that once we learn something, we tend forget some things as time passes. \nMurdock (1961) presented subjects with a set of memory items (i.e., words or letters) and asked them to recall the items after six different retention intervals: $t=1,3,6,9,12,18$ (in seconds). He recorded the proportion recalled at each retention interval (based on 100 independent trials for each $t$). These data were (respectively)\n$$\ny=0.94, 0.77, 0.40, 0.26, 0.24, 0.16\n$$\nOur goal: fit a mathematical model that will predict the proportion recalled $y$ as a function of retention interval ($t$)\nFirst step - look at the data!",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nT = np.array([1, 3, 6, 9, 12, 18])\nY = np.array([0.94, 0.77, 0.40, 0.26, 0.24, 0.16])\n\nplt.plot(T, Y, 'o')\nplt.xlabel('Retention interval (sec.)')\nplt.ylabel('Proportion recalled')\nplt.show()",
"Some things to notice:\n\nour model should be a decreasing function\nit is NOT linear\n\nTwo candidate models:\n\nPower function model: $y=ax^b$\nExponential model: $y=ab^x$\n\nWhich one should we use?\nmathematical properties?\nTake logs and look at structure of data\nPower function model: $\\ln y = \\ln a + b\\ln x$\n* so power $\\implies$ $\\ln y$ should be linear wrt $\\ln x$\nExponential model: $\\ln y = \\ln a + x\\ln b$ \n* so exponential $\\implies$ $\\ln y$ should be linear wrt $x$",
"# check power function model\n\nplt.plot(np.log(T), np.log(Y), 'o')\nplt.xlabel('$\\ln t$')\nplt.ylabel('$\\ln y$')\nplt.show()\n\n# check exponential model\n\nplt.plot(T, np.log(Y), 'o')\nplt.xlabel('$t$')\nplt.ylabel('$\\ln y$')\nplt.show()",
"Both are reasonably linear, but neither is a perfect fit!\nFit both models with MLE\nAt this point, our best bet is to find parameter sets for both models that provide best fit to observed data $y$. We will use maximum likelihood estimation.\nStep 1 -- compute the likelihood function\nFirst, let's cast our data as the number of items recalled correctly on $n=100$ trials.",
"X = 100*Y\nprint(X)",
"Let's assume each of these 100 trials is independent of the others, and consider each trial a success if item is correctly recalled.\nThen the probability of correctly recalling $x$ items is:\n$$\nf(x\\mid\\theta) = \\binom{100}{x}\\theta^x(1-\\theta)^{100-x}\n$$\nThe critical parameter here is $\\theta$ -- the probability of success on any one trial. How do we determine $\\theta$?\nLet's assume that probability of recall is governed by a power function. That is, assume\n$$\n\\theta(t) = at^b\n$$\nfor constants $a,b$.\nThen we can write\n$$\nf(x\\mid a,b) = \\binom{100}{x}(at^b)^x(1-at^b)^{100-x}\n$$\nwhich we cast as a likelihood\n$$\nL(a,b\\mid x) = \\binom{100}{x}(at^b)^x(1-at^b)^{100-x}\n$$\nStep 2 -- compute log likelihood\nThis gives us:\n$$\n\\ln L = \\ln \\Biggl[ \\binom{100}{x}\\Biggr] + x\\ln(at^b) + (100-x)\\ln(1-at^b)\n$$\nStep 3 -- extend to multiple observations\nNote that the formula above is for a single observation $x$. But we have 5 observations!\nIf we assume each is independent from the others, then we can multiply the likelihoods:\n$$\nL(a,b\\mid x=(x_1,\\dots,x_5)) = \\prod_{i=1}^5 L(a,b\\mid x_i)\n$$\nThus we have\n$$\n\\ln L = \\ln\\Biggl(\\prod_{i=1}^5 L(a,b\\mid x_i)\\Biggr )\n$$\nBut since logs turn products into sums, we can write\n$$ \\ln L = \\sum_{i=1}^5 \\ln L(a,b\\mid x_i) = \\sum_{i=1}^5 \\Biggl(\\ln \\binom{100}{x_i} + x_i\\ln(at^b) + (100-x_i)\\ln(1-at^b)\\Biggr)$$\nNotes:\n\nwe really only care about the terms that have $a$ and $b$, so we'll ignore the binomial term\nPython really likes to minimize. So, we will minimize the negative log likelihood (NLL)",
"def nllP(pars):\n a, b = pars\n tmp1 = X*np.log(a*T**b) \n tmp2 = (100-X)*np.log(1-a*T**b)\n return(-1*np.sum(tmp1+tmp2))\n\n# check some examples\n\na = 0.9\nb = -0.4\npars = np.array([a,b])\n\nnllP(pars)\n\nfrom scipy.optimize import minimize\n\na_init = np.random.uniform()\nb_init = -np.random.uniform()\ninits = np.array([a_init, b_init])\n\nmleP = minimize(nllP, \n inits,\n method=\"nelder-mead\")\nprint(mleP)\n\ndef power(t,pars):\n a, b = pars\n return(a*t**b)\n\nfitPars = mleP.x\nprint(f\"a={fitPars[0]:.3f}, b={fitPars[1]:.3f}\")\n\nx = np.linspace(0.5,18,100)\nplt.plot(T,Y,'o')\nplt.plot(x, power(x,fitPars))\n\nplt.show()\n ",
"Exercises\n\n\nOften, the \"power law of forgetting\" is written as $f(t) = at^{-b}$ (e.g., Wixted, 1990), the purpose of which is to reinforce the idea that forgetting = decay. What does this change do to the likelihood function? Use Python to compute MLEs for $a$ and $b$ given the observed data $y$ above.\n\n\nDemonstrate (either through computation or a mathematical proof) that we can safely ignore the $\\binom{100}{x_i}$ term in the likelihood function.\n\n\nRubin and Baddeley (1989) measured the proportion of participants who correctly recalled details from a past colloquium talk as a function of time in years. The data below are approximately equal to what they orignally found:\n\n\n|time (years) | proportion recall|\n|:--:|:--:|\n| 0.05 | 0.38|\n|0.25 | 0.26|\n|0.30|0.22|\n|0.60|0.20|\n|0.95|0.11|\n|1.3|0.07|\n|1.4|0.16|\n|1.6|0.10|\n|1.8|0.08|\n|2.5|0.05|\n|2.7|0.01|\nFor simplicity, assume there were 100 participants. Construct a reasonable model of forgetting for this data, and estimate its parameters."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IS-ENES-Data/submission_forms
|
test/Templates/CMIP6_submission_form.ipynb
|
apache-2.0
|
[
"DKRZ CMIP6 submission form for ESGF data publication\nGeneral Information (to be completed based on official CMIP6 references)\nData to be submitted for ESGF data publication must follow the rules outlined in the CMIP6 Archive Design <br /> (https://...) \nThus file names have to follow the pattern:<br />\n\nVariableName_Domain_GCMModelName_CMIP6ExperimentName_CMIP5EnsembleMember_RCMModelName_RCMVersionID_Frequency[_StartTime-EndTime].nc <br />\nExample: tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc\n\nThe directory structure in which these files are stored follow the pattern:<br />\n\nactivity/product/Domain/Institution/\nGCMModelName/CMIP5ExperimentName/CMIP5EnsembleMember/\nRCMModelName/RCMVersionID/Frequency/VariableName <br />\nExample: CORDEX/output/AFR-44/MPI-CSC/MPI-M-MPI-ESM-LR/rcp26/r1i1p1/MPI-CSC-REMO2009/v1/mon/tas/tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc\n\nNotice: If your model is not yet registered, please contact contact .... \nThis 'data submission form' is used to improve initial information exchange between data providers and the DKZ data managers. The form has to be filled before the publication process can be started. In case you have questions please contact cmip6@dkrz.de",
"from dkrz_forms import form_widgets\nform_widgets.show_status('form-submission')",
"Start submission procedure\nThe submission is based on this interactive document consisting of \"cells\" you can modify and then evaluate.\nEvaluation of cells is done by selecting the cell and then press the keys \"Shift\" + \"Enter\"\n<br /> Please evaluate the following cell to initialize your form based on the information provided as part of the form generation (name, email, etc.)",
"MY_LAST_NAME = \"....\" # e.gl MY_LAST_NAME = \"schulz\" \n#-------------------------------------------------\n\n\nfrom dkrz_forms import form_handler, form_widgets, checks\nform_info = form_widgets.check_pwd(MY_LAST_NAME)\nsf = form_handler.init_form(form_info)\nform = sf.sub.entity_out.form_info",
"please provide information on the contact person for this CORDEX data submission request\nType of submission\nplease specify the type of this data submission:\n- \"initial_version\" for first submission of data\n- \"new _version\" for a re-submission of previousliy submitted data\n- \"retract\" for the request to retract previously submitted data",
"sf.submission_type = \"...\" # example: sf.submission_type = \"initial_version\"",
"Requested general information\n... to be finalized as soon as CMIP6 specification is finalized ....\nPlease provide model and institution info as well as an example of a file name\ninstitution\nThe value of this field has to equal the value of the optional NetCDF attribute 'institution' \n(long version) in the data files if the latter is used.",
"sf.institution = \"...\" # example: sf.institution = \"Alfred Wegener Institute\"",
"institute_id\nThe value of this field has to equal the value of the global NetCDF attribute 'institute_id' \nin the data files and must equal the 4th directory level. It is needed before the publication \nprocess is started in order that the value can be added to the relevant CORDEX list of CV1 \nif not yet there. Note that 'institute_id' has to be the first part of 'model_id'",
"sf.institute_id = \"...\" # example: sf.institute_id = \"AWI\"",
"model_id\nThe value of this field has to be the value of the global NetCDF attribute 'model_id' \nin the data files. It is needed before the publication process is started in order that \nthe value can be added to the relevant CORDEX list of CV1 if not yet there.\nNote that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name, \nseparated by a dash. It is part of the file name and the directory structure.",
"sf.model_id = \"...\" # example: sf.model_id = \"AWI-HIRHAM5\"",
"experiment_id and time_period\nExperiment has to equal the value of the global NetCDF attribute 'experiment_id' \nin the data files. Time_period gives the period of data for which the publication \nrequest is submitted. If you intend to submit data from multiple experiments you may \nadd one line for each additional experiment or send in additional publication request sheets.",
"sf.experiment_id = \"...\" # example: sf.experiment_id = \"evaluation\"\n # [\"value_a\",\"value_b\"] in case of multiple experiments\nsf.time_period = \"...\" # example: sf.time_period = \"197901-201412\" \n # [\"time_period_a\",\"time_period_b\"] in case of multiple values",
"Example file name\nPlease provide an example file name of a file in your data collection, \nthis name will be used to derive the other",
"sf.example_file_name = \"...\" # example: sf.example_file_name = \"tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc\"\n\n# Please run this cell as it is to check your example file name structure \n# to_do: implement submission_form_check_file function - output result (attributes + check_result)\nform_handler.cordex_file_info(sf,sf.example_file_name)",
"information on the grid_mapping\nthe NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.), \ni.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids",
"sf.grid_mapping_name = \"...\" # example: sf.grid_mapping_name = \"rotated_latitude_longitude\"",
"Does the grid configuration exactly follow the specifications in ADD2 (Table 1) \nin case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.",
"sf.grid_as_specified_if_rotated_pole = \"...\" # example: sf.grid_as_specified_if_rotated_pole = \"yes\"",
"Please provide information on quality check performed on the data you plan to submit\nPlease answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.\n'QC1' refers to the compliancy checker that can be downloaded at http://cordex.dmi.dk. \n'QC2' refers to the quality checker developed at DKRZ. \nIf your answer is 'other' give some informations.",
"sf.data_qc_status = \"...\" # example: sf.data_qc_status = \"QC2-CORDEX\"\nsf.data_qc_comment = \"...\" # any comment of quality status of the files",
"Terms of use\nPlease give the terms of use that shall be asigned to the data.\nThe options are 'unrestricted' and 'non-commercial only'.\nFor the full text 'Terms of Use' of CORDEX data refer to\nhttp://cordex.dmi.dk/joomla/images/CORDEX/cordex_terms_of_use.pdf",
"sf.terms_of_use = \"...\" # example: sf.terms_of_use = \"unrestricted\"",
"Information on directory structure and data access path\n(and other information needed for data transport and data publication)\nIf there is any directory structure deviation from the CORDEX standard please specify here. \nOtherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.",
"sf.directory_structure = \"...\" # example: sf.directory_structure = \"compliant\"",
"Give the path where the data reside, for example:\nblizzard.dkrz.de:/scratch/b/b364034/. If not applicable write N/A and give data access information in the data_information string",
"sf.data_path = \"...\" # example: sf.data_path = \"mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/\"\nsf.data_information = \"...\" # ...any info where data can be accessed and transfered to the data center ... \"",
"Exclude variable list\nIn each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.",
"sf.exclude_variables_list = \"...\" # example: sf.exclude_variables_list=[\"bnds\", \"vertices\"]",
"Uniqueness of tracking_id and creation_date\nIn case any of your files is replacing a file already published, it must not have the same tracking_id nor \nthe same creation_date as the file it replaces. \nDid you make sure that that this is not the case ? \nReply 'yes'; otherwise adapt the new file versions.",
"sf.uniqueness_of_tracking_id = \"...\" # example: sf.uniqueness_of_tracking_id = \"yes\"",
"Variable list\nlist of variables submitted -- please remove the ones you do not provide:",
"\nsf.variable_list_day = [\n\"clh\",\"clivi\",\"cll\",\"clm\",\"clt\",\"clwvi\",\n\"evspsbl\",\"evspsblpot\",\n\"hfls\",\"hfss\",\"hurs\",\"huss\",\"hus850\",\n\"mrfso\",\"mrro\",\"mrros\",\"mrso\",\n\"pr\",\"prc\",\"prhmax\",\"prsn\",\"prw\",\"ps\",\"psl\",\n\"rlds\",\"rlus\",\"rlut\",\"rsds\",\"rsdt\",\"rsus\",\"rsut\",\n\"sfcWind\",\"sfcWindmax\",\"sic\",\"snc\",\"snd\",\"snm\",\"snw\",\"sund\",\n\"tas\",\"tasmax\",\"tasmin\",\"tauu\",\"tauv\",\"ta200\",\"ta500\",\"ta850\",\"ts\",\n\"uas\",\"ua200\",\"ua500\",\"ua850\",\n\"vas\",\"va200\",\"va500\",\"va850\",\"wsgsmax\",\n\"zg200\",\"zg500\",\"zmla\"\n]\n\nsf.variable_list_mon = [\n\"clt\",\n\"evspsbl\",\n\"hfls\",\"hfss\",\"hurs\",\"huss\",\"hus850\",\n\"mrfso\",\"mrro\",\"mrros\",\"mrso\",\n\"pr\",\"psl\",\n\"rlds\",\"rlus\",\"rlut\",\"rsds\",\"rsdt\",\"rsus\",\"rsut\",\n\"sfcWind\",\"sfcWindmax\",\"sic\",\"snc\",\"snd\",\"snm\",\"snw\",\"sund\",\n\"tas\",\"tasmax\",\"tasmin\",\"ta200\",\n\"ta500\",\"ta850\",\n\"uas\",\"ua200\",\"ua500\",\"ua850\",\n\"vas\",\"va200\",\"va500\",\"va850\",\n\"zg200\",\"zg500\"\n]\nsf.variable_list_sem = [\n\"clt\",\n\"evspsbl\",\n\"hfls\",\"hfss\",\"hurs\",\"huss\",\"hus850\",\n\"mrfso\",\"mrro\",\"mrros\",\"mrso\",\n\"pr\",\"psl\",\n\"rlds\",\"rlus\",\"rlut\",\"rsds\",\"rsdt\",\"rsus\",\"rsut\",\n\"sfcWind\",\"sfcWindmax\",\"sic\",\"snc\",\"snd\",\"snm\",\"snw\",\"sund\",\n\"tas\",\"tasmax\",\"tasmin\",\"ta200\",\"ta500\",\"ta850\",\n\"uas\",\"ua200\",\"ua500\",\"ua850\",\n\"vas\",\"va200\",\"va500\",\"va850\",\n\"zg200\",\"zg500\" \n]\n\nsf.variable_list_fx = [\n\"areacella\",\n\"mrsofc\",\n\"orog\",\n\"rootd\",\n\"sftgif\",\"sftlf\" \n]",
"Check your submission before submission",
"# simple consistency check report for your submission form\nres = form_handler.check_submission(sf)\nsf.sub['status_flag_validity'] = res['valid_submission']\nform_handler.DictTable(res)",
"Save your form\nyour form will be stored (the form name consists of your last name plut your keyword)",
"form_handler.form_save(sf)\n\n#evaluate this cell if you want a reference to the saved form emailed to you\n# (only available if you access this form via the DKRZ form hosting service)\nform_handler.email_form_info()\n\n# evaluate this cell if you want a reference (provided by email)\n# (only available if you access this form via the DKRZ hosting service)\nform_handler.email_form_info(sf)",
"officially submit your form\nthe form will be submitted to the DKRZ team to process\nyou also receive a confirmation email with a reference to your online form for future modifications",
"form_handler.email_form_info(sf)\nform_handler.form_submission(sf)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jasonjensen/Montreal-Python-Web
|
2.Python_Quickstart.ipynb
|
apache-2.0
|
[
"Python Quickstart\nWorkshop on Web Scraping and Text Processing with Python\nby Radhika Saksena, Princeton University, saksena@princeton.edu, radhika.saksena@gmail.com\nDisclaimer: The code examples presented in this workshop are for educational purposes only. Please seek advice from a legal expert about the legal implications of using this code for web scraping.\n1. First things first\nThis notebook describes some Python basics which we will be using throughout the workshop. Please go through this material and try out the code examples using IPython Notebook which comes with Anaconda (https://store.continuum.io/cshop/anaconda/).\n1.1 Executing code in IPython Notebook\n\nClick within an existing \"Code\" Cell or write new code in a \"Code\" Cell.\nType shift-Enter to execute the Python code contained in the Cell.\n\n1.2 Python Indentation\n\n\nIndentation is significant in Python. Instead of curly braces to demarcate a code block (as in C++, Java, R, etc.), consecutive statements with the same level of indentation are identified as being in the same block.\n\n\nAny number of spaces is valid indentation. Four spaces for each level of indentation is conventional among programmers.\n\nIn IPython Notebook, simply use the [tab] key for each new level of indentation. This gets converted to four spaces automatically.\n\n1.3 Comments in Python\n\nSingle-line comments start with a # symbol and end with the end of the line\nComments can be placed on a line by themselves",
"# Assign value 1 to variable x\nx = 1",
"Comments can also be placed on the same line as the code as shown here.",
"x = 1 # Assign value 1 to variable x",
"For multi-line comments, use triple-quoted strings.",
"\"\"\"This is a multi-line comment.\nAssign value 1 to variable x.\"\"\"\nx = 1",
"1.4 Python's print() function\nThe print function is used to print variables and expressions to screen. print() offers a lot of functionality which we'll encounter during the workshop. For now, note that:<br/>\n\nYou can pass anything to the print() function and it will attempt to print its arguments.",
"print(1) # Print a constant\n\nx = 2014\nprint(x) # Print an integer variable\n\nxstr = \"Hello World.\" # Print a string\nprint(xstr)\n\n\nprint(x,xstr) # Print multiple objects\n\n\nprint(\"String 1\" + \" \" + \"String2\") # Concatenate multiple strings and print them",
"For web-scraping and text-processing type tasks, we'd like better control over how things get printed out, such as the number of decimal places when printing out floating point numbers. Use the format() method on the string to be printed out to control the output format.",
"x = 1\nprint(\"Formatted integer is {0:06d}\".format(x)) # Note the format specification, 06d, for the integer.\n\ny = 12.66666666667\nprint(\"Formatted floating point number is {0:2.3f}\".format(y)) # Note the format specification, 2.3f, for the floating point number.\n\n\niStr = \"Hello World\"\nfStr = \"Goodbye World\"\nprint(\"Initial string: {0:s} . Final string: {1:s}.\".format(iStr,fStr)) # Note the format specification, s, for the string.\n\nprint(\"Initial string: {0} . Final string: {1}.\".format(iStr,fStr)) # In this case, omitting the s format specified works too.\n\nx = 1\nprint(\"Formatted integer is {0:06d}\".format(x))\n\n\n\ny = 12.66666666667\nprint(\"Formatted floating point number is {0:2.3f}\".format(y))",
"2. Numeric Variable Types\n2.1 Integers",
"year = 2014\nprint(year)\n\nprint(\"The year is %d.\" % year)\n\nprint(type(year))\n\nhelp(year)\n\nhelp(int)",
"2.2. Floating Point Numbers",
"mean = (1.0 + 0.7 + 2.1)/3.0\nprint(mean)\n\nprint(\"The mean is %6.2f.\" % mean)\n\nprint(type(mean))\n\nhelp(mean)\n\nhelp(float)",
"3. Basic Operators\n3.1 Arithmetic Operators\nStandard arithmetic operators for addition (+), subtraction (-), multiplication (*) and division (/) are supported in Python. We have already seen use of the addition (+) and division (/) operators. Some more operators that are commonly encountered are demonstrated below.",
"x = 2**3 # ** is the exponentiation operator\nprint(x)\n\nx = 9 % 4 # % is the modulus operator\nprint(x)\n\nx = 9 // 4 # // is the operator for floor division\nprint(x)",
"3.2. Assignment Operators\nIn addition to using the = (simple assignment operator) for assigning values to variables, one can use a composite assignment operator(+=, -=, etc.) that combines the simple assignment operator with all of these arithmetic expressions. For example:",
"x = 2.0\ny = 5.0\ny += x # y = y + x\nprint(y)\n\ny %= x # y = y%x\nprint(y)",
"3.3. Comparison Operators",
"x = 1\ny = 1\nx == y # Check for equality\n\nx = 1\ny = 1\nx != 1 # Check for inequality\n\nx = 0.5\ny = 1.0\nx > y # Check if x greater than y\n\nx < y # Check if x less than y\n\nx >= y # Check if x greater than equal to y\n\nx <= y # Check if x less than equal to y",
"3.4. Logical Operators\nLogical operators such as <tt>and</tt>, <tt>or</tt>, <tt>not</tt> allow specification of composite conditions, for example in <tt>if</tt> statements as we will see shortly.",
"a = 99\nb = 99\n(a == b) and (a <= 100) # use the and operator to check if both the operands are true\n\na = True\nb = False\na and b\n\na = True\nb = False\na or b # use the or operator to check if at least one of the two operands is true\n\na = 100\nb = 100\na == b\nnot(a == b) # use the not operator to reverse a logical statement",
"4. Strings\n\nA string is a sequence of characters. \nStrings are specified by using single quotes (' ') or double quotes (\" \"). Multi-line strings can be specified with triple quotes.",
"pythonStr = 'A first Python string.' # String specified with single quotes.\nprint(type(pythonStr))\nprint(pythonStr)\n\n\npythonStr = \"A first Python string\" # String specified with double quotes.\nprint(type(pythonStr))\nprint(pythonStr)\n\n\npythonStr = \"\"\"A multi-line string.\nA first Python string.\"\"\" # Multi-line string specified with triple quotes.\nprint(type(pythonStr))\nprint(pythonStr)",
"Strings can be concatenated using the addition(+) operator.",
"str1 = \" Rock \"\nstr2 = \" Paper \"\nstr3 = \" Scissors \"\nlongStr = str1 + str2 + str3\nprint(longStr)",
"Strings can also be repeated with the multiplication (*) operator.",
"str1 = \"Rock,Paper,Scissors\\n\"\nrepeatStr = str1*5\nprint(repeatStr)",
"The len() function returns the length of a string.",
"str1 = \"Python\"\nlenStr1 = len(str1)\nprint(\"The length of str: is \" + str(lenStr1) + \".\")",
"Since, the Python string is a sequence of characters, individual characters in the string can be indexed. Note that, unlike R, in Python sequences indexing starts at 0 and goes up to one less than the length of the sequence.",
"str1 = \"Python\"\nprint(str1[0]) # Print the first character element of the string.\n\nprint(str1[len(str1)-1]) # Print the last character element of the string.\n\nprint(str1[2:4]) # Print a 2-element slice of the string, starting from the 2-nd element up to but not including the 4-th element.",
"Strings are immutable. That is, an existing instance of a string cannot be modified. Instead, a new string that contains the modification should be created.",
"str1 = \"Python\"\nstr1[1] = \"3\" # Error, strings can't be modified.",
"Strings come with some powerful methods (https://docs.python.org/2/library/stdtypes.html#string-methods). Some of the string methods that we will often use in web scraping are shown bellow.",
"str1 = \"Python\"\nprint(str1.upper()) # Convert str1 to all uppercase.\n\nstr2 = \"PYTHON\"\nprint(str1.lower()) # Convert str2 to all lowercase.\n\nstr3 = \"Rock,Paper,Scissors,Lizard,Spock\"\nprint(str3.split(\",\")) # Split str3 using \",\" as the separator. A list of string elements is returned.\n\nstr4 = \"The original string has trailing spaces.\\t\\n\"\nprint(\"***\"+str4.strip()+\"***\") # Print stripped string with trailing space characters removed.",
"5. Python Data Structures\n5.1 Lists\n\nList is an indexed collection of items. Each of the list items can be of arbitrary type. Note the square brackets in the pyList list declaration below. The len() function returns the length of the list.",
"# pyList contains an integer, string and floating point number\npyList = [2014,\"02 June\", 74.5]\n\n# Print all the elements of pyList\nprint(pyList)\nprint(\"\\n\")\n\n# Print the length of pyList obtained using the len() function\nprint(\"Length of pyList is: {0}.\\n\".format(len(pyList)))",
"List elements can be individually referenced using their index in the list. Python indexing starts with 0 and runs up to the length of the sequence - 1. The square bracket is used to specify the index in to the list. This notation can also be used to assign values to the elements of the list. In contrast to strings, lists are mutable.",
"print(pyList)\nprint(\"\\n\")\n\n# Print the first element of pyList. Remember, indexing starts with 0.\nprint(\"First element of pyList: {0}.\\n\".format(pyList[0]))\n\n# Print the last element of pyList. Last element can be conveniently indexed using -1.\nprint(\"Last element of pyList: {0}.\\n\".format(pyList[-1]))\n\n# Also the last element has index = (length of list - 1)\ncheck = (pyList[2] == pyList[-1])\nprint(\"Is pyList[2] equal to pyList[-1]?\\n{0}.\\n\".format(check))\n\n# Assign a new value to the third element of the list\npyList[2] = -99.0\nprint(\"Modified element of pyList[2]: {0}.\\n\".format(pyList[2]))",
"Python lists can be sliced using the slice notation of two indices separated by a colon. An omitted first index indicates 0 and an omitted second index indicates the length of the list/sequence.",
"pyList = [\"rock\",\"paper\",\"scissors\",\"lizard\",\"Spock\"]\n\nprint(pyList[2:4]) # Print elements of a starting from the second, up to but not including the fourth.\n\nprint(pyList[:2]) # Print the first two elements of pyList.\n\nprint(pyList[2:]) # Print all the elements of pyList starting from the second.\n\nprint(pyList[:]) # Print all the elements of pyList",
"Python slice notation can also be used to assign into lists.",
"pyList = [\"rock\",\"paper\",\"scissors\",\"lizard\",\"Spock\"]\n\npyList[2:4] = [\"gu\",\"pa\"] # Replace the second and third elements of pyList\n\nprint(\"Original contents of pyList:\")\nprint(pyList)\nprint(\"\\n\")\n\npyList[:] = [] # Clear pyList, replace all items with an empty list\n\nprint(\"Modified contents of pyList:\")\nprint(pyList)",
"Python lists come with useful methods to add elements - append() and extend()",
"pyList = [\"rock\",\"paper\"]\nprint(\"Printing Python list pyList:\")\nprint(pyList)\nprint(\"\\n\")\n\npyList.append(\"scissors\")\nprint(\"Appended the string 'scissors' to pyList:\")\nprint(pyList)\nprint(\"\\n\")\n\nanotherList = [\"lizard\",\"Spock\"]\npyList.extend(anotherList)\nprint(\"Extended pyList:\")\nprint(pyList)\nprint(\"\\n\")",
"Python lists can be concatenated using the \"+\" operator (similar to strings).",
"pyList1 = [\"rock\",\"paper\",\"scissors\"]\npyList2 = [\"lizard\",\"Spock\"]\nnewList = pyList1 + pyList2\nprint(\"New list:\")\nprint(newList)",
"Python lists can be nested - list within a list within a list and so on. An index needs to be specified for each level of nesting.",
"pyLists = [[\"rock\",\"paper\",\"scissors\"], [\"ji\",\"gu\",\"pa\"]]\n\n# Print the first element (0-th index) of pyLists which is itself a list\nprint(\"pyLists[0] = \")\nprint(pyLists[0])\nprint(\"\\n\")\n\n# Print the 0-th index element of the first list element in pyLists\nprint(\"pylists[0][0] = \" + pyLists[0][0] + \".\")\nprint(\"\\n\")\n\n# Print the second element of pyLists which is itself a list\nprint(\"pyLists[1] = \")\nprint(pyLists[1])\nprint(\"\\n\")\n\n# Print the 0-th index element of the second list element in pyLists\nprint(\"pyLists[1][0] = \" + pyLists[1][0] + \".\")\nprint(\"\\n\")\n\npyList = [1,3,4,2]\npyList.sort(reverse=True)\nsum(pyList)\n2*(pyList)\n#2**(pyList)",
"5.2. Tuples\n\nTuples are another sequence data type consisting of arbitrary items separated by commas. In contrast to lists, tuples are immutable, i.e., they cannot be modified. See below for a declaration of a tuple. Note the parentheses in the declaration.",
"# pyTuple contains an integer, string and floating point number\npyTuple = (2014,\"02 June\", 74.5)\n\n# Print all the elements of pyTuple\nprint(\"pyTuple is: \")\nprint(pyTuple)\nprint(\"\\n\")\n\n# Print the length of pyTuple obtained using the len() function\nprint(\"Length of pyTuple is: {0}.\\n\".format(len(pyTuple)))",
"Tuples are immutable. Attempting to change elements of a tuple will result in errors.",
"pyTuple[1] = \"31 December\" # Error as pyTuple is a tuple and hence, immutable",
"Tuples can be packed from and unpacked into individual elements.",
"pyTuple = \"rock\", \"paper\", \"scissors\" # pack the strings into a tuple named pyTuple\nprint(pyTuple)\n\nstr0,str1,str2 = pyTuple # unpack the tuple into strings named str0, str1, str2\nprint(\"str0 = \" + str0 + \".\")\nprint(\"str1 = \" + str1 + \".\")\nprint(\"str2 = \" + str2 + \".\")",
"One can declare tuples of tuples.",
"pyTuples = ((\"rock\",\"paper\",\"scissors\"),(\"ji\",\"gu\",\"pa\"))\nprint(\"pyTuples[0] = {0}.\".format(pyTuples[0])) # Print the first sub-tuple in pyTuples.\nprint(\"pyTuples[1] = {0}.\".format(pyTuples[1])) # Print the second sub-tuple in pyTuples.",
"One can declare a tuple of lists.",
"pyNested = ([\"rock\",\"paper\",\"scissors\"],[\"ji\",\"gu\",\"pa\"])\npyNested[0][2] = \"lizard\" # OK, list within the tuple is mutable\nprint(pyNested[0]) # Print first list element of the tuple",
"One can also declare a list of tuples.",
"pyNested = [(\"rock\",\"paper\",\"scissors\"),(\"ji\",\"gu\",\"pa\")]\npyNested[0][2] = \"lizard\" # Error, tuples is immutable* ",
"5.3. Dictionaries\n\nA Python dictionary is an unordered set of key:value pairs that acts as an associate arrays. The keys are immutable and unique within one dictionary. In contrast to lists and tuples, dictionaries are indexed by keys. Note the use of curly braces in the declaration of the dictionary below.",
"pyDict = {\"Canada\":\"CAN\",\"Argentina\":\"ARG\",\"Austria\":\"AUT\"}\nprint(\"pyDict: {0}.\".format(pyDict))\n\n\n\nprint(\"pyDict['Argentina']: \" + pyDict['Argentina'] + \".\") # Print the value corresponding to key 'afghanistan'\n\nprint(pyDict.keys())\n\nprint(pyDict.values()) # Return all the values in the dictionary as a list.\n\nprint(pyDict.items()) # Return key, value pairs from the dictionary as a list of tuples.",
"Parsing hierarchical data structures involving Python dictionaries will be very useful when working with the JSON data format and APIs such as the Twitter API.\n\nValues in a dictionary can be any object including other dictionaries.",
"pyDicts = {\"Canada\":{\"Alpha-2\":\"CA\",\"Alpha-3\":\"CAN\",\"Numeric\":\"124\"},\n \"Argentina\":{\"Alpha-2\":\"AR\",\"Alpha-3\":\"ARG\",\"Numeric\":\"032\"},\n \"Austria\":{\"Alpha-2\":\"AT\",\"Alpha-3\":\"AUT\",\"Numeric\":\"040\"}}\n\nprint(\"pyDicts['Canada'] = {0}.\".format(pyDicts['Canada']))\n\n\nprint(\"pyDicts['Canada']['Alpha-2'] = {0}.\".format(pyDicts['Canada']['Alpha-2']))",
"Values in a dictionary can also be lists.",
"pyNested = {\"Canada\":[2011,2008,2006,2004,2000 ],\"Argentina\":[2013,2011,2009,2007,2005],\"Austria\":[2013,2008,2006,2002,1999]}\nprint(\"pyNested['Canada'] = {0}\".format(pyNested['Canada']))\n\nprint(\"pyNested['Austria'][4] = {0}.\".format(pyNested['Austria'][4]))",
"Lastly, we can have lists of dictionaries",
"pyNested = [{\"year\":2011,\"countries\":[\"Canada\",\"Argentina\"]},\n {\"year\":2008,\"countries\":[\"Canada\",\"Austria\"]},\n {\"year\":2006,\"countries\":[\"Canada\",\"Austria\"]},\n {\"year\":2013,\"countries\":[\"Argentina\",\"Austria\"]}]\nprint(\"pyNested[0] = {0}\".format(pyNested[0]))\n\n\nprint(\"pyNested[0]['year'] = {0}, pyNested[0]['countries'] = {1}.\".format(pyNested[0]['year'],pyNested[0]['countries']))",
"6. Control Flow\n6.1 <tt>if</tt> Statements\n\nAn if statement, coupled with zero or more elif statements can allow the execution of the script to be altered based on some condition. Here is an example.",
"pyNested = [{\"year\":2011,\"countries\":[\"Canada\",\"Argentina\"]},\n {\"year\":2008,\"countries\":[\"Canada\",\"Austria\"]},\n {\"year\":2006,\"countries\":[\"Canada\",\"Austria\"]},\n {\"year\":2013,\"countries\":[\"Argentina\",\"Austria\"]}]\n\n# Check if first dictionary element of pyNested corresponds to years 2006 or 2008\nif(pyNested[0][\"year\"] == 2008):\n print(\"Countries corresponding to year 2008 are: {0}.\".format(pyNested[0][\"countries\"]))\nelif(pyNested[0][\"year\"] == 2011):\n print(\"Countries corresponding to year 2011 are: {0}.\".format(pyNested[0][\"countries\"]))\nelse:\n print(\"The first element does not correspond to either 2008 or 2011.\")\n \n",
"Scripting languages, such as Python, make it easy to automate repetitive tasks. In this workshop, we'll use two of Python's syntactic constructs for iteration - the for loop and the while loop.\n\n6.2 <tt>for</tt> Statements\n\nGiven an iterable, such as a list, the for loop construct can iterate over each of its values as shown below.",
"countryList = [\"Canada\", \"United States of America\", \"Mexico\"]\nfor country in countryList: # Loop over countryList, set country to next element in list.\n print(country)\n\ncountryDict = {\"Canada\":\"124\",\"United States\":\"840\",\"Mexico\":\"484\"}\nprint(\"Country\\t\\tISO 3166-1 Numeric Code\")\nfor country,code in countryDict.items(): # Loop over all the key and value pairs in the dictionary\n print(\"{0:12s}\\t\\t{1:12s}\".format(country,code))",
"6.3 <tt>range()</tt> Function\n\nAnother common use of the <tt>for</tt> loop is to iterate over an index which takes specific values. The range() function generates integers within the range specified by its arguments.",
"countryList = [\"Canada\", \"United States of America\", \"Mexico\"]\nfor i in range(0,3): # Loop over values of i in the range 0 up to, but not including, 3\n print(countryList[i])",
"6.4 while() Statement\n\nAnother syntactic construct used for iteration is the while loop. This is generally used in conjunction with the conditional and logical operators which we saw earlier.",
"countryList = [\"Canada\", \"United States of America\", \"Mexico\"]\n# iterate over countryList backwards, starting from the last element\nwhile(countryList):\n print(countryList[-1])\n countryList.pop()\n\ni = 0\ncountryList = [\"Canada\", \"United States of America\", \"Mexico\"]\nwhile(i < len(countryList)):\n print(\"Iteration variable i = {0}, Country = {1}.\".format(i,countryList[i]))\n i += 1",
"6.5 <tt>break</tt> and <tt>continue</tt> Statements\n\nNow, if some condition is evaluated within the for/while loop and based on that, we wish to exit the loop, we can use the break statement. Note that the break statement exits the innermost loop which contains it.",
"countryList = [\"Canada\", \"United States of America\", \"Mexico\"]\nfor country in countryList:\n if(country == \"United States of America\"):\n # if the country name matches, then break out of the for loop\n break\n else:\n # do some processing\n print(country)\n",
"If, instead of exiting the loop, one merely wishes to skip that iteration, then use the continue statement as shown here.",
"countryList = [\"Canada\", \"United States of America\", \"Mexico\"]\nfor country in countryList:\n if(country == \"United States of America\"):\n # if the country name matches, then break out of the for loop\n continue\n else:\n # do some processing\n print(country)\n",
"7. Python File I/O\n\nThis is a quick intro to reading and writing plain text files in Python. As we proceed through the workshop, we'll look at more sophisticated ways of reading/writing files, in non-English languages and using specialized Python modules to handle files in formats such as CSV, JSON.\n\n7.1 Writing to a File\n\nIn order to write to a file, the syntax is very similar. Open the file using the \"w\" mode instead of the \"r\" mode. Use the write() method of the file object as shown below. The syntax for the write() method is very similar to print(). Although, it does not automatically insert a newline at the end of the statement as does print().",
"filename = \"tmp.txt\"\nfout = open(filename,\"w\") # The 'r' option indicates that the file is being opened to be read\n\nfor i in range(0,5): # Read in each line from the file\n # Do some processing\n fout.write(\"i = {0}.\\n\".format(i))\n \nfout.close() # Once the file has been read, close the file",
"Alternative syntax for writing to file using 'with open' is shown below.",
"filename = \"tmp.txt\"\nwith open(filename,\"w\") as fout:\n for i in range(0,5):\n fout.write(\"i = {0}.\\n\".format(i))\n\nfout.close()",
"7.2 Reading from a file\n\nTo open a file for reading each of its line use the open() function. Make sure that such a file does exist. Once the file has been read, close it using the close() method of the file object - this will free up system resources being used up by the open file.",
"filename = \"tmp.txt\"\nfin = open(filename,\"r\") # The 'r' option indicates that the file is being opened to be read\n\nfor line in fin: # Read in each line from the file\n # Do some processing\n print(line)\n\nfin.close() # Once the file has been read, close the file",
"The code below demonstrates another way to open a file and read each line. With this syntax, the file is automatically closed after the <tt >with</tt> block.",
"filename = \"tmp.txt\"\nwith open(filename,\"r\") as fin:\n for line in fin:\n # Do some processing\n print(line)",
"An input file can also be read in as one string by using the read() method.\n\n7.3. The <tt>csv</tt> module\n\n\nPython's <tt>csv</tt> module provides convenient functionality for reading and writing csv files similar to that available in R. The csv files can then be imported in other statistical packages such as R and Excel.\n\n\nHere is a short example of using the csv module to write consecutive rows in to a comma-separated file. The delimiter can be chosen to be an arbitrary string.",
"import csv\n\nwith open(\"game.csv\",\"wb\") as csvfile:\n csvwriter = csv.writer(csvfile,delimiter=',')\n csvwriter.writerow([\"rock\",\"paper\",\"scissor\"])\n csvwriter.writerow([\"ji\",\"gu\",\"pa\"])\n csvwriter.writerow([\"rock\",\"paper\",\"scissor\",\"lizard\",\"Spock\"])\n\ncat game.csv",
"And this is an example of reading the games.csv file. Each row of the csv file is read in as a list.",
"import csv\n\nwith open(\"game.csv\",\"r\") as csvfile:\n csvreader = csv.reader(csvfile,delimiter=\",\")\n for row in csvreader:\n print(row)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
trangel/Data-Science
|
reinforcement_learning/qlearning.ipynb
|
gpl-3.0
|
[
"Q-learning\nThis notebook will guide you through implementation of vanilla Q-learning algorithm.\nYou need to implement QLearningAgent (follow instructions for each method) and use it on a number of tests below.",
"#XVFB will be launched if you run on a server\nimport os\nif type(os.environ.get(\"DISPLAY\")) is not str or len(os.environ.get(\"DISPLAY\")) == 0:\n !bash ../xvfb start\n os.environ['DISPLAY'] = ':1'\n \nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\n%%writefile qlearning.py\nfrom collections import defaultdict\nimport random, math\nimport numpy as np\n\nclass QLearningAgent:\n def __init__(self, alpha, epsilon, discount, get_legal_actions):\n \"\"\"\n Q-Learning Agent\n based on http://inst.eecs.berkeley.edu/~cs188/sp09/pacman.html\n Instance variables you have access to\n - self.epsilon (exploration prob)\n - self.alpha (learning rate)\n - self.discount (discount rate aka gamma)\n\n Functions you should use\n - self.get_legal_actions(state) {state, hashable -> list of actions, each is hashable}\n which returns legal actions for a state\n - self.get_qvalue(state,action)\n which returns Q(state,action)\n - self.set_qvalue(state,action,value)\n which sets Q(state,action) := value\n\n !!!Important!!!\n Note: please avoid using self._qValues directly. \n There's a special self.get_qvalue/set_qvalue for that.\n \"\"\"\n\n self.get_legal_actions = get_legal_actions\n self._qvalues = defaultdict(lambda: defaultdict(lambda: 0))\n self.alpha = alpha\n self.epsilon = epsilon\n self.discount = discount\n\n def get_qvalue(self, state, action):\n \"\"\" Returns Q(state,action) \"\"\"\n return self._qvalues[state][action]\n\n def set_qvalue(self,state,action,value):\n \"\"\" Sets the Qvalue for [state,action] to the given value \"\"\"\n self._qvalues[state][action] = value\n\n #---------------------START OF YOUR CODE---------------------#\n\n def get_value(self, state):\n \"\"\"\n Compute your agent's estimate of V(s) using current q-values\n V(s) = max_over_action Q(state,action) over possible actions.\n Note: please take into account that q-values can be negative.\n \"\"\"\n possible_actions = self.get_legal_actions(state)\n\n #If there are no legal actions, return 0.0\n if len(possible_actions) == 0:\n return 0.0\n\n #<YOUR CODE HERE>\n value = -999999\n for action in possible_actions:\n qvalue = self.get_qvalue(state, action)\n if qvalue > value:\n value = qvalue\n \n return value\n\n def update(self, state, action, reward, next_state):\n \"\"\"\n You should do your Q-Value update here:\n Q(s,a) := (1 - alpha) * Q(s,a) + alpha * (r + gamma * V(s'))\n \"\"\"\n\n #agent parameters\n gamma = self.discount\n learning_rate = self.alpha\n\n #<YOUR CODE HERE>\n qvalue = self.get_qvalue(state, action)\n value = self.get_value(next_state)\n qvalue = (1 - learning_rate) * qvalue + learning_rate * (reward + gamma * value)\n \n self.set_qvalue(state, action, qvalue)\n\n \n def get_best_action(self, state):\n \"\"\"\n Compute the best action to take in a state (using current q-values). \n \"\"\"\n possible_actions = self.get_legal_actions(state)\n\n #If there are no legal actions, return None\n if len(possible_actions) == 0:\n return None\n\n #<YOUR CODE HERE>\n value = -999999\n for action in possible_actions:\n qvalue = self.get_qvalue(state, action)\n if qvalue > value:\n value = qvalue\n best_action = action\n \n return best_action\n\n def get_action(self, state):\n \"\"\"\n Compute the action to take in the current state, including exploration. \n With probability self.epsilon, we should take a random action.\n otherwise - the best policy action (self.getPolicy).\n \n Note: To pick randomly from a list, use random.choice(list). 
\n To pick True or False with a given probablity, generate uniform number in [0, 1]\n and compare it with your probability\n \"\"\"\n\n # Pick Action\n possible_actions = self.get_legal_actions(state)\n action = None\n\n #If there are no legal actions, return None\n if len(possible_actions) == 0:\n return None\n\n #agent parameters:\n epsilon = self.epsilon\n\n #<YOUR CODE HERE>\n p = np.random.random_sample()\n if p <= epsilon:\n # take random action\n chosen_action = np.random.choice(possible_actions)\n else:\n # best_policy action\n chosen_action = self.get_best_action(state)\n \n return chosen_action",
"Try it on taxi\nHere we use the qlearning agent on taxi env from openai gym.\nYou will need to insert a few agent functions here.",
"import gym\nenv = gym.make(\"Taxi-v2\")\n\nn_actions = env.action_space.n\n\nfrom qlearning import QLearningAgent\n\nagent = QLearningAgent(alpha=0.5, epsilon=0.25, discount=0.99,\n get_legal_actions = lambda s: range(n_actions))\n\ndef play_and_train(env,agent,t_max=10**4):\n \"\"\"\n This function should \n - run a full game, actions given by agent's e-greedy policy\n - train agent using agent.update(...) whenever it is possible\n - return total reward\n \"\"\"\n total_reward = 0.0\n s = env.reset()\n \n for t in range(t_max):\n # get agent to pick action given state s.\n a = agent.get_action(s) #<YOUR CODE>\n \n next_s, r, done, _ = env.step(a)\n \n # train (update) agent for state s\n #<YOUR CODE HERE>\n agent.update(s, a, r, next_s)\n \n s = next_s\n total_reward +=r\n if done: break\n \n return total_reward\n \n \n \n\nfrom IPython.display import clear_output\n\nrewards = []\nfor i in range(1000):\n rewards.append(play_and_train(env, agent))\n agent.epsilon *= 0.99\n \n if i %100 ==0:\n clear_output(True)\n print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))\n plt.plot(rewards)\n plt.show()\n ",
"Submit to Coursera I: Preparation",
"submit_rewards1 = rewards.copy()",
"Binarized state spaces\nUse agent to train efficiently on CartPole-v0.\nThis environment has a continuous set of possible states, so you will have to group them into bins somehow.\nThe simplest way is to use round(x,n_digits) (or numpy round) to round real number to a given amount of digits.\nThe tricky part is to get the n_digits right for each state to train effectively.\nNote that you don't need to convert state to integers, but to tuples of any kind of values.",
"env = gym.make(\"CartPole-v0\")\nn_actions = env.action_space.n\n\nprint(\"first state:%s\" % (env.reset()))\nplt.imshow(env.render('rgb_array'))",
"Play a few games\nWe need to estimate observation distributions. To do so, we'll play a few games and record all states.",
"all_states = []\nfor _ in range(1000):\n all_states.append(env.reset())\n done = False\n while not done:\n s, r, done, _ = env.step(env.action_space.sample())\n all_states.append(s)\n if done: break\n \nall_states = np.array(all_states)\n\nfor obs_i in range(env.observation_space.shape[0]):\n plt.hist(all_states[:, obs_i], bins=20)\n plt.show()",
"Binarize environment",
"from gym.core import ObservationWrapper\nclass Binarizer(ObservationWrapper):\n \n def observation(self, state): \n \n #state = <round state to some amount digits.>\n decimals = [1, 1, 2, 1]\n\n for i, d in enumerate(decimals):\n state[i] = np.round(state[i], d)\n #hint: you can do that with round(x,n_digits)\n #you will need to pick a different n_digits for each dimension\n\n return tuple(state)\n\nenv = Binarizer(gym.make(\"CartPole-v0\"))\n\nall_states = []\nfor _ in range(1000):\n all_states.append(env.reset())\n done = False\n while not done:\n s, r, done, _ = env.step(env.action_space.sample())\n all_states.append(s)\n if done: break\n \nall_states = np.array(all_states)\n\nfor obs_i in range(env.observation_space.shape[0]):\n \n plt.hist(all_states[:,obs_i],bins=20)\n plt.show()",
"Learn binarized policy\nNow let's train a policy that uses binarized state space.\nTips: \n* If your binarization is too coarse, your agent may fail to find optimal policy. In that case, change binarization. \n* If your binarization is too fine-grained, your agent will take much longer than 1000 steps to converge. You can either increase number of iterations and decrease epsilon decay or change binarization.\n* Having 10^3 ~ 10^4 distinct states is recommended (len(QLearningAgent._qvalues)), but not required.",
"agent = QLearningAgent(alpha=0.5, epsilon=0.09999, discount=0.99,\n get_legal_actions=lambda s: range(n_actions))\n\nrewards = []\nfor i in range(10000):\n rewards.append(play_and_train(env,agent)) \n \n #OPTIONAL YOUR CODE: adjust epsilon\n if i %100 ==0:\n clear_output(True)\n print('eps =', agent.epsilon, 'mean reward =', np.mean(rewards[-10:]))\n plt.plot(rewards)\n plt.show()\n \n ",
"Submit to Coursera II: Submission",
"submit_rewards2 = rewards.copy()\n\nfrom submit import submit_qlearning\nsubmit_qlearning(submit_rewards1, submit_rewards2, \"tonatiuh_rangel@hotmail.com\", \"p8edi37LJ6BKh61a\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Danghor/Formal-Languages
|
Python/Regexp-Tutorial.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open (\"../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)",
"Regular Expressions in Python (A Short Tutorial)\nThis is a tutorial showing how regular expressions are supported in Python.\nThe assumption is that the reader already has a grasp of the concept of \nregular expressions as it is taught in lectures \non formal languages, for example in \nFormal Languages and Their Application, but does not know how regular expressions are supported in Python.\nIn Python, regular expressions are not part of the core language but are rather implemented in the module re. This module is part of the Python standard library and therefore there is no need \nto install this module. The full documentation of this module can be found at\nhttps://docs.python.org/3/library/re.html.",
"import re",
"Regular expressions are strings that describe <em style=\\color:blue>languages</em>, where a\n<em style=\\color:blue>language</em> is defined as a <em style=\\color:blue\\ a>set of strings</em>. \nIn the following, let us assume that $\\Sigma$ is the set of all Unicode characters and $\\Sigma^$ is the set \nof strings consisting of Unicode characters. We will define the set $\\textrm{RegExp}$ of regular expressions inductively.\nIn order to define the meaning of a regular expression $r$ we define a function \n$$\\mathcal{L}:\\textrm{RegExp} \\rightarrow 2^{\\Sigma^} $$\nsuch that $\\mathcal{L}(r)$ is the <em style=\\color:blue>language</em> specified by the regular expression $r$.\nIn order to demonstrate how regular expressions work we will use the function findall from the module \nre. This function is called in the following way:\n$$ \\texttt{re.findall}(r, s, \\textrm{flags}=0) $$\nHere, the arguments are interpreted as follows:\n- $r$ is a string that is interpreted as a regular expression,\n- $s$ is a string that is to be searched by $r$, and\n- $\\textrm{flags}$ is an optional argument of type int which is set to $0$ by default.\n This argument is useful to set flags that might be used to alter the interpretation of the regular \n expression $r$. \n For example, if the flag re.IGNORECASE is set, then the search performed by findall is not\n case sensitive.\nThe function findall returns a list of those non-overlapping substrings of the string $s$ that \nmatch the regular expression $r$. In the following example, the regular expression $r$ searches\nfor the letter a and since the string $s$ contains the character a two times, findall returns a \nlist with two occurrences of a:",
"re.findall('a', 'abcabcABC')",
"In the next example, the flag re.IGNORECASE is set and hence the function call returns a list of length 3.",
"re.findall('a', 'abcabcABC', re.IGNORECASE)",
"To begin our definition of the set $\\textrm{RegExp}$ of Python regular expressions, we first have to define\nthe set $\\textrm{MetaChars}$ of all <em style=\"color:blue\">meta-characters</em>:\nMetaChars := { '.', '^', '$', '*', '+', '?', '{', '}', '[', ']', '\\', '|', '(', ')' }\nThese characters are used as <em style=\"color:blue\">operator symbols</em> or as \npart of operator symbols inside of regular expressions.\nNow we can start our inductive definition of regular expressions:\n- Any Unicode character $c$ such that $c \\not\\in \\textrm{MetaChars}$ is a regular expression.\n The regular expressions $c$ matches the character $c$, i.e. we have\n $$ \\mathcal{L}(c) = { c }. $$\n- If $c$ is a meta character, i.e. we have $c \\in \\textrm{MetaChars}$, then the string $\\backslash c$\n is a regular expression matching the meta-character $c$, i.e. we have\n $$ \\mathcal{L}(\\backslash c) = { c }. $$",
"re.findall('a', 'abaa')",
"In the following example we have to use <em style=\"color:blue\">raw strings</em> in order to prevent\nthe backlash character to be mistaken as an <em style=\"color:blue\">escape sequence</em>. A string is a \n<em style=\"color:blue\">raw string</em> if the opening quote character is preceded with the character\nr.",
"re.findall(r'\\+', '+-+')",
"Concatenation\nThe next rule shows how regular expressions can be <em style=\"color:blue\">concatenated</em>:\n- If $r_1$ and $r_2$ are regular expressions, then $r_1r_2$ is a regular expression. This\n regular expression matches any string $s$ that can be split into two substrings $s_1$ and $s_2$ \n such that $r_1$ matches $s_1$ and $r_2$ matches $s_2$. Formally, we have\n $$\\mathcal{L}(r_1r_2) := \n \\bigl{ s_1s_2 \\mid s_1 \\in \\mathcal{L}(r_1) \\wedge s_2 \\in \\mathcal{L}(r_2) \\bigr}.\n $$\nIn the lecture notes we have used the notation $r_1 \\cdot r_2$ instead of the Python notation $r_1r_2$. \nUsing concatenation of regular expressions, we can now find words.",
"re.findall(r'the', 'The horse, the dog, and the cat.', flags=re.IGNORECASE)",
"Choice\nRegular expressions provide the operator | that can be used to choose between \n<em style=\"color:blue\">alternatives:</em>\n- If $r_1$ and $r_2$ are regular expressions, then $r_1|r_2$ is a regular expression. This\n regular expression matches any string $s$ that can is matched by either $r_1$ or $r_2$.\n Formally, we have\n $$\\mathcal{L}(r_1|r_2) := \\mathcal{L}(r_1) \\cup \\mathcal{L}(r_2). $$\nIn the lecture notes we have used the notation $r_1 + r_2$ instead of the Python notation $r_1|r_2$.",
"re.findall(r'The|a', 'The horse, the dog, and a cat.', flags=re.IGNORECASE)",
"Quantifiers\nThe most interesting regular expression operators are the <em style=\"color:blue\">quantifiers</em>.\nThe official documentation calls them <em style=\"color:blue\">repetition qualifiers</em> but in this notebook \nthey are called quantifiers, since this is shorter. Syntactically, quantifiers are \n<em style=\"color:blue\">postfix operators</em>.\n- If $r$ is a regular expressions, then $r+$ is a regular expression. This\n regular expression matches any string $s$ that can be split into a list on $n$ substrings $s_1$, \n $s_2$, $\\cdots$, $s_n$ such that $r$ matches $s_i$ for all $i \\in {1,\\cdots,n}$.\n Formally, we have\n $$\\mathcal{L}(r+) := \n \\Bigl{ s \\Bigm| \\exists n \\in \\mathbb{N}: \\bigl(n \\geq 1 \\wedge \n \\exists s_1,\\cdots,s_n : (s_1 \\cdots s_n = s \\wedge \n \\forall i \\in {1,\\cdots, n}: s_i \\in \\mathcal{L}(r)\\bigr)\n \\Bigr}.\n $$\n Informally, $r+$ matches $r$ any positive number of times.",
"re.findall(r'a+', 'abaabaAaba.', flags=re.IGNORECASE)",
"If $r$ is a regular expressions, then $r$ is a regular expression. This\n regular expression matches either the empty string or any string $s$ that can be split into a list on $n$ substrings $s_1$, \n $s_2$, $\\cdots$, $s_n$ such that $r$ matches $s_i$ for all $i \\in {1,\\cdots,n}$.\n Formally, we have\n $$\\mathcal{L}(r) := \\bigl{ \\texttt{''} \\bigr} \\cup\n \\Bigl{ s \\Bigm| \\exists n \\in \\mathbb{N}: \\bigl(n \\geq 1 \\wedge \n \\exists s_1,\\cdots,s_n : (s_1 \\cdots s_n = s \\wedge \n \\forall i \\in {1,\\cdots, n}: s_i \\in \\mathcal{L}(r)\\bigr)\n \\Bigr}.\n $$\n\nInformally, $r*$ matches $r$ any number of times, including zero times. Therefore, in the following example the result also contains various empty strings. For example, in the string 'abaabaaaba' the regular expression a* will find an empty string at the beginning of each occurrence of the character 'b'. The final occurrence of the empty string is found at the end of the string:",
"re.findall(r'a*', 'abaabbaaaba')",
"If $r$ is a regular expressions, then $r?$ is a regular expression. This\n regular expression matches either the empty string or any string $s$ that is matched by $r$. Formally we have\n $$\\mathcal{L}(r?) := \\bigl{ \\texttt{''} \\bigr} \\cup \\mathcal{L}(r). $$\n Informally, $r?$ matches $r$ at most one times but also zero times. Therefore, in the following example the result also contains two empty strings. One of these is found at the beginning of the character 'b', the second is found at the end of the string.",
"re.findall(r'a?', 'abaa')",
"If $r$ is a regular expressions and $m,n\\in\\mathbb{N}$ such that $m \\leq n$, then $r{m,n}$ is a \n regular expression. This regular expression matches any number $k$ of repetitions of $r$ such that $m \\leq k \\leq n$.\n Formally, we have\n $$\\mathcal{L}(r{m,n}) =\n \\Bigl{ s \\mid \\exists k \\in \\mathbb{N}: \\bigl(m \\leq k \\leq n \\wedge \n \\exists s_1,\\cdots,s_k : (s_1 \\cdots s_k = s \\wedge \n \\forall i \\in {1,\\cdots, k}: s_i \\in \\mathcal{L}(r)\\bigr)\n \\Bigr}.\n $$\n Informally, $r{m,n}$ matches $r$ at least $m$ times and at most $n$ times.",
"re.findall(r'a{2,3}', 'aaaa')",
"Above, the regular expression r'a{2,3}' matches the string 'aaaa' only once since the first match consumes three occurrences of a and then there is only a single a left.\nIf $r$ is a regular expressions and $n\\in\\mathbb{N}$, then $r{n}$ is a regular expression. This regular expression matches exactly $n$ repetitions of $r$. Formally, we have\n $$\\mathcal{L}(r{n}) = \\mathcal{L}(r{n,n}).$$",
"re.findall(r'a{2}', 'aabaaaba')",
"If $r$ is a regular expressions and $n\\in\\mathbb{N}$, then $r{,n}$ is a regular expression. This regular expression matches up to $n$ repetitions of $r$. Formally, we have\n $$\\mathcal{L}(r{,n}) = \\mathcal{L}(r{0,n}).$$",
"re.findall(r'a{,2}', 'aabaaaba')",
"If $r$ is a regular expressions and $n\\in\\mathbb{N}$, then $r{n,}$ is a regular expression. This regular expression matches $n$ or more repetitions of $r$. Formally, we have\n $$\\mathcal{L}(r{n,}) = \\mathcal{L}(r{n}r*).$$",
"re.findall(r'a{2,}', 'aabaaaba')",
"Non-Greedy Quantifiers\nThe quantifiers ?, +, *, {m,n}, {n}, {,n}, and {n,} are <em style=\"color:blue\">greedy</em>, i.e. they \nmatch the longest possible substrings. Suffixing these operators with the character ? makes them \n<em style=\"color:blue\">non-greedy</em>. For example, the regular expression a{2,3}? matches either \ntwo occurrences or three occurrences of the character a but will prefer to match only two characters. Hence, the regular expression a{2,3}? will find two matches in the string 'aaaa', while the regular expression a{2,3} only finds a single match.",
"re.findall(r'a{2,3}?', 'aaaa'), re.findall(r'a{2,3}', 'aaaa')",
"Character Classes\nIn order to match a set of characters we can use a <em style=\"color:blue\">character class</em>.\nIf $c_1$, $\\cdots$, $c_n$ are Unicode characters, then $[c_1\\cdots c_n]$ is a regular expression that \nmatches any of the characters from the set ${c_1,\\cdots,c_n}$:\n$$ \\mathcal{L}\\bigl([c_1\\cdots c_n]\\bigr) := { c_1, \\cdots, c_n } $$",
"re.findall(r'[abc]+', 'abcdcba')",
"Character classes can also contain <em style=\"color:blue\">ranges</em>. Syntactically, a range has the form\n$c_1\\texttt{-}c_2$, where $c_1$ and $c_2$ are Unicode characters.\nFor example, the regular expression [0-9] contains the range 0-9 and matches any decimal digit. To find all natural numbers embedded in a string we could use the regular expression [1-9][0-9]*|[0-9]. This regular expression matches either a single digit or a string that starts with a non-zero digit and is followed by any number of digits.",
"re.findall(r'[1-9][0-9]*|0', '11 abc 12 2345 007 42 0')",
"Note that the next example looks quite similar but gives a different result:",
"re.findall(r'[0-9]|[1-9][0-9]*', '11 abc 12 2345 007 42 0')",
"Here, the regular expression starts with the alternative [0-9], which matches any single digit. \nSo once a digit is found, the resulting substring is returned and the search starts again. Therefore, if this regular expression is used in findall, it will only return a list of single digits. \nThere are some predefined character classes:\n- \\d matches any digit.\n- \\D matches any non-digit character.\n- \\s matches any whitespace character.\n- \\S matches any non-whitespace character.\n- \\w matches any alphanumeric character.\n If we would use only <font style=\"font-variant: small-caps\">Ascii</font> characters this would \n be equivalent to the character class [0-9a-zA-Z_].\n- \\W matches any non-alphanumeric character.\n- \\b matches at a word boundary. The string that is matched is the empty string.\n- \\B matches at any place that is not a word boundary.\n Again, the string that is matched is the empty string.\nThese escape sequences can also be used inside of square brackets.",
"re.findall(r'[\\dabc]+', '11 abc12 1a2 2b3c4d5')",
"Character classes can be negated if the first character after the opening [ is the character ^.\nFor example, [^abc] matches any character that is different from a, b, or c.",
"re.findall(r'[^abc]+', 'axyzbuvwchij')\n\nre.findall(r'\\b\\w+\\b', 'This is some text where we want to count the words.')",
"The following regular expression uses the character class \\b to isolate numbers. Note that we had to use parentheses since concatenation of regular expressions binds stronger than the choice operator |.",
"re.findall(r'\\b(0|[1-9][0-9]*)\\b', '11 abc 12 2345 007 42 0')",
"Grouping\nIf $r$ is a regular expression, then $(r)$ is a regular expression describing the same language as \n$r$. There are two reasons for using parentheses:\n- Parentheses can be used to override the precedence of an operator.\n This concept is the same as in programming languages. For example, the regular expression ab+\n matches the character a followed by any positive number of occurrences of the character b because\n the precedence of a quantifiers is higher than the precedence of concatenation of regular expressions. \n However, (ab)+ matches the strings ab, abab, ababab, and so on.\n- Parentheses can be used for <em style=\"color:blue\">back-references</em> because inside \n a regular expression we can refer to the substring matched by a regular expression enclosed in a pair of\n parentheses using the syntax $\\backslash n$ where $n \\in {1,\\cdots,9}$.\n Here, $\\backslash n$ refers to the $n$th parenthesized <em style=\"color:blue\">group</em> in the regular \n expression, where a group is defined as any part of the regular expression enclosed in parentheses.\n Counting starts with the left parentheses, For example, the regular expression\n (a(b|c)*d)?ef(gh)+\n has three groups:\n 1. (a(b|c)*d) is the first group,\n 2. (b|c) is the second group, and\n 3. (gh) is the third group.\nFor example, if we want to recognize a string that starts with a number followed by some white space and then\n followed by the <b>same</b> number we can use the regular expression (\\d+)\\w+\\1.",
"re.findall(r'(\\d+)\\s+\\1', '12 12 23 23 17 18')",
"In general, given a digit $n$, the expression $\\backslash n$ refers to the string matched in the $n$-th group of the regular expression.\nThe Dot\nThe regular expression . matches any character except the newline. For example, c.*?t matches any string that starts with the character c and ends with the character t and does not contain any newline. If we are using the non-greedy version of the quantifier *, we can find all such words in the string below.",
"re.findall(r'c.*?t', 'ct cat caat could we look at that!')",
"The dot . does not have any special meaning when used inside a character range. Hence, the regular expression\n[.] matches only the character ..\nStart and End of a Line\nThe regular expression ^ matches at the start of a string. If we set the flag re.MULTILINE, which we \nwill usually do when working with this regular expression containing the expression ^, \nthen ^ also matches at the beginning of each line,\ni.e. it matches after every newline character.\nSimilarly, the regular expression $ matches at the end of a string. If we set the flag re.MULTILINE, then $ also matches at the end of each line,\ni.e. it matches before every newline character.",
"data = \\\n'''\nThis is a text containing five lines, two of which are empty.\nThis is the second non-empty line,\nand this is the third non-empty line.\n'''\nre.findall(r'^.*$', data, flags=re.MULTILINE)",
"Lookahead Assertions\nSometimes we need to look ahead in order to know whether we have found what we are looking for. Consider the case that you want to add up all numbers followed by a dollar symbol but you are not interested in any other numbers. In this case a \nlookahead assertion comes in handy. The syntax of a lookahead assertion is:\n$$ r_1 (\\texttt{?=}r_2) $$\nHere $r_1$ and $r_2$ are regular expressions and ?= is the <em style=\"color:blue\">lookahead operator</em>. $r_1$ is the regular expression you are searching for while $r_2$ is the regular expression describing the lookahead. Note that this lookahead is not matched. It is only checked whether $r_1$ is followed by $r_2$ but only the text matching $r_1$ is matched. Syntactically, the\nlookahead $r_2$ has to be preceded by the lookahead operator and both have to be surrounded by parentheses.\nIn the following example we are looking for all numbers that are followed by dollar symbols and we sum these numbers up.",
"text = 'Here is 1$, here are 21 €, and there are 42 $.'\nL = re.findall(r'([0-9]+)(?=\\s*\\$)', text)\nprint(f'L = {L}')\nsum(int(x) for x in L)",
"There are also <em style=\"color:blue\">negative lookahead assertion</em>. The syntax is:\n$$ r_1 (\\texttt{?!}r_2) $$\nHere $r_1$ and $r_2$ are regular expressions and ?! is the <em style=\"color:blue\">negative lookahead operator</em>. \nThe expression above checks for all occurrences of $r_1$ that are <b>not</b> followed by $r_2$. \nIn the following examples we sum up all numbers that are <u>not</u> followed by a dollar symbol.\nNote that the lookahead expression has to ensure that there are no additional digits. In general, negative lookahead is very tricky and I recommend against using it.",
"text = 'Here is 1$, here are 21 €, and there are 42 $.'\nL = re.findall(r'[0-9]+(?![0-9]*\\s*\\$)', text)\nprint(f'L = {L}')\nsum(int(x) for x in L)",
"Examples\nIn order to have some strings to play with, let us read the file alice.txt, which contains the book\nAlice's Adventures in Wonderland written by \nLewis Carroll.",
"with open('alice.txt', 'r') as f:\n text = f.read()\n\nprint(text[:1020])",
"How many non-empty lines does this story have?",
"len(re.findall(r'^.*[^\\s].*?$', text, flags=re.MULTILINE))",
"Next, let us check, whether this text is suitable for minors. In order to do so we search for all four\nletter words that start with either d, f or s and end with k or t.",
"set(re.findall(r'\\b[dfs]\\w{2}[kt]\\b', text, flags=re.IGNORECASE))",
"How many words are in this text and how many different words are used?",
"L = re.findall(r'\\b\\w+\\b', text.lower())\nS = set(L)\nprint(f'There are {len(L)} words in this book and {len(S)} different words.')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
charlesll/RamPy
|
examples/Mixing_spectra.ipynb
|
gpl-2.0
|
[
"Example of the mixing_sp() function\nAuthor: Charles Le Losq\nThis function allows one to mix two endmembers spectra, $ref1$ and $ref2$, to an observed one $obs$:\n$obs = ref1 * F1 + ref2 * (1-F1)$ .\nThe calculation is done with performing least absolute regression, which presents advantages compared to least squares to fit problems with outliers as well as non-Gaussian character (see wikipedia for instance).",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport rampy as rp",
"Problem setting\nWe will setup a simple problem in which we mix two Gaussian peaks in different ratios. The code below is going to create those peaks, and to plot them for reference.",
"x = np.arange(0,100,1.0) # a dummy x axis\nref1 = 50.0*np.exp(-1/2*((x-40)/20)**2) + np.random.randn(len(x)) # a gaussian with added noise\nref2 = 70.0*np.exp(-1/2*((x-60)/15)**2) + np.random.randn(len(x)) # a gaussian with added noise\nplt.figure()\nplt.plot(x,ref1,label=\"ref1\")\nplt.plot(x,ref2,label=\"ref2\")\nplt.xlabel(\"X\")\nplt.ylabel(\"Y\")\nplt.legend()",
"We now create 4 intermediate $obs$ signals, with $F1$ = 20%,40%,60% and 80% of ref1.",
"F1_true = np.array([0.80,0.60,0.40,0.20])\nobs = np.dot(ref1.reshape(-1,1),F1_true.reshape(1,-1)) + np.dot(ref2.reshape(-1,1),(1-F1_true.reshape(1,-1)))\nplt.figure()\nplt.plot(x,obs)\nplt.xlabel(\"X\")\nplt.ylabel(\"Y\")\nplt.title(\"Observed signals\")",
"Now we can use rp.mixing_sp() to retrieve $F1$.\nWe suppose here that we have some knowledge of $ref1$ and $ref2$.",
"F1_meas = rp.mixing_sp(obs,ref1,ref2)\nplt.figure()\nplt.plot(F1_true,F1_meas,'ro',label=\"Measurements\")\nplt.plot([0,1],[0,1],'k-',label=\"1:1 line\")\nplt.xlabel(\"True $F1$ value\")\nplt.ylabel(\"Determined $F1$ value\")\nplt.legend()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tdhopper/notes-on-dirichlet-processes
|
pages/2015-08-03-nonparametric-latent-dirichlet-allocation.ipynb
|
mit
|
[
"%matplotlib inline\n%precision 2",
"Nonparametric Latent Dirichlet Allocation\nLatent Dirichlet Allocation is a generative model for topic modeling. Given a collection of documents, an LDA inference algorithm attempts to determined (in an unsupervised manner) the topics discussed in the documents. It makes the assumption that each document is generated by a probability model, and, when doing inference, we try to find the parameters that best fit the model (as well as unseen/latent variables generated by the model). If you are unfamiliar with LDA, Edwin Chen has a friendly introduction you should read.\nBecause LDA is a generative model, we can simulate the construction of documents by forward-sampling from the model. The generative algorithm is as follows (following Heinrich):\n\nfor each topic $k\\in [1,K]$ do\nsample term distribution for topic $\\overrightarrow \\phi_k \\sim \\text{Dir}(\\overrightarrow \\beta)$\n\n\nfor each document $m\\in [1, M]$ do\nsample topic distribution for document $\\overrightarrow\\theta_m\\sim \\text{Dir}(\\overrightarrow\\alpha)$\nsample document length $N_m\\sim\\text{Pois}(\\xi)$\nfor all words $n\\in [1, N_m]$ in document $m$ do\nsample topic index $z_{m,n}\\sim\\text{Mult}(\\overrightarrow\\theta_m)$\nsample term for word $w_{m,n}\\sim\\text{Mult}(\\overrightarrow\\phi_{z_{m,n}})$\n\n\n\n\n\nYou can implement this with a little bit of code and start to simulate documents.\nIn LDA, we assume each word in the document is generated by a two-step process:\n\nSample a topic from the topic distribution for the document.\nSample a word from the term distribution from the topic. \n\nWhen we fit the LDA model to a given text corpus with an inference algorithm, our primary objective is to find the set of topic distributions $\\underline \\Theta$, term distributions $\\underline \\Phi$ that generated the documents, and latent topic indices $z_{m,n}$ for each word.\nTo run the generative model, we need to specify each of these parameters:",
"vocabulary = ['see', 'spot', 'run']\nnum_terms = len(vocabulary)\nnum_topics = 2 # K\nnum_documents = 5 # M\nmean_document_length = 5 # xi\nterm_dirichlet_parameter = 1 # beta\ntopic_dirichlet_parameter = 1 # alpha",
"The term distribution vector $\\underline\\Phi$ is a collection of samples from a Dirichlet distribution. This describes how our 3 terms are distributed across each of the two topics.",
"from scipy.stats import dirichlet, poisson\nfrom numpy import round\nfrom collections import defaultdict\nfrom random import choice as stl_choice\n\nterm_dirichlet_vector = num_terms * [term_dirichlet_parameter]\nterm_distributions = dirichlet(term_dirichlet_vector, 2).rvs(size=num_topics)\nprint(term_distributions)",
"Each document corresponds to a categorical distribution across this distribution of topics (in this case, a 2-dimensional categorical distribution). This categorical distribution is a distribution of distributions; we could look at it as a Dirichlet process!\nThe base base distribution of our Dirichlet process is a uniform distribution of topics (remember, topics are term distributions).",
"base_distribution = lambda: stl_choice(term_distributions)\n# A sample from base_distribution is a distribution over terms\n# Each of our two topics has equal probability\nfrom collections import Counter\nfor topic, count in Counter([tuple(base_distribution()) for _ in range(10000)]).most_common():\n print(\"count:\", count, \"topic:\", [round(prob, 2) for prob in topic])",
"Recall that a sample from a Dirichlet process is a distribution that approximates (but varies from) the base distribution. In this case, a sample from the Dirichlet process will be a distribution over topics that varies from the uniform distribution we provided as a base. If we use the stick-breaking metaphor, we are effectively breaking a stick one time and the size of each portion corresponds to the proportion of a topic in the document.\nTo construct a sample from the DP, we need to again define our DP class:",
"from scipy.stats import beta\nfrom numpy.random import choice\n\nclass DirichletProcessSample():\n def __init__(self, base_measure, alpha):\n self.base_measure = base_measure\n self.alpha = alpha\n \n self.cache = []\n self.weights = []\n self.total_stick_used = 0.\n\n def __call__(self):\n remaining = 1.0 - self.total_stick_used\n i = DirichletProcessSample.roll_die(self.weights + [remaining])\n if i is not None and i < len(self.weights) :\n return self.cache[i]\n else:\n stick_piece = beta(1, self.alpha).rvs() * remaining\n self.total_stick_used += stick_piece\n self.weights.append(stick_piece)\n new_value = self.base_measure()\n self.cache.append(new_value)\n return new_value\n \n @staticmethod \n def roll_die(weights):\n if weights:\n return choice(range(len(weights)), p=weights)\n else:\n return None",
"For each document, we will draw a topic distribution from the Dirichlet process:",
"topic_distribution = DirichletProcessSample(base_measure=base_distribution, \n alpha=topic_dirichlet_parameter)",
"A sample from this topic distribution is a distribution over terms. However, unlike our base distribution which returns each term distribution with equal probability, the topics will be unevenly weighted.",
"for topic, count in Counter([tuple(topic_distribution()) for _ in range(10000)]).most_common():\n print(\"count:\", count, \"topic:\", [round(prob, 2) for prob in topic])",
"To generate each word in the document, we draw a sample topic from the topic distribution, and then a term from the term distribution (topic).",
"topic_index = defaultdict(list)\ndocuments = defaultdict(list)\n\nfor doc in range(num_documents):\n topic_distribution_rvs = DirichletProcessSample(base_measure=base_distribution, \n alpha=topic_dirichlet_parameter)\n document_length = poisson(mean_document_length).rvs()\n for word in range(document_length):\n topic_distribution = topic_distribution_rvs()\n topic_index[doc].append(tuple(topic_distribution))\n documents[doc].append(choice(vocabulary, p=topic_distribution))",
"Here are the documents we generated:",
"for doc in documents.values():\n print(doc)",
"We can see how each topic (term-distribution) is distributed across the documents:",
"for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):\n print(\"Doc:\", i)\n for topic, count in doc:\n print(5*\" \", \"count:\", count, \"topic:\", [round(prob, 2) for prob in topic])",
"To recap: for each document we draw a sample from a Dirichlet Process. The base distribution for the Dirichlet process is a categorical distribution over term distributions; we can think of the base distribution as an $n$-sided die where $n$ is the number of topics and each side of the die is a distribution over terms for that topic. By sampling from the Dirichlet process, we are effectively reweighting the sides of the die (changing the distribution of the topics).\nFor each word in the document, we draw a sample (a term distribution) from the distribution (over term distributions) sampled from the Dirichlet process (with a distribution over term distributions as its base measure). Each term distribution uniquely identifies the topic for the word. We can sample from this term distribution to get the word.\nGiven this formulation, we might ask if we can roll an infinite sided die to draw from an unbounded number of topics (term distributions). We can do exactly this with a Hierarchical Dirichlet process. Instead of the base distribution of our Dirichlet process being a finite distribution over topics (term distributions) we will instead make it an infinite Distribution over topics (term distributions) by using yet another Dirichlet process! This base Dirichlet process will have as its base distribution a Dirichlet distribution over terms. \nWe will again draw a sample from a Dirichlet Process for each document. The base distribution for the Dirichlet process is itself a Dirichlet process whose base distribution is a Dirichlet distribution over terms. (Try saying that 5-times fast.) We can think of this as a countably infinite die each side of the die is a distribution over terms for that topic. The sample we draw is a topic (distribution over terms).\nFor each word in the document, we will draw a sample (a term distribution) from the distribution (over term distributions) sampled from the Dirichlet process (with a distribution over term distributions as its base measure). Each term distribution uniquely identifies the topic for the word. We can sample from this term distribution to get the word.\nThese last few paragraphs are confusing! Let's illustrate with code.",
"term_dirichlet_vector = num_terms * [term_dirichlet_parameter]\nbase_distribution = lambda: dirichlet(term_dirichlet_vector).rvs(size=1)[0]\n\nbase_dp_parameter = 10\nbase_dp = DirichletProcessSample(base_distribution, alpha=base_dp_parameter)",
"This sample from the base Dirichlet process is our infinite sided die. It is a probability distribution over a countable infinite number of topics. \nThe fact that our die is countably infinite is important. The sampler base_distribution draws topics (term-distributions) from an uncountable set. If we used this as the base distribution of the Dirichlet process below each document would be constructed from a completely unique set of topics. By feeding base_distribution into a Dirichlet Process (stochastic memoizer), we allow the topics to be shared across documents. \nIn other words, base_distribution will never return the same topic twice; however, every topic sampled from base_dp would be sampled an infinite number of times (if we sampled from base_dp forever). At the same time, base_dp will also return an infinite number of topics. In our formulation of the the LDA sampler above, our base distribution only ever returned a finite number of topics (num_topics); there is no num_topics parameter here.\nGiven this setup, we can generate documents from the hierarchical Dirichlet process with an algorithm that is essentially identical to that of the original latent Dirichlet allocation generative sampler:",
"nested_dp_parameter = 10\n\ntopic_index = defaultdict(list)\ndocuments = defaultdict(list)\n\nfor doc in range(num_documents):\n topic_distribution_rvs = DirichletProcessSample(base_measure=base_dp, \n alpha=nested_dp_parameter)\n document_length = poisson(mean_document_length).rvs()\n for word in range(document_length):\n topic_distribution = topic_distribution_rvs()\n topic_index[doc].append(tuple(topic_distribution))\n documents[doc].append(choice(vocabulary, p=topic_distribution))",
"Here are the documents we generated:",
"for doc in documents.values():\n print(doc)",
"And here are the latent topics used:",
"for i, doc in enumerate(Counter(term_dist).most_common() for term_dist in topic_index.values()):\n print(\"Doc:\", i)\n for topic, count in doc:\n print(5*\" \", \"count:\", count, \"topic:\", [round(prob, 2) for prob in topic])",
"Our documents were generated by an unspecified number of topics, and yet the topics were shared across the 5 documents. This is the power of the hierarchical Dirichlet process!\nThis non-parametric formulation of Latent Dirichlet Allocation was first published by Yee Whye Teh et al. \nUnfortunately, forward sampling is the easy part. Fitting the model on data requires complex MCMC or variational inference. There are a limited number of implementations of HDP-LDA available, and none of them are great."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
InsightSoftwareConsortium/SimpleITK-Notebooks
|
Python/01_Image_Basics.ipynb
|
apache-2.0
|
[
"SimpleITK Image Basics <a href=\"https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F01_Image_Basics.ipynb\"><img style=\"float: right;\" src=\"https://mybinder.org/badge_logo.svg\"></a>\nThis document will give a brief orientation to the SimpleITK Image class.\nFirst we import the SimpleITK Python module. By convention our module is imported into the shorter and more Pythonic \"sitk\" local name.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport SimpleITK as sitk",
"Image Construction\nThere are a variety of ways to create an image. All images' initial value is well defined as zero.",
"image = sitk.Image(256, 128, 64, sitk.sitkInt16)\nimage_2D = sitk.Image(64, 64, sitk.sitkFloat32)\nimage_2D = sitk.Image([32, 32], sitk.sitkUInt32)\nimage_RGB = sitk.Image([128, 128], sitk.sitkVectorUInt8, 3)",
"Pixel Types\nThe pixel type is represented as an enumerated type. The following is a table of the enumerated list.\n<table>\n <tr><td>sitkUInt8</td><td>Unsigned 8 bit integer</td></tr>\n <tr><td>sitkInt8</td><td>Signed 8 bit integer</td></tr>\n <tr><td>sitkUInt16</td><td>Unsigned 16 bit integer</td></tr>\n <tr><td>sitkInt16</td><td>Signed 16 bit integer</td></tr>\n <tr><td>sitkUInt32</td><td>Unsigned 32 bit integer</td></tr>\n <tr><td>sitkInt32</td><td>Signed 32 bit integer</td></tr>\n <tr><td>sitkUInt64</td><td>Unsigned 64 bit integer</td></tr>\n <tr><td>sitkInt64</td><td>Signed 64 bit integer</td></tr>\n <tr><td>sitkFloat32</td><td>32 bit float</td></tr>\n <tr><td>sitkFloat64</td><td>64 bit float</td></tr>\n <tr><td>sitkComplexFloat32</td><td>complex number of 32 bit float</td></tr>\n <tr><td>sitkComplexFloat64</td><td>complex number of 64 bit float</td></tr>\n <tr><td>sitkVectorUInt8</td><td>Multi-component of unsigned 8 bit integer</td></tr>\n <tr><td>sitkVectorInt8</td><td>Multi-component of signed 8 bit integer</td></tr>\n <tr><td>sitkVectorUInt16</td><td>Multi-component of unsigned 16 bit integer</td></tr>\n <tr><td>sitkVectorInt16</td><td>Multi-component of signed 16 bit integer</td></tr>\n <tr><td>sitkVectorUInt32</td><td>Multi-component of unsigned 32 bit integer</td></tr>\n <tr><td>sitkVectorInt32</td><td>Multi-component of signed 32 bit integer</td></tr>\n <tr><td>sitkVectorUInt64</td><td>Multi-component of unsigned 64 bit integer</td></tr>\n <tr><td>sitkVectorInt64</td><td>Multi-component of signed 64 bit integer</td></tr>\n <tr><td>sitkVectorFloat32</td><td>Multi-component of 32 bit float</td></tr>\n <tr><td>sitkVectorFloat64</td><td>Multi-component of 64 bit float</td></tr>\n <tr><td>sitkLabelUInt8</td><td>RLE label of unsigned 8 bit integers</td></tr>\n <tr><td>sitkLabelUInt16</td><td>RLE label of unsigned 16 bit integers</td></tr>\n <tr><td>sitkLabelUInt32</td><td>RLE label of unsigned 32 bit integers</td></tr>\n <tr><td>sitkLabelUInt64</td><td>RLE label of unsigned 64 bit integers</td></tr>\n</table>\n\nThere is also sitkUnknown, which is used for undefined or erroneous pixel ID's. It has a value of -1.\nThe 64-bit integer types are not available on all distributions. When not available the value is sitkUnknown.\nMore Information about the Image class be obtained in the Docstring\nSimpleITK classes and functions have the Docstrings derived from the C++ definitions and the Doxygen documentation.",
"help(image)",
"Accessing Attributes\nIf you are familiar with ITK, then these methods will follow your expectations:",
"print(image.GetSize())\nprint(image.GetOrigin())\nprint(image.GetSpacing())\nprint(image.GetDirection())\nprint(image.GetNumberOfComponentsPerPixel())",
"Note: The starting index of a SimpleITK Image is always 0. If the output of an ITK filter has non-zero starting index, then the index will be set to 0, and the origin adjusted accordingly.\nThe size of the image's dimensions have explicit accessors:",
"print(image.GetWidth())\nprint(image.GetHeight())\nprint(image.GetDepth())",
"Since the dimension and pixel type of a SimpleITK image is determined at run-time accessors are needed.",
"print(image.GetDimension())\nprint(image.GetPixelIDValue())\nprint(image.GetPixelIDTypeAsString())",
"What is the depth of a 2D image?",
"print(image_2D.GetSize())\nprint(image_2D.GetDepth())",
"What is the dimension and size of a Vector image?",
"print(image_RGB.GetDimension())\nprint(image_RGB.GetSize())\n\nprint(image_RGB.GetNumberOfComponentsPerPixel())",
"For certain file types such as DICOM, additional information about the image is contained in the meta-data dictionary.",
"for key in image.GetMetaDataKeys():\n print(f'\"{key}\":\"{image.GetMetaData(key)}\"')",
"Accessing Pixels\nThere are the member functions GetPixel and SetPixel which provides an ITK-like interface for pixel access.",
"help(image.GetPixel)\n\nprint(image.GetPixel(0, 0, 0))\nimage.SetPixel(0, 0, 0, 1)\nprint(image.GetPixel(0, 0, 0))\n\nprint(image[0, 0, 0])\nimage[0, 0, 0] = 10\nprint(image[0, 0, 0])",
"Conversion between numpy and SimpleITK",
"nda = sitk.GetArrayFromImage(image)\nprint(nda)\n\nhelp(sitk.GetArrayFromImage)\n\n# Get a view of the image data as a numpy array, useful for display\nnda = sitk.GetArrayViewFromImage(image)\n\nnda = sitk.GetArrayFromImage(image_RGB)\nimg = sitk.GetImageFromArray(nda)\nimg.GetSize()\n\nhelp(sitk.GetImageFromArray)\n\nimg = sitk.GetImageFromArray(nda, isVector=True)\nprint(img)",
"The order of index and dimensions need careful attention during conversion\nITK's Image class does not have a bracket operator. It has a GetPixel which takes an ITK Index object as an argument, which is ordered as (x,y,z). This is the convention that SimpleITK's Image class uses for the GetPixel method and slicing operator as well. In numpy, an array is indexed in the opposite order (z,y,x). Also note that the access to channels is different. In SimpleITK you do not access the channel directly, rather the pixel value representing all channels for the specific pixel is returned and you then access the channel for that pixel. In the numpy array you are accessing the channel directly.",
"import numpy as np\n\nmulti_channel_3Dimage = sitk.Image([2, 4, 8], sitk.sitkVectorFloat32, 5)\nx = multi_channel_3Dimage.GetWidth() - 1\ny = multi_channel_3Dimage.GetHeight() - 1\nz = multi_channel_3Dimage.GetDepth() - 1\nmulti_channel_3Dimage[x, y, z] = np.random.random(\n multi_channel_3Dimage.GetNumberOfComponentsPerPixel()\n)\n\nnda = sitk.GetArrayFromImage(multi_channel_3Dimage)\n\nprint(\"Image size: \" + str(multi_channel_3Dimage.GetSize()))\nprint(\"Numpy array size: \" + str(nda.shape))\n\n# Notice the index order and channel access are different:\nprint(\"First channel value in image: \" + str(multi_channel_3Dimage[x, y, z][0]))\nprint(\"First channel value in numpy array: \" + str(nda[z, y, x, 0]))",
"Are we still dealing with Image, because I haven't seen one yet...\nWhile SimpleITK does not do visualization, it does contain a built in Show method. This function writes the image out to disk and than launches a program for visualization. By default it is configured to use ImageJ, because it is readily supports all the image types which SimpleITK has and load very quickly. However, it's easily customizable by setting environment variables.",
"sitk.Show(image)\n\n?sitk.Show",
"By converting into a numpy array, matplotlib can be used for visualization for integration into the scientific python environment.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nz = 0\nslice = sitk.GetArrayViewFromImage(image)[z, :, :]\nplt.imshow(slice)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
davidthomas5412/PanglossNotebooks
|
MassLuminosityProject/SummerResearch/MassMapsFromMassLuminosity_20170626.ipynb
|
mit
|
[
"Mass Maps From Mass-Luminosity Inference Posterior\nIn this notebook we start to explore the potential of using a mass-luminosity relation posterior to refine mass maps. \nContent:\n- Math\n- Imports, Constants, Utils, Data\n- Probability Functions\n- Results\n- Discussion\nMath\nInfering mass from mass-luminosity relation posterior ...\n\\begin{align}\nP(M|L_{obs},z,\\sigma_L^{obs}) &= \\iint P(M|\\alpha, S, L_{obs}, z)P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})\\ d\\alpha dS\\\n&\\propto \\iiint P(L_{obs}| L,\\sigma_L^{obs})P(L|M,\\alpha,S,z)P(M|z)P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})\\ dLd\\alpha dS\\\n&\\approx \\frac{P(M|z)}{n_{\\alpha,S}}\\sum_{\\alpha,S \\sim P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})}\\left( \\frac{1}{n_L}\\sum_{L\\sim P(L|M,\\alpha,S,z)}P(L_{obs}|L,\\sigma_L^{obs})\\right)\\\n&= \\frac{P(M|z)}{n_{\\alpha,S}}\\sum_{\\alpha,S \\sim P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})}f(M;\\alpha,S,z)\\\n\\end{align}\nRefine for individual halo ...\n\\begin{align}\nP(M_k|L_{obs},z,\\sigma_L^{obs}) &= \\iint P(M_k|\\alpha, S, L_{obs\\ k}, z_k)P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})\\ d\\alpha dS\\\n&\\propto \\iiint P(L_{obs\\ k}| L_k,\\sigma_L^{obs})P(L_k|M_k,\\alpha,S,z_k)P(M_k|z_k)P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})\\ dLd\\alpha dS\\\n&\\approx \\frac{P(M_k|z_k)}{n_{\\alpha,S}}\\sum_{\\alpha,S \\sim P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})}\\left( \\frac{1}{n_L}\\sum_{L\\sim P(L_k|M_k,\\alpha,S,z_k)}P(L_{obs\\ k}|L_k,\\sigma_L^{obs})\\right)\\\n&=\\frac{P(M_k|z_k)}{n_{\\alpha,S}}\\sum_{\\alpha,S \\sim P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})}f(M_k;\\alpha,S,z_k)\\\n\\end{align}\nCan also factor it more conventionally for MCMC ...\n\\begin{align}\n\\underbrace{P(M_k|L_{obs},z,\\sigma_L^{obs})}{posterior} \n&\\propto \\underbrace{P(M_k|z_k)}{prior}\\underbrace{\\iiint P(L_{obs\\ k}| L_k,\\sigma_L^{obs})P(L_k|M_k,\\alpha,S,z_k)P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})\\ dLd\\alpha dS}_{likelihood}\\\n\\end{align}\nIn the code we have the following naming convention:\n- p1 for $P(M|z)$\n- p2 for $P(\\alpha, S|L_{obs},z,\\sigma_L^{obs})$\n- p3 for $P(L_k|M_k,\\alpha,S,z_k)$\n- p4 for $P(L_{obs\\ k}|L_k, \\sigma^{obs}_L)$\nWe use the terms eval and samp to help distinguish between evaluating a distribution and sampling from it. \nImports, Constants, Utils, Data",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc\nrc('text', usetex=True)\nfrom bigmali.grid import Grid\nfrom bigmali.prior import TinkerPrior\nfrom bigmali.hyperparameter import get\nimport numpy as np\nfrom scipy.stats import lognorm\nfrom numpy.random import normal\n\n#globals that functions rely on\ngrid = Grid()\nprior = TinkerPrior(grid)\na_seed = get()[:-1]\nS_seed = get()[-1]\nmass_points = prior.fetch(grid.snap(0)).mass[2:-2] # cut edges\ntmp = np.loadtxt('/Users/user/Code/PanglossNotebooks/MassLuminosityProject/SummerResearch/mass_mapping.txt')\nz_data = tmp[:,0]\nlobs_data = tmp[:,1]\nmass_data = tmp[:,2]\nra_data = tmp[:,3]\ndec_data = tmp[:,4]\nsigobs = 0.05\n\ndef fast_lognormal(mu, sigma, x):\n return (1/(x * sigma * np.sqrt(2 * np.pi))) * np.exp(- 0.5 * (np.log(x) - np.log(mu)) ** 2 / sigma ** 2)",
"Probability Functions",
"def p1_eval(zk):\n return prior.fetch(grid.snap(zk)).prob[2:-2]\n\ndef p2_samp(nas=100):\n \"\"\"\n a is fixed on hyperseed,\n S is normal distribution centered at hyperseed.\n \"\"\"\n return normal(S_seed, S_seed / 10, size=nas)\n\ndef p3_samp(mk, a, S, zk, nl=100):\n mu_lum = np.exp(a[0]) * ((mk / a[2]) ** a[1]) * ((1 + zk) ** (a[3]))\n return lognorm(S, scale=mu_lum).rvs(nl)\n \ndef p4_eval(lobsk, lk, sigobs):\n return fast_lognormal(lk, sigobs, lobsk)\n \ndef f(a, S, zk, lobsk, nl=100):\n ans = []\n for mk in mass_points:\n tot = 0\n for x in p3_samp(mk, a, S, zk, nl):\n tot += p4_eval(lobsk, x, sigobs)\n ans.append(tot / nl)\n return ans\n\ndef mass_dist(ind=1, nas=10, nl=100):\n lobsk = lobs_data[ind]\n zk = z_data[ind]\n tot = np.zeros(len(mass_points))\n for S in p2_samp(nas):\n tot += f(a_seed, S, zk, lobsk, nl)\n prop = p1_eval(zk) * tot / nas\n return prop / np.trapz(prop, x=mass_points)",
"Results",
"plt.subplot(3,3,1)\ndist = p1_eval(zk)\nplt.plot(mass_points, dist)\nplt.gca().set_xscale('log')\nplt.gca().set_yscale('log')\nplt.ylim([10**-25, 10])\nplt.xlim([mass_points.min(), mass_points.max()])\nplt.title('Prior')\nplt.xlabel(r'Mass $(M_\\odot)$')\nplt.ylabel('Density')\n\nfor ind in range(2,9):\n plt.subplot(3,3,ind)\n dist = mass_dist(ind)\n plt.plot(mass_points, dist, alpha=0.6, linewidth=2)\n plt.xlim([mass_points.min(), mass_points.max()])\n plt.gca().set_xscale('log')\n plt.gca().set_yscale('log')\n plt.ylim([10**-25, 10])\n plt.gca().axvline(mass_data[ind], color='red', linewidth=2, alpha=0.6)\n plt.title('Mass Distribution')\n plt.xlabel(r'Mass $(M_\\odot)$')\n plt.ylabel('Density')\n \n \n# most massive\nind = np.argmax(mass_data)\nplt.subplot(3,3,9)\ndist = mass_dist(ind)\nplt.plot(mass_points, dist, alpha=0.6, linewidth=2)\nplt.gca().set_xscale('log')\nplt.gca().set_yscale('log')\nplt.xlim([mass_points.min(), mass_points.max()])\nplt.ylim([10**-25, 10])\nplt.gca().axvline(mass_data[ind], color='red', linewidth=2, alpha=0.6)\nplt.title('Mass Distribution')\nplt.xlabel(r'Mass $(M_\\odot)$')\nplt.ylabel('Density')\n\n# plt.tight_layout()\nplt.gcf().set_size_inches((10,6))",
"Turning into Probabilistic Catalogue",
"index = range(2,9) + [np.argmax(mass_data)]\n\nplt.title('Simple Sketch of Field of View')\nplt.scatter(ra_data[index], dec_data[index] , s=np.log(mass_data[index]), alpha=0.6)\nplt.xlabel('ra')\nplt.ylabel('dec');",
"Need to build graphic:\n- Make grid that will correspond to pixels\n- map ra-dec window to grid\n- snap objects onto grid and accumulate mass in each bin of the grid\n- plot the grayscale image\nDiscussion\n\nWhile this is a simple toy model, the consistency between then predicted mass distribution and true mass is encouraging.\nThe noise in the mass distribution plots is interesting. The noise increases for masses that are further away than the truth. A similar effect may also exist in bigmali, could it lead to a failure mode?\nIn order to build probabilistic mass maps we will need to be able to sample from the mass distributions. One way to do this would be fitting a normal distribution and drawing from that distribution. This will also mitigate the influence of the noise for masses far from the true mass."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
csyhuang/hn2016_falwa
|
examples/.ipynb_checkpoints/example_barotropic-checkpoint.ipynb
|
mit
|
[
"Instructions\nThis sample code demonstrate how the wrapper function \"barotropic_eqlat_lwa\" in thepython package \"hn2016_falwa\" computes the finite-amplitude local wave activity \n(LWA) from absolute vorticity fields in a barotropic model with spherical geometry according to the definition in Huang & Nakamura (2016,JAS) equation (13). This \nsample code reproduces the LWA plots (Fig.4 in HN15) computed based on an absolute vorticity map.\nContact\nPlease make inquiries and report issues via Github: https://github.com/csyhuang/hn2016_falwa/issues",
"from hn2016_falwa.wrapper import barotropic_eqlat_lwa # Module for plotting local wave activity (LWA) plots and \n # the corresponding equivalent-latitude profile\nfrom math import pi\nfrom netCDF4 import Dataset\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# --- Parameters --- #\nEarth_radius = 6.378e+6 # Earth's radius\n\n# --- Load the absolute vorticity field [256x512] --- #\nreadFile = Dataset('barotropic_vorticity.nc', mode='r')\n\n# --- Read in longitude and latitude arrays --- #\nxlon = readFile.variables['longitude'][:]\nylat = readFile.variables['latitude'][:]\nclat = np.abs(np.cos(ylat*pi/180.)) # cosine latitude\nnlon = xlon.size\nnlat = ylat.size\n\n# --- Parameters needed to use the module HN2015_LWA --- #\ndphi = (ylat[2]-ylat[1])*pi/180. # Equal spacing between latitude grid points, in radian\narea = 2.*pi*Earth_radius**2 *(np.cos(ylat[:,np.newaxis]*pi/180.)*dphi)/float(nlon) * np.ones((nlat,nlon))\narea = np.abs(area) # To make sure area element is always positive (given floating point errors). \n\n# --- Read in the absolute vorticity field from the netCDF file --- #\nabsVorticity = readFile.variables['absolute_vorticity'][:]\nreadFile.close()\n\n",
"Obtain equivalent-latitude relationship and also the LWA from an absolute vorticity snapshot",
"# --- Obtain equivalent-latitude relationship and also the LWA from the absolute vorticity snapshot ---\nQ_ref,LWA = barotropic_eqlat_lwa(ylat,absVorticity,area,Earth_radius*clat*dphi,nlat) # Full domain included",
"Plotting the results",
"# --- Color axis for plotting LWA --- #\nLWA_caxis = np.linspace(0,LWA.max(),31,endpoint=True)\n\n# --- Plot the abs. vorticity field, LWA and equivalent-latitude relationship and LWA --- #\nfig = plt.subplots(figsize=(14,4))\n\nplt.subplot(1,3,1) # Absolute vorticity map\nc=plt.contourf(xlon,ylat,absVorticity,31)\ncb = plt.colorbar(c) \ncb.formatter.set_powerlimits((0, 0))\ncb.ax.yaxis.set_offset_position('right') \ncb.update_ticks()\nplt.title('Absolute vorticity [1/s]')\nplt.xlabel('Longitude (degree)')\nplt.ylabel('Latitude (degree)')\n\nplt.subplot(1,3,2) # LWA (full domain)\nplt.contourf(xlon,ylat,LWA,LWA_caxis)\nplt.colorbar()\nplt.title('Local Wave Activity [m/s]')\nplt.xlabel('Longitude (degree)')\nplt.ylabel('Latitude (degree)')\n\nplt.subplot(1,3,3) # Equivalent-latitude relationship Q(y)\nplt.plot(Q_ref,ylat,'b',label='Equivalent-latitude relationship')\nplt.plot(np.mean(absVorticity,axis=1),ylat,'g',label='zonal mean abs. vorticity')\nplt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))\nplt.ylim(-90,90)\nplt.legend(loc=4,fontsize=10)\nplt.title('Equivalent-latitude profile')\nplt.ylabel('Latitude (degree)')\nplt.xlabel('Q(y) [1/s] | y = latitude')\nplt.tight_layout()\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dcavar/python-tutorial-for-ipython
|
notebooks/spaCy Tutorial.ipynb
|
apache-2.0
|
[
"spaCy Tutorial\n(C) 2019-2020 by Damir Cavar\nVersion: 1.4, February 2020\nDownload: This and various other Jupyter notebooks are available from my GitHub repo.\nThis is a tutorial related to the L665 course on Machine Learning for NLP focusing on Deep Learning, Spring 2018 at Indiana University. The following tutorial assumes that you are using a newer distribution of Python 3 and spaCy 2.2 or newer.\nIntroduction to spaCy\nFollow the instructions on the spaCy homepage about installation of the module and language models. Your local spaCy module is correctly installed, if the following command is successfull:",
"import spacy",
"We can load the English NLP pipeline in the following way:",
"nlp = spacy.load(\"en_core_web_sm\")",
"Tokenization",
"doc = nlp(u'Human ambition is the key to staying ahead of automation.')\nfor token in doc:\n print(token.text)",
"Part-of-Speech Tagging\nWe can tokenize and part of speech tag the individual tokens using the following code:",
"doc = nlp(u'John bought a car and Mary a motorcycle.')\n\nfor token in doc:\n print(\"\\t\".join( (token.text, str(token.idx), token.lemma_, token.pos_, token.tag_, token.dep_,\n token.shape_, str(token.is_alpha), str(token.is_stop) )))",
"The above output contains for every token in a line the token itself, the lemma, the Part-of-Speech tag, the dependency label, the orthographic shape (upper and lower case characters as X or x respectively), the boolean for the token being an alphanumeric string, and the boolean for it being a stopword.\nDependency Parse\nUsing the same approach as above for PoS-tags, we can print the Dependency Parse relations:",
"for token in doc:\n print(token.text, token.dep_, token.head.text, token.head.pos_,\n [child for child in token.children])",
"As specified in the code, each line represents one token. The token is printed in the first column, followed by the dependency relation to it from the token in the third column, followed by its main category type.\nNamed Entity Recognition\nSimilarly to PoS-tags and Dependency Parse Relations, we can print out Named Entity labels:",
"for ent in doc.ents:\n print(ent.text, ent.start_char, ent.end_char, ent.label_)",
"We can extend the input with some more entities:",
"doc = nlp(u'Ali Hassan Kuban said that Apple Inc. will buy Google in May 2018.')",
"The corresponding NE-labels are:",
"for ent in doc.ents:\n print(ent.text, ent.start_char, ent.end_char, ent.label_)",
"Pattern Matching in spaCy",
"from spacy.matcher import Matcher\n\nmatcher = Matcher(nlp.vocab)\npattern = [{'LOWER': 'hello'}, {'IS_PUNCT': True}, {'LOWER': 'world'}]\nmatcher.add('HelloWorld', None, pattern)\n\ndoc = nlp(u'Hello, world! Hello... world!')\nmatches = matcher(doc)\nfor match_id, start, end in matches:\n string_id = nlp.vocab.strings[match_id] # Get string representation\n span = doc[start:end] # The matched span\n print(match_id, string_id, start, end, span.text)\nprint(\"-\" * 50)\ndoc = nlp(u'Hello, world! Hello world!')\nmatches = matcher(doc)\nfor match_id, start, end in matches:\n string_id = nlp.vocab.strings[match_id] # Get string representation\n span = doc[start:end] # The matched span\n print(match_id, string_id, start, end, span.text)",
"spaCy is Missing\nFrom the linguistic standpoint, when looking at the analytical output of the NLP pipeline in spaCy, there are some important components missing:\n\nClause boundary detection\nConstituent structure trees (scope relations over constituents and phrases)\nAnaphora resolution\nCoreference analysis\nTemporal reference resolution\n...\n\nClause Boundary Detection\nComplex sentences consist of clauses. For precise processing of semantic properties of natural language utterances we need to segment the sentences into clauses. The following sentence:\nThe man said that the woman claimed that the child broke the toy.\ncan be broken into the following clauses:\n\nMatrix clause: [ the man said ]\nEmbedded clause: [ that the woman claimed ]\nEmbedded clause: [ that the child broke the toy ]\n\nThese clauses do not form an ordered list or flat sequence, they in fact are hierarchically organized. The matrix clause verb selects as its complement an embedded finite clause with the complementizer that. The embedded predicate claimed selects the same kind of clausal complement. We express this hierarchical relation in form of embedding in tree representations:\n[ the man said [ that the woman claimed [ that the child broke the toy ] ] ]\nOr using a graphical representation in form of a tree:\n<img src=\"Embedded_Clauses_1.png\" width=\"60%\" height=\"60%\">\nThe hierarchical relation of sub-clauses is relevant when it comes to semantics. The clause John sold his car can be interpreted as an assertion that describes an event with John as the agent, and the car as the object of a selling event in the past. If the clause is embedded under a matrix clause that contains a sentential negation, the proposition is assumed to NOT be true: [ Mary did not say that [ John sold his car ] ] \nIt is possible with additional effort to translate the Dependency Trees into clauses and reconstruct the clause hierarchy into a relevant form or data structure. SpaCy does not offer a direct data output of such relations.\nOne problem still remains, and this is clausal discontinuities. None of the common NLP pipelines, and spaCy in particular, can deal with any kind of discontinuities in any reasonable way. Discontinuities can be observed when sytanctic structures are split over the clause or sentence, or elements ocur in a cannoically different position, as in the following example:\nWhich car did John claim that Mary took?\nThe embedded clause consists of the sequence [ Mary took which car ]. One part of the sequence appears dislocated and precedes the matrix clause in the above example. Simple Dependency Parsers cannot generate any reasonable output that makes it easy to identify and reconstruct the relations of clausal elements in these structures.\nConstitutent Structure Trees\nDependency Parse trees are a simplification of relations of elements in the clause. They ignore structural and hierarchical relations in a sentence or clause, as shown in the examples above. 
Instead the Dependency Parse trees show simple functional relations in the sense of sentential functions like subject or object of a verb.\nSpaCy does not output any kind of constituent structure and more detailed relational properties of phrases and more complex structural units in a sentence or clause.\nSince many semantic properties are defined or determined in terms of structural relations and hierarchies, that is scope relations, this is more complicated to reconstruct or map from the Dependency Parse trees.\nAnaphora Resolution\nSpaCy does not offer any anaphora resolution annotation. That is, the referent of a pronoun, as in the following examples, is not annotated in the resulting linguistic data structure:\n\nJohn saw him.\nJohn said that he saw the house.\nTim sold his house. He moved to Paris.\nJohn saw himself in the mirror.\n\nKnowing the restrictions of pronominal binding (in English for example), we can partially generate the potential or most likely anaphora - antecedent relations. This - however - is not part of the spaCy output.\nOne problem, however, is that spaCy does not provide parse trees of the constituent structure and clausal hierarchies, which is crucial for the correct analysis of pronominal anaphoric relations.\nCoreference Analysis\nSome NLP pipelines are capable of providing coreference analyses for constituents in clauses. For example, the two clauses should be analyzed as talking about the same subject:\nThe CEO of Apple, Tim Cook, decided to apply for a job at Google. Cook said that he is not satisfied with the quality of the iPhones anymore. He prefers the Pixel 2.\nThe constituents [ the CEO of Apple, Tim Cook ] in the first sentence, [ Cook ] in the second sentence, and [ he ] in the third, should all be tagged as referencing the same entity, that is the one mentioned in the first sentence. SpaCy does not provide such a level of analysis or annotation.\nTemporal Reference\nFor various analysis levels it is essential to identify the time references in a sentence or utterance, for example the time the utterance is made or the time the described event happened.\nCertain tenses are expressed as periphrastic constructions, including auxiliaries and main verbs. SpaCy does not provide the relevant information to identify these constructions and tenses.\nUsing the Dependency Parse Visualizer\nMore on Dependency Parse trees",
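"One example of working directly with the Dependency Parse data, and of the extra effort mentioned above for approximating clause boundaries, is the following sketch. It is only a rough illustration, not a spaCy feature: it assumes the small English model (en_core_web_sm) is installed and simply collects the subtree of every token labeled as a clausal complement (ccomp).",
"import spacy\n\nnlp = spacy.load('en_core_web_sm')\ndoc = nlp(u'The man said that the woman claimed that the child broke the toy.')\n\n# Approximate embedded clauses by taking the contiguous subtree of each\n# clausal complement (ccomp) in the dependency parse\nembedded_indices = set()\nfor token in doc:\n    if token.dep_ == 'ccomp':\n        clause = doc[token.left_edge.i : token.right_edge.i + 1]\n        print('Embedded clause:', clause.text)\n        embedded_indices.update(range(token.left_edge.i, token.right_edge.i + 1))\n\n# Tokens outside all ccomp subtrees give a rough approximation of the matrix clause\nmatrix = [t.text for t in doc if t.i not in embedded_indices and not t.is_punct]\nprint('Matrix clause (approx.):', ' '.join(matrix))",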
"import spacy",
"We can load the visualizer:",
"from spacy import displacy",
"Loading the English NLP pipeline:",
"nlp = spacy.load(\"en_core_web_sm\")",
"Process an input sentence:",
"#doc = nlp(u'John said yesterday that Mary bought a new car for her older son.')\n#doc = nlp(u\"Dick ran and Jane danced yesterday.\")\n#doc = nlp(u\"Tim Cook is the CEO of Apple.\")\n#doc = nlp(u\"Born in a small town, she took the midnight train going anywhere.\")\ndoc = nlp(u\"John met Peter and Susan called Paul.\")",
"If you want to generate a visualization running code outside of the Jupyter notebook, you could use the following code. You should not use this code, if you are running the notebook. Instead, use the function display.render two cells below.\nVisualizing the Dependency Parse tree can be achieved by running the following server code and opening up a new tab on the URL http://localhost:5000/. You can shut down the server by clicking on the stop button at the top in the notebook toolbar.",
"displacy.serve(doc, style='dep')",
"Instead of serving the graph, one can render it directly into a Jupyter Notebook:",
"displacy.render(doc, style='dep', jupyter=True, options={\"distance\": 120})",
"In addition to the visualization of the Dependency Trees, we can visualize named entity annotations:",
"text = \"\"\"Apple decided to fire Tim Cook and hire somebody called John Doe as the new CEO.\nThey also discussed a merger with Google. On the long run it seems more likely that Apple\nwill merge with Amazon and Microsoft with Google. The companies will all relocate to\nAustin in Texas before the end of the century. John Doe bought a Prosche.\"\"\"\n\ndoc = nlp(text)\ndisplacy.render(doc, style='ent', jupyter=True)",
"Vectors\nTo use vectors in spaCy, you might consider installing the larger models for the particular language. The common module and language packages only come with the small models. The larger models can be installed as described on the spaCy vectors page:\npython -m spacy download en_core_web_lg\n\nThe large model en_core_web_lg contains more than 1 million unique vectors.\nLet us restart all necessary modules again, in particular spaCy:",
"import spacy",
"We can now import the English NLP pipeline to process some word list. Since the small models in spacy only include context-sensitive tensors, we should use the dowloaded large model for better word vectors. We load the large model as follows:",
"nlp = spacy.load('en_core_web_lg')\n#nlp = spacy.load(\"en_core_web_sm\")",
"We can process a list of words by the pipeline using the nlp object:",
"tokens = nlp(u'dog poodle beagle cat banana apple')",
"As described in the spaCy chapter Word Vectors and Semantic Similarity, the resulting elements of Doc, Span, and Token provide a method similarity(), which returns the similarities between words:",
"for token1 in tokens:\n for token2 in tokens:\n print(token1, token2, token1.similarity(token2))",
"We can access the vectors of these objects using the vector attribute:",
"tokens = nlp(u'dog cat banana sasquatch')\n\nfor token in tokens:\n print(token.text, token.has_vector, token.vector_norm, token.is_oov)",
"The attribute has_vector returns a boolean depending on whether the token has a vector in the model or not. The token sasquatch has no vector. It is also out-of-vocabulary (OOV), as the fourth column shows. Thus, it also has a norm of $0$, that is, it has a length of $0$.\nHere the token vector has a length of $300$. We can print out the vector for a token:",
"n = 0\nprint(tokens[n].text, len(tokens[n].vector), tokens[n].vector)",
"Here just another example of similarities for some famous words:",
"tokens = nlp(u'queen king chef')\n\nfor token1 in tokens:\n for token2 in tokens:\n print(token1, token2, token1.similarity(token2))",
"Similarities in Context\nIn spaCy parsing, tagging and NER models make use of vector representations of contexts that represent the meaning of words. A text meaning representation is represented as an array of floats, i.e. a tensor, computed during the NLP pipeline processing. With this approach words that have not been seen before can be typed or classified. SpaCy uses a 4-layer convolutional network for the computation of these tensors. In this approach these tensors model a context of four words left and right of any given word.\nLet us use the example from the spaCy documentation and check the word labrador:",
"tokens = nlp(u'labrador')\n\nfor token in tokens:\n print(token.text, token.has_vector, token.vector_norm, token.is_oov)",
"We can now test for the context:",
"doc1 = nlp(u\"The labrador barked.\")\ndoc2 = nlp(u\"The labrador swam.\")\ndoc3 = nlp(u\"the labrador people live in canada.\")\n\ndog = nlp(u\"dog\")\n\ncount = 0\nfor doc in [doc1, doc2, doc3]:\n lab = doc[1]\n count += 1\n print(str(count) + \":\", lab.similarity(dog))",
"Using this strategy we can compute document or text similarities as well:",
"docs = ( nlp(u\"Paris is the largest city in France.\"),\n nlp(u\"Vilnius is the capital of Lithuania.\"),\n nlp(u\"An emu is a large bird.\") )\n\nfor x in range(len(docs)):\n for y in range(len(docs)):\n print(x, y, docs[x].similarity(docs[y]))",
"We can vary the word order in sentences and compare them:",
"docs = [nlp(u\"dog bites man\"), nlp(u\"man bites dog\"),\n nlp(u\"man dog bites\"), nlp(u\"cat eats mouse\")]\n\nfor doc in docs:\n for other_doc in docs:\n print('\"' + doc.text + '\"', '\"' + other_doc.text + '\"', doc.similarity(other_doc))",
"Custom Models\nOptimization",
"nlp = spacy.load('en_core_web_lg')",
"Training Models\nThis example code for training an NER model is based on the training example in spaCy.\nWe will import some components from the future module. Read its documentation here.",
"from __future__ import unicode_literals, print_function",
"We import the random module for pseudo-random number generation:",
"import random",
"We import the Path object from the pathlib module:",
"from pathlib import Path",
"We import spaCy:",
"import spacy",
"We also import the minibatch and compounding module from spaCy.utils:",
"from spacy.util import minibatch, compounding",
"The training data is formated as JSON:",
"TRAIN_DATA = [\n (\"Who is Shaka Khan?\", {\"entities\": [(7, 17, \"PERSON\")]}),\n (\"I like London and Berlin.\", {\"entities\": [(7, 13, \"LOC\"), (18, 24, \"LOC\")]}),\n]",
"We created a blank 'xx' model:",
"nlp = spacy.blank(\"xx\") # create blank Language class\nner = nlp.create_pipe(\"ner\")\nnlp.add_pipe(ner, last=True)",
"We add the named entity labels to the NER model:",
"for _, annotations in TRAIN_DATA:\n for ent in annotations.get(\"entities\"):\n ner.add_label(ent[2])",
"Assuming that the model is empty and untrained, we reset and initialize the weights randomly using:",
"nlp.begin_training()",
"We would not do this, if the model is supposed to be tuned or retrained on new data.\nWe get all pipe-names in the model that are not our NER related pipes to disable them during training:",
"pipe_exceptions = [\"ner\", \"trf_wordpiecer\", \"trf_tok2vec\"]\nother_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]",
"We can now disable the other pipes and train just the NER uing 100 iterations:",
"with nlp.disable_pipes(*other_pipes): # only train NER\n for itn in range(100):\n random.shuffle(TRAIN_DATA)\n losses = {}\n # batch up the examples using spaCy's minibatch\n batches = minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001))\n for batch in batches:\n texts, annotations = zip(*batch)\n nlp.update(\n texts, # batch of texts\n annotations, # batch of annotations\n drop=0.5, # dropout - make it harder to memorise data\n losses=losses,\n )\n print(\"Losses\", losses)",
"We can test the trained model:",
"for text, _ in TRAIN_DATA:\n doc = nlp(text)\n print(\"Entities\", [(ent.text, ent.label_) for ent in doc.ents])\n print(\"Tokens\", [(t.text, t.ent_type_, t.ent_iob) for t in doc])",
"We can define the output directory where the model will be saved as the models folder in the directory where the notebook is running:",
"output_dir = Path(\"./models/\")",
"Save model to output dir:",
"if not output_dir.exists():\n output_dir.mkdir()\nnlp.to_disk(output_dir)",
"To make sure everything worked out well, we can test the saved model:",
"nlp2 = spacy.load(output_dir)\nfor text, _ in TRAIN_DATA:\n doc = nlp2(text)\n print(\"Entities\", [(ent.text, ent.label_) for ent in doc.ents])\n print(\"Tokens\", [(t.text, t.ent_type_, t.ent_iob) for t in doc])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jsub10/MLCourse
|
Notebooks/Non-Linear-Logistic-Regression.ipynb
|
mit
|
[
"Non-Linear Logistic Regression\nIn the last session we looked at the basic concepts of logistic regression.\n\nLogistic classification is about predicting one or another category.\nModels give us numerical values.\nThe way to convert numerical values to categorical values is by using the sigmoid.\nA new penalty function that has small values if you guess very close to correctly and very large values otherwise (roughly).",
"# Import our usual libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Set up the path\nimport os\n# OS-independent way to navigate the file system\n# Data directory is one directory up in relation to directory of this notebook\ndata_dir_root = os.path.normpath(os.getcwd() + os.sep + os.pardir + os.sep + \"Data\")\n# Where the file is\nfile_url = data_dir_root + os.sep + \"forged-bank-notes.csv\"\n#file_url\n\n# Load the data\n# header=0 drops the header row in the csv file\ndata = pd.read_csv(file_url, header=0, names=['V1', 'V2', 'V3', 'V4', 'Genuine'])\n\n# Set up the inputs and \n# display the few rows of the input\ninputs_v1_v2 = data[['V1', 'V2']]\ninputs_v3_v4 = data[['V3', 'V4']]\ninputs_v1_v3 = data[['V1', 'V3']]\ninputs_v1_v4 = data[['V1', 'V4']]\ninputs_v2_v3 = data[['V2', 'V3']]\ninputs_v2_v4 = data[['V2', 'V4']]",
"Let's start where we left off last time.\nWe were looking at a bank notes dataset. The dataset has features V1, V2, V3, and V4.\nWe were looking just at V1 and V2 -- to keep things simple enough to visualize things easily.\nWe'll continue to look at V1 and V2...",
"# What the first few rows of the dataset looks like -- \n# for just the V1 and V2 features.\ninputs_v1_v2.head()\n\n# And here's what the first few lines of the outputs/targets\n\n# Set up the output and \n# display the first few rows of the output/target\noutput = data[['Genuine']]\noutput.head()\n\n# Set up the training data\nX_train_v1_v2 = {'data': inputs_v1_v2.values, 'feature1': 'V1', 'feature2': 'V2'}\nX_train_v3_v4 = {'data': inputs_v3_v4.values, 'feature1': 'V3', 'feature2': 'V4'} \nX_train_v1_v3 = {'data': inputs_v1_v3.values, 'feature1': 'V1', 'feature2': 'V3'}\nX_train_v1_v4 = {'data': inputs_v1_v4.values, 'feature1': 'V1', 'feature2': 'V4'}\nX_train_v2_v3 = {'data': inputs_v2_v3.values, 'feature1': 'V2', 'feature2': 'V3'}\nX_train_v2_v4 = {'data': inputs_v2_v4.values, 'feature1': 'V2', 'feature2': 'V4'}\nX_train_v1_v2['data'].shape\n\n# Set up the target data \ny = output.values\n\n# Change the shape of y to suit scikit learn's array shape requirements\ny_train = np.array(list(y.squeeze()))\ny_train.shape\n\n# Set up the positive and negative categories\n# Scatter of V1 versus V2\npositive = data[data['Genuine'].isin([1])]\nnegative = data[data['Genuine'].isin([0])]\n\n# Set up the logistic regression model from SciKit Learn\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import metrics\nfrom sklearn.model_selection import cross_val_score\n# Solvers that seem to work well are 'liblinear' and 'newton-cg\"\nlr = LogisticRegression(C=100.0, random_state=0, solver='liblinear', verbose=2)\n\n# Train the model and find the optimal parameter values\nlr.fit(X_train_v1_v2['data'], y_train)",
"At this point, (just imagine that) we've:\n\nvisualized the data\ndefined the task we'd like to accomplish\ndefined the model\ndefined the penalty for the being wrong\nused an iterative algorithm (like gradient descent) to find the optimal values of the parameters\n\n(Can you picture all of this from the dataset point of view?)\n<img src=\"../Images/nonlinear-logistic-regression-1.png\" alt=\"Table View 1\" style=\"width:600px\"/>\n<img src=\"../Images/nonlinear-logistic-regression-2.png\" alt=\"Table View 1\" style=\"width:600px\"/>\n<img src=\"../Images/nonlinear-logistic-regression-3.png\" alt=\"Table View 1\" style=\"width:600px\"/>",
"# These are the optimal values of w0, w1 and w2\nw0 = lr.intercept_[0]\nw1 = lr.coef_.squeeze()[0]\nw2 = lr.coef_.squeeze()[1]\nprint(\"w0: {}\\nw1: {}\\nw2: {}\".format(w0, w1, w2))\n\n# Function for plotting class boundaries\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import metrics\nfrom sklearn.model_selection import cross_val_score\n\ndef poly_boundary_plot(XTrain, YTrain, degree, show_contours=0):\n \n # XTrain has to have exactly 2 features for this visualization to work\n \n # Transform the training inputs\n poly = PolynomialFeatures(degree)\n X_train_poly = poly.fit_transform(XTrain['data'])\n # NOTE: the poly function adds a bias value of 1 to each row of input data -- \n # default setting is include_bias=True\n \n # Set up the logistic regression model from SciKit Learn\n # Solvers that seem to work well are 'liblinear' and 'newton-cg\"\n lr = LogisticRegression(C=100.0, random_state=0, solver='liblinear', verbose=2)\n \n # Fit the polynomial data to the simple linear logistic regression model we have\n lr.fit(X_train_poly, YTrain);\n \n # Create a grid of feature values\n \n # Find the min and max values of the two features \n # Make grid values\n GRID_INCREMENT = 0.02\n x1_min = np.array([XTrain['data'][i][0] for i in range(len(XTrain['data']))]).min()\n x1_max = np.array([XTrain['data'][i][0] for i in range(len(XTrain['data']))]).max()\n \n x2_min = np.array([XTrain['data'][i][1] for i in range(len(XTrain['data']))]).min()\n x2_max = np.array([XTrain['data'][i][1] for i in range(len(XTrain['data']))]).max()\n \n xx1, xx2 = np.mgrid[x1_min:x1_max:GRID_INCREMENT, x2_min:x2_max:GRID_INCREMENT]\n #xx1.shape, xx2.shape\n \n # Create the grid\n grid = np.c_[xx1.ravel(), xx2.ravel()]\n grid.shape\n \n # The predictions of the model\n preds_poly = lr.predict(poly.fit_transform(grid))\n preds_poly_probs = lr.predict_proba(poly.fit_transform(grid))\n preds_poly_probs_0 = np.array([preds_poly_probs[i][0] for i in range(len(preds_poly_probs))])\n preds_poly_probs_1 = np.array([preds_poly_probs[i][1] for i in range(len(preds_poly_probs))])\n \n #return preds_poly, preds_poly_probs, preds_poly_probs_0, preds_poly_probs_1\n \n # Where did the model misclassify banknotes?\n # Keep in mind we are only using V1 and V2\n ## CAUTION: USING EXISTING variable values here\n model_preds = lr.predict(X_train_poly)\n errors_poly = data[data['Genuine'] != model_preds]\n #errors_poly\n \n # Get some classification performance metrics\n accuracy = metrics.accuracy_score(YTrain, model_preds)\n report = metrics.classification_report(YTrain, model_preds)\n confusion_matrix = metrics.confusion_matrix(YTrain, model_preds, labels=None, sample_weight=None)\n \n # Plot the boundary\n fig, ax = plt.subplots(figsize=(15,10))\n\n ax.scatter(positive[XTrain['feature1']], positive[XTrain['feature2']], s=30, c='b', marker='.', label='Genuine')\n ax.scatter(negative[XTrain['feature1']], negative[XTrain['feature2']], s=30, c='r', marker='.', label='Forged')\n\n ax.set_xlabel(XTrain['feature1'])\n ax.set_ylabel(XTrain['feature2'])\n\n # Now plot black circles around data points that were incorrectly predicted\n ax.scatter(errors_poly[XTrain['feature1']], errors_poly[XTrain['feature2']], facecolors=\"none\", edgecolors=\"m\", s=80, label=\"Wrongly Classified\")\n\n # Finally plot the line which represents the decision boundary\n #ax.plot(x1, x2, color=\"green\", linestyle=\"--\", marker=None, label=\"boundary\")\n # And plot the contours that 
separate the 1s from the 0s\n plt.contour(xx1,xx2,preds_poly.reshape(xx1.shape), colors='g', linewidths=1)\n if show_contours == 1:\n # preds_poly_probs_0 for contours of probability of 0 -- i.e. prob(forged bank note)\n # preds_poly_probs_1 for contours of probability of 1 -- i.e. prob(genuine bank note)\n contour_probs = preds_poly_probs_1\n cs = plt.contour(xx1,xx2,contour_probs.reshape(xx1.shape), linewidths=0.7)\n plt.clabel(cs, inline=1, fontsize=12)\n\n ax.legend(loc='lower right')\n \n title = 'Logistic Regression\\n'\n title = title + 'Bank Note Validation Based on Feature Values ' + XTrain['feature1'] + ' and ' + XTrain['feature2'] + '\\n'\n title = title + 'Polynomial Degree: ' + str(degree) + '\\n'\n title = title + 'Number of misclassified points = ' + str(len(errors_poly))\n\n plot = plt.title(title);\n \n return errors_poly, accuracy, confusion_matrix, report, plot",
"...and this is what we saw last time for linear logistic regression",
"# logistic regression - what we saw last time\n# NOTE: The contours are probabilities that the bank note is genuine\nerrors, accuracy, conf_matrix, report, plot = poly_boundary_plot(X_train_v1_v2, \n y_train, \n degree=1, \n show_contours=0)\n\n# Which rows of the dataset are misclassfied?\nerrors\n\n# Classification accuracy\naccuracy\n\n# Comfusion Matrix\nprint(conf_matrix)\n\n# True negatives, false positives, false negatives, and true positives\ntn, fp, fn, tp = conf_matrix.ravel()\ntn, fp, fn, tp\n\n# Precision, recall, f1-score\nprint(report)",
"Non-Linear Logistic Regression",
"# logistic regression\n# NOTE: The contours are probabilities that the bank note is genuine\nerrors, accuracy, conf_matrix, report, plot = poly_boundary_plot(X_train_v1_v2, \n y_train, \n degree=5, \n show_contours=1)\n\n# Which rows of the dataset are misclassfied?\nerrors\n\n# Classification accuracy\naccuracy\n\n# Comfusion Matrix\nprint(conf_matrix)\n\n# True negatives, false positives, false negatives, and true positives\ntn, fp, fn, tp = conf_matrix.ravel()\ntn, fp, fn, tp\n\n# Precision, recall, f1-score\nprint(report)",
"At some point, just making the model more and more complex will start to produce diminishing returns. At this point it's more data that will help.\nWe've been working with just 2 of the 4 features -- why not work with all the features available to us? This gives us more predictive power but makes it hard to visualize the boundaries.\nWe can, however, see how our predictions are going by looking at the rows in the dataset that are misclassified.\nUse all 4 features instead of just V1 and V2",
"# Set up the inputs\ninputs_all = data[['V1', 'V2', 'V3', 'V4']]\n\n# Here are some key stats on the inputs\ninputs_all.describe()\n\n# Turn the inputs into an array of training data\nX_all_train = inputs_all.values\nX_all_train.shape\n\n# Sanity check\nX_all_train[0:3]\n\n# The output remains the same\ny_train.shape\n\n# Use the same logistic regression model as before\n# Train the model and find the optimal parameter values\nlr.fit(X_all_train, y_train)\n\n# These are the optimal values of w0, w1, w2, w3, and w4\nw0 = lr.intercept_[0]\nw1 = lr.coef_.squeeze()[0]\nw2 = lr.coef_.squeeze()[1]\nw3 = lr.coef_.squeeze()[2]\nw4 = lr.coef_.squeeze()[3]\nprint(\"w0: {}\\nw1: {}\\nw2: {}\\nw3: {}\\nw4: {}\".format(w0, w1, w2, w3, w4))\n\n# Genuine or fake for the entire data set\ny_all_pred = lr.predict(X_all_train)\nprint(y_all_pred)\n\nlr.score(X_all_train, y_train)\n\n# The probabilities of [Genuine = 0, Genuine = 1]\ny_all_pred_probs = lr.predict_proba(X_all_train)\nprint(y_all_pred_probs)\n\n# Where did the model misclassify banknotes?\nerrors = data[data['Genuine'] != y_all_pred]\nprint('Number of Misclassifications = {}'.format(len(errors)))\nerrors",
"Lesson: With enough data, a linear model is often good enough.\nSummary\nWe now have in our toolkit ways to make numerical and categorical predictions.\nCan you think of a prediction that doesn't predict a numerical value or a category?\nMoreover, our dataset can contain any number of features and our features can be complex.\nWe know how to take linear models and make them into non-linear models to capture more complex patterns in our data.\nWe can even bandy about fancy terms like logistic regression, penalty functions, gradient descent, support vector machines, and neural networks!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |