repo_name
stringlengths
6
77
path
stringlengths
8
215
license
stringclasses
15 values
cells
list
types
list
adamhajari/spyre
tutorial/pydata2015_seattle/pydata2015_seattle.ipynb
mit
[ "twitter: @adamhajari\ngithub: github.com/adamhajari/spyre\nthis notebook: http://bit.ly/pydata2015_spyre\nBefore we start\nmake sure you have the latest version of spyre\npip install --upgrade dataspyre\nthere have been recent changes to spyre, so if you installed more than a day ago, go ahead and upgrade\nWho Am I?\nAdam Hajari\nData Scientist on the Next Big Sound team at Pandora\nadam@nextbigsound.com\n@adamhajari\nSimple Interactive Web Applications with Spyre\nSpyre is a web application framework for turning static data tables and plots into interactive web apps. Spyre was motivated by <a href=\"http://shiny.rstudio.com/\">Shiny</a>, a similar framework for R created by the developers of Rstudio.\nWhere does Spyre Live?\nGitHub: <a href='https://github.com/adamhajari/spyre'>github.com/adamhajari/spyre</a>\nLive example of a spyre app: \n - <a href='http://adamhajari.com'>adamhajari.com</a>\n - <a href='http://dataspyre.herokuapp.com'>dataspyre.herokuapp.com</a>\n - <a href='https://spyre-gallery.herokuapp.com'>spyre-gallery.herokuapp.com</a>\nInstalling Spyre\nSpyre depends on:\n - cherrypy (server and backend)\n - jinja2 (html and javascript templating)\n - matplotlib (displaying plots and images)\n - pandas (for working within tabular data)\nAssuming you don't have any issues with the above dependencies, you can install spyre via pip:\nbash\n$ pip install dataspyre\nLaunching a Spyre App\nSpyre's server module has a App class that every Spyre app will needs to inherit. 
Use the app's launch() method to deploy your app.", "from spyre import server\n\nclass SimpleApp(server.App):\n title = \"Simple App\"\n\napp = SimpleApp()\napp.launch() # launching from ipython notebook is not recommended", "If you put the above code in a file called simple_app.py you can launch the app from the command line with\n$ python simple_app.py\nMake sure you uncomment the last line first.\nA Very Simple Example\nThere are two variables of the App class that need to be overridden to create the UI for a Spyre app: inputs and outputs (a third optional type called controls that we'll get to later). All three variables are lists of dictionaries which specify each component's properties. For instance, to create a text box input, override the App's inputs variable:", "from spyre import server\n\nclass SimpleApp(server.App):\n inputs = [{ \"type\":\"text\",\n \"key\":\"words\",\n \"label\": \"write here\",\n \"value\":\"hello world\"}]\n\napp = SimpleApp()\napp.launch()", "Now let's add an output. We first need to list all our outputs and their attributes in the outputs list.", "from spyre import server\n\nclass SimpleApp(server.App):\n inputs = [{ \"type\":\"text\",\n \"key\":\"words\",\n \"label\": \"write here\",\n \"value\":\"hello world\"}]\n \n outputs = [{\"type\":\"html\",\n \"id\":\"some_html\"}]\n\napp = SimpleApp()\napp.launch()", "To generate the output, we can override a server.App method specific to that output type. In the case of html output, we override the getHTML method. Each output method should return an object specific to that output type. 
In the case of html output, we just return a string.", "from spyre import server\n\nclass SimpleApp(server.App):\n title = \"Simple App\"\n \n inputs = [{ \"type\":\"text\",\n \"key\":\"words\",\n \"label\": \"write here\",\n \"value\":\"hello world\"}]\n \n outputs = [{\"type\":\"html\",\n \"id\":\"some_html\"}]\n\n def getHTML(self, params):\n words = params['words']\n return \"here are the words you wrote: <b>%s</b>\"%words\n\napp = SimpleApp()\napp.launch()", "Great. We've got inputs and outputs, but we're not quite finished. As it is, the content of our output is static. That's because the output doesn't know when it needs to get updated. We can fix this in one of two ways:\n 1. We can add a button to our app and tell our output to update whenever the button is pressed.\n 2. We can add an action_id to our input that references the output that we want refreshed when the input value changes.\nLet's see what the first approach looks like.", "from spyre import server\n\nclass SimpleApp(server.App):\n title = \"Simple App\"\n \n inputs = [{ \"type\":\"text\",\n \"key\":\"words\",\n \"label\": \"write here\",\n \"value\":\"hello world\"}]\n \n outputs = [{\"type\":\"html\",\n \"id\":\"some_html\",\n \"control_id\":\"button1\"}]\n \n controls = [{\"type\":\"button\",\n \"label\":\"press to update\",\n \"id\":\"button1\"}]\n\n def getHTML(self, params):\n words = params['words']\n return \"here are the words you wrote: <b>%s</b>\"%words\n\napp = SimpleApp()\napp.launch()", "Our app now has a button with id \"button1\", and our output references our control's id, so that when we press the button we update the output with the most current input values. \n<img src=\"input_output_control.png\">\nIs a button a little overkill for this simple app? Yeah, probably. Let's get rid of it and have the output update just by changing the value in the text box. 
To do this we'll add an action_id attribute to our input dictionary that references the output's id.", "from spyre import server\n\nclass SimpleApp(server.App):\n title = \"Simple App\"\n \n inputs = [{ \"type\":\"text\",\n \"key\":\"words\",\n \"label\": \"write here\",\n \"value\":\"look ma, no buttons\",\n \"action_id\":\"some_html\"}]\n \n outputs = [{\"type\":\"html\",\n \"id\":\"some_html\"}]\n \n def getHTML(self, params):\n words = params['words']\n return \"here are the words you wrote: <b>%s</b>\"%words\n\napp = SimpleApp()\napp.launch()", "Now the output gets updated with a change to the input.\n<img src=\"no_control.png\">\nAnother Example\nLet's suppose you've written a function to grab historical stock price data from the web. Your function returns a pandas dataframe.", "%pylab inline\nfrom googlefinance.client import get_price_data\n\ndef getData(params):\n ticker = params['ticker']\n if ticker == 'empty':\n ticker = params['custom_ticker'].upper()\n\n xchng = \"NASD\"\n param = {\n 'q': ticker, # Stock symbol (ex: \"AAPL\")\n 'i': \"86400\", # Interval size in seconds (\"86400\" = 1 day intervals)\n 'x': xchng, # Stock exchange symbol on which stock is traded (ex: \"NASD\")\n 'p': \"3M\" # Period (Ex: \"1Y\" = 1 year)\n }\n df = get_price_data(param)\n return df.drop('Volume', axis=1)\n\nparams = {'ticker':'GOOG'}\ndf = getData(params)\ndf.head()", "Let's turn this into a spyre app. We'll use a dropdown menu input this time and start by displaying the data in a table. In the previous example we overrode the getHTML method and had it return a string to generate HTML output. 
To get a table output we need to override the getData method and have it return a pandas dataframe (conveniently, we've already done that!)", "from spyre import server\nfrom googlefinance.client import get_price_data\n\nserver.include_df_index = True\n\n\nclass StockExample(server.App):\n title = \"Historical Stock Prices\"\n\n inputs = [{\n \"type\": 'dropdown',\n \"label\": 'Company',\n \"options\": [\n {\"label\": \"Google\", \"value\": \"GOOG\"},\n {\"label\": \"Amazon\", \"value\": \"AMZN\"},\n {\"label\": \"Apple\", \"value\": \"AAPL\"}\n ],\n \"key\": 'ticker',\n \"action_id\": \"table_id\"\n }]\n\n outputs = [{\n \"type\": \"table\",\n \"id\": \"table_id\"\n }]\n\n def getData(self, params):\n ticker = params['ticker']\n xchng = \"NASD\"\n param = {\n 'q': ticker, # Stock symbol (ex: \"AAPL\")\n 'i': \"86400\", # Interval size in seconds (\"86400\" = 1 day intervals)\n 'x': xchng, # Stock exchange symbol on which stock is traded (ex: \"NASD\")\n 'p': \"3M\" # Period (Ex: \"1Y\" = 1 year)\n }\n df = get_price_data(param)\n return df.drop('Volume', axis=1)\n\n\napp = StockExample()\napp.launch()", "One really convenient feature of pandas is that you can plot directly from a dataframe using the plot method.", "df.plot()", "Let's take advantage of this convenience and add a plot to our app. 
To generate a plot output, we need to add another dictionary to our list of outputs.", "from spyre import server\nfrom googlefinance.client import get_price_data\n\nserver.include_df_index = True\n\n\nclass StockExample(server.App):\n title = \"Historical Stock Prices\"\n\n inputs = [{\n \"type\": 'dropdown',\n \"label\": 'Company',\n \"options\": [\n {\"label\": \"Google\", \"value\": \"GOOG\"},\n {\"label\": \"Amazon\", \"value\": \"AMZN\"},\n {\"label\": \"Apple\", \"value\": \"AAPL\"}\n ],\n \"key\": 'ticker',\n }]\n\n outputs = [{\n \"type\": \"plot\",\n \"id\": \"plot\",\n \"control_id\": \"update_data\"\n }, {\n \"type\": \"table\",\n \"id\": \"table_id\",\n \"control_id\": \"update_data\"\n }]\n\n controls = [{\n \"type\": \"button\",\n \"label\": \"get stock data\",\n \"id\": \"update_data\"\n }]\n\n def getData(self, params):\n ticker = params['ticker']\n xchng = \"NASD\"\n param = {\n 'q': ticker, # Stock symbol (ex: \"AAPL\")\n 'i': \"86400\", # Interval size in seconds (\"86400\" = 1 day intervals)\n 'x': xchng, # Stock exchange symbol on which stock is traded (ex: \"NASD\")\n 'p': \"3M\" # Period (Ex: \"1Y\" = 1 year)\n }\n df = get_price_data(param)\n return df.drop('Volume', axis=1)\n\n\napp = StockExample()\napp.launch()", "Notice that we didn't have to add a new method for our plot output. getData is pulling double duty here serving the data for our table and our plot. If you wanted to alter the data or the plot object, you could do that by overriding the getPlot method. 
Under the hood, if you don't specify a getPlot method for your plot output, server.App's built-in getPlot method will look for a getData method, and just return the result of calling the plot() method on its dataframe.", "from spyre import server\nfrom googlefinance.client import get_price_data\n\nserver.include_df_index = True\n\n\nclass StockExample(server.App):\n title = \"Historical Stock Prices\"\n\n inputs = [{\n \"type\": 'dropdown',\n \"label\": 'Company',\n \"options\": [\n {\"label\": \"Google\", \"value\": \"GOOG\"},\n {\"label\": \"Amazon\", \"value\": \"AMZN\"},\n {\"label\": \"Apple\", \"value\": \"AAPL\"}\n ],\n \"key\": 'ticker',\n }]\n\n outputs = [{\n \"type\": \"plot\",\n \"id\": \"plot\",\n \"control_id\": \"update_data\"\n }, {\n \"type\": \"table\",\n \"id\": \"table_id\",\n \"control_id\": \"update_data\"\n }]\n\n controls = [{\n \"type\": \"button\",\n \"label\": \"get stock data\",\n \"id\": \"update_data\"\n }]\n\n def getData(self, params):\n ticker = params['ticker']\n xchng = \"NASD\"\n param = {\n 'q': ticker, # Stock symbol (ex: \"AAPL\")\n 'i': \"86400\", # Interval size in seconds (\"86400\" = 1 day intervals)\n 'x': xchng, # Stock exchange symbol on which stock is traded (ex: \"NASD\")\n 'p': \"3M\" # Period (Ex: \"1Y\" = 1 year)\n }\n df = get_price_data(param)\n return df.drop('Volume', axis=1)\n\n def getPlot(self, params):\n df = self.getData(params)\n plt_obj = df.plot()\n plt_obj.set_ylabel(\"Price\")\n plt_obj.set_xlabel(\"Date\")\n plt_obj.set_title(params['ticker'])\n return plt_obj\n\n\napp = StockExample()\napp.launch()", "Finally we'll put each of the outputs in separate tabs and add an action_id to the dropdown input that references the \"update_data\" control. Now, a change to the input state triggers the button to be \"clicked\". 
This makes the existence of a \"button\" superfluous, so we'll change the control type to \"hidden\".", "from spyre import server\nfrom googlefinance.client import get_price_data\n\nserver.include_df_index = True\n\n\nclass StockExample(server.App):\n title = \"Historical Stock Prices\"\n\n inputs = [{\n \"type\": 'dropdown',\n \"label\": 'Company',\n \"options\": [\n {\"label\": \"Google\", \"value\": \"GOOG\"},\n {\"label\": \"Amazon\", \"value\": \"AMZN\"},\n {\"label\": \"Apple\", \"value\": \"AAPL\"}\n ],\n \"key\": 'ticker',\n \"action_id\": \"update_data\"\n }]\n\n tabs = [\"Plot\", \"Table\"]\n\n outputs = [{\n \"type\": \"plot\",\n \"id\": \"plot\",\n \"control_id\": \"update_data\",\n \"tab\": \"Plot\"\n }, {\n \"type\": \"table\",\n \"id\": \"table_id\",\n \"control_id\": \"update_data\",\n \"tab\": \"Table\"\n }]\n\n controls = [{\n \"type\": \"hidden\",\n \"label\": \"get stock data\",\n \"id\": \"update_data\"\n }]\n\n def getData(self, params):\n ticker = params['ticker']\n xchng = \"NASD\"\n param = {\n 'q': ticker, # Stock symbol (ex: \"AAPL\")\n 'i': \"86400\", # Interval size in seconds (\"86400\" = 1 day intervals)\n 'x': xchng, # Stock exchange symbol on which stock is traded (ex: \"NASD\")\n 'p': \"3M\" # Period (Ex: \"1Y\" = 1 year)\n }\n df = get_price_data(param)\n return df.drop('Volume', axis=1)\n\n def getPlot(self, params):\n df = self.getData(params)\n plt_obj = df.plot()\n plt_obj.set_ylabel(\"Price\")\n plt_obj.set_xlabel(\"Date\")\n plt_obj.set_title(params['ticker'])\n return plt_obj\n\n\napp = StockExample()\napp.launch()", "<img src='two_outputs.png'>\nA few more things you can try\n\nthere's a \"download\" output type that uses either the getData method or a getDownload method\ntables can be sortable. Just add a \"sortable\" key to the table output dictionary and set its value to true\nthere are a couple of great Python libraries that produce JavaScript plots (Bokeh and Vincent). 
You can throw them into a getHTML method to add JavaScript plots to your spyre app (hoping to add a \"bokeh\" output type soon to make this integration a little easier).\nyou can link input values\n\nDeploying\n\nHeroku (blog post on setting up, free!)\npythonanywhere (free!)\nDigital Ocean (\\$5/month)\nAWS (~\\$10/month maybe?)\n\nMore Examples On GitHub\nA couple of tricks\n\nyou can either name your output methods using the getType convention or you can have the name match the output id. This is useful if you've got multiple outputs of the same type.\nif multiple outputs use the same data and it takes a long time to generate that data, there's a trick for caching data so you only have to load it once. See the stocks_example app in the examples directory of the git repo to see how (Warning: it's kind of hacky)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-3/cmip6/models/sandbox-1/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-3\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-1', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. 
Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adaptive grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. 
Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
DistrictDataLabs/yellowbrick
examples/agodbehere/PytorchExample.ipynb
apache-2.0
[ "from sklearn.datasets import make_circles, load_iris\nfrom sklearn.model_selection import train_test_split\n\nimport torch\n\nimport numpy as np\n\nimport yellowbrick as yb\nimport matplotlib\nimport matplotlib.pylab as plt\n\n# dtype = torch.long\n# device = torch.device(\"cpu\")", "Load data & prepare", "X, y = make_circles(n_samples=1000, noise=0.1)\n\n# 75/25 train/test split\norig_X_train, orig_X_test, orig_y_train, orig_y_test = train_test_split(X, y, test_size=0.25)\n\n# Transform data into tensors.\nX = torch.tensor(orig_X_train, dtype=torch.float)\ny = torch.tensor(orig_y_train, dtype=torch.long)", "Visualize data", "import yellowbrick.contrib.scatter\nvisualizer = yellowbrick.contrib.scatter.ScatterVisualizer()\n\nvisualizer.fit(orig_X_train, orig_y_train)\nvisualizer.show()", "Basic Neural Net\n3 things are needed for an optimization problem:\n1. model\n2. Loss function\n3. Optimizer", "from torch import nn\n\n# Sequential model allows easy model experimentation\nmodel = nn.Sequential(\n nn.Linear(2, 16), # input dim 2. 16 neurons in first layer.\n nn.ReLU(), # ReLU activation\n #nn.Dropout(p=0.2), # Optional dropout\n nn.Linear(16, 4), # Linear from 16 neurons down to 2\n nn.ReLU(),\n nn.Linear(4,2),\n nn.Softmax(dim=1) # Softmax activation to normalize output weights\n )\n\n\n# Loss function. CrossEntropy is valid for classification problems.\nloss_fn = nn.CrossEntropyLoss()\n\n# Optimizer. Many to choose from. 
\noptimizer = torch.optim.Adam(params=model.parameters())\n\n# Optimizer iterations\nfor i in range(1000):\n # Clear the gradient at the start of each step.\n optimizer.zero_grad()\n \n # Compute the forward pass\n output = model(X)\n \n # Compute the loss\n loss = loss_fn(output, y)\n \n # Backprop to compute the gradients\n loss.backward()\n \n # Update the model parameters\n optimizer.step()\n\nprint(loss.item())", "What do the activation regions look like?\n(an exercise in Tensor math)", "%matplotlib inline\n\n# Make a grid \nns = 25\nxx, yy = np.meshgrid(np.linspace(-1.5, 1.5, 2*ns), np.linspace(-1.5, 1.5, 2*ns))\n# Shape of each is [ns, ns]\n\n# Combine into a single tensor\nG = torch.tensor(np.array([xx, yy]), dtype=torch.float)\n# Shape is [2, ns, ns]\n\n# reshape to be convenient to work with\nG = G.reshape((2, G.shape[1]*G.shape[2])).transpose(0,1)\n# Now a tensor of shape [ns*ns, 2]. Sequence of x,y coordinate pairs\n\nresult = model(G).detach()\n# For each row (sample) in G, get the prediction under the model\n# The variables inside the model are tracked for gradients. 
\n# Call \"detach()\" to stop tracking gradient for further computations.\n# Result is shape [ns*ns, 2] since model takes 2-dim vectors and generates a 2-dim prediction\n\nc0 = result[:,0]\n# weights assigned to class 0\n\nc1 = result[:,1]\n# weights assigned to class 1\n\nplt.hexbin(G[:,0].detach().numpy(), G[:,1].detach().numpy(), c0.numpy(), gridsize=ns, cmap='viridis')\n# Gridsize is half that of the meshgrid for clean rendering.\n\nplt.title(\"Class 0 Activation\")\nplt.axis('equal')\nplt.show()\nplt.hexbin(G[:,0].detach().numpy(), G[:,1].detach().numpy(), c1.numpy(), gridsize=ns, cmap='viridis')\nplt.title(\"Class 1 Activation\")\nplt.axis('equal')\nplt.show()", "What is the classification performance?\nCase study in working with Yellowbrick", "from sklearn.base import BaseEstimator\n\nclass NetWrapper(BaseEstimator):\n \"\"\"\n Wrap our model as a BaseEstimator\n \"\"\"\n _estimator_type = \"classifier\"\n # Tell yellowbrick this is a classifier\n \n def __init__(self, model):\n # save a reference to the model\n self.model = model\n self.classes_ = None\n \n def fit(self, X, y):\n # save the list of classes\n self.classes_ = list(set(i for i in y))\n \n def predict_proba(self, X):\n \"\"\"\n Define predict_proba or decision_function\n \n Compute predictions from model. \n Transform input into a Tensor, compute the prediction, \n transform the prediction back into a numpy array\n \"\"\"\n v = model(torch.tensor(X, dtype=torch.float)).detach().numpy()\n print(\"v:\", v.shape)\n return v\n \n\nwrapped_net = NetWrapper(model)\n# Wrap the model\n\n# Use ROCAUC as per usual\nROCAUC = yb.classifier.ROCAUC(wrapped_net)\n\nROCAUC.fit(orig_X_train, orig_y_train)\nprint(orig_X_test.shape, orig_y_test.shape)\nprint(orig_X_train.shape, orig_y_train.shape)\nROCAUC.score(orig_X_test, orig_y_test)\nROCAUC.show()\n", "Custom Modules\nImplementing new functionality, e.g. 
radial activation regions for \"circular\" neurons", "# weight: a * (x-c)^T(x-c), a is a real number\n\nclass Circle(torch.nn.Module):\n \"\"\"\n Extend torch.nn.Module for a new \"layer\" in a neural network\n \"\"\"\n def __init__(self, k, data):\n \"\"\"\n k is the number of neurons to use\n data is passed in to use as samples to initialize centers\n \"\"\"\n super().__init__()\n \n # k is not a Parameter, so there is no gradient and this is not updated in optimization\n self.k = int(k)\n \n # Parameters always have gradients computed\n self.alpha = torch.nn.Parameter(torch.normal(mean=torch.zeros(k), std=torch.ones(k)*0.5).unsqueeze(1))\n self.C = torch.nn.Parameter(data[np.random.choice(data.shape[0], k, replace=False), :].unsqueeze(1))\n \n \n def forward(self, x): \n diff = (x - self.C) \n # compact way of writing inner products, outer products, etc.\n tmp = torch.einsum('kij,kij->ki', [diff, diff])\n\n return (self.alpha * torch.einsum('kij,kij->ki', [diff, diff])).transpose(0,1)\n\n\n\nfrom tqdm import tqdm\nloss_fn = torch.nn.CrossEntropyLoss()\nmodel = nn.Sequential(\n Circle(16, X),\n nn.ReLU(),\n nn.Linear(16,4),\n nn.ReLU(),\n nn.Linear(4,2),\n nn.Softmax(dim=1)\n )\noptimizer = torch.optim.Adam(params=model.parameters())\nfor i in tqdm(range(1000)):\n optimizer.zero_grad()\n output = model(X)\n loss = loss_fn(output, y)\n loss.backward()\n optimizer.step()\n\n\n%matplotlib inline\n\nns = 25\nxx, yy = np.meshgrid(np.linspace(-1.5, 1.5, 2*ns), np.linspace(-1.5, 1.5, 2*ns))\nG = torch.tensor(np.array([xx, yy]), dtype=torch.float)\n\n\n# reshape...\nG = G.reshape((2, G.shape[1]*G.shape[2])).transpose(0,1)\nresult = model(G).detach()\n\nc0 = result[:,0]\nc1 = result[:,1]\n\nplt.hexbin(G[:,0].detach().numpy(), G[:,1].detach().numpy(), c0.numpy(), gridsize=ns, cmap='viridis')\nplt.title(\"Class 0 Activation\")\nplt.axis('equal')\nplt.show()\nplt.hexbin(G[:,0].detach().numpy(), G[:,1].detach().numpy(), c1.numpy(), gridsize=ns, 
cmap='viridis')\nplt.title(\"Class 1 Activation\")\nplt.axis('equal')\nplt.show()\n\n\nwrapped_net = NetWrapper(model)\nROCAUC = yb.classifier.ROCAUC(wrapped_net)\n\nROCAUC.fit(orig_X_train, orig_y_train)\nwrapped_net.predict_proba(orig_X_test)\nROCAUC.score(orig_X_test, orig_y_test)\nROCAUC.show()\n\n\n%matplotlib inline\n\n# Show the centers of each \"kernel\" \n\ncenters = model[0].C.squeeze().detach().numpy()\nscales = model[0].alpha.squeeze().detach().numpy()\n\nplt.scatter(centers[:,0], centers[:,1])\nplt.scatter(X[:,0], X[:,1], alpha=0.1)\nplt.axis('equal')\n\nprint(centers.shape)\n\n%matplotlib inline\nfrom matplotlib import cm\n\n# Show the contours of the activation regions of each kernel\n\nns = 25\nxx, yy = np.meshgrid(np.linspace(-2, 2, ns), np.linspace(-2, 2, ns))\nG = torch.tensor(np.array([xx, yy]), dtype=torch.float)\nG = G.reshape((2, G.shape[1]*G.shape[2])).transpose(0,1)\nG = G.expand(centers.shape[0], ns*ns, 2)\nZ = torch.tensor(scales).unsqueeze(1) * torch.einsum('kij,kij->ki', [G-torch.tensor(centers).unsqueeze(1), G-torch.tensor(centers).unsqueeze(1)])\n\nplt.scatter(centers[:,0], centers[:,1])\nplt.scatter(X[:,0], X[:,1], alpha=0.1)\ncmap = cm.get_cmap('tab20')\nfor i in range(Z.shape[0]):\n if scales[i] > 0: \n plt.contour(np.linspace(-2, 2, ns), np.linspace(-2, 2, ns), Z[i].reshape(ns, ns), [-0.5,0.5], antialiased=True, colors=[cmap(i)], alpha=0.8, linestyles='dotted')\n else:\n plt.contour(np.linspace(-2, 2, ns), np.linspace(-2, 2, ns), Z[i].reshape(ns, ns), [-0.5,0.5], antialiased=True, colors=[cmap(i)], alpha=0.3, linestyles='solid')\n\nplt.axis('equal')\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
metpy/MetPy
v0.10/_downloads/3aec65fc693ccd0216a40e663bc10ddb/Hodograph_Inset.ipynb
bsd-3-clause
[ "%matplotlib inline", "Hodograph Inset\nLayout a Skew-T plot with a hodograph inset into the plot.", "import matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nimport numpy as np\nimport pandas as pd\n\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import add_metpy_logo, Hodograph, SkewT\nfrom metpy.units import units", "Upper air data can be obtained using the siphon package, but for this example we will use\nsome of MetPy's sample data.", "col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n\ndf = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),\n skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n\ndf['u_wind'], df['v_wind'] = mpcalc.wind_components(df['speed'],\n np.deg2rad(df['direction']))\n\n# Drop any rows with all NaN values for T, Td, winds\ndf = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',\n 'u_wind', 'v_wind'), how='all').reset_index(drop=True)", "We will pull the data out of the example dataset into individual variables and\nassign units.", "p = df['pressure'].values * units.hPa\nT = df['temperature'].values * units.degC\nTd = df['dewpoint'].values * units.degC\nwind_speed = df['speed'].values * units.knots\nwind_dir = df['direction'].values * units.degrees\nu, v = mpcalc.wind_components(wind_speed, wind_dir)\n\n# Create a new figure. 
The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nadd_metpy_logo(fig, 115, 100)\n\n# Grid for plots\nskew = SkewT(fig, rotation=45)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Good bounds for aspect ratio\nskew.ax.set_xlim(-50, 60)\n\n# Create a hodograph\nax_hod = inset_axes(skew.ax, '40%', '40%', loc=1)\nh = Hodograph(ax_hod, component_range=80.)\nh.add_grid(increment=20)\nh.plot_colormapped(u, v, np.hypot(u, v))\n\n# Show the plot\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rickiepark/tfk-notebooks
tensorflow_for_beginners/6. Convolutional Neural Networks.ipynb
mit
[ "import matplotlib.pyplot as plt\n%matplotlib inline", "Import the TensorFlow library.\nTensorFlow provides a helper function that loads the MNIST data automatically. It downloads the data into the \"MNIST_data\" folder and automatically reads in the training, validation, and test sets. Setting the one_hot option converts the target labels into one-hot vectors.", "from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)", "mnist.train.images holds the training images and mnist.test.images holds the test images. Check the shape of this data.\nmatplotlib provides the imshow() function for drawing images. The mnist.train.images we loaded are arrays of length 784. Pick any one of the 55,000 images and display it.\nTo render it as an image, reshape it to the original rectangular image size of [28, 28]. Since it is a black-and-white image, set the colormap to gray scale.", "plt.imshow(mnist.train.images[..].reshape([.., ..]), cmap=plt.get_cmap('gray_r'))", "mnist.train.labels contains the target values y. Check one of the 55,000 labels to confirm that it was loaded as a one-hot vector.", "mnist.train.labels[..]", "With 55,000 examples, the training data is too large to process all at once, so we will use mini-batch gradient descent. To use mini-batches, we have to repeatedly split off part of the training data and feed it into the TensorFlow model.\nWe define placeholders so the running TensorFlow model can receive input data. There are two placeholders: x (images) and y (target labels).\nx = tf.placeholder(\"float32\", [None, 784])\ny = tf.placeholder(\"float32\", shape=[None, 10])\nTo apply convolutions, the input must have the shape of an image rather than a 1-D array. The reshape command changes the dimensions of a tensor. The first dimension is the number of examples in the mini-batch, so we leave it as is and change the second dimension to 28x28x1.\nx_image = tf.reshape(x, [-1,28,28,1])", "x_image = ...\nx_image", "We will use the tf.layers.conv2d function to apply the convolution. The kernel size is 5x5 and we use 32 kernels. The stride is 1x1 and we use 'same' padding. The activation function is ReLU.\nFor pooling, we use tf.layers.max_pooling2d to apply 2x2 max pooling.\nconv1 = tf.layers.conv2d(x_image, 32, (5, 5), strides=(1, 1), \n padding=\"same\", activation=tf.nn.relu)\npool1 = tf.layers.max_pooling2d(conv1, pool_size=(2, 2), strides=(2, 2))\nCompare the tensor shapes of the input data, the output right after the convolution, and the output right after pooling.", "print(x_image.get_shape())\nprint(conv1.get_shape())\nprint(pool1.get_shape())", "The second convolution uses a 5x5 kernel size with 64 kernels. The stride is 1x1 and 'same' padding is used. The activation function is ReLU.\nFor pooling, we use tf.layers.max_pooling2d to apply 2x2 max pooling.\nconv2 = tf.layers.conv2d(pool1, 64, (5, 5), strides=(1, 1), \n padding=\"same\", activation=tf.nn.relu)\npool2 = tf.layers.max_pooling2d(conv2, pool_size=(2, 2), strides=(2, 2))\nCompare the tensor shapes after the second convolution and after pooling.", "print(conv2.get_shape())\nprint(pool2.get_shape())", "To connect to the dense network, we flatten the result of the second pooling, again using the reshape command. We leave the first dimension alone and merge dimensions 2 through 4 into one.\nUnlike before, instead of doing the matrix operations by hand we build the layer with the tf.layers.dense function. The activation function is ReLU.\npool2_flat = tf.reshape(pool2, [-1, 7*7*64])\nfc = tf.layers.dense(pool2_flat, 1024, activation=tf.nn.relu)\nBefore passing the result to the final output layer, we apply dropout to deactivate some of the units during training. At inference time, however, all neurons must stay active, so instead of fixing the rate as a constant in the computation graph we make it a placeholder so the dropout rate can be set from outside.\ndrop_prob = tf.placeholder(\"float\")\nfc_drop = tf.layers.dropout(fc, rate=drop_prob)\nWe build the last output layer and apply the softmax function to normalize the outputs for comparison with the targets.\nz = tf.layers.dense(fc_drop, 10)\ny_hat=tf.nn.softmax(z)\nWe could use the y_hat computed above to calculate the cross-entropy loss, but TensorFlow provides a built-in function that computes the softmax cross entropy from z, the value before it passes through the softmax. We use softmax_cross_entropy to compute the loss between z and the target y.\nloss = tf.losses.softmax_cross_entropy(y, z)\nWe apply gradient descent with a learning rate of 0.1 and create the training node using the loss function built above.\noptimizer = tf.train.GradientDescentOptimizer(0.1)\ntrain = optimizer.minimize(loss)\nTo compute the accuracy of correct classifications, we have to compare y, the one-hot vector holding the targets, with y_hat, the one-hot vector that came out of the softmax. Both tensors have shape [None, 10], so we find the index of the largest value along the row direction (1) with argmax and check whether they are equal.\ncorrect_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_hat,1))\nSince correct_prediction is an array like [True, False, ...], casting the booleans to numbers (1, 0), summing them, and taking the mean gives the accuracy.\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\nCreate a session object and initialize the variables used by the model.", "sess = tf.Session()\nsess.run(tf.global_variables_initializer())", "Over 5000 iterations, we draw 100 examples at a time from the training data (mnist.train.next_batch) and feed them into the model together with the dropout rate. To feed the model's placeholders, we bundle the placeholder names and the values to pass into a dictionary and hand it to the feed_dict parameter.\nWe evaluate the training node train and, to plot the learning curve, also compute the loss value loss and accumulate it in the costs list.", "costs = []\nfor i in range(5000):\n x_data, y_data = mnist.train.next_batch(100)\n _, cost = sess.run([train, loss], \n feed_dict={x: x_data, y: y_data, drop_prob: 0.5})\n costs.append(cost)", "Plot the costs list.", "plt.plot(costs)", "Run the accuracy node we built to compute the accuracy. The input data here is mnist.test, data that was not used during training. When computing the accuracy, the dropout rate must be set to 1 so that all neurons are used.\nsess.run(accuracy, feed_dict={x: mnist.test.images, \n y: mnist.test.labels, drop_prob: 1.0})\nTo check that the actual images match the predicted values, display the first five test images and their predictions one after another.", "for i in range(5):\n plt.imshow(mnist.test.images[i].reshape([28, 28]), cmap=plt.get_cmap('gray_r'))\n plt.show()\n print(sess.run(tf.argmax(y_hat,1), feed_dict={x: mnist.test.images[i].reshape([1,784]), \n drop_prob: 1.0}))", "Print all the learned variables. These include the weights and biases of the two convolutional layers and the weights and biases of the two dense layers.", "[x.name for x in tf.global_variables()]", "Extract the values of the weight tensor of the first convolutional layer. As we defined above, these weights are the 32 kernels of size 5x5 stacked together.", "with tf.variable_scope('conv2d', reuse=True):\n kernel = tf.get_variable('kernel')\n\nweight = sess.run(kernel)\nweight.shape", "Let's display these weights one by one as images. Can you see with your own eyes what the first convolutional layer has learned?", "fig, axes = plt.subplots(4, 8, figsize=(10, 10))\nfor i in range(4):\n for j in range(8):\n axes[i][j].imshow(weight[:, :, :, i*8+j].reshape([5, 5]), cmap=plt.get_cmap('gray_r'))\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Parsl/parsl_demos
First-Tutorial-Start-Here.ipynb
apache-2.0
[ "Parsl Tutorial\nParsl is a native Python library that allows you to write functions that execute in parallel and tie them together with dependencies to create workflows. Parsl wraps Python functions as \"Apps\" using the @App decorator. Decorated functions can run in parallel when all their inputs are ready.\nFor more comprehensive documentation and examples, please refer to our documentation", "# Import Parsl\nimport parsl\nfrom parsl import *\nprint(parsl.__version__)", "DataFlowKernel\nParsl's DataFlowKernel acts as an abstraction layer over any pool of execution resources (e.g., clusters, clouds, threads). \nIn this example we use a pool of [threads](https://en.wikipedia.org/wiki/Thread_(computing)) to facilitate local parallel execution.", "# Let's create a pool of threads to execute our functions\nworkers = ThreadPoolExecutor(max_workers=4)\n\n# We pass the workers to the DataFlowKernel which will execute our Apps over the workers.\ndfk = DataFlowKernel(executors=[workers])", "Hello World App\nAs a first example, let's define a simple Python function that returns the string 'Hello World!'. This function is made into a Parsl App using the @App decorator. The decorator specifies the type of App ('python'|'bash') and the DataFlowKernel object as arguments.", "# Here we define our first App function, a simple python app that returns a string\n@App('python', dfk)\ndef hello ():\n return 'Hello World!'\n\napp_future = hello()", "Futures\nUnlike a regular Python function, when an App is called it returns an AppFuture. Futures act as a proxy to the results (or exceptions) that the App will produce once its execution completes. You can retrieve the status of a future object with future.done() or you can ask it to wait for its result with future.result(). 
It is important to note that while the done() call provides the current status, the result() call blocks execution until the App is complete and the result is available.", "# Check status \nprint(\"Status: \", app_future.done())\n\n# Get result\nprint(\"Result: \", app_future.result())", "Data Dependencies\nFutures can be passed between Apps. When a future created by one App is passed as an input to another, an implicit data dependency is created. Parsl will manage the execution of these Apps by ensuring they are executed when dependencies are resolved. \nLet's see an example of this using the Monte Carlo method to calculate pi. We call 3 iterations of this slow function and take the average. The dependency chain looks like this:\nApp Calls pi() pi() pi()\n \\ | /\nFutures a b c\n \\ | /\nApp Call avg_three()\n |\nFuture avg_pi", "@App('python', dfk)\ndef pi(total):\n # App functions have to import modules they will use.\n import random \n # Set the size of the box (edge length) in which we drop random points\n edge_length = 10000\n center = edge_length / 2\n c2 = center ** 2\n count = 0\n \n for i in range(total):\n # Drop a random point in the box.\n x,y = random.randint(1, edge_length),random.randint(1, edge_length)\n # Count points within the circle\n if (x-center)**2 + (y-center)**2 < c2:\n count += 1\n \n return (count*4/total)\n\n@App('python', dfk)\ndef avg_three(a,b,c):\n return (a+b+c)/3", "Parallelism\nHere we call the function pi() three times, each of which runs independently in parallel. 
\nWe then call another App avg_three() with the three futures that were returned from the pi() calls.\nSince avg_three() is also a parsl App, it returns a future immediately, but defers execution (blocks) until all the futures passed to it as inputs have been resolved.", "a, b, c = pi(10**6), pi(10**6), pi(10**6)\navg_pi = avg_three(a, b, c)\n\n# Print the results\nprint(\"A: {0:.5f} B: {1:.5f} C: {2:.5f}\".format(a.result(), b.result(), c.result()))\nprint(\"Average: {0:.5f}\".format(avg_pi.result()))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
menpo/menpofit-notebooks
notebooks/Transform/Thin_Plate_Splines_Derivatives.ipynb
bsd-3-clause
[ "Derivatives of a TPS", "import os\nimport numpy as np\nimport scipy.io as sio\nimport matplotlib.pyplot as plt\n\nfrom menpo.shape import PointCloud\nimport menpo.io as mio\nfrom menpofit.transform import DifferentiableThinPlateSplines", "We start by defining the source and target landmarks. Notice that, in this first example source = target!!!", "src_landmarks = PointCloud(np.array([[-1, -1],\n [-1, 1],\n [ 1, -1],\n [ 1, 1]]))\n\ntgt_landmarks = PointCloud(np.array([[-1, -1],\n [-1, 1],\n [ 1, -1],\n [ 1, 1]]))", "The warp can be effectively computed, although the rendering will not appear to be correct...", "tps = DifferentiableThinPlateSplines(src_landmarks, tgt_landmarks)\nnp.allclose(tps.apply(src_landmarks).points, tgt_landmarks.points)", "The next step is to define the set of points at which the derivative of the previous TPS warp must be evaluated. In this case, we use the function meshgrid to generate points inside the convex hull defined by the source landmarks.", "x = np.arange(-1, 1, 0.01)\ny = np.arange(-1, 1, 0.01)\nxx, yy = np.meshgrid(x, y)\npoints = np.array([xx.flatten(1), yy.flatten(1)]).T", "We evaluate the derivative, reshape the output, and visualize the result.", "%matplotlib inline\ndW_dxy = tps.d_dl(points)\nreshaped = dW_dxy.reshape(xx.shape + (4,2))\n\n#dW_dx\nplt.subplot(241)\nplt.imshow(reshaped[:,:,0,0])\nplt.subplot(242)\nplt.imshow(reshaped[:,:,1,0])\nplt.subplot(243)\nplt.imshow(reshaped[:,:,2,0])\nplt.subplot(244)\nplt.imshow(reshaped[:,:,3,0])\n\n#dW_dy\nplt.subplot(245)\nplt.imshow(reshaped[:,:,0,1])\nplt.subplot(246)\nplt.imshow(reshaped[:,:,1,1])\nplt.subplot(247)\nplt.imshow(reshaped[:,:,2,1])\nplt.subplot(248)\nplt.imshow(reshaped[:,:,3,1])", "If everything goes as expected, the upper corner of the images defining the derivative of the warp wrt the x and y coordinates of the first of the source landmarks should both contain values close to 1.", "print(reshaped[1:5,1:5,0,0])\nprint(reshaped[1:5,1:5,0,1])", "The sum of all 
the derivatives wrt the x coordinates should produce an all 1 image", "summed_x = np.sum(reshaped[:,:,:,0], axis=-1)\nnp.allclose(np.ones(xx.shape), summed_x)\n\nplt.imshow(summed_x)", "and so should the sum of all derivatives wrt the y coordinates.", "summed_y = np.sum(reshaped[:,:,:,1], axis=-1)\nnp.allclose(np.ones(xx.shape), summed_y)\n\nplt.imshow(summed_y)", "Finally, the derivatives with respect to the x and y coordinates should be in this case exactly the same!!!", "np.allclose(reshaped[:,:,:,0], reshaped[:,:,:,1])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledrui/week4_Ridge_Regression
.ipynb_checkpoints/Overfitting_Demo_Ridge_Lasso-checkpoint.ipynb
mit
[ "Overfitting demo\nCreate a dataset based on a true sinusoidal relationship\nLet's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \\sin(4x)$:", "import graphlab\nimport math\nimport random\nimport numpy\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "Create random values for x in interval [0,1)", "random.seed(98103)\nn = 30\nx = graphlab.SArray([random.random() for i in range(n)]).sort()", "Compute y", "y = x.apply(lambda x: math.sin(4*x))", "Add random Gaussian noise to y", "random.seed(1)\ne = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])\ny = y + e", "Put data into an SFrame to manipulate later", "data = graphlab.SFrame({'X1':x,'Y':y})\ndata", "Create a function to plot the data, since we'll do it many times", "def plot_data(data): \n plt.plot(data['X1'],data['Y'],'k.')\n plt.xlabel('x')\n plt.ylabel('y')\n\nplot_data(data)", "Define some useful polynomial regression functions\nDefine a function to create our features for a polynomial regression model of any degree:", "def polynomial_features(data, deg):\n data_copy=data.copy()\n for i in range(1,deg):\n data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']\n return data_copy", "Define a function to fit a polynomial linear regression model of degree \"deg\" to the data in \"data\":", "def polynomial_regression(data, deg):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,l1_penalty=0.,\n validation_set=None,verbose=False)\n return model", "Define function to plot data and predictions made, since we are going to use it many times.", "def plot_poly_predictions(data, model):\n plot_data(data)\n\n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n \n # Create 200 points in the x axis and compute the predicted value for each point\n x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})\n y_pred = model.predict(polynomial_features(x_pred,deg))\n \n # plot 
predictions\n plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')\n plt.legend(loc='upper left')\n plt.axis([0,1,-1.5,2])", "Create a function that prints the polynomial coefficients in a pretty way :)", "def print_coefficients(model): \n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n\n # Get learned parameters as a list\n w = list(model.coefficients['value'])\n\n # Numpy has a nifty function to print out polynomials in a pretty way\n # (We'll use it, but it needs the parameters in the reverse order)\n print 'Learned polynomial for degree ' + str(deg) + ':'\n w.reverse()\n print numpy.poly1d(w)", "Fit a degree-2 polynomial\nFit our degree-2 polynomial to the data generated above:", "model = polynomial_regression(data, deg=2)", "Inspect learned parameters", "print_coefficients(model)", "Form and plot our predictions along a grid of x values:", "plot_poly_predictions(data,model)", "Fit a degree-4 polynomial", "model = polynomial_regression(data, deg=4)\nprint_coefficients(model)\nplot_poly_predictions(data,model)", "Fit a degree-16 polynomial", "model = polynomial_regression(data, deg=16)\nprint_coefficients(model)", "Woah!!!! Those coefficients are crazy! On the order of 10^6.", "plot_poly_predictions(data,model)", "Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.\n\n\n# \n# \nRidge Regression\nRidge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\\|w\\|$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. 
model complexity balance, is controlled by a parameter lambda (here called \"L2_penalty\").\nDefine our function to solve the ridge objective for a polynomial regression model of any degree:", "def polynomial_ridge_regression(data, deg, l2_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n return model", "Perform a ridge fit of a degree-16 polynomial using a very small penalty strength", "model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)", "Perform a ridge fit of a degree-16 polynomial using a very large penalty strength", "model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)", "Let's look at fits for a sequence of increasing lambda values", "for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:\n model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)\n print 'lambda = %.2e' % l2_penalty\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('Ridge, lambda = %.2e' % l2_penalty)", "Perform a ridge fit of a degree-16 polynomial using a \"good\" penalty strength\nWe will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider \"leave one out\" (LOO) cross validation, which one can show approximates average mean square error (MSE). 
As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.", "# LOO cross validation -- return the average MSE\ndef loo(data, deg, l2_penalty_values):\n # Create polynomial features\n polynomial_features(data, deg)\n \n # Create as many folds for cross validatation as number of data points\n num_folds = len(data)\n folds = graphlab.cross_validation.KFold(data,num_folds)\n \n # for each value of l2_penalty, fit a model for each fold and compute average MSE\n l2_penalty_mse = []\n min_mse = None\n best_l2_penalty = None\n for l2_penalty in l2_penalty_values:\n next_mse = 0.0\n for train_set, validation_set in folds:\n # train model\n model = graphlab.linear_regression.create(train_set,target='Y', \n l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n \n # predict on validation set \n y_test_predicted = model.predict(validation_set)\n # compute squared error\n next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()\n \n # save squared error in list of MSE for each l2_penalty\n next_mse = next_mse/num_folds\n l2_penalty_mse.append(next_mse)\n if min_mse is None or next_mse < min_mse:\n min_mse = next_mse\n best_l2_penalty = l2_penalty\n \n return l2_penalty_mse,best_l2_penalty", "Run LOO cross validation for \"num\" values of lambda, on a log scale", "l2_penalty_values = numpy.logspace(-4, 10, num=10)\nl2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)", "Plot results of estimating LOO for each value of lambda", "plt.plot(l2_penalty_values,l2_penalty_mse,'k-')\nplt.xlabel('$\\L2_penalty$')\nplt.ylabel('LOO cross validation error')\nplt.xscale('log')\nplt.yscale('log')", "Find the value of lambda, $\\lambda_{\\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit", "best_l2_penalty\n\nmodel = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)", "Lasso 
Regression\nLasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called \"L1_penalty\"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\\|w\\|$.\nDefine our function to solve the lasso objective for a polynomial regression model of any degree:", "def polynomial_lasso_regression(data, deg, l1_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,\n l1_penalty=l1_penalty,\n validation_set=None, \n solver='fista', verbose=False,\n max_iterations=3000, convergence_threshold=1e-10)\n return model", "Explore the lasso solution as a function of a few different penalty strengths\nWe refer to lambda in the lasso case below as \"l1_penalty\"", "for l1_penalty in [0.0001, 0.01, 0.1, 10]:\n model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)\n print 'l1_penalty = %e' % l1_penalty\n print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))", "Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dianafprieto/SS_2017
05_NB_VTKPython_Scalar.ipynb
mit
[ "<img src=\"imgs/header.png\">\nVisualization techniques for scalar fields in VTK + Python\nRecap: The VTK pipeline\n<img src=\"imgs/vtk_pipeline.png\" align=\"left\">\n$~$\nVisualizing data within a rectilinear grid\nThe following code snippets show, step by step, how to create a pipeline to visualize the outline of a rectilinear grid.", "%gui qt\nimport vtk\nfrom vtkviewer import SimpleVtkViewer\n#help(vtk.vtkRectilinearGridReader())", "1. Data input (source)", "# do not forget to call \"Update()\" at the end of the reader\nrectGridReader = vtk.vtkRectilinearGridReader()\nrectGridReader.SetFileName(\"data/jet4_0.500.vtk\")\nrectGridReader.Update()", "2. Filters\n\nFilter 1: vtkRectilinearGridOutlineFilter() creates a wireframe outline for a rectilinear grid.", "%qtconsole\n\nrectGridOutline = vtk.vtkRectilinearGridOutlineFilter()\nrectGridOutline.SetInputData(rectGridReader.GetOutput())", "3. Mappers\n\nMapper: vtkPolyDataMapper() maps vtkPolyData to graphics primitives.", "rectGridOutlineMapper = vtk.vtkPolyDataMapper()\nrectGridOutlineMapper.SetInputConnection(rectGridOutline.GetOutputPort())", "4. Actors", "outlineActor = vtk.vtkActor()\noutlineActor.SetMapper(rectGridOutlineMapper)\noutlineActor.GetProperty().SetColor(0, 0, 0)", "5. 
Renderers and Windows", "#Option 1: Default vtk render window\nrenderer = vtk.vtkRenderer()\nrenderer.SetBackground(0.5, 0.5, 0.5)\nrenderer.AddActor(outlineActor)\nrenderer.ResetCamera()\n\nrenderWindow = vtk.vtkRenderWindow()\nrenderWindow.AddRenderer(renderer)\nrenderWindow.SetSize(500, 500)\nrenderWindow.Render()\n\niren = vtk.vtkRenderWindowInteractor()\niren.SetRenderWindow(renderWindow)\niren.Start()\n\n#Option 2: Using the vtk-viewer for Jupyter to interactively modify the pipeline\nvtkSimpleWin = SimpleVtkViewer()\nvtkSimpleWin.resize(1000,800)\nvtkSimpleWin.hide_axes()\n\nvtkSimpleWin.add_actor(outlineActor)\nvtkSimpleWin.add_actor(gridGeomActor)\n\nvtkSimpleWin.ren.SetBackground(0.5, 0.5, 0.5)\nvtkSimpleWin.ren.ResetCamera()", "<font color='red'>Trick:</font> The autocomplete functionality in Jupyter is available by pressing the Tab button.\nUseful Resources\nhttp://www.vtk.org/Wiki/VTK/Examples/Python" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
the-deep-learners/TensorFlow-LiveLessons
notebooks/generative_adversarial_network.ipynb
mit
[ "Quick, Draw! GAN\n\ncode based directly on Grant Beyleveld's, which is derived from Rowel Atienza's under MIT License\ndata provided by Google under Creative Commons Attribution 4.0 license\n\nSelect processing devices", "# import os\n# os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n# # os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\n# os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"1\"", "Load dependencies", "# for data input and output:\nimport numpy as np\nimport os\n\n# for deep learning: \nimport keras\nfrom keras.models import Sequential, Model\nfrom keras.layers import Input, Dense, Conv2D, BatchNormalization, Dropout, Flatten\nfrom keras.layers import Activation, Reshape, Conv2DTranspose, UpSampling2D # new! \nfrom keras.optimizers import RMSprop\n\n# for plotting: \nimport pandas as pd\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "Load data\nNumPy bitmap files are here -- pick your own drawing category -- you don't have to pick apples :)", "input_images = \"../quickdraw_data/apple.npy\"\n\ndata = np.load(input_images) # 28x28 (sound familiar?) 
grayscale bitmap in numpy .npy format; images are centered\n\ndata.shape\n\ndata[4242]\n\ndata = data/255\ndata = np.reshape(data,(data.shape[0],28,28,1)) # fourth dimension is color\nimg_w,img_h = data.shape[1:3]\ndata.shape\n\ndata[4242]\n\nplt.imshow(data[4242,:,:,0], cmap='Greys')", "Create discriminator network", "def discriminator_builder(depth=64,p=0.4):\n\n # Define inputs\n inputs = Input((img_w,img_h,1))\n \n # Convolutional layers\n conv1 = Conv2D(depth*1, 5, strides=2, padding='same', activation='relu')(inputs)\n conv1 = Dropout(p)(conv1)\n \n conv2 = Conv2D(depth*2, 5, strides=2, padding='same', activation='relu')(conv1)\n conv2 = Dropout(p)(conv2)\n \n conv3 = Conv2D(depth*4, 5, strides=2, padding='same', activation='relu')(conv2)\n conv3 = Dropout(p)(conv3)\n \n conv4 = Conv2D(depth*8, 5, strides=1, padding='same', activation='relu')(conv3)\n conv4 = Flatten()(Dropout(p)(conv4))\n \n # Output layer\n output = Dense(1, activation='sigmoid')(conv4)\n \n # Model definition\n model = Model(inputs=inputs, outputs=output)\n model.summary()\n \n return model\n\ndiscriminator = discriminator_builder()\n\ndiscriminator.compile(loss='binary_crossentropy', \n optimizer=RMSprop(lr=0.0008, decay=6e-8, clipvalue=1.0), \n metrics=['accuracy'])", "Create generator network", "def generator_builder(z_dim=100,depth=64,p=0.4):\n \n # Define inputs\n inputs = Input((z_dim,))\n \n # First dense layer\n dense1 = Dense(7*7*64)(inputs)\n dense1 = BatchNormalization(momentum=0.9)(dense1) # default momentum for moving average is 0.99\n dense1 = Activation(activation='relu')(dense1)\n dense1 = Reshape((7,7,64))(dense1)\n dense1 = Dropout(p)(dense1)\n \n # De-Convolutional layers\n conv1 = UpSampling2D()(dense1)\n conv1 = Conv2DTranspose(int(depth/2), kernel_size=5, padding='same', activation=None,)(conv1)\n conv1 = BatchNormalization(momentum=0.9)(conv1)\n conv1 = Activation(activation='relu')(conv1)\n \n conv2 = UpSampling2D()(conv1)\n conv2 = Conv2DTranspose(int(depth/4), 
kernel_size=5, padding='same', activation=None,)(conv2)\n conv2 = BatchNormalization(momentum=0.9)(conv2)\n conv2 = Activation(activation='relu')(conv2)\n \n conv3 = Conv2DTranspose(int(depth/8), kernel_size=5, padding='same', activation=None,)(conv2)\n conv3 = BatchNormalization(momentum=0.9)(conv3)\n conv3 = Activation(activation='relu')(conv3)\n\n # Output layer\n output = Conv2D(1, kernel_size=5, padding='same', activation='sigmoid')(conv3)\n\n # Model definition \n model = Model(inputs=inputs, outputs=output)\n model.summary()\n \n return model\n\ngenerator = generator_builder()", "Create adversarial network", "def adversarial_builder(z_dim=100):\n model = Sequential()\n model.add(generator)\n model.add(discriminator)\n model.compile(loss='binary_crossentropy', \n optimizer=RMSprop(lr=0.0004, decay=3e-8, clipvalue=1.0), \n metrics=['accuracy'])\n model.summary()\n return model\n\nadversarial_model = adversarial_builder()", "Train!", "def make_trainable(net, val):\n net.trainable = val\n for l in net.layers:\n l.trainable = val\n\ndef train(epochs=2000,batch=128):\n \n d_metrics = []\n a_metrics = []\n \n running_d_loss = 0\n running_d_acc = 0\n running_a_loss = 0\n running_a_acc = 0\n \n for i in range(epochs):\n \n if i%100 == 0:\n print(i)\n \n real_imgs = np.reshape(data[np.random.choice(data.shape[0],batch,replace=False)],(batch,28,28,1))\n fake_imgs = generator.predict(np.random.uniform(-1.0, 1.0, size=[batch, 100]))\n\n x = np.concatenate((real_imgs,fake_imgs))\n y = np.ones([2*batch,1])\n y[batch:,:] = 0\n \n make_trainable(discriminator, True)\n \n d_metrics.append(discriminator.train_on_batch(x,y))\n running_d_loss += d_metrics[-1][0]\n running_d_acc += d_metrics[-1][1]\n \n make_trainable(discriminator, False)\n \n noise = np.random.uniform(-1.0, 1.0, size=[batch, 100])\n y = np.ones([batch,1])\n\n a_metrics.append(adversarial_model.train_on_batch(noise,y)) \n running_a_loss += a_metrics[-1][0]\n running_a_acc += a_metrics[-1][1]\n \n if (i+1)%500 == 
0:\n\n print('Epoch #{}'.format(i+1))\n log_mesg = \"%d: [D loss: %f, acc: %f]\" % (i, running_d_loss/i, running_d_acc/i)\n log_mesg = \"%s [A loss: %f, acc: %f]\" % (log_mesg, running_a_loss/i, running_a_acc/i)\n print(log_mesg)\n\n noise = np.random.uniform(-1.0, 1.0, size=[16, 100])\n gen_imgs = generator.predict(noise)\n\n plt.figure(figsize=(5,5))\n\n for k in range(gen_imgs.shape[0]):\n plt.subplot(4, 4, k+1)\n plt.imshow(gen_imgs[k, :, :, 0], cmap='gray')\n plt.axis('off')\n \n plt.tight_layout()\n plt.show()\n \n return a_metrics, d_metrics\n\na_metrics_complete, d_metrics_complete = train(epochs=3000)\n\nax = pd.DataFrame(\n {\n 'Generator': [metric[0] for metric in a_metrics_complete],\n 'Discriminator': [metric[0] for metric in d_metrics_complete],\n }\n).plot(title='Training Loss', logy=True)\nax.set_xlabel(\"Epochs\")\nax.set_ylabel(\"Loss\")\n\nax = pd.DataFrame(\n {\n 'Generator': [metric[1] for metric in a_metrics_complete],\n 'Discriminator': [metric[1] for metric in d_metrics_complete],\n }\n).plot(title='Training Accuracy')\nax.set_xlabel(\"Epochs\")\nax.set_ylabel(\"Accuracy\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oasis-open/cti-python-stix2
docs/guide/serializing.ipynb
bsd-3-clause
[ "# Delete this cell to re-enable tracebacks\nimport sys\nipython = get_ipython()\n\ndef hide_traceback(exc_tuple=None, filename=None, tb_offset=None,\n exception_only=False, running_compiled_code=False):\n etype, value, tb = sys.exc_info()\n value.__cause__ = None # suppress chained exceptions\n return ipython._showtraceback(etype, value, ipython.InteractiveTB.get_exception_only(etype, value))\n\nipython.showtraceback = hide_traceback\n\n# JSON output syntax highlighting\nfrom __future__ import print_function\nfrom pygments import highlight\nfrom pygments.lexers import JsonLexer, TextLexer\nfrom pygments.formatters import HtmlFormatter\nfrom IPython.display import display, HTML\nfrom IPython.core.interactiveshell import InteractiveShell\n\nInteractiveShell.ast_node_interactivity = \"all\"\n\ndef json_print(inpt):\n string = str(inpt)\n formatter = HtmlFormatter()\n if string[0] == '{':\n lexer = JsonLexer()\n else:\n lexer = TextLexer()\n return HTML('<style type=\"text/css\">{}</style>{}'.format(\n formatter.get_style_defs('.highlight'),\n highlight(string, lexer, formatter)))\n\nglobals()['print'] = json_print", "Serializing STIX Objects\nThe string representation of all STIX classes is a valid STIX JSON object.", "from stix2 import Indicator\n\nindicator = Indicator(name=\"File hash for malware variant\",\n pattern_type=\"stix\",\n pattern=\"[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']\")\n\nprint(indicator.serialize(pretty=True))", "New in 3.0.0: \nCalling str() on a STIX object will call serialize() without any formatting options. The change was made to address the performance penalty induced by unknowingly calling with the pretty formatted option. As shown above, to get the same effect as str() had in past versions of the library, use the method directly and pass in the pretty argument serialize(pretty=True).\n\nHowever, the pretty formatted string representation can be slow, as it sorts properties to be in a more readable order. 
If you need performance and don't care about the human-readability of the output, use the object's serialize() function to pass in any arguments json.dump() would understand:", "print(indicator.serialize())", "If you need performance but also need human-readable output, you can pass the indent keyword argument to serialize():", "print(indicator.serialize(indent=4))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/image_models/labs/1_mnist_linear.ipynb
apache-2.0
[ "MNIST Image Classification with TensorFlow\nThis notebook demonstrates how to implement a simple linear image model on MNIST using the tf.keras API. It builds the foundation for this <a href=\"https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb\">companion notebook</a>, which explores tackling the same problem with other types of models such as DNN and CNN.\nLearning Objectives\n\nKnow how to read and display image data\nKnow how to find incorrect predictions to analyze the model\nVisually see how computers see images", "import os\nimport shutil\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard\nfrom tensorflow.keras.layers import Dense, Flatten, Softmax\n\nprint(tf.__version__)", "Exploring the data\nThe MNIST dataset is already included in tensorflow through the keras datasets module. Let's load it and get a sense of the data.", "mnist = tf.keras.datasets.mnist.load_data()\n(x_train, y_train), (x_test, y_test) = mnist\n\nHEIGHT, WIDTH = x_train[0].shape\nNCLASSES = tf.size(tf.unique(y_train).y)\nprint(\"Image height x width is\", HEIGHT, \"x\", WIDTH)\ntf.print(\"There are\", NCLASSES, \"classes\")", "Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (white) to 255 (black). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.", "IMGNO = 12\n# Uncomment to see raw numerical values.\n# print(x_test[IMGNO])\nplt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH))\nprint(\"The label for image number\", IMGNO, \"is\", y_test[IMGNO])", "Define the model\nLet's start with a very simple linear classifier. This was the first method to be tried on MNIST in 1998, and scored an 88% accuracy. 
Quite groundbreaking at the time!\nWe can build our linear classifier using the tf.keras API, so we don't have to define or initialize our weights and biases. This happens automatically for us in the background. We can also add a softmax layer to transform the logits into probabilities. Finally, we can compile the model using categorical cross entropy in order to strongly penalize high probability predictions that were incorrect.\nWhen building more complex models such as DNNs and CNNs, our code will be more readable by using the tf.keras API. Let's get one working so we can test it and use it as a benchmark.", "def linear_model():\n # TODO: Build a sequential model and compile it.\n return model", "Write Input Functions\nAs usual, we need to specify input functions for training and evaluating. We'll scale each pixel value so it's a decimal value between 0 and 1 as a way of normalizing the data.\nTODO 1: Define the scale function below and build the dataset", "BUFFER_SIZE = 5000\nBATCH_SIZE = 100\n\n\ndef scale(image, label):\n # TODO\n\n\ndef load_dataset(training=True):\n \"\"\"Loads MNIST dataset into a tf.data.Dataset\"\"\"\n (x_train, y_train), (x_test, y_test) = mnist\n x = x_train if training else x_test\n y = y_train if training else y_test\n # TODO: a) one-hot encode labels, apply `scale` function, and create dataset.\n # One-hot encode the classes\n if training:\n # TODO\n return dataset\n\ndef create_shape_test(training):\n dataset = load_dataset(training=training)\n data_iter = dataset.__iter__()\n (images, labels) = data_iter.get_next()\n expected_image_shape = (BATCH_SIZE, HEIGHT, WIDTH)\n expected_label_ndim = 2\n assert images.shape == expected_image_shape\n assert labels.numpy().ndim == expected_label_ndim\n test_name = \"training\" if training else \"eval\"\n print(\"Test for\", test_name, \"passed!\")\n\n\ncreate_shape_test(True)\ncreate_shape_test(False)", "Time to train the model! The original MNIST linear classifier had an error rate of 12%. 
Let's use that to sanity check that our model is learning.", "NUM_EPOCHS = 10\nSTEPS_PER_EPOCH = 100\n\nmodel = linear_model()\ntrain_data = load_dataset()\nvalidation_data = load_dataset(training=False)\n\nOUTDIR = \"mnist_linear/\"\ncheckpoint_callback = ModelCheckpoint(OUTDIR, save_weights_only=True, verbose=1)\ntensorboard_callback = TensorBoard(log_dir=OUTDIR)\n\nhistory = model.fit(\n # TODO: specify training/eval data, # epochs, steps per epoch.\n verbose=2,\n callbacks=[checkpoint_callback, tensorboard_callback],\n)\n\nBENCHMARK_ERROR = 0.12\nBENCHMARK_ACCURACY = 1 - BENCHMARK_ERROR\n\naccuracy = history.history[\"accuracy\"]\nval_accuracy = history.history[\"val_accuracy\"]\nloss = history.history[\"loss\"]\nval_loss = history.history[\"val_loss\"]\n\nassert accuracy[-1] > BENCHMARK_ACCURACY\nassert val_accuracy[-1] > BENCHMARK_ACCURACY\nprint(\"Test to beat benchmark accuracy passed!\")\n\nassert accuracy[0] < accuracy[1]\nassert accuracy[1] < accuracy[-1]\nassert val_accuracy[0] < val_accuracy[1]\nassert val_accuracy[1] < val_accuracy[-1]\nprint(\"Test model accuracy is improving passed!\")\n\nassert loss[0] > loss[1]\nassert loss[1] > loss[-1]\nassert val_loss[0] > val_loss[1]\nassert val_loss[1] > val_loss[-1]\nprint(\"Test loss is decreasing passed!\")", "Evaluating Predictions\nWere you able to get an accuracy of over 90%? Not bad for a linear estimator! Let's make some predictions and see if we can find where the model has trouble. Change the range of values below to find incorrect predictions, and plot the corresponding images. 
What would you have guessed for these images?\nTODO 2: Change the range below to find an incorrect prediction", "image_numbers = range(0, 10, 1) # Change me, please.\n\n\ndef load_prediction_dataset():\n dataset = (x_test[image_numbers], y_test[image_numbers])\n dataset = tf.data.Dataset.from_tensor_slices(dataset)\n dataset = dataset.map(scale).batch(len(image_numbers))\n return dataset\n\n\npredicted_results = model.predict(load_prediction_dataset())\nfor index, prediction in enumerate(predicted_results):\n predicted_value = np.argmax(prediction)\n actual_value = y_test[image_numbers[index]]\n if actual_value != predicted_value:\n print(\"image number: \" + str(image_numbers[index]))\n print(\"the prediction was \" + str(predicted_value))\n print(\"the actual label is \" + str(actual_value))\n print(\"\")\n\nbad_image_number = 8\nplt.imshow(x_test[bad_image_number].reshape(HEIGHT, WIDTH));", "It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.\nEach of the 10 neurons in the dense layer of our model has 785 weights feeding into it. That's 1 weight for every pixel in the image + 1 for a bias term. These weights are flattened feeding into the model, but we can reshape them back into the original image dimensions to see what the computer sees.\nTODO 3: Reshape the layer weights to be the shape of an input image and plot.", "DIGIT = 0 # Change me to be an integer from 0 to 9.\nLAYER = 1 # Layer 0 flattens image, so no weights\nWEIGHT_TYPE = 0 # 0 for variable weights, 1 for biases\n\ndense_layer_weights = model.layers[LAYER].get_weights()\ndigit_weights = dense_layer_weights[WEIGHT_TYPE][:, DIGIT]\nplt.imshow(digit_weights.reshape((HEIGHT, WIDTH)))", "Did you recognize the digit the computer was trying to learn? Pretty trippy, isn't it! Even with a simple \"brain\", the computer can form an idea of what a digit should be. 
The human brain, however, uses layers and layers of calculations for image recognition. Ready for the next challenge? <a href=\"https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/images/mnist_linear.ipynb\">Click here</a> to super charge our models with human-like vision.\nBonus Exercise\nWant to push your understanding further? Instead of using Keras' built in layers, try repeating the above exercise with your own custom layers.\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jorisvandenbossche/DS-python-data-analysis
notebooks/pandas_05_groupby_operations.ipynb
bsd-3-clause
[ "<p><font size=\"6\"><b>06 - Pandas: \"Group by\" operations</b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')", "Some 'theory': the groupby operation (split-apply-combine)", "df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],\n 'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})\ndf", "Recap: aggregating functions\nWhen analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:", "df['data'].sum()", "However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.\nFor example, in the above dataframe df, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:", "for key in ['A', 'B', 'C']:\n print(key, df[df['key'] == key]['data'].sum())", "This becomes very verbose when having multiple groups. 
You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.\nWhat we did above, applying a function on different groups, is a \"groupby operation\", and pandas provides some convenient functionality for this.\nGroupby: applying functions per group\nThe \"group by\" concept: we want to apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets\nThis operation is also referred to as the \"split-apply-combine\" operation, involving the following steps:\n\nSplitting the data into groups based on some criteria\nApplying a function to each group independently\nCombining the results into a data structure\n\n<img src=\"../img/pandas/splitApplyCombine.png\">\nSimilar to SQL GROUP BY\nInstead of doing the manual filtering as above\ndf[df['key'] == \"A\"].sum()\ndf[df['key'] == \"B\"].sum()\n...\n\npandas provides the groupby method to do exactly this:", "df.groupby('key').sum()\n\ndf.groupby('key').aggregate(np.sum) # 'sum'", "And many more methods are available.", "df.groupby('key')['data'].sum()", "Application of the groupby concept on the titanic data\nWe go back to the titanic passengers survival data:", "df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 1</b>:\n\n <ul>\n <li>Using groupby(), calculate the average age for each sex.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations1.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 2</b>:\n\n <ul>\n <li>Calculate the average survival ratio for all passengers.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations2.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 3</b>:\n\n <ul>\n <li>Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations3.py", "<div class=\"alert 
alert-success\">\n\n<b>EXERCISE 4</b>:\n\n <ul>\n <li>What is the difference in the survival ratio between the sexes?</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations4.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 5</b>:\n\n <ul>\n <li>Make a bar plot of the survival ratio for the different classes ('Pclass' column).</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations5.py", "<div class=\"alert alert-success\">\n\n**EXERCISE 6**:\n\n* Make a bar plot to visualize the average Fare payed by people depending on their age. The age column is divided is separate classes using the `pd.cut()` function as provided below.\n\n</div>", "df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))\n\n# %load _solutions/pandas_05_groupby_operations6.py", "If you are ready, more groupby exercises can be found below.\nSome more theory\nSpecifying the grouper\nIn the previous example and exercises, we always grouped by a single column by passing its name. But, a column name is not the only value you can pass as the grouper in df.groupby(grouper). Other possibilities for grouper are:\n\na list of strings (to group by multiple columns)\na Series (similar to a string indicating a column in df) or array\nfunction (to be applied on the index)\nlevels=[], names of levels in a MultiIndex", "df.groupby(df['Age'] < 18)['Survived'].mean()\n\ndf.groupby(['Pclass', 'Sex'])['Survived'].mean()", "The size of groups - value counts\nOften you want to know how many elements there are in a certain group (or in other words: the number of occurences of the different values from a column).\nTo get the size of the groups, we can use size:", "df.groupby('Pclass').size()\n\ndf.groupby('Embarked').size()", "Another way to obtain such counts, is to use the Series value_counts method:", "df['Embarked'].value_counts()", "[OPTIONAL] Additional exercises using the movie data\nThese exercises are based on the PyCon tutorial of Brandon Rhodes (so credit to him!) 
and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /notebooks/data folder.\ncast dataset: different roles played by actors/actresses in films\n\ntitle: title of the movie\nyear: year it was released\nname: name of the actor/actress\ntype: actor/actress\nn: the order of the role (n=1: leading role)", "cast = pd.read_csv('data/cast.csv')\ncast.head()", "titles dataset:\n\ntitle: title of the movie\nyear: year of release", "titles = pd.read_csv('data/titles.csv')\ntitles.head()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 7</b>:\n\n <ul>\n <li>Using `groupby()`, plot the number of films that have been released each decade in the history of cinema.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations7.py\n\n# %load _solutions/pandas_05_groupby_operations8.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 8</b>:\n\n <ul>\n <li>Use `groupby()` to plot the number of 'Hamlet' movies made each decade.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations9.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 9</b>:\n\n <ul>\n <li>For each decade, plot all movies of which the title contains \"Hamlet\".</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations10.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 10</b>:\n\n <ul>\n <li>List the 10 actors/actresses that have the most leading roles (n=1) since the 1990's.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations11.py\n\n# %load _solutions/pandas_05_groupby_operations12.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 11</b>:\n\n <ul>\n <li>In a previous exercise, the number of 'Hamlet' films released each decade was checked. Not all titles are exactly called 'Hamlet'. 
Give an overview of the titles that contain 'Hamlet' and an overview of the titles that start with 'Hamlet', each time providing the amount of occurrences in the data set for each of the movies</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations13.py\n\n# %load _solutions/pandas_05_groupby_operations14.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 12</b>:\n\n <ul>\n <li>List the 10 movie titles with the longest name.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations15.py\n\n# %load _solutions/pandas_05_groupby_operations16.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 13</b>:\n\n <ul>\n <li>How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations17.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 14</b>:\n\n <ul>\n <li>What are the 11 most common character names in movie history?</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations18.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 15</b>:\n\n <ul>\n <li>Plot how many roles Brad Pitt has played in each year of his career.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations19.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 16</b>:\n\n <ul>\n <li>What are the 10 most occurring movie titles that start with the words 'The Life'?</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations20.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 17</b>:\n\n <ul>\n <li>Which actors or actresses were most active in the year 2010 (i.e. 
appeared in the most movies)?</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations21.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 18</b>:\n\n <ul>\n <li>Determine how many roles are listed for each of 'The Pink Panther' movies.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations22.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 19</b>:\n\n <ul>\n <li> List, in order by year, each of the movies in which 'Frank Oz' has played more than 1 role.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations23.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 20</b>:\n\n <ul>\n <li> List each of the characters that Frank Oz has portrayed at least twice.</li>\n</ul>\n</div>", "# %load _solutions/pandas_05_groupby_operations24.py", "<div class=\"alert alert-success\">\n\n**EXERCISE 21**\n\nAdd a new column to the `cast` DataFrame that indicates the number of roles for each movie. \n\n<details><summary>Hints</summary>\n\n- [Transformation](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#transformation) returns an object that is indexed the same (same size) as the one being grouped.\n\n</details> \n\n\n</div>", "# %load _solutions/pandas_05_groupby_operations25.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 22</b>:\n\n <ul>\n <li> Calculate the ratio of leading actor and actress roles to the total number of leading roles per decade. 
</li>\n</ul><br>\n\n**Tip**: you can do a groupby twice in two steps, first calculating the numbers, and then the ratios.\n</div>", "# %load _solutions/pandas_05_groupby_operations26.py\n\n# %load _solutions/pandas_05_groupby_operations27.py\n\n# %load _solutions/pandas_05_groupby_operations28.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 23</b>:\n\n <ul>\n <li> In which years were the most films released?</li>\n</ul><br>\n</div>", "# %load _solutions/pandas_05_groupby_operations29.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE 24</b>:\n\n <ul>\n <li>How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s? And in the 2000s?</li>\n</ul><br>\n</div>", "# %load _solutions/pandas_05_groupby_operations30.py\n\n# %load _solutions/pandas_05_groupby_operations31.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
guildai/guild-examples
keras/basic_classification.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.", "Train your first neural network: basic classification\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/basic_classification\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nThis guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand all the details, this is a fast-paced overview of a complete TensorFlow program with the details explained as we go.\nThis guide uses tf.keras, a high-level API to build and train models in TensorFlow.", "# TensorFlow and tf.keras\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# Helper libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)", "Import the Fashion MNIST dataset\nThis guide uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. 
The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:\n<table>\n <tr><td>\n <img src=\"https://tensorflow.org/images/fashion-mnist-sprite.png\"\n alt=\"Fashion MNIST sprite\" width=\"600\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://github.com/zalandoresearch/fashion-mnist\">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp;\n </td></tr>\n</table>\n\nFashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the \"Hello, World\" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc) in an identical format to the articles of clothing we'll use here.\nThis guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code. \nWe will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow, just import and load the data:", "fashion_mnist = keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()", "Loading the dataset returns four NumPy arrays:\n\nThe train_images and train_labels arrays are the training set—the data the model uses to learn.\nThe model is tested against the test set, the test_images, and test_labels arrays.\n\nThe images are 28x28 NumPy arrays, with pixel values ranging between 0 and 255. The labels are an array of integers, ranging from 0 to 9. 
These correspond to the class of clothing the image represents:\n<table>\n <tr>\n <th>Label</th>\n <th>Class</th> \n </tr>\n <tr>\n <td>0</td>\n <td>T-shirt/top</td> \n </tr>\n <tr>\n <td>1</td>\n <td>Trouser</td> \n </tr>\n <tr>\n <td>2</td>\n <td>Pullover</td> \n </tr>\n <tr>\n <td>3</td>\n <td>Dress</td> \n </tr>\n <tr>\n <td>4</td>\n <td>Coat</td> \n </tr>\n <tr>\n <td>5</td>\n <td>Sandal</td> \n </tr>\n <tr>\n <td>6</td>\n <td>Shirt</td> \n </tr>\n <tr>\n <td>7</td>\n <td>Sneaker</td> \n </tr>\n <tr>\n <td>8</td>\n <td>Bag</td> \n </tr>\n <tr>\n <td>9</td>\n <td>Ankle boot</td> \n </tr>\n</table>\n\nEach image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images:", "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']", "Explore the data\nLet's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:", "train_images.shape", "Likewise, there are 60,000 labels in the training set:", "len(train_labels)", "Each label is an integer between 0 and 9:", "train_labels", "There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:", "test_images.shape", "And the test set contains 10,000 image labels:", "len(test_labels)", "Preprocess the data\nThe data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:", "plt.figure()\nplt.imshow(train_images[0])\nplt.colorbar()\nplt.grid(False)", "We scale these values to a range of 0 to 1 before feeding to the neural network model. For this, cast the datatype of the image components from an integer to a float, and divide by 255. 
It's important that the training set and the testing set are preprocessed in the same way:", "train_images = train_images / 255.0\n\ntest_images = test_images / 255.0", "Display the first 25 images from the training set and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network.", "plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[train_labels[i]])", "Build the model\nBuilding the neural network requires configuring the layers of the model, then compiling the model.\nSet up the layers\nThe basic building block of a neural network is the layer. Layers extract representations from the data fed into them. And, hopefully, these representations are more meaningful for the problem at hand.\nMost of deep learning consists of chaining together simple layers. Most layers, like tf.keras.layers.Dense, have parameters that are learned during training.", "model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28, 28)),\n keras.layers.Dense(128, activation=tf.nn.relu),\n keras.layers.Dense(10, activation=tf.nn.softmax)\n])", "The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.\nAfter the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely-connected, or fully-connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer—this returns an array of 10 probability scores that sum to 1. 
Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.\nCompile the model\nBefore the model is ready for training, it needs a few more settings. These are added during the model's compile step:\n\nLoss function —This measures how accurate the model is during training. We want to minimize this function to \"steer\" the model in the right direction.\nOptimizer —This is how the model is updated based on the data it sees and its loss function.\nMetrics —Used to monitor the training and testing steps. The following example uses accuracy, the fraction of the images that are correctly classified.", "model.compile(optimizer=tf.train.AdamOptimizer(), \n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])", "Train the model\nTraining the neural network model requires the following steps:\n\nFeed the training data to the model—in this example, the train_images and train_labels arrays.\nThe model learns to associate images and labels.\nWe ask the model to make predictions about a test set—in this example, the test_images array. We verify that the predictions match the labels from the test_labels array. \n\nTo start training, call the model.fit method—the model is \"fit\" to the training data:", "model.fit(train_images, train_labels, epochs=5)", "As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.\nEvaluate accuracy\nNext, compare how the model performs on the test dataset:", "test_loss, test_acc = model.evaluate(test_images, test_labels)\n\nprint('Test accuracy:', test_acc)", "It turns out, the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of overfitting. Overfitting is when a machine learning model performs worse on new data than on its training data. 
\nMake predictions\nWith the model trained, we can use it to make predictions about some images.", "predictions = model.predict(test_images)", "Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:", "predictions[0]", "A prediction is an array of 10 numbers. These describe the \"confidence\" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value:", "np.argmax(predictions[0])", "So the model is most confident that this image is an ankle boot, or class_names[9]. And we can check the test label to see this is correct:", "test_labels[0]", "We can graph this to look at the full set of 10 channels", "def plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n \n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'blue'\n else:\n color = 'red'\n \n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n\ndef plot_value_array(i, predictions_array, true_label):\n predictions_array, true_label = predictions_array[i], true_label[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n thisplot = plt.bar(range(10), predictions_array, color=\"#777777\")\n plt.ylim([0, 1]) \n predicted_label = np.argmax(predictions_array)\n \n thisplot[predicted_label].set_color('red')\n thisplot[true_label].set_color('blue')", "Let's look at the 0th image, predictions, and prediction array.", "i = 0\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\n\ni = 12\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, 
predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)", "Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident.", "# Plot the first X test images, their predicted label, and the true label\n# Color correct predictions in blue, incorrect predictions in red\nnum_rows = 5\nnum_cols = 3\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n plot_image(i, predictions, test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n plot_value_array(i, predictions, test_labels)\n", "Finally, use the trained model to make a prediction about a single image.", "# Grab an image from the test dataset\nimg = test_images[0]\n\nprint(img.shape)", "tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. So even though we're using a single image, we need to add it to a list:", "# Add the image to a batch where it's the only member.\nimg = (np.expand_dims(img,0))\n\nprint(img.shape)", "Now predict the image:", "predictions_single = model.predict(img)\n\nprint(predictions_single)\n\nplot_value_array(0, predictions_single, test_labels)\n_ = plt.xticks(range(10), class_names, rotation=45)", "model.predict returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch:", "np.argmax(predictions_single[0])", "And, as before, the model predicts a label of 9." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
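The prediction-decoding step in the Fashion-MNIST notebook above (taking `np.argmax` over the ten softmax scores) can be sketched without TensorFlow. Everything below is an illustrative stand-in: the `decode_predictions` helper and the made-up score rows are assumptions for this sketch, not part of the original notebook.

```python
import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

def decode_predictions(predictions, names):
    # One row of scores per image; argmax picks the most confident class.
    labels = np.argmax(predictions, axis=1)
    return [(int(i), names[i], float(row[i]))
            for i, row in zip(labels, predictions)]

# Two made-up "softmax" rows (each sums to 1), standing in for model.predict output.
preds = np.array([
    [0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02, 0.91],
    [0.80, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.03, 0.03],
])
decoded = decode_predictions(preds, class_names)
print(decoded[0])  # (9, 'Ankle boot', 0.91)
```

This mirrors the notebook's `np.argmax(predictions[0])` call, just with the class-name lookup and confidence bundled into one place.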
CortanaAnalyticsLabs/CortanaAnalyticsLabs
PerceptualIntelligence/Cortana Analytics Lab - Face API.ipynb
mit
[ "Perceptual Intelligence in Cortana Analytics\nExploring the Microsoft Project Oxford Face API\nFace API site\nFace API reference\n<img src=\"https://raw.githubusercontent.com/deldersveld/CortanaAnalyticsLabs/master/PerceptualIntelligence/Images/ThumbsUp.png\" width=\"200\">\nIn this activity, you will use Python to explore various methods available from the Face API. \nYou will detect a face in an image and crop the image so only the face appears. \nYou will then detect multiple faces in a second image and similarly crop each face.\nAfter getting exposure to detecting faces, you will then create a face list and add five distinct faces to the list.\nFinally, you will use your initial image to find similar faces from your face list.\nThis activity uses the following API methods:\n * Detect\n * Create a Face List\n * Get a Face List\n * Delete a Face List (optional)\n * Add a Face to a Face List\n * Find Similar\nAll images are in the public domain.\nStep 1: Enter your Face API key\nSubstitute the value of faceApiSubscriptionKey with your own API key. 
\nReplace [Face API Primary Key] but leave the quotation marks around your key.\nTo obtain your Face API key or sign up for the API, visit the Subscription page.\nWhen ready, run the cell, which imports required libraries and sets initial variables.", "import httplib, urllib, base64, json\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\n%matplotlib inline\n\nfaceApiSubscriptionKey = \"[Face API Primary Key]\"\n\nbaseFaceUrl = \"https://raw.githubusercontent.com/CortanaAnalyticsLabs/\\\nCortanaAnalyticsLabs/master/PerceptualIntelligence/Images/\"\n\nthumbsUp = baseFaceUrl + \"ThumbsUp.png\"\nmultipleFaces = baseFaceUrl + \"MultipleFaces.png\"\nfaceA = baseFaceUrl + \"FaceA.png\"\nfaceB = baseFaceUrl + \"FaceB.png\"\nfaceC = baseFaceUrl + \"FaceC.png\"\nfaceD = baseFaceUrl + \"FaceD.png\"\nfaceE = baseFaceUrl + \"FaceE.png\"\n\nprint(\"Initialization complete\")", "Step 2: Define functions\nThe following five functions allow you to pass your API key and other relevant parameters based on the API reference.\nRun the cell to create the various functions.\n * detect(apiKey, imageUrl)\n * createFaceList(apiKey, faceListId)\n * getFaceList(apiKey, faceListId)\n * deleteFaceList(apiKey, faceListId)\n * addFaceToFaceList(apiKey, faceListId, imageUrl)\n * findSimilars(apiKey, faceListId, faceId, numberOfCandidates)", "def detect(apiKey, imageUrl):\n jsonBody = '{\"url\":\"' + imageUrl + '\"}'\n\n headers = {\n 'Content-Type': 'application/json',\n 'Ocp-Apim-Subscription-Key': apiKey,\n }\n\n params = urllib.urlencode({\n 'returnFaceId': 'true',\n 'returnFaceLandmarks': 'false',\n 'returnFaceAttributes': 'age,gender',\n })\n\n try:\n conn = httplib.HTTPSConnection('api.projectoxford.ai')\n conn.request(\"POST\", \"/face/v1.0/detect?%s\" % params, jsonBody, headers)\n response = conn.getresponse()\n data = response.read()\n print(data)\n conn.close()\n except Exception as e:\n print(\"[Errno {0}] {1}\".format(e.errno, e.strerror))\n \n 
return data\n\ndef createFaceList(apiKey, faceListId):\n faceList = faceListId\n jsonBody = '{\"name\":\"' + faceList + '\", \"userData\":\"User-provided data attached to the face list\"}'\n \n headers = {\n 'Content-Type': 'application/json',\n 'Ocp-Apim-Subscription-Key': apiKey,\n }\n\n params = urllib.urlencode({\n })\n\n try:\n conn = httplib.HTTPSConnection('api.projectoxford.ai')\n conn.request(\"PUT\", \"/face/v1.0/facelists/\" + faceList + \"?%s\" % params, jsonBody, headers)\n response = conn.getresponse()\n data = response.read()\n print(data)\n conn.close()\n except Exception as e:\n print(\"[Errno {0}] {1}\".format(e.errno, e.strerror))\n \ndef getFaceList(apiKey, faceListId):\n faceList = faceListId\n jsonBody = ''\n \n headers = {\n 'Ocp-Apim-Subscription-Key': apiKey,\n }\n\n params = urllib.urlencode({\n })\n\n try:\n conn = httplib.HTTPSConnection('api.projectoxford.ai')\n conn.request(\"GET\", \"/face/v1.0/facelists/\" + faceList + \"?%s\" % params, jsonBody, headers)\n response = conn.getresponse()\n data = response.read()\n print(data)\n conn.close()\n except Exception as e:\n print(\"[Errno {0}] {1}\".format(e.errno, e.strerror))\n \ndef deleteFaceList(apiKey, faceListId):\n faceList = faceListId\n jsonBody = ''\n \n headers = {\n 'Ocp-Apim-Subscription-Key': apiKey,\n }\n\n params = urllib.urlencode({\n })\n\n try:\n conn = httplib.HTTPSConnection('api.projectoxford.ai')\n conn.request(\"DELETE\", \"/face/v1.0/facelists/\" + faceList + \"%s\" % params, jsonBody, headers)\n response = conn.getresponse()\n data = response.read()\n print(data)\n conn.close()\n except Exception as e:\n print(\"[Errno {0}] {1}\".format(e.errno, e.strerror))\n \ndef addFaceToFaceList(apiKey, faceListId, imageUrl):\n faceList = faceListId\n jsonBody = '{\"url\":\"' + imageUrl + '\"}'\n \n headers = {\n 'Content-Type': 'application/json',\n 'Ocp-Apim-Subscription-Key': apiKey,\n }\n\n params = urllib.urlencode({\n })\n\n try:\n conn = 
httplib.HTTPSConnection('api.projectoxford.ai')\n conn.request(\"POST\", \"/face/v1.0/facelists/\" + faceList + \"/persistedFaces?%s\" % params, jsonBody, headers)\n response = conn.getresponse()\n data = response.read()\n print(data)\n conn.close()\n except Exception as e:\n print(\"[Errno {0}] {1}\".format(e.errno, e.strerror))\n \ndef findSimilars(apiKey, faceListId, faceId, numberOfCandidates):\n jsonBody = '{\"faceId\":\"' + faceId + '\", \\\n \"faceListId\":\"' + faceListId + '\", \\\n \"maxNumOfCandidatesReturned\":' + numberOfCandidates + '}'\n headers = {\n 'Content-Type': 'application/json',\n 'Ocp-Apim-Subscription-Key': apiKey,\n }\n\n params = urllib.urlencode({\n })\n\n try:\n conn = httplib.HTTPSConnection('api.projectoxford.ai')\n conn.request(\"POST\", \"/face/v1.0/findsimilars?%s\" % params, jsonBody, headers)\n response = conn.getresponse()\n data = response.read()\n print(data)\n conn.close()\n except Exception as e:\n print(\"[Errno {0}] {1}\".format(e.errno, e.strerror))\n \nprint(\"API functions created\")", "Step 3: Display the initial image\nRun the following cell to display the image of a person. The image also displays pixels along the two axes.", "img = mpimg.imread(thumbsUp)\nplt.imshow(img)", "Step 4: Detect the face on the initial image\nRun the following cell to call the detect method, which returns a face box for the image. Note the values of \"top\" and \"left\" in \"faceRectangle\" and compare that point to the axes on the original image. The face rectangle then defines a box using the appropriate \"width\" and \"height\" values starting from that point. 
In addition, gender and age are displayed as attributes.", "thumbsUpData = detect(faceApiSubscriptionKey, thumbsUp)", "Step 5: Crop the face\nRun the following cell to take the JSON returned by the API and display only the faceRectangle", "face = json.loads(thumbsUpData)\nthumbsUpFaceId = face[0][\"faceId\"]\nfaceTop = face[0][\"faceRectangle\"][\"top\"]\nfaceLeft = face[0][\"faceRectangle\"][\"left\"]\nfaceWidth = face[0][\"faceRectangle\"][\"width\"]\nfaceHeight = face[0][\"faceRectangle\"][\"height\"]\n\nimg = mpimg.imread(thumbsUp)\nplt.imshow(img[faceTop:faceTop + faceHeight, faceLeft:faceLeft + faceWidth])", "Step 6: Detect multiple faces in an image\nRun the following cells in sequence to display an image with multiple faces, call the API to detect faces, and display the results. Note that while there are three people in the image, the API only returns two faces due to the side-facing orientation of the third person.", "img = mpimg.imread(multipleFaces)\nplt.imshow(img)\n\nmultipleFacesData = detect(faceApiSubscriptionKey, multipleFaces)\n\nface = json.loads(multipleFacesData)\nfaceTop = face[0][\"faceRectangle\"][\"top\"]\nfaceLeft = face[0][\"faceRectangle\"][\"left\"]\nfaceWidth = face[0][\"faceRectangle\"][\"width\"]\nfaceHeight = face[0][\"faceRectangle\"][\"height\"]\n\nimg = mpimg.imread(multipleFaces)\nplt.imshow(img[faceTop:faceTop + faceHeight, faceLeft:faceLeft + faceWidth])\n\nface = json.loads(multipleFacesData)\nfaceTop = face[1][\"faceRectangle\"][\"top\"]\nfaceLeft = face[1][\"faceRectangle\"][\"left\"]\nfaceWidth = face[1][\"faceRectangle\"][\"width\"]\nfaceHeight = face[1][\"faceRectangle\"][\"height\"]\n\nimg = mpimg.imread(multipleFaces)\nplt.imshow(img[faceTop:faceTop + faceHeight, faceLeft:faceLeft + faceWidth])", "Step 7: Create a Face List\nRun the following cell to create a new face list called \"sample\", then show that the list is empty. 
\nA Face List is simply a collection of faces that remain unidentified and are referenced by Id. \nThe API also has the ability to create a Person List for known people and call methods to identify new face images based on that list.\nNote that there is also a delete function that is commented out in case you would like to delete and re-create your face list at a later time.", "#deleteFaceList(faceApiSubscriptionKey, \"sample\")\ncreateFaceList(faceApiSubscriptionKey, \"sample\")\ngetFaceList(faceApiSubscriptionKey, \"sample\")", "Step 8: Add faces to your Face List\nRun the following five cells in sequence to display images of five distinct people and load their face information into your face list. Your faces are referenced in the list using \"persistedFaceId\".", "img = mpimg.imread(faceA)\nplt.imshow(img)\nrawFaceDataA = detect(faceApiSubscriptionKey, faceA)\naddFaceToFaceList(faceApiSubscriptionKey, \"sample\", faceA)\n\nimg = mpimg.imread(faceB)\nplt.imshow(img)\nrawFaceDataB = detect(faceApiSubscriptionKey, faceB)\naddFaceToFaceList(faceApiSubscriptionKey, \"sample\", faceB)\n\nimg = mpimg.imread(faceC)\nplt.imshow(img)\nrawFaceDataC = detect(faceApiSubscriptionKey, faceC)\nfaceDataC = json.loads(rawFaceDataC)\naddFaceToFaceList(faceApiSubscriptionKey, \"sample\", faceC)\n\nimg = mpimg.imread(faceD)\nplt.imshow(img)\nrawFaceDataD = detect(faceApiSubscriptionKey, faceD)\naddFaceToFaceList(faceApiSubscriptionKey, \"sample\", faceD)\n\nimg = mpimg.imread(faceE)\nplt.imshow(img)\nrawFaceDataE = detect(faceApiSubscriptionKey, faceE)\naddFaceToFaceList(faceApiSubscriptionKey, \"sample\", faceE)", "Step 9: Display contents of your Face List\nRun the following cell to display the contents of your recently populated face list. 
\nWhen you first created the list and called this function, it was empty.\nYou should now see the \"persistedFaceId\" values for the five faces that you added to your list.", "getFaceList(faceApiSubscriptionKey, \"sample\")", "Step 10: Pass a sample image and find similar faces in your Face List\nRun the following cell to compare the initial face of the \"thumbs up woman\" from Step 3 with the five faces in your face list.\nOne result should display with a reasonably high confidence value.\nThe \"thumbs up woman\" and \"woman with apple\" are the same person.\n<img src=\"https://raw.githubusercontent.com/deldersveld/CortanaAnalyticsLabs/master/PerceptualIntelligence/Images/ThumbsUp.png\" width=\"200\">\n<img src=\"https://raw.githubusercontent.com/deldersveld/CortanaAnalyticsLabs/master/PerceptualIntelligence/Images/FaceA.png\" width=\"300\">", "findSimilars(faceApiSubscriptionKey, \"sample\", thumbsUpFaceId, \"5\")", "Conclusion\nYou have completed the Face API lab activity. \nIf you would like more detail about additional capabilities, visit the Face API reference page." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
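The face-cropping logic in Steps 5 and 6 of the Face API notebook above reduces to slicing the image array with the `faceRectangle` values. The sketch below is a Python 3 illustration of that slice; the `crop_face` helper, the synthetic image, and the rectangle values are assumptions for this example, not part of the notebook or the Face API client.

```python
import numpy as np

def crop_face(img, face_rectangle):
    # faceRectangle, as returned by the detect call, gives top/left/width/height.
    top, left = face_rectangle["top"], face_rectangle["left"]
    h, w = face_rectangle["height"], face_rectangle["width"]
    return img[top:top + h, left:left + w]

# A fake 100x100 "image" and a fake rectangle standing in for an API response.
img = np.arange(100 * 100).reshape(100, 100)
rect = {"top": 10, "left": 20, "width": 30, "height": 40}
face = crop_face(img, rect)
print(face.shape)  # (40, 30)
```

The notebook inlines exactly this arithmetic (`img[faceTop:faceTop + faceHeight, faceLeft:faceLeft + faceWidth]`); wrapping it in a helper just avoids repeating it for each detected face.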
jldinh/multicell
examples/01 - Creating a simple tissue.ipynb
mit
[ "In this example, we will show how to create a very simple tissue structure comprised of cubic cells and visualize it using Multicell.\nPreparation\nVisualizations rely on the matplotlib module. In order for visualizations to work interactively in this Jupyter notebook, we need to run the following command.", "%matplotlib notebook", "Imports\nIn Python, it is common practice to import the modules we will be using at the beginning of the script. To create a simple virtual tissue, we will need the Multicell module.", "import multicell", "We then need to define the problem\nProblem definition\nTissue structure\nThe simulation_builder module can be used to create a virtual tissue, e.g. in this case a cubic cell grid.\nAs the grid is regular, aside from a small amount of random noise used to break symmetry, all cells have identical sizes measured in arbitrary units, with sides 1 arb. unit-long. The neighbors of a cell are cells that are in direct contact. In this simple case, as we have a single layer of identical and aligned cells, each cell will have between two and four neighbors, depending on its position in the grid. The contact surface of any two adjacent cells will always be equal to $1 (arb. unit)^2$. All cells also have a volume of $1 (arb. unit)^3$ (Fig. 1).\n<img src=\"img/fig1.png\" />\n<center>Figure 1: Structure of a grid tissue comprised of cubic cells. (a) Each cell is a cube whose sides are 1 arb. unit long. 
(b) Depending on its position within the tissue, the red cell can have 2, 3 or 4 neighbors (in green).</center>\nAs grid tissues are convenient to quickly build prototype models, multicell provides a function, generate_cell_grid_sim that automatically prepares simulations based on grid tissues.", "sim = multicell.generate_cell_grid_sim(x=20,y=20, z=1, noise_amplitude=1e-3)", "The usage of this function (and others) can be checked out in its documentation.", "help(multicell.generate_cell_grid_sim)", "Behind the scenes, this function performs several operations. Knowing what they are is not necessary for the series of examples presented in this chapter, but would be useful, should Multicell be used to run simulations in custom tissue structures. First a simulation object is created.", "sim = multicell.Simulation()", "Then a grid tissue is created using a function provided by Virtual Plants. It contains topological information (information about which vertices are connected by edges, which edges form faces and which faces form cells). This is stored in the mesh variable. It also contains information about the positions of all vertices. This is stored in the pos variable.", "from openalea.tissueshape import tovec, grid_tissue\ntissuedb = grid_tissue.regular_grid((20, 20, 1))\nmesh = tissuedb.get_topology(\"mesh_id\")\npos = tovec(tissuedb.get_property(\"position\"))", "Vertex positions are then modified. The tissue is centered in space (this is not necessary) and a small noise is applied (this is important when symmetry needs to be broken, e.g. 
in the division algorithm we will use in a later example).", "import numpy as np\nbary = reduce(lambda x,y: x + y,pos.itervalues() ) / len(pos)\npos = dict((pid,vec - bary + np.random.uniform(-5e-4, 5e-4, 3)) for pid,vec in pos.iteritems())", "Finally, mesh and pos are imported into the simulation.", "sim.import_topomesh(mesh, pos)", "In this example, mesh and pos correspond to a regular grid, but they could represent any tissue structure, as long as it is defined in the correct format (OpenAlea TopoMesh).\nVisualization\nTo display the tissue we just created, we register a renderer. A renderer is an object with a display() method, whose purpose is to display a graphical representation of the simulation. A renderer is registered by passing the class of the renderer to the register_renderer method of our simulation object. The additional arguments are the name of the variable to plot on the tissue (if any), and a dictionary of arguments for the renderer class. Here, we supply (optional) arguments to MatplotlibRenderer: view defines the default angle of the 3D visualization (a top view in this case), and axes determines whether axes should be visible or not.", "sim.register_renderer(multicell.rendering.MatplotlibRenderer, None, {\"view\": (90, -90), \"axes\":False})\nsim.renderer.display()", "This single-layer cell grid will be our starting point for all the examples of this chapter.\nOther types of virtual tissues could be created (e.g.: irregular grids, 3D grids with non planar geometries...) and imported into a simulation object. They simply need to be defined as an OpenAlea Topomesh object." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
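The vertex-centering step in the multicell notebook above (subtract the barycenter from every position, then add a small uniform jitter to break symmetry) can be restated in Python 3 NumPy. The `center_with_noise` helper and the toy positions below are assumptions for illustration; this is a sketch of the idea, not the openalea or multicell API.

```python
import numpy as np

def center_with_noise(pos, amplitude=5e-4, seed=0):
    # pos maps point ids to (x, y, z); subtract the barycenter, add jitter.
    rng = np.random.default_rng(seed)
    pids = list(pos)
    pts = np.array([pos[pid] for pid in pids], dtype=float)
    bary = pts.mean(axis=0)
    jitter = rng.uniform(-amplitude, amplitude, pts.shape)
    return {pid: vec for pid, vec in zip(pids, pts - bary + jitter)}

pos = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
centered = center_with_noise(pos)
# The new barycenter is (0, 0, 0) up to the noise amplitude.
```

The jitter matters later: the division algorithm used in a subsequent example needs the symmetry of the regular grid broken, which is exactly what the `np.random.uniform(-5e-4, 5e-4, 3)` term in the notebook does.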
agile-geoscience/notebooks
Fastest_dimension_of_array.ipynb
apache-2.0
[ "Which is the fastest axis of an array?\nI'd like to know: which axes of a NumPy array are fastest to access?", "import numpy as np\n\n%matplotlib inline\nimport matplotlib.pyplot as plt", "A tiny example", "a = np.arange(9).reshape(3, 3)\na\n\n' '.join(str(i) for i in a.ravel(order='C'))\n\n' '.join(str(i) for i in a.ravel(order='F'))", "A seismic volume", "volume = np.load('data/F3_volume_3x3_16bit.npy')\n\nvolume.shape", "Let's look at how the indices vary:", "idx = np.indices(volume.shape)\n\nidx.shape", "We can't easily look at the indices for 190 &times; 190 &times; 190 samples (6 859 000 samples). So let's look at a small subset: 5 &times; 5 &times; 5 = 125 samples. We can make a plot of how the indices vary in each direction. For C-ordering, the indices on axis 0 vary slowly: they start at 0 and stay at 0 for 25 samples; then they increment by one. So if we ask for all the data for which axis 0 has index 2 (say), the computer just has to retrieve a contiguous chunk of memory and it gets all the samples. 
\nOn the other hand, if we ask for all the samples for which axis 2 has index 2, we have to retrieve non-contiguous samples from memory, effectively opening a lot of memory 'drawers' and taking one pair of socks out of each one.", "from matplotlib.font_manager import FontProperties\n\nannot = ['data[2, :, :]', 'data[:, 2, :]', 'data[:, :, 2]']\nmono = FontProperties()\nmono.set_family('monospace')\n\nfig, axs = plt.subplots(ncols=3, figsize=(15,3), facecolor='w')\n\nfor i, ax in enumerate(axs):\n data = idx[i, :5, :5, :5].ravel(order='C')\n ax.plot(data, c=f'C{i}')\n ax.scatter(np.where(data==2), data[data==2], color='r', s=10, zorder=10)\n ax.text(65, 4.3, f'axis {i}', color=f'C{i}', size=15, ha='center')\n ax.text(65, -0.7, annot[i], color='red', size=12, ha='center', fontproperties=mono)\n ax.set_ylim(-1, 5)\n_ = plt.suptitle(\"C order\", size=18)\nplt.savefig('/home/matt/Pictures/3d-array-corder.png')\n\nfig, axs = plt.subplots(ncols=3, figsize=(15,3), facecolor='w')\n\nfor i, ax in enumerate(axs):\n data = idx[i, :5, :5, :5].ravel(order='F')\n ax.plot(data, c=f'C{i}')\n ax.scatter(np.where(data==2), data[data==2], color='r', s=10, zorder=10)\n ax.text(65, 4.3, f'axis {i}', color=f'C{i}', size=15, ha='center')\n ax.text(65, -0.7, annot[i], color='red', size=12, ha='center', fontproperties=mono)\n ax.set_ylim(-1, 5)\n_ = plt.suptitle(\"Fortran order\", size=18)\nplt.savefig('/home/matt/Pictures/3d-array-forder.png')", "At the risk of making it more confusing, it might help to look at the plots together. Shown here is the C ordering:", "plt.figure(figsize=(15,3))\nplt.plot(idx[0, :5, :5, :5].ravel(), zorder=10)\nplt.plot(idx[1, :5, :5, :5].ravel(), zorder=9)\nplt.plot(idx[2, :5, :5, :5].ravel(), zorder=8)", "This organization is reflected in ndarray.strides, which tells us how many bytes must be traversed to get to the next index in each axis. 
Each 2-byte step through memory gets me to the next index in axis 2, but I must stride 72200 bytes to get to the next index of axis 0:", "volume.strides", "Aside: figure for blog post", "fig, axs = plt.subplots(ncols=2, figsize=(10,3), facecolor='w')\n\nfor i, ax in enumerate(axs):\n data = idx[i, :3, :3, 0].ravel(order='C')\n ax.plot(data, 'o-', c='gray')\n ax.text(0, 1.8, f'axis {i}', color='gray', size=15, ha='left')\nplt.savefig('/home/matt/Pictures/2d-array-corder.png')", "Accessing the seismic data\nLet's make all the dimensions the same, to avoid having to slice later. I'll make a copy, otherwise we'll have a view of the original array. \nAlternatively, change the shape here to see the effect of small dimensions, e.g. try volume = volume[:10, :290, :290] with C ordering.", "volume = volume[:190, :190, :190].copy()\n\ndef get_slice_3d(volume, x, axis, n=None):\n \"\"\"\n Naive function... but only works on 3 dimensions.\n NB Using ellipses slows down last axis.\n \"\"\"\n # Force cube shape\n if n is None and not np.sum(np.diff(volume.shape)):\n n = np.min(volume.shape)\n if axis == 0:\n data = volume[x, :n, :n]\n if axis == 1:\n data = volume[:n, x, :n]\n if axis == 2:\n data = volume[:n, :n, x]\n return data + 1\n\n%timeit get_slice_3d(volume, 150, axis=0)\n%timeit get_slice_3d(volume, 150, axis=1)\n%timeit get_slice_3d(volume, 150, axis=2)", "Let's check that changing the memory layout to Fortran ordering makes the last dimension fastest:", "volumef = np.asfortranarray(volume)\n\n%timeit get_slice_3d(volumef, 150, axis=0)\n%timeit get_slice_3d(volumef, 150, axis=1)\n%timeit get_slice_3d(volumef, 150, axis=2)", "Axes 0 and 1 are > 10 times faster than axis 2.\nWhat about if we do something like take a Fourier transform over the first axis?", "from scipy.signal import welch\n\n%timeit s = [welch(tr, fs=500) for tr in volume[:, 10]]\n\n%timeit s = [welch(tr, fs=500) for tr in volumef[:, 10]]", "No practical difference. 
Hm.\nI'm guessing this is because the DFT takes way longer than the data access.", "del(volume)\ndel(volumef)", "Fake data in n dimensions\nLet's make a function to generate random data in any number of dimensions.\nBe careful: these volumes get big really quickly!", "def makend(n, s, equal=True, rev=False, fortran=False):\n \"\"\"\n Make an n-dimensional hypercube of randoms.\n \"\"\"\n if equal:\n incr = np.zeros(n, dtype=int)\n elif rev:\n incr = list(reversed(np.arange(n)))\n else:\n incr = np.arange(n)\n shape = incr + np.ones(n, dtype=int)*s\n a = np.random.random(shape)\n m = f\"Shape: {tuple(shape)} \"\n m += f\"Memory: {a.nbytes/1e6:.0f}MB \"\n m += f\"Order: {'F' if fortran else 'C'}\"\n print (m)\n if fortran:\n return np.asfortranarray(a)\n else:\n return a", "I tried implementing this as a context manager, so you wouldn't have to delete the volume each time after using it. I tried the @contextmanager decorator, and I tried making a class with __enter__() and __exit__() methods. Each time, I tried putting the del command as part of the exit routine. They both worked fine... except they did not delete the volume from memory. \n2D data", "def get_slice_2d(volume, x, axis, n=None):\n \"\"\"\n Naive function... 
but only works on 2 dimensions.\n \"\"\"\n if n is None and not np.sum(np.diff(volume.shape)):\n n = np.min(volume.shape)\n if axis == 0:\n data = volume[x, :n]\n if axis == 1:\n data = volume[:n, x]\n return data + 1\n\ndim = 2\n\nv = makend(dim, 6000, fortran=False)\nfor n in range(dim):\n %timeit get_slice_2d(v, 3001, n)\ndel v\n\ndim = 2\n\nv = makend(dim, 6000, fortran=True)\nfor n in range(dim):\n %timeit get_slice_2d(v, 3001, n)\ndel v", "This has been between 3.3 and 12 times faster.\n1D convolution on an array", "def convolve(data, kernel=np.arange(10), axis=0):\n func = lambda tr: np.convolve(tr, kernel, mode='same')\n return np.apply_along_axis(func, axis=axis, arr=data)\n\ndim = 2\n\nv = makend(dim, 6000, fortran=False)\n%timeit convolve(v, axis=0)\n%timeit convolve(v, axis=1)\ndel v\n\ndim = 2\n\nv = makend(dim, 6000, fortran=True)\n%timeit convolve(v, axis=0)\n%timeit convolve(v, axis=1)\ndel v", "Speed is double on fast axis, i.e. second axis on default C order.\nnp.mean() across axes\nLet's try taking averages across different axes. 
In C order it should be faster to get the mean on axis=1 because that involves getting the rows:", "a = [[ 2,  4],\n     [10, 20]]\n\nnp.mean(a, axis=0), np.mean(a, axis=1)", "Let's see how this looks on our data:", "dim = 2\n\nv = makend(dim, 6000, fortran=False)\n%timeit np.mean(v, axis=0)\n%timeit np.mean(v, axis=1)\ndel v\n\ndim = 2\n\nv = makend(dim, 6000, fortran=True)\n%timeit np.mean(v, axis=0)\n%timeit np.mean(v, axis=1)\ndel v", "We'd expect the difference to be even more dramatic with median because it has to sort every row or column:", "v = makend(dim, 6000, fortran=False)\n%timeit np.median(v, axis=0)\n%timeit np.median(v, axis=1)\ndel v\n\nv = makend(dim, 6000, fortran=False)\n%timeit v.mean(axis=0)\n%timeit v.mean(axis=1)\ndel v", "3D arrays\nIn a nutshell:\nC order: first axis is fastest, last axis is slowest; factor of two between others.\nFortran order: last axis is fastest, first axis is slowest; factor of two between others.", "dim = 3\n\nv = makend(dim, 600)\nfor n in range(dim):\n    %timeit get_slice_3d(v, 201, n)\ndel v", "Non-equal axes don't matter.", "dim = 3\n\nv = makend(dim, 600, equal=False, rev=True)\nfor n in range(dim):\n    %timeit get_slice_3d(v, 201, n)\ndel v", "Fortran order results in a fast last axis, as expected. But the middle axis is pretty fast too.", "dim = 3\n\nv = makend(dim, 600, fortran=True)\nfor n in range(dim):\n    %timeit get_slice_3d(v, 201, n)\ndel v", "For C ordering, the last dimension is more than 20x slower than the other two.\n4 dimensions\nAxes 0 and 1 are fast (for C ordering), axis 2 is half speed, axis 3 is ca. 15 times slower than fast axis.", "def get_slice_4d(volume, x, axis, n=None):\n    \"\"\"\n    Naive function... 
but only works on 4 dimensions.\n \"\"\"\n if n is None and not np.sum(np.diff(volume.shape)):\n n = np.min(volume.shape)\n if axis == 0:\n data = volume[x, :n, :n, :n]\n if axis == 1:\n data = volume[:n, x, :n, :n]\n if axis == 2:\n data = volume[:n, :n, x, :n]\n if axis == 3:\n data = volume[:n, :n, :n, x]\n return data + 1\n\ndim = 4\n\nv = makend(dim, 100, equal=True)\nfor n in range(dim):\n %timeit get_slice_4d(v, 51, n)\ndel v\n\ndim = 4\n\nv = makend(dim, 100, equal=True, fortran=True)\nfor n in range(dim):\n %timeit get_slice_4d(v, 51, n)\ndel v", "5 dimensions\nWe are taking 4-dimensional hyperplanes from a 5-dimensional hypercube. \nAxes 0 and 1 are fast, axis 2 is half speed, axis 3 is quarter speed, and the last axis is about 5x slower than that.", "def get_slice_5d(volume, x, axis, n=None):\n \"\"\"\n Naive function... but only works on 5 dimensions.\n \"\"\"\n if n is None and not np.sum(np.diff(volume.shape)):\n n = np.min(volume.shape)\n if axis == 0:\n data = volume[x, :n, :n, :n, :n]\n if axis == 1:\n data = volume[:n, x, :n, :n, :n]\n if axis == 2:\n data = volume[:n, :n, x, :n, :n]\n if axis == 3:\n data = volume[:n, :n, :n, x, :n]\n if axis == 4:\n data = volume[:n, :n, :n, :n, x]\n return data + 1\n\ndim = 5\n\nv = makend(dim, 40)\nfor n in range(dim):\n %timeit get_slice_5d(v, 21, n)\ndel v\n\ndim = 5\n\nv = makend(dim, 40, fortran=True)\nfor n in range(dim):\n %timeit get_slice_5d(v, 21, n)\ndel v", "What about when we're doing something like getting the mean on an array?", "dim = 5\n\nv = makend(dim, 40, fortran=True)\nfor n in range(dim):\n %timeit np.mean(v, axis=n)\ndel v", "6 dimensions and beyond\nIn general, first n/2 dimensions are fast, then gets slower until last dimension is several (5-ish) times slower than the first.", "def get_slice_6d(volume, x, axis, n=None):\n \"\"\"\n Naive function... 
but only works on 6 dimensions.\n \"\"\"\n if n is None and not np.sum(np.diff(volume.shape)):\n n = np.min(volume.shape)\n if axis == 0:\n data = volume[x, :n, :n, :n, :n, :n]\n if axis == 1:\n data = volume[:n, x, :n, :n, :n, :n]\n if axis == 2:\n data = volume[:n, :n, x, :n, :n, :n]\n if axis == 3:\n data = volume[:n, :n, :n, x, :n, :n]\n if axis == 4:\n data = volume[:n, :n, :n, :n, x, :n]\n if axis == 5:\n data = volume[:n, :n, :n, :n, :n, x]\n return data + 1\n\ndim = 6\n\nv = makend(dim, 23)\nfor n in range(dim):\n %timeit get_slice_6d(v, 12, n)\ndel v" ]
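A practical footnote to all these timings (a sketch under my own assumption about the access pattern, not part of the benchmarks above): if you know which axis you will slice most often, convert the memory layout once up front rather than paying the slow-axis penalty on every access.

```python
import numpy as np

v = np.random.random((50, 50, 50))     # C-ordered by default
print(v.flags['C_CONTIGUOUS'])         # True

# Pay the copy cost once if the *last* axis will be sliced most often...
vf = np.asfortranarray(v)
print(vf.flags['F_CONTIGUOUS'])        # True

# ...the values are identical; only the memory layout differs.
print(np.array_equal(v, vf))           # True
```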
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/inpe/cmip6/models/sandbox-3/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: INPE\nSource ID: SANDBOX-3\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:07\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inpe', 'sandbox-3', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. 
Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. 
Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. 
Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. 
Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. 
Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. 
Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/explainable_ai/sdk_custom_tabular_regression_batch_explain.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex SDK: Custom training tabular regression model for batch prediction with explainability\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb\">\n Open in Vertex AI Workbench\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for batch prediction with explanation.\nDataset\nThe dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. 
The trained model predicts the median price of a house in units of 1K USD.\nObjective\nIn this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a batch prediction with explanations on the uploaded model. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.\nThe steps performed include:\n\nCreate a Vertex custom job for training a model.\nTrain the TensorFlow model.\nRetrieve and load the model artifacts.\nView the model evaluation.\nSet explanation parameters.\nUpload the model as a Vertex Model resource.\nMake a batch prediction with explanations.\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. 
The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3.\n\n\nActivate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.\n\n\nRun jupyter notebook on the command line in a terminal shell to launch Jupyter.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.getenv(\"IS_TESTING\"):\n ! pip3 install --upgrade tensorflow $USER_FLAG", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. 
Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. 
Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. 
Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aip", "Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.", "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)", "Set hardware accelerators\nYou can set hardware accelerators for training and prediction.\nSet the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nOtherwise specify (None, None) to use a container image to run on a CPU.\nLearn more about hardware accelerator support for your region\nNote: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue, caused by static graph ops generated in the serving function, and is fixed in TF 2.3. 
If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.", "if os.getenv(\"IS_TESTING_TRAIN_GPU\"):\n TRAIN_GPU, TRAIN_NGPU = (\n aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_TRAIN_GPU\")),\n )\nelse:\n TRAIN_GPU, TRAIN_NGPU = (None, None)\n\nif os.getenv(\"IS_TESTING_DEPLOY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPLOY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (None, None)", "Set pre-built containers\nSet the pre-built Docker container image for training and prediction.\nFor the latest list, see Pre-built containers for training.\nFor the latest list, see Pre-built containers for prediction.", "if os.getenv(\"IS_TESTING_TF\"):\n TF = os.getenv(\"IS_TESTING_TF\")\nelse:\n TF = \"2-1\"\n\nif TF[0] == \"2\":\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\nelse:\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)", "Set machine type\nNext, set the machine type to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU.\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: 
number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.", "if os.getenv(\"IS_TESTING_TRAIN_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_TRAIN_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nif os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)", "Tutorial\nNow you are ready to start creating your own custom model and training for Boston Housing.\nExamine the training package\nPackage layout\nBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.\n\nPKG-INFO\nREADME.md\nsetup.cfg\nsetup.py\ntrainer\n__init__.py\ntask.py\n\nThe files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.\nThe file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).\nPackage Assembly\nIn the following cells, you will assemble the training package.", "# Make folder for Python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\ntag_build =\\n\\ntag_date = 0\"\n! 
echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\nsetuptools.setup(\\n\\n install_requires=[\\n\\n 'tensorflow_datasets==1.3.0',\\n\\n ],\\n\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\nName: Boston Housing tabular regression\\n\\nVersion: 0.0.0\\n\\nSummary: Demonstration training script\\n\\nHome-page: www.google.com\\n\\nAuthor: Google\\n\\nAuthor-email: aferlitsch@google.com\\n\\nLicense: Public\\n\\nDescription: Demo\\n\\nPlatform: Vertex\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! touch custom/trainer/__init__.py", "Task.py contents\nIn the next cell, you write the contents of the training script task.py. I won't go into detail, it's just there for you to browse. In summary:\n\nGet the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.\nLoads Boston Housing dataset from TF.Keras builtin datasets\nBuilds a simple deep neural network model using TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs specified by args.epochs.\nSaves the trained model (save(args.model_dir)) to the specified model directory.\nSaves the maximum value for each feature f.write(str(params)) to the specified parameters file.", "%%writefile custom/trainer/task.py\n# Single, Mirror and Multi-Machine Distributed Training for Boston Housing\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport numpy as np\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model 
dir.')\nparser.add_argument('--lr', dest='lr',\n default=0.001, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=20, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=100, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nparser.add_argument('--param-file', dest='param_file',\n default='/tmp/param.txt', type=str,\n help='Output file for parameters')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n\ndef make_dataset():\n\n # Scaling Boston Housing data features\n def scale(feature):\n max = np.max(feature)\n feature = (feature / max).astype(np.float32)\n return feature, max\n\n (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(\n path=\"boston_housing.npz\", test_split=0.2, seed=113\n )\n params = []\n for _ in range(13):\n x_train[_], max = scale(x_train[_])\n x_test[_], _ = scale(x_test[_])\n params.append(max)\n\n # store the normalization (max) value for each feature\n with tf.io.gfile.GFile(args.param_file, 'w') as f:\n f.write(str(params))\n return 
(x_train, y_train), (x_test, y_test)\n\n\n# Build the Keras model\ndef build_and_compile_dnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(1, activation='linear')\n ])\n model.compile(\n loss='mse',\n optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))\n return model\n\nNUM_WORKERS = strategy.num_replicas_in_sync\n# Here the batch size scales up by number of workers since\n# `tf.data.Dataset.batch` expects the global batch size.\nBATCH_SIZE = 16\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_dnn_model()\n\n# Train the model\n(x_train, y_train), (x_test, y_test) = make_dataset()\nmodel.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)\nmodel.save(args.model_dir)", "Store training script on your Cloud Storage bucket\nNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.", "! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! 
gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz", "Create and run custom training job\nTo train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.\nCreate custom training job\nA custom training job is created with the CustomTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the custom training job.\ncontainer_uri: The training container image.\nrequirements: Package requirements for the training container image (e.g., pandas).\nscript_path: The relative path to the training script.", "job = aip.CustomTrainingJob(\n display_name=\"boston_\" + TIMESTAMP,\n script_path=\"custom/trainer/task.py\",\n container_uri=TRAIN_IMAGE,\n requirements=[\"gcsfs==0.7.1\", \"tensorflow-datasets==4.4\"],\n)\n\nprint(job)", "Prepare your command-line arguments\nNow define the command-line arguments for your custom training container:\n\nargs: The command-line arguments to pass to the executable that is set as the entry point into the container.\n--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.\ndirect: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or\nindirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). 
In this case, you tell the service the model artifact location in the job specification.\n\n\n\"--epochs=\" + EPOCHS: The number of epochs for training.\n\"--steps=\" + STEPS: The number of steps per epoch.", "MODEL_DIR = \"{}/{}\".format(BUCKET_NAME, TIMESTAMP)\n\nEPOCHS = 20\nSTEPS = 100\n\nDIRECT = True\nif DIRECT:\n CMDARGS = [\n \"--model-dir=\" + MODEL_DIR,\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n ]\nelse:\n CMDARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n ]", "Run the custom training job\nNext, you run the custom job to start the training job by invoking the method run, with the following parameters:\n\nargs: The command-line arguments to pass to the training script.\nreplica_count: The number of compute instances for training (replica_count = 1 is single node training).\nmachine_type: The machine type for the compute instances.\naccelerator_type: The hardware accelerator type.\naccelerator_count: The number of accelerators to attach to a worker replica.\nbase_output_dir: The Cloud Storage location to write the model artifacts to.\nsync: Whether to block until completion of the job.", "if TRAIN_GPU:\n job.run(\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_type=TRAIN_GPU.name,\n accelerator_count=TRAIN_NGPU,\n base_output_dir=MODEL_DIR,\n sync=True,\n )\nelse:\n job.run(\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n base_output_dir=MODEL_DIR,\n sync=True,\n )\n\nmodel_path_to_deploy = MODEL_DIR", "Load the saved model\nYour model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. 
Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.\nTo load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.", "import tensorflow as tf\n\nlocal_model = tf.keras.models.load_model(MODEL_DIR)", "Evaluate the model\nNow let's find out how good the model is.\nLoad evaluation data\nYou will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).\nYou don't need the training data, and hence why we loaded it as (_, _).\nBefore you can run the data through evaluation, you need to preprocess it:\nx_test:\n1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. 
This replaces each single value with a 32-bit floating point number between 0 and 1.", "import numpy as np\nfrom tensorflow.keras.datasets import boston_housing\n\n(_, _), (x_test, y_test) = boston_housing.load_data(\n path=\"boston_housing.npz\", test_split=0.2, seed=113\n)\n\n\ndef scale(feature):\n max = np.max(feature)\n feature = (feature / max).astype(np.float32)\n return feature\n\n\n# Let's save one data item that has not been scaled\nx_test_notscaled = x_test[0:1].copy()\n\nfor _ in range(13):\n x_test[_] = scale(x_test[_])\nx_test = x_test.astype(np.float32)\n\nprint(x_test.shape, x_test.dtype, y_test.shape)\nprint(\"scaled\", x_test[0])\nprint(\"unscaled\", x_test_notscaled)", "Perform the model evaluation\nNow evaluate how well the model in the custom job did.", "local_model.evaluate(x_test, y_test)", "Get the serving function signature\nYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\nWhen making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.\nYou also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.", "loaded = tf.saved_model.load(model_path_to_deploy)\n\nserving_input = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\nprint(\"Serving function input:\", serving_input)\nserving_output = list(loaded.signatures[\"serving_default\"].structured_outputs.keys())[0]\nprint(\"Serving function output:\", serving_output)\n\ninput_name = local_model.input.name\nprint(\"Model input name:\", input_name)\noutput_name = local_model.output.name\nprint(\"Model output name:\", output_name)", "Explanation Specification\nTo get 
explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:\n\nparameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:\nShapley - Note, not recommended for image data -- can be very long running\nXRAI\nIntegrated Gradients\nmetadata: This is the specification for how the algorithm is applied on your custom model.\n\nExplanation Parameters\nLet's first dive deeper into the settings for the explainability algorithm.\nShapley\nAssigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.\nUse Cases:\n - Classification and regression on tabular data.\nParameters:\n\npath_count: This is the number of paths over the features that will be processed by the algorithm. An exact approximation of the Shapley values requires M! paths, where M is the number of features. For the MNIST dataset, for example, this would be 784 (28*28) features.\n\nFor any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * path_count.\nIntegrated Gradients\nA gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.\nUse Cases:\n - Classification and regression on tabular data.\n - Classification on image data.\nParameters:\n\nstep_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. 
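To make the step_count trade-off concrete, here is a toy, pure-NumPy sketch of the Riemann-sum approximation that integrated gradients is built on. This is an illustration only, not the Vertex implementation; the function name and the toy model are made up for the example.

```python
import numpy as np

def integrated_gradients_1d(grad_fn, x, baseline=0.0, step_count=50):
    """Riemann-sum approximation of the integrated gradients path integral.

    step_count is the knob discussed above: more steps give a more
    accurate approximation of the integral, at more compute cost.
    """
    alphas = (np.arange(step_count) + 1) / step_count  # points along the path
    grads = np.array([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean()

# Toy model f(x) = x**2, so the gradient is f'(x) = 2x. By the completeness
# property, the exact attribution from baseline 0 is f(x) - f(0) = x**2.
grad_fn = lambda u: 2.0 * u
for steps in (10, 50, 1000):
    print(steps, integrated_gradients_1d(grad_fn, 3.0, step_count=steps))
```

For x = 3.0 the exact attribution is 9; the approximation converges toward it as step_count grows, which is exactly the accuracy-versus-compute trade-off described above.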
The general rule of thumb is 50 steps, but as you increase the number of steps, the compute time increases as well.\n\nXRAI\nBased on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.\nUse Cases:\n\nClassification on image data.\n\nParameters:\n\nstep_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase the number of steps, the compute time increases as well.\n\nIn the next code cell, set the variable XAI to the explainability algorithm you will use on your custom model.", "XAI = \"ig\" # [ shapley, ig, xrai ]\n\nif XAI == \"shapley\":\n PARAMETERS = {\"sampled_shapley_attribution\": {\"path_count\": 10}}\nelif XAI == \"ig\":\n PARAMETERS = {\"integrated_gradients_attribution\": {\"step_count\": 50}}\nelif XAI == \"xrai\":\n PARAMETERS = {\"xrai_attribution\": {\"step_count\": 50}}\n\nparameters = aip.explain.ExplanationParameters(PARAMETERS)", "Explanation Metadata\nLet's first dive deeper into the explanation metadata, which consists of:\n\n\noutputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formulae, where the output is y and that is what we want to explain.\ny = f(x)\n\n\nConsider the following formulae, where the outputs are y and z. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. 
You would want to pick which of the two outputs to explain.\ny, z = f(x)\n\nThe dictionary format for outputs is:\n{ \"outputs\": { \"[your_display_name]\":\n \"output_tensor_name\": [layer]\n }\n}\n\n<blockquote>\n - [your_display_name]: A human readable name you assign to the output to explain. A common example is \"probability\".<br/>\n - \"output_tensor_name\": The key/value field to identify the output layer to explain. <br/>\n - [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.\n</blockquote>\n\n\n\ninputs: The features for attribution -- how they contributed to the output. Consider the following formulae, where a and b are the features. We have to pick which features to explain, i.e., how they contributed. Assume that this model is deployed for A/B testing, where a are the data_items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of it) for the features, and not b, since b does not contribute to the prediction.\ny = f(a,b)\n\n\nThe minimum dictionary format for inputs is:\n{ \"inputs\": { \"[your_display_name]\":\n \"input_tensor_name\": [layer]\n }\n}\n\n<blockquote>\n - [your_display_name]: A human readable name you assign to the input to explain. A common example is \"features\".<br/>\n - \"input_tensor_name\": The key/value field to identify the input layer for the feature attribution. <br/>\n - [layer]: The input layer for feature attribution. 
In a single input tensor model, it is the first (bottom-most) layer in the model.\n</blockquote>\n\nSince the inputs to the model are tabular, you can specify the following additional fields as reporting/visualization aids:\n<blockquote>\n - \"encoding\": \"BAG_OF_FEATURES\" : Indicates that the inputs are a set of tabular features.<br/>\n - \"index_feature_mapping\": [ feature-names ] : A list of human readable names for each feature. For this example, we use the feature names specified in the dataset.<br/>\n - \"modality\": \"numeric\": Indicates the field values are numeric.\n</blockquote>", "INPUT_METADATA = {\n \"input_tensor_name\": serving_input,\n \"encoding\": \"BAG_OF_FEATURES\",\n \"modality\": \"numeric\",\n \"index_feature_mapping\": [\n \"crim\",\n \"zn\",\n \"indus\",\n \"chas\",\n \"nox\",\n \"rm\",\n \"age\",\n \"dis\",\n \"rad\",\n \"tax\",\n \"ptratio\",\n \"b\",\n \"lstat\",\n ],\n}\n\nOUTPUT_METADATA = {\"output_tensor_name\": serving_output}\n\ninput_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)\noutput_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)\n\nmetadata = aip.explain.ExplanationMetadata(\n inputs={\"features\": input_metadata}, outputs={\"medv\": output_metadata}\n)", "Upload the model\nNext, upload your model to a Model resource using the Model.upload() method, with the following parameters:\n\ndisplay_name: The human readable name for the Model resource.\nartifact_uri: The Cloud Storage location of the trained model artifacts.\nserving_container_image_uri: The serving container image.\nsync: Whether to execute the upload asynchronously or synchronously.\nexplanation_parameters: Parameters to configure explaining for Model's 
predictions.\nexplanation_metadata: Metadata describing the Model's input and output for explanation.\n\nIf the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.", "model = aip.Model.upload(\n display_name=\"boston_\" + TIMESTAMP,\n artifact_uri=MODEL_DIR,\n serving_container_image_uri=DEPLOY_IMAGE,\n explanation_parameters=parameters,\n explanation_metadata=metadata,\n sync=False,\n)\n\nmodel.wait()", "Send a batch prediction request\nSend a batch prediction to your deployed model.\nMake test items\nYou will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.\nMake the batch input file\nNow make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular data is only supported in CSV format. For a CSV file, you make:\n\nThe first line is the heading with the feature (fields) heading names.\nEach remaining line is a separate prediction request with the corresponding feature values.\n\nFor example:\n\"feature_1\", \"feature_2\", ...\nvalue_1, value_2, ...", "! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv\n! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv\n\n! cut -d, -f1-16 tmp.csv > batch.csv\n\ngcs_input_uri = BUCKET_NAME + \"/test.csv\"\n\n! gsutil cp batch.csv $gcs_input_uri", "Make the batch explanation request\nNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:\n\njob_display_name: The human readable name for the batch prediction job.\ngcs_source: A list of one or more batch request input files.\ngcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.\ninstances_format: The format for the input instances, either 'csv' or 'jsonl'. 
Defaults to 'jsonl'.\npredictions_format: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.\ngenerate_explanation: Set to True to generate explanations.\nsync: If set to True, the call will block while waiting for the asynchronous batch job to complete.", "MIN_NODES = 1\nMAX_NODES = 1\n\nbatch_predict_job = model.batch_predict(\n job_display_name=\"boston_\" + TIMESTAMP,\n gcs_source=gcs_input_uri,\n gcs_destination_prefix=BUCKET_NAME,\n instances_format=\"csv\",\n predictions_format=\"jsonl\",\n machine_type=DEPLOY_COMPUTE,\n starting_replica_count=MIN_NODES,\n max_replica_count=MAX_NODES,\n generate_explanation=True,\n sync=False,\n)\n\nprint(batch_predict_job)", "Wait for completion of batch prediction job\nNext, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.", "if not os.getenv(\"IS_TESTING\"):\n batch_predict_job.wait()", "Get the explanations\nNext, get the explanation results from the completed batch prediction job.\nThe results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. 
Each file contains one or more explanation results, one per prediction request, in the predictions_format specified above (JSONL here): each record contains the prediction and the corresponding explanation.", "if not os.getenv(\"IS_TESTING\"):\n import tensorflow as tf\n\n bp_iter_outputs = batch_predict_job.iter_outputs()\n\n explanation_results = list()\n for blob in bp_iter_outputs:\n if blob.name.split(\"/\")[-1].startswith(\"explanation\"):\n explanation_results.append(blob.name)\n\n tags = list()\n for explanation_result in explanation_results:\n gfile_name = f\"gs://{bp_iter_outputs.bucket.name}/{explanation_result}\"\n with tf.io.gfile.GFile(name=gfile_name, mode=\"r\") as gfile:\n for line in gfile.readlines():\n print(line)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_all = True\n\nif delete_all:\n # Delete the dataset using the Vertex dataset object\n try:\n if \"dataset\" in globals():\n dataset.delete()\n except Exception as e:\n print(e)\n\n # Delete the model using the Vertex model object\n try:\n if \"model\" in globals():\n model.delete()\n except Exception as e:\n print(e)\n\n # Delete the endpoint using the Vertex endpoint object\n try:\n if \"endpoint\" in globals():\n endpoint.delete()\n except Exception as e:\n print(e)\n\n # Delete the AutoML or Pipeline training job\n try:\n if \"dag\" in globals():\n dag.delete()\n except Exception as e:\n print(e)\n\n # Delete the custom training job\n try:\n if \"job\" in globals():\n job.delete()\n except Exception as e:\n print(e)\n\n # Delete the batch prediction job using the Vertex batch prediction object\n try:\n if \"batch_predict_job\" in globals():\n batch_predict_job.delete()\n except Exception as e:\n print(e)\n\n # Delete the 
hyperparameter tuning job using the Vertex hyperparameter tuning object\n try:\n if \"hpt_job\" in globals():\n hpt_job.delete()\n except Exception as e:\n print(e)\n\n if \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
smorton2/think-stats
code/chap07soln.ipynb
gpl-3.0
[ "Examples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "from __future__ import print_function, division\n\n%matplotlib inline\n\nimport numpy as np\n\nimport brfss\n\nimport thinkstats2\nimport thinkplot", "Scatter plots\nI'll start with the data from the BRFSS again.", "df = brfss.ReadBrfss(nrows=None)", "The following function selects a random subset of a DataFrame.", "def SampleRows(df, nrows, replace=False):\n indices = np.random.choice(df.index, nrows, replace=replace)\n sample = df.loc[indices]\n return sample", "I'll extract the height in cm and the weight in kg of the respondents in the sample.", "sample = SampleRows(df, 5000)\nheights, weights = sample.htm3, sample.wtkg2", "Here's a simple scatter plot with alpha=1, so each data point is fully saturated.", "thinkplot.Scatter(heights, weights, alpha=1)\nthinkplot.Config(xlabel='Height (cm)',\n ylabel='Weight (kg)',\n axis=[140, 210, 20, 200],\n legend=False)", "The data fall in obvious columns because they were rounded off. We can reduce this visual artifact by adding some random noise to the data.\nNOTE: The version of Jitter in the book uses noise with a uniform distribution. Here I am using a normal distribution. The normal distribution does a better job of blurring artifacts, but the uniform distribution might be more true to the data.", "def Jitter(values, jitter=0.5):\n n = len(values)\n return np.random.normal(0, jitter, n) + values", "Heights were probably rounded off to the nearest inch, which is 2.8 cm, so I'll jitter the heights on that scale, using 1.4 as the standard deviation of the noise.", "heights = Jitter(heights, 1.4)\nweights = Jitter(weights, 0.5)", "And here's what the jittered data look like.", "thinkplot.Scatter(heights, weights, alpha=1.0)\nthinkplot.Config(xlabel='Height (cm)',\n ylabel='Weight (kg)',\n axis=[140, 210, 20, 200],\n legend=False)", "The columns are gone, but now we have a different problem: saturation. 
Where there are many overlapping points, the plot is not as dark as it should be, which means that the outliers are darker than they should be, which gives the impression that the data are more scattered than they actually are.\nThis is a surprisingly common problem, even in papers published in peer-reviewed journals.\nWe can usually solve the saturation problem by adjusting alpha and the size of the markers, s.", "thinkplot.Scatter(heights, weights, alpha=0.1, s=10)\nthinkplot.Config(xlabel='Height (cm)',\n ylabel='Weight (kg)',\n axis=[140, 210, 20, 200],\n legend=False)", "That's better. This version of the figure shows the location and shape of the distribution most accurately. There are still some apparent columns and rows where, most likely, people reported their height and weight using rounded values. If that effect is important, this figure makes it apparent; if it is not important, we could use more aggressive jittering to minimize it. \nAn alternative to a scatter plot is something like a HexBin plot, which breaks the plane into bins, counts the number of respondents in each bin, and colors each bin in proportion to its count.", "thinkplot.HexBin(heights, weights)\nthinkplot.Config(xlabel='Height (cm)',\n ylabel='Weight (kg)',\n axis=[140, 210, 20, 200],\n legend=False)", "In this case the binned plot does a pretty good job of showing the location and shape of the distribution. It obscures the row and column effects, which may or may not be a good thing.\nExercise: So far we have been working with a subset of only 5000 respondents. When we include the entire dataset, making an effective scatterplot can be tricky. As an exercise, experiment with Scatter and HexBin to make a plot that represents the entire dataset well.", "# Solution\n\n# With smaller markers, I needed more aggressive jittering to\n# blur the measurement artifacts\n\n# With this dataset, using all of the rows might be more trouble\n# than it's worth. 
Visualizing a subset of the data might be\n# more practical and more effective.\n\nheights = Jitter(df.htm3, 2.8)\nweights = Jitter(df.wtkg2, 1.0)\n\nthinkplot.Scatter(heights, weights, alpha=0.01, s=2)\nthinkplot.Config(xlabel='Height (cm)',\n ylabel='Weight (kg)',\n axis=[140, 210, 20, 200],\n legend=False)", "Plotting percentiles\nSometimes a better way to get a sense of the relationship between variables is to divide the dataset into groups using one variable, and then plot percentiles of the other variable.\nFirst I'll drop any rows that are missing height or weight.", "cleaned = df.dropna(subset=['htm3', 'wtkg2'])", "Then I'll divide the dataset into groups by height.", "bins = np.arange(135, 210, 5)\nindices = np.digitize(cleaned.htm3, bins)\ngroups = cleaned.groupby(indices)", "Here are the number of respondents in each group:", "for i, group in groups:\n print(i, len(group))", "Now we can compute the CDF of weight within each group.", "mean_heights = [group.htm3.mean() for i, group in groups]\ncdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]", "And then extract the 25th, 50th, and 75th percentile from each group.", "for percent in [75, 50, 25]:\n weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]\n label = '%dth' % percent\n thinkplot.Plot(mean_heights, weight_percentiles, label=label)\n \nthinkplot.Config(xlabel='Height (cm)',\n ylabel='Weight (kg)',\n axis=[140, 210, 20, 200],\n legend=False)", "Exercise: Yet another option is to divide the dataset into groups and then plot the CDF for each group. 
As an exercise, divide the dataset into a smaller number of groups and plot the CDF for each group.", "# Solution\n\nbins = np.arange(140, 210, 10)\nindices = np.digitize(cleaned.htm3, bins)\ngroups = cleaned.groupby(indices)\ncdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]\n\nthinkplot.PrePlot(len(cdfs))\nthinkplot.Cdfs(cdfs)\nthinkplot.Config(xlabel='Weight (kg)',\n ylabel='CDF',\n axis=[20, 200, 0, 1],\n legend=False)", "Correlation\nThe following function computes the covariance of two variables using NumPy's dot function.", "def Cov(xs, ys, meanx=None, meany=None):\n xs = np.asarray(xs)\n ys = np.asarray(ys)\n\n if meanx is None:\n meanx = np.mean(xs)\n if meany is None:\n meany = np.mean(ys)\n\n cov = np.dot(xs-meanx, ys-meany) / len(xs)\n return cov", "And here's an example:", "heights, weights = cleaned.htm3, cleaned.wtkg2\nCov(heights, weights)", "Covariance is useful for some calculations, but it doesn't mean much by itself. The coefficient of correlation is a standardized version of covariance that is easier to interpret.", "def Corr(xs, ys):\n xs = np.asarray(xs)\n ys = np.asarray(ys)\n\n meanx, varx = thinkstats2.MeanVar(xs)\n meany, vary = thinkstats2.MeanVar(ys)\n\n corr = Cov(xs, ys, meanx, meany) / np.sqrt(varx * vary)\n return corr", "The correlation of height and weight is about 0.51, which is a moderately strong correlation.", "Corr(heights, weights)", "NumPy provides a function that computes correlations, too:", "np.corrcoef(heights, weights)", "The result is a matrix with self-correlations on the diagonal (which are always 1), and cross-correlations on the off-diagonals (which are always symmetric).\nPearson's correlation is not robust in the presence of outliers, and it tends to underestimate the strength of non-linear relationships.\nSpearman's correlation is more robust, and it can handle non-linear relationships as long as they are monotonic. 
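To see that robustness in practice before implementing it, here is a small self-contained sketch (the data and helper names are made up for the illustration): on a perfectly monotonic but strongly non-linear relationship, a rank-based correlation stays at 1 while Pearson's correlation understates the strength.

```python
import numpy as np

def pearson(xs, ys):
    """Pearson's correlation from the covariance formula."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    xd, yd = xs - xs.mean(), ys - ys.mean()
    return np.dot(xd, yd) / np.sqrt(np.dot(xd, xd) * np.dot(yd, yd))

def ranks(xs):
    # ordinal ranks via double argsort (fine here: all values are distinct)
    return np.argsort(np.argsort(xs)).astype(float)

def spearman(xs, ys):
    # Spearman's correlation is Pearson's correlation of the ranks
    return pearson(ranks(xs), ranks(ys))

xs = np.linspace(0, 10, 50)
ys = np.exp(xs)          # monotonic, but far from linear

print('Pearson ', pearson(xs, ys))   # well below 1
print('Spearman', spearman(xs, ys))  # 1: the ranks line up exactly
```

Because exp is monotonic, the ranks of ys match the ranks of xs exactly, so Spearman's correlation is 1 even though Pearson's is not.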
Here's a function that computes Spearman's correlation:", "import pandas as pd\n\ndef SpearmanCorr(xs, ys):\n xranks = pd.Series(xs).rank()\n yranks = pd.Series(ys).rank()\n return Corr(xranks, yranks)", "For heights and weights, Spearman's correlation is a little higher:", "SpearmanCorr(heights, weights)", "A Pandas Series provides a method that computes correlations, and it offers spearman as one of the options.", "def SpearmanCorr(xs, ys):\n xs = pd.Series(xs)\n ys = pd.Series(ys)\n return xs.corr(ys, method='spearman')", "The result is the same as the one we wrote.", "SpearmanCorr(heights, weights)", "An alternative to Spearman's correlation is to transform one or both of the variables in a way that makes the relationship closer to linear, and then compute Pearson's correlation.", "Corr(cleaned.htm3, np.log(cleaned.wtkg2))", "Exercises\nUsing data from the NSFG, make a scatter plot of birth weight versus mother’s age. Plot percentiles of birth weight versus mother’s age. Compute Pearson’s and Spearman’s correlations. 
How would you characterize the relationship between these variables?", "import first\n\nlive, firsts, others = first.MakeFrames()\nlive = live.dropna(subset=['agepreg', 'totalwgt_lb'])\n\n# Solution\n\nages = live.agepreg\nweights = live.totalwgt_lb\nprint('Corr', Corr(ages, weights))\nprint('SpearmanCorr', SpearmanCorr(ages, weights))\n\n# Solution\n\ndef BinnedPercentiles(df):\n \"\"\"Bin the data by age and plot percentiles of weight for each bin.\n\n df: DataFrame\n \"\"\"\n bins = np.arange(10, 48, 3)\n indices = np.digitize(df.agepreg, bins)\n groups = df.groupby(indices)\n\n ages = [group.agepreg.mean() for i, group in groups][1:-1]\n cdfs = [thinkstats2.Cdf(group.totalwgt_lb) for i, group in groups][1:-1]\n\n thinkplot.PrePlot(3)\n for percent in [75, 50, 25]:\n weights = [cdf.Percentile(percent) for cdf in cdfs]\n label = '%dth' % percent\n thinkplot.Plot(ages, weights, label=label)\n\n thinkplot.Config(xlabel=\"Mother's age (years)\",\n ylabel='Birth weight (lbs)',\n xlim=[14, 45], legend=True)\n \nBinnedPercentiles(live)\n\n# Solution\n\ndef ScatterPlot(ages, weights, alpha=1.0, s=20):\n \"\"\"Make a scatter plot and save it.\n\n ages: sequence of float\n weights: sequence of float\n alpha: float\n s: marker size\n \"\"\"\n thinkplot.Scatter(ages, weights, alpha=alpha, s=s)\n thinkplot.Config(xlabel='Age (years)',\n ylabel='Birth weight (lbs)',\n xlim=[10, 45],\n ylim=[0, 15],\n legend=False)\n \nScatterPlot(ages, weights, alpha=0.05, s=10)\n\n# Solution\n\n# My conclusions:\n\n# 1) The scatterplot shows a weak relationship between the variables but\n# it is hard to see clearly.\n\n# 2) The correlations support this. Pearson's is around 0.07, Spearman's\n# is around 0.09. The difference between them suggests some influence\n# of outliers or a non-linear relationship.\n\n# 3) Plotting percentiles of weight versus age suggests that the\n# relationship is non-linear. Birth weight increases more quickly\n# in the range of mother's age from 15 to 25. 
After that, the effect\n# is weaker." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSimPy
soln/chap09soln.ipynb
mit
[ "Modeling and Simulation in Python\nChapter 9\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# import everything from SymPy.\nfrom sympy import *\n\n# Set up Jupyter notebook to display math.\ninit_printing() ", "The following displays SymPy expressions and provides the option of showing results in LaTeX format.", "from sympy.printing import latex\n\ndef show(expr, show_latex=False):\n \"\"\"Display a SymPy expression.\n \n expr: SymPy expression\n show_latex: boolean\n \"\"\"\n if show_latex:\n print(latex(expr))\n return expr", "Analysis with SymPy\nCreate a symbol for time.", "t = symbols('t')\nt", "If you combine symbols and numbers, you get symbolic expressions.", "expr = t + 1\nexpr", "The result is an Add object, which just represents the sum without trying to compute it.", "type(expr)", "subs can be used to replace a symbol with a number, which allows the addition to proceed.", "expr.subs(t, 2)", "f is a special class of symbol that represents a function.", "f = Function('f')\nf", "The type of f is UndefinedFunction", "type(f)", "SymPy understands that f(t) means f evaluated at t, but it doesn't try to evaluate it yet.", "f(t)", "diff returns a Derivative object that represents the time derivative of f", "dfdt = diff(f(t), t)\ndfdt\n\ntype(dfdt)", "We need a symbol for alpha", "alpha = symbols('alpha')\nalpha", "Now we can write the differential equation for proportional growth.", "eq1 = Eq(dfdt, alpha*f(t))\neq1", "And use dsolve to solve it. 
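As an aside, if dsolve is unfamiliar, here is a self-contained sketch of the same call pattern on a throwaway decay equation (the equation and symbol names here are illustrative, not from the chapter), including a checkodesol sanity check that substitutes the solution back into the ODE:

```python
from sympy import Eq, Function, checkodesol, diff, dsolve, symbols

t = symbols('t')
g = Function('g')

# A simple decay model, dg/dt = -g(t), solved with the same pattern.
decay = Eq(diff(g(t), t), -g(t))
sol = dsolve(decay)            # general solution: g(t) = C1*exp(-t)
ok, residual = checkodesol(decay, sol)
print(sol, ok)
```

checkodesol returns a pair: a boolean indicating whether the solution satisfies the equation, and the residual left after substitution (0 on success).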
The result is the general solution.", "solution_eq = dsolve(eq1)\nsolution_eq", "We can tell it's a general solution because it contains an unspecified constant, C1.\nIn this example, finding the particular solution is easy: we just replace C1 with p_0", "C1, p_0 = symbols('C1 p_0')\n\nparticular = solution_eq.subs(C1, p_0)\nparticular", "In the next example, we have to work a little harder to find the particular solution.\nSolving the quadratic growth equation\nWe'll use the (r, K) parameterization, so we'll need two more symbols:", "r, K = symbols('r K')", "Now we can write the differential equation.", "eq2 = Eq(diff(f(t), t), r * f(t) * (1 - f(t)/K))\neq2", "And solve it.", "solution_eq = dsolve(eq2)\nsolution_eq", "The result, solution_eq, contains rhs, which is the right-hand side of the solution.", "general = solution_eq.rhs\ngeneral", "We can evaluate the right-hand side at $t=0$", "at_0 = general.subs(t, 0)\nat_0", "Now we want to find the value of C1 that makes f(0) = p_0.\nSo we'll create the equation at_0 = p_0 and solve for C1. Because this is just an algebraic identity, not a differential equation, we use solve, not dsolve.\nThe result from solve is a list of solutions. 
In this case, we have reason to expect only one solution, but we still get a list, so we have to use the bracket operator, [0], to select the first one.", "solutions = solve(Eq(at_0, p_0), C1)\ntype(solutions), len(solutions)\n\nvalue_of_C1 = solutions[0]\nvalue_of_C1", "Now in the general solution, we want to replace C1 with the value of C1 we just figured out.", "particular = general.subs(C1, value_of_C1)\nparticular", "The result is complicated, but SymPy provides a method that tries to simplify it.", "particular = simplify(particular)\nparticular", "Often simplicity is in the eye of the beholder, but that's about as simple as this expression gets.\nJust to double-check, we can evaluate it at t=0 and confirm that we get p_0", "particular.subs(t, 0)", "This solution is called the logistic function.\nIn some places you'll see it written in a different form:\n$f(t) = \\frac{K}{1 + A e^{-rt}}$\nwhere $A = (K - p_0) / p_0$.\nWe can use SymPy to confirm that these two forms are equivalent. First we represent the alternative version of the logistic function:", "A = (K - p_0) / p_0\nA\n\nlogistic = K / (1 + A * exp(-r*t))\nlogistic", "To see whether two expressions are equivalent, we can check whether their difference simplifies to 0.", "simplify(particular - logistic)", "This test only works one way: if SymPy says the difference reduces to 0, the expressions are definitely equivalent (and not just numerically close).\nBut if SymPy can't find a way to simplify the result to 0, that doesn't necessarily mean there isn't one. 
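The difference-simplifies-to-zero test is worth seeing on a known identity; a quick sketch (the identities are chosen for illustration, not taken from the chapter):

```python
from sympy import simplify, sin, cos, symbols

x = symbols('x')

# Equivalent: the Pythagorean identity reduces to 0.
assert simplify(sin(x)**2 + cos(x)**2 - 1) == 0

# Not equivalent: the difference does not reduce to 0.
assert simplify(sin(x) - cos(x)) != 0
```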
Testing whether two expressions are equivalent is a surprisingly hard problem; in fact, there is no algorithm that can solve it in general.\nExercises\nExercise: Solve the quadratic growth equation using the alternative parameterization\n$\\frac{df(t)}{dt} = \\alpha f(t) + \\beta f^2(t) $", "# Solution\n\nalpha, beta = symbols('alpha beta')\n\n# Solution\n\neq3 = Eq(diff(f(t), t), alpha*f(t) + beta*f(t)**2)\neq3\n\n# Solution\n\nsolution_eq = dsolve(eq3)\nsolution_eq\n\n# Solution\n\ngeneral = solution_eq.rhs\ngeneral\n\n# Solution\n\nat_0 = general.subs(t, 0)\n\n# Solution\n\nsolutions = solve(Eq(at_0, p_0), C1)\nvalue_of_C1 = solutions[0]\nvalue_of_C1\n\n# Solution\n\nparticular = general.subs(C1, value_of_C1)\nparticular.simplify()", "Exercise: Use WolframAlpha to solve the quadratic growth model, using either or both forms of parameterization:\ndf(t) / dt = alpha f(t) + beta f(t)^2\n\nor\ndf(t) / dt = r f(t) (1 - f(t)/K)\n\nFind the general solution and also the particular solution where f(0) = p_0." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ayush29feb/cs231n
assignment1/knn.ipynb
mit
[ "k-Nearest Neighbor (kNN) exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nThe kNN classifier consists of two stages:\n\nDuring training, the classifier takes the training data and simply remembers it\nDuring testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples\nThe value of k is cross-validated\n\nIn this exercise you will implement these steps, understand the basic Image Classification pipeline and cross-validation, and gain proficiency in writing efficient, vectorized code.", "# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\n# Load the raw CIFAR-10 data.\ncifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint 'Training data shape: ', X_train.shape\nprint 'Training labels shape: ', y_train.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape\n\n# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 
'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()\n\n# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = range(num_training)\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = range(num_test)\nX_test = X_test[mask]\ny_test = y_test[mask]\n\n# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint X_train.shape, X_test.shape\n\nfrom cs231n.classifiers import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)", "We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\nFirst we must compute the distances between all test examples and all train examples. \nGiven these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. 
For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.\nFirst, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.", "# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint dists.shape\n\n# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()", "Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\nWhat in the data is the cause behind the distinctly bright rows?\nWhat causes the columns?\n\nYour Answer: The matrix shown above is the distance matrix of each test image on the y axis to each train image on the x axis. The black/white intensity represents the distance value; black = close and white = far. A distinctly bright row suggests that the test image is far from all of the training images (an outlier), hence its prediction is likely to be wrong. 
A distinctly bright column means that a single training image is far from most of the test images.", "# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)", "You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:", "y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)", "You should expect to see a slightly better performance than with k = 1.", "# Now let's speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of the difference of two matrices\n# is the square root of the sum of squared differences of all elements; in other\n# words, reshape the matrices into vectors and compute the Euclidean distance\n# between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n    print 'Good! The distance matrices are the same'\nelse:\n    print 'Uh-oh! 
The distance matrices are different'\n\n# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint 'Difference was: %f' % (difference, )\nif difference < 0.001:\n print 'Good! The distance matrices are the same'\nelse:\n print 'Uh-oh! The distance matrices are different'\n\n# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint 'Two loop version took %f seconds' % two_loop_time\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint 'One loop version took %f seconds' % one_loop_time\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint 'No loop version took %f seconds' % no_loop_time\n\n# you should see significantly faster performance with the fully vectorized implementation", "Cross-validation\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.", "num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. 
#\n################################################################################\nX_train_folds = np.array_split(X_train, num_folds)\ny_train_folds = np.array_split(y_train, num_folds)\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. 
#\n################################################################################\nfor k in k_choices:\n    for i in xrange(num_folds):\n        # all folds except fold i form the training split for this run\n        num_train_k = X_train.shape[0] * (num_folds - 1) / num_folds\n        X_train_k = np.delete(X_train_folds, i, 0).reshape(num_train_k, X_train.shape[1])\n        y_train_k = np.delete(y_train_folds, i, 0).reshape(num_train_k,)\n        X_test_k = X_train_folds[i]\n        y_test_k = y_train_folds[i]\n\n        classifier_k = KNearestNeighbor()\n        classifier_k.train(X_train_k, y_train_k)\n\n        dists_k = classifier_k.compute_distances_no_loops(X_test_k)\n        y_test_pred_k = classifier_k.predict_labels(dists_k, k=k)\n\n        # accuracy is measured over the held-out fold, not the training split\n        num_correct_k = np.sum(y_test_pred_k == y_test_k)\n        accuracy_k = float(num_correct_k) / len(y_test_k)\n        if k not in k_to_accuracies:\n            k_to_accuracies[k] = []\n        k_to_accuracies[k].append(accuracy_k)\n\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n    for accuracy in k_to_accuracies[k]:\n        print 'k = %d, accuracy = %f' % (k, accuracy)\n\n# plot the raw observations\nfor k in k_choices:\n    accuracies = k_to_accuracies[k]\n    plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()\n\n# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. 
You should be able to get above 28% accuracy on the test data.\nbest_k = 10\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
daniaki/Enrich2
docs/notebooks/min_count.ipynb
gpl-3.0
[ "Selecting variants by input library count\nThis notebook gets scores and standard errors for the variants in a Selection that exceed a minimum count cutoff in the input time point, and plots the relationship between each variant's score and input count.", "% matplotlib inline\n\nimport os.path\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom enrich2.variant import WILD_TYPE_VARIANT\nimport enrich2.plots as enrich_plot\npd.set_option(\"display.max_rows\", 10) # rows shown when pretty-printing", "Modify the results_path variable in the next cell to match the output directory of your Enrich2-Example dataset.", "results_path = \"/path/to/Enrich2-Example/Results/\"", "Open the Selection HDF5 file with the variants we are interested in.", "my_store = pd.HDFStore(os.path.join(results_path, \"Rep1_sel.h5\"))", "The pd.HDFStore.keys() method returns a list of all the tables in this HDF5 file.", "my_store.keys()", "We will work with the \"/main/variants/counts\" table first. Enrich2 names the columns for counts c_n where n is the time point, beginning with 0 for the input library.\nWe can use a query to extract the subset of variants in the table that exceed the specified cutoff. Since we're only interested in variants, we'll explicitly exclude the wild type. We will store the data we extract in the variant_count data frame.", "read_cutoff = 10\n\nvariant_counts = my_store.select('/main/variants/counts', where='c_0 > read_cutoff and index != WILD_TYPE_VARIANT')\nvariant_counts", "The index of the data frame is the list of variants that exceeded the cutoff.", "variant_counts.index", "We can use this index to get the scores for these variants by querying the \"/main/variants/scores\" table. 
We'll store the result of the query in a new data frame named variant_scores, and keep only the score and standard error (SE) columns.", "variant_scores = my_store.select('/main/variants/scores', where='index in variant_counts.index')\nvariant_scores = variant_scores[['score', 'SE']]\nvariant_scores", "Now that we're finished getting data out of the HDF5 file, we'll close it.", "my_store.close()", "To more easily explore the relationship between input count and score, we'll add a column to the variant_scores data frame that contains input counts from the variant_counts data frame.", "variant_scores['input_count'] = variant_counts['c_0']\nvariant_scores", "Now that all the information is in a single data frame, we can make a plot of score vs. input count. This example uses functions and colors from the Enrich2 plotting library. Taking the log10 of the counts makes the data easier to visualize.", "fig, ax = plt.subplots()\nenrich_plot.configure_axes(ax, xgrid=True)\nax.plot(np.log10(variant_scores['input_count']), \n variant_scores['score'], \n linestyle='none', marker='.', alpha=0.6,\n color=enrich_plot.plot_colors['bright4'])\nax.set_xlabel(\"log10(Input Count)\")\nax.set_ylabel(\"Variant Score\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DavidObando/carnd
Term1/Project3/P3 workspace.ipynb
apache-2.0
[ "Importing things \nBasically ensuring we have access to all the libraries going forward", "import csv\nimport cv2\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout, Lambda\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras.optimizers import Adam\nfrom keras.regularizers import l2, activity_l2\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport os\nimport pickle\nimport random\nfrom sklearn import preprocessing\nfrom sklearn.utils import shuffle\nimport tensorflow as tf\n\ntf.python.control_flow_ops = tf\n", "Image transformation functions", "# Reflection transformation\ndef reflect_image(image):\n return cv2.flip(image, 1)\n\nclass reflection(object):\n def __init__(self, images):\n self.images = images\n self.current = 0\n self.size = len(images)\n def __iter__(self):\n return self\n def __next__(self):\n return self._next()\n def _next(self):\n if self.current < self.size:\n image, self.current = self.images[self.current], self.current + 1\n return reflect_image(image)\n else:\n raise StopIteration()\n\n# Image resize\ndef resize_image(image_in, dimensions=(64,16)):\n \"\"\"\n Resizes the input image to the specified dimensions\n \"\"\"\n top = 60\n bottom = 140\n left = 0\n right = len(image_in[0]) - 1\n return cv2.resize(image_in[top:bottom, left:right], dimensions, interpolation = cv2.INTER_AREA)", "Data loading utilities \nFunctions to make it easly to load the data from disk", "def read_image_data(path):\n \"\"\"\n Loads an image file, resizes it, returns it as a numpy array\n \"\"\"\n return np.array(resize_image(mpimg.imread(path)))\n\ndef load_data(data_folder=\"./data/\"):\n \"\"\"\n Loads the training data from the specified folder\n \"\"\"\n pickle_file = data_folder + \"data.pickle\"\n if os.path.exists(pickle_file):\n print('Loading data from pickle file...')\n with open(pickle_file, 'rb') 
as f:\n pickle_data = pickle.load(f)\n images = pickle_data[\"images\"]\n steering = pickle_data[\"steering\"]\n del pickle_data\n return images, steering\n images = []\n steering = []\n angle_adjustment = 0.15\n center_image_retention_rate = 0.1\n with open(data_folder + \"driving_log.csv\", 'r') as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n steering_angle = float(row[\"steering\"])\n if steering_angle == 0:\n # decide if this one is going to be retained\n if random.random() > center_image_retention_rate:\n continue\n #center,left,right,steering,throttle,brake,speed\n images.append(read_image_data(data_folder + row[\"center\"].strip()))\n steering.append(steering_angle)\n images.append(read_image_data(data_folder + row[\"left\"].strip()))\n left_steering = steering_angle + angle_adjustment\n steering.append(1. if left_steering > 1 else -1. if left_steering < -1 else left_steering)\n images.append(read_image_data(data_folder + row[\"right\"].strip()))\n right_steering = steering_angle - angle_adjustment\n steering.append(1. if right_steering > 1 else -1. 
if right_steering < -1 else right_steering)\n images = np.array(images)\n steering = np.array(steering)\n print('Saving data to pickle file...')\n try:\n with open(pickle_file, 'wb') as pfile:\n pickle.dump(\n {\n \"images\": images,\n \"steering\": steering,\n },\n pfile, pickle.DEFAULT_PROTOCOL)\n except Exception as e:\n print(\"Unable to save data to\", pickle_file, \":\", e)\n raise\n return images, steering\n", "Data preprocessing and normalization\nUtilities to prepare the data before it's shoveled into the model", "def balance(X_input, y_input):\n \"\"\"\n It will take the data input and attempt to balance the data set so that\n the training isn't squewed towards any given class\n \"\"\"\n fig, axes = plt.subplots(1, 2)\n axes[0].imshow(X_train[0])\n axes[0].set_title(\"Image 1\")\n axes[0].axis(\"off\")\n axes[1].imshow(reflect_image(X_train[0]))\n axes[1].set_title(\"Reflected image 1\")\n axes[1].axis(\"off\")\n plt.show()\n plt.hist(y_input, bins=100)\n plt.title(\"Count of images (y) per class (x) before augmentation\")\n plt.show()\n tupni_X = [reflect_image(i) for i in X_input]\n tupni_y = [-i for i in y_input]\n plt.show()\n plt.hist(tupni_y, bins=100)\n plt.title(\"Count of reflected images (y) per class (x) before augmentation\")\n plt.show()\n X_output = np.concatenate((X_input, tupni_X), axis=0)\n y_output = np.concatenate((y_input, tupni_y), axis=0)\n plt.hist(y_output, bins=100)\n plt.title(\"Count of images (y) per class (x) after augmentation\")\n plt.show()\n print(y_input[0])\n print(tupni_y[0])\n return X_output, y_output\n\nsource = \"./data/\"\ndata_pickle = source + \"data.pickle\"\nif os.path.exists(data_pickle):\n os.remove(data_pickle)\n\nX_train, y_train = load_data(source)\nX_train, y_train = balance(X_train, y_train)\n\n\ndef normalize_minmax(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n 
\"\"\"\n a = -0.5\n b = 0.5\n grayscale_min = 0\n grayscale_max = 255\n return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )", "Model definition \nFunction that creates the model", "def nvidia_make_model(n_classes, input_shape, dropout_rate=0.5, learning_rate=0.00005):\n \"\"\"\n Creates the keras model used for our network, based on the NVidia paper titled\n End-to-End Learning for Self-Driving Cars\n http://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf\n \"\"\"\n model = Sequential()\n model.add(Convolution2D(24, 5, 5, border_mode='valid', subsample=(2, 2), input_shape=input_shape))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Convolution2D(36, 5, 5, border_mode='same', subsample=(2, 2)))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Convolution2D(48, 5, 5, border_mode='same', subsample=(2, 2)))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Convolution2D(64, 3, 3, border_mode='same'))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Convolution2D(64, 3, 3, border_mode='same'))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Flatten())\n model.add(Dense(1164))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Dense(100))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Dense(50))\n model.add(Activation('relu'))\n model.add(Dropout(dropout_rate))\n model.add(Dense(n_classes))\n print(model.summary())\n\n #model.compile(optimizer='adam', loss='mse')\n adam = Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)\n model.compile(optimizer=adam, loss='mse')\n\n return model\n \n\ndef make_model(n_classes, input_shape, dropout_rate=0.5, regularizer_rate=0.0001):\n \"\"\"\n Creates the keras model used for our network\n \"\"\"\n model = 
Sequential()\n # Add a convolution with 32 filters, 3x3 kernel, and valid padding\n model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=input_shape))\n # Add a ReLU activation layer\n model.add(Activation('relu'))\n # Add a max pooling of 2x2\n model.add(MaxPooling2D(pool_size=(2, 2)))\n # Add a dropout\n model.add(Dropout(dropout_rate))\n\n # Add a convolution with 64 filters, 2x2 kernel, and valid padding\n model.add(Convolution2D(64, 2, 2, border_mode='valid'))\n # Add a ReLU activation layer\n model.add(Activation('relu'))\n # Add a max pooling of 2x2\n model.add(MaxPooling2D(pool_size=(2, 2)))\n # Add a dropout\n model.add(Dropout(dropout_rate))\n\n # Add a flatten layer\n model.add(Flatten())\n # Add a fully connected layer\n #model.add(Dense(128, W_regularizer=l2(regularizer_rate), activity_regularizer=activity_l2(regularizer_rate)))\n model.add(Dense(128))\n # Add a ReLU activation layer\n model.add(Activation('relu'))\n # Add a dropout\n model.add(Dropout(dropout_rate))\n # Add a fully connected layer\n #model.add(Dense(n_classes, W_regularizer=l2(regularizer_rate), activity_regularizer=activity_l2(regularizer_rate)))\n model.add(Dense(n_classes))\n print(model.summary())\n\n model.compile(optimizer='adam', loss='mse')\n return model", "Training \nInstantiate a model and train it with the data in disk", "X_train, y_train = load_data(\"./data/\")\n\nprint(\"Image shape:\",X_train[0].shape)\n\n# instantiate the model\nmodel = nvidia_make_model(1, X_train[0].shape)\n\n# train the model\n\nall_data_sources = [\"./data/\", \"./data-david-track1-1/\", \"./data-david-track1-2/\"]\n\nfor source in all_data_sources:\n print(\"Training data from\", source)\n X_train, y_train = load_data(source)\n X_train, y_train = shuffle(X_train, y_train)\n n_train = len(X_train)\n X_train, y_train = balance(X_train, y_train)\n X_normalized = normalize_minmax(X_train)\n history = model.fit(X_normalized, y_train, batch_size=512, nb_epoch=1000, 
validation_split=0.2)\n\nprint(\"Done training\")\n\n# save model to disk\nmodel_filepath = \"./model.h5\"\nif os.path.exists(model_filepath):\n os.remove(model_filepath)\nmodel.save(model_filepath)\ndel model", "Let's run some validations on the model", "from keras.models import load_model\n\ntest_model = load_model(model_filepath)\n\nsource = \"./data/\"\nX_test, y_test = load_data(source)\nX_test, y_test = shuffle(X_test, y_test)\nX_test = normalize_minmax(X_test)\n\nn_test = len(X_test)\nindex = random.randint(0, n_test)\ntest_image = X_test[index]\ntest_result = y_test[index]\n\nresult = float(test_model.predict(test_image[None, :, :, :], batch_size=1))\n\nprint(\"Expected\", test_result, \"obtained\", result)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/icon-esm-lr/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: MPI-M\nSource ID: ICON-ESM-LR\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:17\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mpi-m', 'icon-esm-lr', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. 
Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specified for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involve flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontally discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. 
Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD) without representing it explicitly, but assume a distribution and compute fluxes accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. 
Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. 
Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. 
Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hainm/dask
notebooks/parallelize_image_filtering_workload.ipynb
bsd-3-clause
[ "Parallelize image filters with dask\nThis notebook will show how to parallelize CPU-intensive workloads using dask arrays. A simple uniform filter (equivalent to a mean filter) from scipy.ndimage is used for illustration purposes.", "%pylab inline\nfrom scipy.ndimage import uniform_filter\nimport dask.array as da\n\ndef mean(img):\n \"ndimage.uniform_filter with `size=51`\"\n return uniform_filter(img, size=51)", "Get the image", "!if [ ! -e stitched--U00--V00--C00--Z00.png ]; then wget -q https://github.com/arve0/master/raw/master/stitched--U00--V00--C00--Z00.png; fi\nimg = imread('stitched--U00--V00--C00--Z00.png')\nimg = (img*255).astype(np.uint8) # image read as float32, image is 8 bit grayscale\nimshow(img[::16, ::16])\nmp = str(img.shape[0] * img.shape[1] * 1e-6 // 1)\n'%s Mega pixels, shape %s, dtype %s' % (mp, img.shape, img.dtype)", "Initial speed\nLet's try the filter directly on the image.", "# filter directly\n%time mean_nd = mean(img)\nimshow(mean_nd[::16, ::16]);", "With dask\nFirst, we'll create the dask array with one chunk only (chunks=img.shape).", "img_da = da.from_array(img, chunks=img.shape)", "depth defines the overlap. We have one chunk only, so overlap is not necessary.\ncompute must be called to start the computation.", "%time mean_da = img_da.map_overlap(mean, depth=0).compute()\nimshow(mean_da[::16, ::16]);", "As we can see, the performance is the same as applying the filter directly.\nNow, let's chop up the image into chunks so that we can leverage all the cores in our computer.", "from multiprocessing import cpu_count\ncpu_count()", "We have four cores, so let's split the array into four chunks.", "img.shape, mean_da.shape, mean_nd.shape", "Pixels in both axes are even, so we can split the array in equally sized chunks. If we had odd shapes, chunks would not be the same size (given four CPU cores). E.g. 
101x101 image => 50x50 and 51x51 chunks.", "chunk_size = [x//2 for x in img.shape]\nimg_da = da.rechunk(img_da, chunks=chunk_size)", "Now, let's see if the filtering is faster.", "%time mean_da = img_da.map_overlap(mean, depth=0).compute()\nimshow(mean_da[::16, ::16]);", "It is :-)\nIf one opens the process manager, one will see that the python process is eating more than 100% CPU.\nAs we are looking at neighbor pixels to compute the mean intensity for the center pixel, you might wonder what happens in the seams between chunks. Let's examine that.", "size = 50\nmask = np.index_exp[chunk_size[0]-size:chunk_size[0]+size, chunk_size[1]-size:chunk_size[1]+size]\n\nfigure(figsize=(12,4))\nsubplot(131)\nimshow(mean_nd[mask]) # filtered directly\nsubplot(132)\nimshow(mean_da[mask]) # filtered in chunks with dask\nsubplot(133)\nimshow(mean_nd[mask] - mean_da[mask]); # difference", "To overcome this edge effect in the seams, we need to define a higher depth so that dask does the computation with an overlap. We need an overlap of 25 pixels (half the size of the neighborhood in mean).", "%time mean_da = img_da.map_overlap(mean, depth=25).compute()\n\nfigure(figsize=(12,4))\nsubplot(131)\nimshow(mean_nd[mask]) # filtered directly\nsubplot(132)\nimshow(mean_da[mask]) # filtered in chunks with dask\nsubplot(133)\nimshow(mean_nd[mask] - mean_da[mask]); # difference", "Edge effect is gone, nice! The dots in the difference are due to uniform_filter's limited precision. From the manual:\n\nThe multi-dimensional filter is implemented as a sequence of one-dimensional uniform filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.\n\nLet's see if we can improve the performance. As we do not get a 4x speedup, it might be that the computation is not only CPU-bound. 
A chunk size of 1000 is a good place to start.", "img_da = da.rechunk(img_da, 1000)\n%time mean_da = img_da.map_overlap(mean, depth=25).compute()\nimshow(mean_da[::16, ::16]);", "As you see, adjusting the chunk size did not affect the performance significantly, though it's a good idea to identify your bottleneck and adjust the chunk size accordingly.\n\nThat's all! By chopping up the computation we utilized all CPU cores and got a speedup of at best:", "'%0.1fx' % (2.7/1.24)", "Happy parallel computing!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
oudalab/fajita
otherHelperCode/jupyter_ldacluster/lda_cluster_smallsizetest.ipynb
mit
[ "from pymongo import MongoClient\n\nclient=MongoClient()\nclient=MongoClient('mongodb://localhost:29017/')\ndb=client['eventData']\nsen=db.documents_english\n\nfrom nltk.tokenize import RegexpTokenizer\nfrom stop_words import get_stop_words\nfrom nltk.stem.porter import PorterStemmer\nfrom gensim import corpora, models\nimport gensim\n\ntokenizer = RegexpTokenizer(r'\\w+')\n\n# create English stop words list\nen_stop = get_stop_words('en')\n\n# Create p_stemmer of class PorterStemmer\np_stemmer = PorterStemmer()", "Since the sen.find() cursor always returns documents in the same order, we can get the docId attached to the training result this way.", "%%time\ntexts = []\ndocIds=[]\nactuallyTrained=0;\ntemp=0;\nfor i in sen.find():\n if temp<1000:\n temp=temp+1\n try:\n raw = ''.join(i['document']).lower()\n tokens = tokenizer.tokenize(raw)\n stopped_tokens = [i for i in tokens if not i in en_stop]\n stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]\n texts.append(stemmed_tokens)\n docIds.append(i['_id'])\n actuallyTrained=actuallyTrained+1\n except:\n pass\n else:\n break\nprint(actuallyTrained)\n\n%%time\ndictionary = corpora.Dictionary(texts)\n\n%%time\ncorpus = [dictionary.doc2bow(text) for text in texts]\n\n%%time\nldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=20, id2word = dictionary, passes=1)\n\nactuallyTrained", "Change the result into a multidimensional array in order to feed it into the k-means model;\nthe number of dimensions is the number of topics.", "%%time\n#the dim is the same number of topics\ndim=20 \nresult=[]\nfor i in range(0,actuallyTrained):\n feature=[]\n previousindex=0\n for item in ldamodel[corpus[i]]:\n index=item[0]\n #print(index)\n for beforeindex in range(previousindex,index):\n feature.append(0)\n feature.append(item[1])\n previousindex=index+1\n while (len(feature)<dim):\n feature.append(0); #add in 0 at the end\n result.append(feature)\n\nfrom sklearn.cluster import KMeans\nimport numpy as 
np\n\n%%time\nkmeanstest=np.array(result)\n\n%%time\nkmeans = KMeans(n_clusters=20, random_state=0).fit(kmeanstest)\n\nkmeans.labels_.size\n\nlen(docIds)", "build a dictionary for ['docId','cluster #']", "#and before building the dictionary test if the size of docIds and cluster result dimensions are the same.\ntry:\n assert(len(docIds)==kmeans.labels_.size)\n dictionary_cocId_topicClusterItBelongs={}\n for i in range(0,actuallyTrained):\n dictionary_cocId_topicClusterItBelongs.update({docIds[i]:kmeans.labels_[i]})\nexcept:\n print(\"the docIds size is different from the topic # cluster size\")\n \n\ndictionary_cocId_topicClusterItBelongs\n\n#using pickle to dump and load the data\nimport pickle\n\nwith open('traingrst_english.pkl', 'wb') as output:\n pickle.dump(dictionary_cocId_topicClusterItBelongs,output)\n\n#this is the way to load the dictionary object in \npickle.load(open( \"traingrst_english.pkl\", \"rb\" ) )" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
palrogg/foundations-homework
08/Homework8-passengers.ipynb
mit
[ "import pandas as pd\n\ndf = pd.read_csv('passagierfrequenz.csv', delimiter=';')", "Data set 1: Passenger frequency in the Swiss railway stations\nSource and documentation: http://data.sbb.ch/explore/dataset/passagierfrequenz/\n\n\nDTV = Durchschnittlicher täglicher Verkehr (Montag bis Sonntag) = average daily circulation (including the weekend)\n\n\nDWV = Durchschnittlicher werktäglicher Verkehr (Montag bis Freitag) = average daily circulation Mo-Friday", "print(\"Q1: Which Swiss railway station is the most frequented?\")\nprint(\"A: The most frequented station is Zürich HB:\")\ndf[['Station', 'DTV']].sort_values(by='DTV', ascending=False).head(1)\n\nprint(\"Q2: Which stations have a higher average daily circulation on Saturday and Sunday?\")\nprint(\"A: These 21 stations:\")\ndf[df['DTV'] > df['DWV']]\n\nprint(\"Q3: Print a comma-separated list of all the comments in the Comments column. Escape them with the “\\\"” character and don't include any empty cell.\")\ncomments_list = df[df['Comments'] == df['Comments']]['Comments'].tolist()\nprint(\"A: The comments are:\", '\"' + str.join('\",\"',comments_list) + '\".')\n\nprint(\"Q4: How many rows contain a year other than 2014?\")\nprint(\"A: I counted\", len(df[df['Year'] != 2014]), \"rows containing a year other than 2014.\")\n\nprint(\"Q5: What is the size (rows, columns) of the data?\")\nprint(\"A: There are\", df.shape[0], \"rows and\", df.shape[1], \"columns.\")\n\ndf[df['Station'] == 'Zürich HB']\n\nprint(\"Q6: How many stations have a name starting with A?\")\nimport re\n\n\na_stations = df[df['Station'].str.match('^A')]\nprint(\"A: There are\", len(a_stations), \"“A stations”. Here they are:\")\na_stations\n\nprint(\"Q7: Which are the least frequented stations during the work days? 
And the full week?\")\nprint(\"A(a): During the work days:\")\ndf[['Station', 'DWV']].sort_values(by='DWV').head(10)\n\nprint(\"A(b): During the full week:\")\ndf[['Station', 'DTV']].sort_values(by='DTV').head(10)\n\nprint(\"Q8: Take the most frequented and the least frequented stations. How many times more passengers does the most frequented one have?\")\n\nmost_freq = df[['Station', 'DTV']].sort_values(by='DTV', ascending=False).head(1)\nleast_freq = df[['Station', 'DTV']].sort_values(by='DTV').head(1)\nmost_freq[['Station', 'DTV']]\n\nprint(\"A:\", most_freq['Station'].tolist()[0], \"has\", most_freq['DTV'].tolist()[0], \"average daily passengers and\", least_freq['Station'].tolist()[0], str(least_freq['DTV'].tolist()[0]) + \".\")\n\nratio = most_freq['DTV'].tolist()[0] / least_freq['DTV'].tolist()[0]\nprint(\"This means that Zurich HB has\", ratio, \"times more daily passengers than Oron.\")\n\nprint(\"Q9: Which stations have far more passengers during work days than during the full week? Group them in a subset.\")\n\nwork_days = df[df['DWV'] >= 1.35 * df['DTV']]\nprint(\"A: These\", len(work_days), \"stations have at least 35% more passengers during the work days:\")\nwork_days\n\nprint(\"Q10: Find a crazy station name. Is its average frequency near to the mean average frequency of all stations?\")\n\n# Let's try to find a very long name...\nlongnames = df[df['Station'].str.match('.{25,}')]\n\nlongnames\n\n# … We'll pick “Geneveys-sur-Coffrane, Les”. This is a pretty long name.\n\nmeanDTV = df['DTV'].mean()\nGeneveysDTV = df[df['Code'] == 'GEC']['DTV'].values\n\nprint(\"A: “Geneveys-sur-Coffrane, Les” has an average daily frequency of\", str(GeneveysDTV[0]) + \".\")\nprint(\"This is far less than\", str(meanDTV) + \", the mean average frequency of all stations.\")\nprint(\"However, the _median_ frequency of all stations is only\", str(df['DTV'].median()) + \".\")\n\n\nprint(\"Q11: Who other than the SBB CFF FFS (Federal Railways) owns stations? 
Make a list of them (remove any duplicate).\")\nother_owner = df[(df['Owner'] != 'CFF') & (df['Owner'] != 'SBB') & (df['Owner'] != 'FFS')]\nlist_owners = other_owner['Owner'].tolist()\nprint(\"A:\", str.join(\", \", set(list_owners)))\n\nprint(\"Q12: Print how many stations each owner has.\")\nprint(\"A: Here is how many stations they have:\\n\" + str(df['Owner'].value_counts()))", "Graphics", "import matplotlib.pyplot as plt\n\n%matplotlib inline\n\nplt.style.use('ggplot')\n\nstandard = df[(df['DWV'] > 300) & (df['DWV'] < 2300) ]\n\nstandard.plot(kind='scatter', x='DWV', y='DTV')\nprint(\"These are the stations in Q2 and Q3 and their average daily passengers during the full week vs. the work days:\")\n\nplt.style.use('ggplot')\nleast_frequented = df.sort_values(by='DWV').head(20)\nleast_frequented.plot(kind='barh', x='Station', y='DTV').invert_yaxis()\nprint(\"These are the 20 least frequented stations, in average daily passengers:\")\n\n\nq1_freq = df[df['DWV'] <= 340]\nq2_freq = df[(df['DWV'] <= 915) & (df['DWV'] > 340)]\nq3_freq = df[(df['DWV'] <= 2700) & (df['DWV'] > 915)]\n\nplt.scatter(y=q1_freq[\"DWV\"], x=q1_freq[\"DTV\"], c='c', alpha=0.75, marker='1')\nplt.scatter(y=q2_freq[\"DWV\"], x=q2_freq[\"DTV\"], c='y', alpha=0.75, marker='2')\nplt.scatter(y=q3_freq[\"DWV\"], x=q3_freq[\"DTV\"], c='m', alpha=0.75, marker='3')\n\nprint(\"Q1, Q2 and Q3 of average daily circulation; x axis = DTV, y axis = DWV\")\n\nplt.xlim(-15,2500)\nplt.ylim(-30,2800)" ]
[ "code", "markdown", "code", "markdown", "code" ]
BrainIntensive/OnlineBrainIntensive
resources/nipype/nipype_tutorial/notebooks/basic_joinnodes.ipynb
mit
[ "<img src=\"../static/images/joinnode.png\" width=\"240\">\nJoinNode\nA JoinNode has the opposite effect of a MapNode or iterables. Where they split up the execution workflow into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out JoinNode, synchronize and itersource from the main homepage.\nSimple example\nLet's consider the very simple example depicted at the top of this page:", "from nipype import Node, JoinNode, Workflow\n\n# Specify fake input node A\na = Node(interface=A(), name=\"a\")\n\n# Iterate over fake node B's input 'in_file'\nb = Node(interface=B(), name=\"b\")\nb.iterables = ('in_file', [file1, file2])\n\n# Pass results on to fake node C\nc = Node(interface=C(), name=\"c\")\n\n# Join forked execution workflow in fake node D\nd = JoinNode(interface=D(),\n joinsource=\"b\",\n joinfield=\"in_files\",\n name=\"d\")\n\n# Put everything into a workflow as usual\nworkflow = Workflow(name=\"workflow\")\nworkflow.connect([(a, b, [('subject', 'subject')]),\n (b, c, [('out_file', 'in_file')]),\n (c, d, [('out_file', 'in_files')])\n ])", "As you can see, setting up a JoinNode is rather simple. The only differences from a normal Node are the joinsource and the joinfield. joinsource specifies from which node the information to join is coming and the joinfield specifies the input field of the JoinNode where the information to join will be entering the node.\nMore realistic example\nLet's consider another example where we have one node that iterates over 3 different numbers and another node that joins those three different numbers (each coming from a separate branch of the workflow) into one list. 
To make the whole thing a bit more realistic, the second node will use the Function interface to do something with those numbers, before we spit them out again.", "from nipype import JoinNode, Node, Workflow\nfrom nipype.interfaces.utility import Function, IdentityInterface\n\n# Create iteration node\nfrom nipype import IdentityInterface\niternode = Node(IdentityInterface(fields=['number_id']),\n name=\"iternode\")\niternode.iterables = [('number_id', [1, 4, 9])]\n\n# Create join node - compute square root for each element in the joined list\ndef compute_sqrt(numbers):\n from math import sqrt\n return [sqrt(e) for e in numbers]\n\njoinnode = JoinNode(Function(input_names=['numbers'],\n output_names=['sqrts'],\n function=compute_sqrt),\n name='joinnode',\n joinsource='iternode',\n joinfield=['numbers'])\n\n# Create the workflow and run it\njoinflow = Workflow(name='joinflow')\njoinflow.connect(iternode, 'number_id', joinnode, 'numbers')\nres = joinflow.run()", "Now, let's look at the input and output of the joinnode:", "res.nodes()[0].result.outputs\n\nres.nodes()[0].inputs" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
esa-as/2016-ml-contest
dagrha/KNN_submission_2_dagrha.ipynb
apache-2.0
[ "Facies classification using KNearestNeighbors (submission 2)\n<a rel=\"license\" href=\"https://creativecommons.org/licenses/by-sa/4.0/\">\n <img alt=\"Creative Commons License BY-SA\" align=\"left\" src=\"https://i.creativecommons.org/l/by-sa/4.0/88x31.png\">\n</a>\n<br>\nDan Hallau\nHere is another KNearestNeighbors solution to the facies classification contest described at https://github.com/seg/2016-ml-contest. A lot of sophisticated models have been submitted for the contest so far (deep neural nets, random forests, etc.) so I thought I'd try submitting a simpler model to see how it stacks up. In that spirit here's another KNearestNeighbors classifier.\nNote: The main differences between my KNN Submission 1 and KNN Submission 2 are:\n- In submission 2 I use a KNearestNeighborsRegressor to predict PE in records where there is no data. This gives me much more data with which to train the classifier.\n- In submission 1 I excluded the CROSS H CATTLE well from the training set, but in submission 2 I include it.\n- In submission 1 I excluded records where PHIND was greater than 40%, but in submission 2 I leave those records in the training set, in case rugose hole is an issue in the validation wells.\n- In submission 2 I basically did a bootstrapped grid search to optimize the n_neighbors parameter.\nI spend a few cells back-calculating some more standard logging curves (RHOB, NPHI, etc), use a KNN regressor to regress missing PE values from other logs, then create a log-based lithology model from a Umaa-Rhomaa plot. 
After training, I finish it up with a LeaveOneGroupOut test.", "import pandas as pd\nimport numpy as np\n\nfrom sklearn import neighbors\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import LeaveOneGroupOut\n\nimport inversion\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "Load training data", "df = pd.read_csv('../facies_vectors.csv')", "Build features\nIn the real world it would be unusual to have neutron-density cross-plot porosity (i.e. PHIND) without the corresponding raw input curves, namely bulk density and neutron porosity, as we have in this contest dataset. So as part of the feature engineering process, I back-calculate estimates of those raw curves from the provided DeltaPHI and PHIND curves. One issue with this approach though is that cross-plot porosity differs between vendors, toolstrings, and software packages, and it is not known exactly how the PHIND in this dataset was computed. So I make the assumption here that PHIND ≈ sum of squares porosity, which is usually an adequate approximation of neutron-density crossplot porosity. That equation looks like this: \n$$PHIND ≈ \\sqrt{\\frac{NPHI^2 + DPHI^2}{2}}$$\nand it is assumed here that DeltaPHI is:\n$$DeltaPHI = NPHI - DPHI$$\nThe functions below use the relationships from the above equations (...two equations, two unknowns...) 
to estimate NPHI and DPHI (and consequently RHOB).\nOnce we have RHOB, we can use it combined with PE to estimate apparent grain density (RHOMAA) and apparent photoelectric capture cross-section (UMAA), which are useful in lithology estimations from well logs.", "def estimate_dphi(df):\n return ((4*(df['PHIND']**2) - (df['DeltaPHI']**2))**0.5 - df['DeltaPHI']) / 2\n\ndef estimate_rhob(df):\n return (2.71 - (df['DPHI_EST']/100) * 1.71)\n\ndef estimate_nphi(df):\n return df['DPHI_EST'] + df['DeltaPHI']\n\ndef compute_rhomaa(df):\n return (df['RHOB_EST'] - (df['PHIND'] / 100)) / (1 - df['PHIND'] / 100)\n \ndef compute_umaa(df):\n return ((df['PE'] * df['RHOB_EST']) - (df['PHIND']/100 * 0.398)) / (1 - df['PHIND'] / 100)", "Because solving the sum of squares equation involved the quadratic formula, in some cases imaginary numbers result due to porosities being negative, which is what the warning below is about.", "df['DPHI_EST'] = df.apply(lambda x: estimate_dphi(x), axis=1).astype(float)\ndf['RHOB_EST'] = df.apply(lambda x: estimate_rhob(x), axis=1)\ndf['NPHI_EST'] = df.apply(lambda x: estimate_nphi(x), axis=1)\ndf['RHOMAA_EST'] = df.apply(lambda x: compute_rhomaa(x), axis=1)", "Regress missing PE values", "pe = df.dropna()\n\nPE = pe['PE'].values\nwells = pe['Well Name'].values\n\ndrop_list_pe = ['Formation', 'Well Name', 'Facies', 'Depth', 'PE', 'RELPOS'] \n\nfv_pe = pe.drop(drop_list_pe, axis=1).values\n\nX_pe = preprocessing.StandardScaler().fit(fv_pe).transform(fv_pe)\ny_pe = PE\n\nreg = neighbors.KNeighborsRegressor(n_neighbors=40, weights='distance')\n\nlogo = LeaveOneGroupOut()\nf1knn_pe = []\n\nfor train, test in logo.split(X_pe, y_pe, groups=wells):\n well_name = wells[test[0]]\n reg.fit(X_pe[train], y_pe[train])\n score = reg.fit(X_pe[train], y_pe[train]).score(X_pe[test], y_pe[test])\n print(\"{:>20s} {:.3f}\".format(well_name, score))\n f1knn_pe.append(score)\n \nprint(\"-Average leave-one-well-out F1 Score: %6f\" % (np.mean(f1knn_pe)))", "Apply regression 
model to missing PE values and merge back into dataframe:", "reg.fit(X_pe, y_pe)\nfv_apply = df.drop(drop_list_pe, axis=1).values\nX_apply = preprocessing.StandardScaler().fit(fv_apply).transform(fv_apply)\ndf['PE_EST'] = reg.predict(X_apply)\ndf.PE = df.PE.combine_first(df.PE_EST)", "Compute UMAA for lithology model", "df['UMAA_EST'] = df.apply(lambda x: compute_umaa(x), axis=1)", "Just for fun, below is a basic Umaa-Rhomaa plot to view relative abundances of quartz, calcite, dolomite, and clay. The red triangle represents a ternary solution for QTZ, CAL, and DOL, while the green triangle represents a solution for QTZ, CAL, and CLAY (illite).", "df[df.GR < 125].plot(kind='scatter', x='UMAA_EST', y='RHOMAA_EST', c='GR', figsize=(8,6))\nplt.ylim(3.1, 2.2)\nplt.xlim(0.0, 17.0)\nplt.plot([4.8, 9.0, 13.8, 4.8], [2.65, 2.87, 2.71, 2.65], c='r')\nplt.plot([4.8, 11.9, 13.8, 4.8], [2.65, 3.06, 2.71, 2.65], c='g')\nplt.scatter([4.8], [2.65], s=50, c='r')\nplt.scatter([9.0], [2.87], s=50, c='r')\nplt.scatter([13.8], [2.71], s=50, c='r')\nplt.scatter([11.9], [3.06], s=50, c='g')\nplt.text(2.8, 2.65, 'Quartz', backgroundcolor='w')\nplt.text(14.4, 2.71, 'Calcite', backgroundcolor='w')\nplt.text(9.6, 2.87, 'Dolomite', backgroundcolor='w')\nplt.text(12.5, 3.06, 'Illite', backgroundcolor='w')\nplt.text(7.0, 2.55, \"gas effect\", ha=\"center\", va=\"center\", rotation=-55,\n size=8, bbox=dict(boxstyle=\"larrow,pad=0.3\", fc=\"pink\", ec=\"red\", lw=2))\nplt.text(15.0, 2.78, \"barite?\", ha=\"center\", va=\"center\", rotation=0,\n size=8, bbox=dict(boxstyle=\"rarrow,pad=0.3\", fc=\"yellow\", ec=\"orange\", lw=2))", "Here I use matrix inversion to \"solve\" the ternary plot for each lithologic component. Essentially each datapoint is a mix of the three components defined by the ternary diagram, with abundances of each defined by the relative distances from each endpoint. I use a GR cutoff of 40 API to determine when to use either the QTZ-CAL-DOL or QTZ-CAL-CLAY ternary solutions. 
In other words, it is assumed that below 40 API, there is 0% clay, and above 40 API there is 0% dolomite, and also that these four lithologic components are the only components in these rocks. Admittedly it's not a great assumption, especially since the ternary plot indicates other stuff is going on. For example the high Umaa datapoints near the Calcite endpoint may indicate some heavy minerals (e.g., pyrite) or even barite-weighted mud. The \"pull\" of datapoints to the northwest quadrant probably reflects some gas effect, so my lithologies in those gassy zones will be skewed.", "# QTZ-CAL-CLAY\nur1 = inversion.UmaaRhomaa()\nur1.set_dol_uma(11.9)\nur1.set_dol_rhoma(3.06)\n# QTZ-CAL-DOL\nur2 = inversion.UmaaRhomaa()\n\ndf['UR_QTZ'] = np.nan\ndf['UR_CLY'] = np.nan\ndf['UR_CAL'] = np.nan\ndf['UR_DOL'] = np.nan\n\ndf.ix[df.GR >= 40, 'UR_QTZ'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)\ndf.ix[df.GR >= 40, 'UR_CLY'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) \ndf.ix[df.GR >= 40, 'UR_CAL'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)\ndf.ix[df.GR >= 40, 'UR_DOL'] = 0\n\ndf.ix[df.GR < 40, 'UR_QTZ'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)\ndf.ix[df.GR < 40, 'UR_DOL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) \ndf.ix[df.GR < 40, 'UR_CAL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)\ndf.ix[df.GR < 40, 'UR_CLY'] = 0", "Below I train the model using 1 to 599 n_neighbors and select a value for n_neighbors to use in my classifier with a high average on the LOGO test. In this case I will use 62. 
I recommend not running this cell as it takes a while to complete.", "#score_list = []\n#for i in range(1,600):\n# clf = neighbors.KNeighborsClassifier(n_neighbors=i, weights='distance')\n# f1knn = []\n#\n# for train, test in logo.split(X, y, groups=wells):\n# well_name = wells[test[0]]\n# clf.fit(X[train], y[train])\n# score = clf.fit(X[train], y[train]).score(X[test], y[test])\n# #print(\"{:>20s} {:.3f}\".format(well_name, score))\n# f1knn.append(score)\n# \n# score_list.append([i, np.mean(f1knn)])\n#\n#score_list", "Fit KNearestNeighbors model and apply LeaveOneGroupOut test\nThere is some bad log data in this dataset which I'd guess is due to rugose hole. PHIND gets as high at 80%, which is certainly spurious. For now I'll leave them in, since the validation wells may have rugose hole, too.", "facies = df['Facies'].values\nwells = df['Well Name'].values\n\ndrop_list = ['Formation', 'Well Name', 'Facies', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI',\n 'RHOMAA_EST', 'UMAA_EST', 'UR_QTZ', 'UR_DOL', 'PE'] \n\nfv = df.drop(drop_list, axis=1).values\nX = preprocessing.StandardScaler().fit(fv).transform(fv)\ny = facies\n\nclf = neighbors.KNeighborsClassifier(n_neighbors=62, weights='distance') \n\nlogo = LeaveOneGroupOut()\n\nf1knn = []\n\nfor train, test in logo.split(X, y, groups=wells):\n well_name = wells[test[0]]\n clf.fit(X[train], y[train])\n score = clf.fit(X[train], y[train]).score(X[test], y[test])\n print(\"{:>20s} {:.3f}\".format(well_name, score))\n f1knn.append(score)\n \nprint(\"-Average leave-one-well-out F1 Score: %6f\" % (np.mean(f1knn)))\nf1knn.pop(7)\nprint(\"-Average leave-one-well-out F1 Score, no Recruit F1: %6f\" % (np.mean(f1knn)))", "On average the scores are slightly worse than in my KNN_submission_1 model, but that is partially because this time I've included the CROSS H CATTLE well, which performs markedly worse than the other LOGO cases. 
I am hoping that since the scores for several of the wells have increased, the performance of this model against the validation data will improve.\nApply model to validation dataset\nLoad validation data (vd), build features, and use the classfier from above to predict facies. Ultimately the PE_EST curve seemed to be slightly more predictive than the PE curve proper (?). I use that instead of PE in the classifer so I need to compute it with the validation data.", "clf.fit(X, y)\n\nvd = pd.read_csv('../validation_data_nofacies.csv')\n\nvd['DPHI_EST'] = vd.apply(lambda x: estimate_dphi(x), axis=1).astype(float)\nvd['RHOB_EST'] = vd.apply(lambda x: estimate_rhob(x), axis=1)\nvd['NPHI_EST'] = vd.apply(lambda x: estimate_nphi(x), axis=1)\nvd['RHOMAA_EST'] = vd.apply(lambda x: compute_rhomaa(x), axis=1)\n\ndrop_list_vd = ['Formation', 'Well Name', 'Depth', 'PE', 'RELPOS'] \nfv_vd = vd.drop(drop_list_vd, axis=1).values\nX_vd = preprocessing.StandardScaler().fit(fv_vd).transform(fv_vd)\nvd['PE_EST'] = reg.predict(X_vd)\nvd.PE = vd.PE.combine_first(vd.PE_EST)\n\nvd['UMAA_EST'] = vd.apply(lambda x: compute_umaa(x), axis=1)\n\nvd['UR_QTZ'] = np.nan\nvd['UR_CLY'] = np.nan\nvd['UR_CAL'] = np.nan\nvd['UR_DOL'] = np.nan\n\nvd.ix[vd.GR >= 40, 'UR_QTZ'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)\nvd.ix[vd.GR >= 40, 'UR_CLY'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) \nvd.ix[vd.GR >= 40, 'UR_CAL'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)\nvd.ix[vd.GR >= 40, 'UR_DOL'] = 0\n\nvd.ix[vd.GR < 40, 'UR_QTZ'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1)\nvd.ix[vd.GR < 40, 'UR_DOL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) \nvd.ix[vd.GR < 40, 'UR_CAL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1)\nvd.ix[vd.GR < 40, 'UR_CLY'] = 0\n\ndrop_list1 = 
['Formation', 'Well Name', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI',\n 'RHOMAA_EST', 'UMAA_EST', 'UR_QTZ', 'UR_DOL', 'PE'] \n\nfv_vd1 = vd.drop(drop_list1, axis=1).values\nX_vd1 = preprocessing.StandardScaler().fit(fv_vd1).transform(fv_vd1)\nvd_predicted_facies = clf.predict(X_vd1)\n\nvd_predicted_facies" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Jackporter415/phys202-2015-work
assignments/assignment08/InterpolationEx01.ipynb
mit
[ "Interpolation Exercise 1", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\nfrom scipy.interpolate import interp2d\nfrom scipy.interpolate import interp1d", "2D trajectory interpolation\nThe file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time:\n\nt which has discrete values of time t[i].\nx which has values of the x position at those times: x[i] = x(t[i]).\ny which has values of the y position at those times: y[i] = y(t[i]).\n\nLoad those arrays into this notebook and save them as variables x, y and t:", "with np.load('trajectory.npz') as data:\n t = data['t']\n x = data['x']\n y = data['y']\n \n\n\nassert isinstance(x, np.ndarray) and len(x)==40\nassert isinstance(y, np.ndarray) and len(y)==40\nassert isinstance(t, np.ndarray) and len(t)==40", "Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays:\n\nnewt which has 200 points between ${t_{min},t_{max}}$.\nnewx which has the interpolated values of $x(t)$ at those times.\nnewy which has the interpolated values of $y(t)$ at those times.", "\nnewt = np.linspace(t.min(),t.max(),200)\nfx = interp1d(t,x,kind = 'cubic')\nfy = interp1d(t,y,kind = 'cubic')\nnewx = fx(newt)\nnewy = fy(newt)\n\nlen(newx)\n\nassert newt[0]==t.min()\nassert newt[-1]==t.max()\nassert len(newt)==200\nassert len(newx)==200\nassert len(newy)==200", "Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points:\n\nFor the interpolated points, use a solid line.\nFor the original points, use circles of a different color and no line.\nCustomize your plot to make it effective and beautiful.", "ax = plt.gca()\nplt.plot(newx,newy, color = 'r')\nplt.plot(x,y, 
'bo')\n\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.get_xaxis().tick_bottom()\nax.get_yaxis().tick_left()\nplt.title('Trajectory')\nplt.xlabel('Distance')\nplt.ylabel('Height')\n\n\n\nassert True # leave this to grade the trajectory plot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ImAlexisSaez/deep-learning-specialization-coursera
course_1/week_4/assignment_1/building_your_deep_neural_network_step_by_step_v2.ipynb
mit
[ "Building your Deep Neural Network: Step by Step\nWelcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!\n\nIn this notebook, you will implement all the functions required to build a deep neural network.\nIn the next assignment, you will use these functions to build a deep neural network for image classification.\n\nAfter this assignment you will be able to:\n- Use non-linear units like ReLU to improve your model\n- Build a deeper neural network (with more than 1 hidden layer)\n- Implement an easy-to-use neural network class\nNotation:\n- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. \n - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.\n- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example.\n- Subscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.\nLet's get started!\n1 - Packages\nLet's first import all the packages that you will need during this assignment. \n- numpy is the main package for scientific computing with Python.\n- matplotlib is a library to plot graphs in Python.\n- dnn_utils provides some necessary functions for this notebook.\n- testCases provides some test cases to assess the correctness of your functions.\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. 
Please don't change the seed.", "import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nfrom testCases_v2 import *\nfrom dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)", "2 - Outline of the Assignment\nTo build your neural network, you will be implementing several \"helper functions\". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment; you will:\n\nInitialize the parameters for a two-layer network and for an $L$-layer neural network.\nImplement the forward propagation module (shown in purple in the figure below).\nComplete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).\nWe give you the ACTIVATION function (relu/sigmoid).\nCombine the previous two steps into a new [LINEAR->ACTIVATION] forward function.\nStack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). 
This gives you a new L_model_forward function.\n\n\nCompute the loss.\nImplement the backward propagation module (denoted in red in the figure below).\nComplete the LINEAR part of a layer's backward propagation step.\nWe give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward).\nCombine the previous two steps into a new [LINEAR->ACTIVATION] backward function.\nStack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function.\n\n\nFinally, update the parameters.\n\n<img src=\"images/final outline.png\" style=\"width:800px;height:500px;\">\n<caption><center> Figure 1</center></caption><br>\nNote that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. \n3 - Initialization\nYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two-layer model. The second one will generalize this initialization process to $L$ layers.\n3.1 - 2-layer Neural Network\nExercise: Create and initialize the parameters of the 2-layer neural network.\nInstructions:\n- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID. \n- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.\n- Use zero initialization for the biases. 
Use np.zeros(shape).", "# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n parameters -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(1)\n \n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = np.random.randn(n_h, n_x) * 0.01\n b1 = np.zeros((n_h, 1))\n W2 = np.random.randn(n_y, n_h) * 0.01\n b2 = np.zeros((n_y, 1))\n ### END CODE HERE ###\n \n assert(W1.shape == (n_h, n_x))\n assert(b1.shape == (n_h, 1))\n assert(W2.shape == (n_y, n_h))\n assert(b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters \n\nparameters = initialize_parameters(2,2,1)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected output:\n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td> [[ 0.01624345 -0.00611756]\n [-0.00528172 -0.01072969]] </td> \n </tr>\n\n <tr>\n <td> **b1**</td>\n <td>[[ 0.]\n [ 0.]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[ 0.00865408 -0.02301539]]</td>\n </tr>\n\n <tr>\n <td> **b2** </td>\n <td> [[ 0.]] </td> \n </tr>\n\n</table>\n\n3.2 - L-layer Neural Network\nThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. 
Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:\n<table style=\"width:100%\">\n\n\n <tr>\n <td> </td> \n <td> **Shape of W** </td> \n <td> **Shape of b** </td> \n <td> **Activation** </td>\n <td> **Shape of Activation** </td> \n <tr>\n\n <tr>\n <td> **Layer 1** </td> \n <td> $(n^{[1]},12288)$ </td> \n <td> $(n^{[1]},1)$ </td> \n <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> \n\n <td> $(n^{[1]},209)$ </td> \n <tr>\n\n <tr>\n <td> **Layer 2** </td> \n <td> $(n^{[2]}, n^{[1]})$ </td> \n <td> $(n^{[2]},1)$ </td> \n <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> \n <td> $(n^{[2]}, 209)$ </td> \n <tr>\n\n <tr>\n <td> $\\vdots$ </td> \n <td> $\\vdots$ </td> \n <td> $\\vdots$ </td> \n <td> $\\vdots$</td> \n <td> $\\vdots$ </td> \n <tr>\n\n <tr>\n <td> **Layer L-1** </td> \n <td> $(n^{[L-1]}, n^{[L-2]})$ </td> \n <td> $(n^{[L-1]}, 1)$ </td> \n <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> \n <td> $(n^{[L-1]}, 209)$ </td> \n <tr>\n\n\n <tr>\n <td> **Layer L** </td> \n <td> $(n^{[L]}, n^{[L-1]})$ </td> \n <td> $(n^{[L]}, 1)$ </td>\n <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>\n <td> $(n^{[L]}, 209)$ </td> \n <tr>\n\n</table>\n\nRemember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: \n$$ W = \\begin{bmatrix}\n j & k & l\\\n m & n & o \\\n p & q & r \n\\end{bmatrix}\\;\\;\\; X = \\begin{bmatrix}\n a & b & c\\\n d & e & f \\\n g & h & i \n\\end{bmatrix} \\;\\;\\; b =\\begin{bmatrix}\n s \\\n t \\\n u\n\\end{bmatrix}\\tag{2}$$\nThen $WX + b$ will be:\n$$ WX + b = \\begin{bmatrix}\n (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\\n (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\\n (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\n\\end{bmatrix}\\tag{3} $$\nExercise: Implement initialization for an L-layer Neural Network. \nInstructions:\n- The model's structure is [LINEAR -> RELU] $ \\times$ (L-1) -> LINEAR -> SIGMOID. 
I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.\n- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.\n- Use zero initialization for the biases. Use np.zeros(shape).\n- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the \"Planar Data classification model\" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers! \n- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).\npython\n    if L == 1:\n        parameters[\"W\" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01\n        parameters[\"b\" + str(L)] = np.zeros((layer_dims[1], 1))", "# GRADED FUNCTION: initialize_parameters_deep\n\ndef initialize_parameters_deep(layer_dims):\n    \"\"\"\n    Arguments:\n    layer_dims -- python array (list) containing the dimensions of each layer in our network\n    \n    Returns:\n    parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])\n                    bl -- bias vector of shape (layer_dims[l], 1)\n    \"\"\"\n    \n    np.random.seed(3)\n    parameters = {}\n    L = len(layer_dims)            # number of layers in the network\n\n    for l in range(1, L):\n        ### START CODE HERE ### (≈ 2 lines of code)\n        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01\n        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))\n        ### END CODE HERE ###\n        \n        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))\n        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))\n\n        \n    return parameters\n\nparameters = 
initialize_parameters_deep([5,4,3])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))", "Expected output:\n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]\n [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]\n [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> \n </tr>\n\n <tr>\n <td>**b1** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n\n <tr>\n <td>**W2** </td>\n <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]\n [-0.01023785 -0.00712993 0.00625245 -0.00160513]\n [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> \n </tr>\n\n <tr>\n <td>**b2** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n\n</table>\n\n4 - Forward propagation module\n4.1 - Linear Forward\nNow that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:\n\nLINEAR\nLINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. \n[LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID (whole model)\n\nThe linear forward module (vectorized over all the examples) computes the following equations:\n$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\\tag{4}$$\nwhere $A^{[0]} = X$. \nExercise: Build the linear part of forward propagation.\nReminder:\nThe mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. 
If your dimensions don't match, printing W.shape may help.", "# GRADED FUNCTION: linear_forward\n\ndef linear_forward(A, W, b):\n \"\"\"\n Implement the linear part of a layer's forward propagation.\n\n Arguments:\n A -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n\n Returns:\n Z -- the input of the activation function, also called pre-activation parameter \n cache -- a python dictionary containing \"A\", \"W\" and \"b\" ; stored for computing the backward pass efficiently\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n Z = np.dot(W, A) + b\n ### END CODE HERE ###\n \n assert(Z.shape == (W.shape[0], A.shape[1]))\n cache = (A, W, b)\n \n return Z, cache\n\nA, W, b = linear_forward_test_case()\n\nZ, linear_cache = linear_forward(A, W, b)\nprint(\"Z = \" + str(Z))", "Expected output:\n<table style=\"width:35%\">\n\n <tr>\n <td> **Z** </td>\n <td> [[ 3.26295337 -1.23429987]] </td> \n </tr>\n\n</table>\n\n4.2 - Linear-Activation Forward\nIn this notebook, you will use two activation functions:\n\n\nSigmoid: $\\sigma(Z) = \\sigma(W A + b) = \\frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value \"a\" and a \"cache\" that contains \"Z\" (it's what we will feed in to the corresponding backward function). To use it you could just call: \npython\nA, activation_cache = sigmoid(Z)\n\n\nReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value \"A\" and a \"cache\" that contains \"Z\" (it's what we will feed in to the corresponding backward function). 
To use it you could just call:\npython\nA, activation_cache = relu(Z)\n\n\nFor more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.\nExercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation \"g\" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.", "# GRADED FUNCTION: linear_activation_forward\n\ndef linear_activation_forward(A_prev, W, b, activation):\n \"\"\"\n Implement the forward propagation for the LINEAR->ACTIVATION layer\n\n Arguments:\n A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n\n Returns:\n A -- the output of the activation function, also called the post-activation value \n cache -- a python dictionary containing \"linear_cache\" and \"activation_cache\";\n stored for computing the backward pass efficiently\n \"\"\"\n \n if activation == \"sigmoid\":\n # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n Z, linear_cache = linear_forward(A_prev, W, b)\n A, activation_cache = sigmoid(Z)\n ### END CODE HERE ###\n \n elif activation == \"relu\":\n # Inputs: \"A_prev, W, b\". 
Outputs: \"A, activation_cache\".\n        ### START CODE HERE ### (≈ 2 lines of code)\n        Z, linear_cache = linear_forward(A_prev, W, b)\n        A, activation_cache = relu(Z)\n        ### END CODE HERE ###\n    \n    assert (A.shape == (W.shape[0], A_prev.shape[1]))\n    cache = (linear_cache, activation_cache)\n\n    return A, cache\n\nA_prev, W, b = linear_activation_forward_test_case()\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"sigmoid\")\nprint(\"With sigmoid: A = \" + str(A))\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"relu\")\nprint(\"With ReLU: A = \" + str(A))", "Expected output:\n<table style=\"width:35%\">\n  <tr>\n    <td> **With sigmoid: A ** </td>\n    <td > [[ 0.96890023  0.11013289]]</td> \n  </tr>\n  <tr>\n    <td> **With ReLU: A ** </td>\n    <td > [[ 3.43896131  0.        ]]</td> \n  </tr>\n</table>\n\nNote: In deep learning, the \"[LINEAR->ACTIVATION]\" computation is counted as a single layer in the neural network, not two layers. \n4.3 - L-Layer Model\nFor even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.\n<img src=\"images/model_architecture_kiank.png\" style=\"width:600px;height:300px;\">\n<caption><center> Figure 2 : [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>\nExercise: Implement the forward propagation of the above model.\nInstruction: In the code below, the variable AL will denote $A^{[L]} = \\sigma(Z^{[L]}) = \\sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\\hat{Y}$.) \nTips:\n- Use the functions you had previously written \n- Use a for loop to replicate [LINEAR->RELU] (L-1) times\n- Don't forget to keep track of the caches in the \"caches\" list. 
To add a new value c to a list, you can use list.append(c).", "# GRADED FUNCTION: L_model_forward\n\ndef L_model_forward(X, parameters):\n \"\"\"\n Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation\n \n Arguments:\n X -- data, numpy array of shape (input size, number of examples)\n parameters -- output of initialize_parameters_deep()\n \n Returns:\n AL -- last post-activation value\n caches -- list of caches containing:\n every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)\n the cache of linear_sigmoid_forward() (there is one, indexed L-1)\n \"\"\"\n\n caches = []\n A = X\n L = len(parameters) // 2 # number of layers in the neural network\n \n # Implement [LINEAR -> RELU]*(L-1). Add \"cache\" to the \"caches\" list.\n for l in range(1, L):\n A_prev = A \n ### START CODE HERE ### (≈ 2 lines of code)\n A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], \"relu\")\n caches.append(cache)\n ### END CODE HERE ###\n \n # Implement LINEAR -> SIGMOID. Add \"cache\" to the \"caches\" list.\n ### START CODE HERE ### (≈ 2 lines of code)\n AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], \"sigmoid\")\n caches.append(cache)\n ### END CODE HERE ###\n \n assert(AL.shape == (1,X.shape[1]))\n \n return AL, caches\n\nX, parameters = L_model_forward_test_case()\nAL, caches = L_model_forward(X, parameters)\nprint(\"AL = \" + str(AL))\nprint(\"Length of caches list = \" + str(len(caches)))", "<table style=\"width:40%\">\n <tr>\n <td> **AL** </td>\n <td > [[ 0.17007265 0.2524272 ]]</td> \n </tr>\n <tr>\n <td> **Length of caches list ** </td>\n <td > 2</td> \n </tr>\n</table>\n\nGreat! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in \"caches\". 
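As a sanity check of the full forward pass described above, here is a compact, self-contained numpy sketch. It re-implements the relu/sigmoid helpers inline (rather than importing them from dnn_utils_v2) and uses made-up layer sizes (5 inputs, 4 hidden units, 1 output, 3 examples), so the numbers are illustrative only; it mirrors what L_model_forward computes and confirms the output shape.

```python
import numpy as np

np.random.seed(1)

def sketch_forward(X, parameters):
    """Minimal [LINEAR->RELU]*(L-1) -> LINEAR->SIGMOID pass (caches omitted)."""
    A = X
    L = len(parameters) // 2                      # number of layers
    for l in range(1, L):
        Z = np.dot(parameters["W" + str(l)], A) + parameters["b" + str(l)]
        A = np.maximum(0, Z)                      # ReLU
    ZL = np.dot(parameters["W" + str(L)], A) + parameters["b" + str(L)]
    AL = 1 / (1 + np.exp(-ZL))                    # sigmoid on the last layer
    return AL

# Hypothetical layer sizes for illustration: [n_x, n_h, n_y] = [5, 4, 1], m = 3
layer_dims = [5, 4, 1]
parameters = {}
for l in range(1, len(layer_dims)):
    parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
    parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))

X = np.random.randn(5, 3)
AL = sketch_forward(X, parameters)
print(AL.shape)  # (1, 3): one prediction per example
```

The shapes chain exactly as in the table of Section 3.2: W1 is (4, 5), so Z1 is (4, 3); W2 is (1, 4), so AL is (1, 3).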
Using $A^{[L]}$, you can compute the cost of your predictions.\n5 - Cost function\nNow you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.\nExercise: Compute the cross-entropy cost $J$, using the following formula: $$-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} (y^{(i)}\\log\\left(a^{[L] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L] (i)}\\right)) \\tag{7}$$", "# GRADED FUNCTION: compute_cost\n\ndef compute_cost(AL, Y):\n    \"\"\"\n    Implement the cost function defined by equation (7).\n\n    Arguments:\n    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)\n    Y -- true \"label\" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)\n\n    Returns:\n    cost -- cross-entropy cost\n    \"\"\"\n    \n    m = Y.shape[1]\n\n    # Compute loss from aL and y.\n    ### START CODE HERE ### (≈ 1 lines of code)\n    cost = -1 / m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1 - Y), np.log(1 - AL)))\n    ### END CODE HERE ###\n    \n    cost = np.squeeze(cost)      # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).\n    assert(cost.shape == ())\n    \n    return cost\n\nY, AL = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(AL, Y)))", "Expected Output:\n<table>\n\n    <tr>\n    <td>**cost** </td>\n    <td> 0.41493159961539694</td> \n    </tr>\n</table>\n\n6 - Backward propagation module\nJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. \nReminder: \n<img src=\"images/backprop_kiank.png\" style=\"width:650px;height:250px;\">\n<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. 
</center></caption>\n<!-- \nFor those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:\n\n$$\\frac{d \\mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \\frac{d\\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\\frac{{da^{[2]}}}{{dz^{[2]}}}\\frac{{dz^{[2]}}}{{da^{[1]}}}\\frac{{da^{[1]}}}{{dz^{[1]}}} \\tag{8} $$\n\nIn order to calculate the gradient $dW^{[1]} = \\frac{\\partial L}{\\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.\n\nEquivalently, in order to calculate the gradient $db^{[1]} = \\frac{\\partial L}{\\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial b^{[1]}}$.\n\nThis is why we talk about **backpropagation**.\n!-->\n\nNow, similar to forward propagation, you are going to build the backward propagation in three steps:\n- LINEAR backward\n- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation\n- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)\n6.1 - Linear backward\nFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).\nSuppose you have already calculated the derivative $dZ^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial Z^{[l]}}$. 
You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.\n<img src=\"images/linearback_kiank.png\" style=\"width:250px;height:300px;\">\n<caption><center> Figure 4 </center></caption>\nThe three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:\n$$ dW^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial W^{[l]}} = \\frac{1}{m} dZ^{[l]} A^{[l-1] T} \\tag{8}$$\n$$ db^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial b^{[l]}} = \\frac{1}{m} \\sum_{i = 1}^{m} dZ^{[l](i)}\\tag{9}$$\n$$ dA^{[l-1]} = \\frac{\\partial \\mathcal{L} }{\\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \\tag{10}$$\nExercise: Use the 3 formulas above to implement linear_backward().", "# GRADED FUNCTION: linear_backward\n\ndef linear_backward(dZ, cache):\n    \"\"\"\n    Implement the linear portion of backward propagation for a single layer (layer l)\n\n    Arguments:\n    dZ -- Gradient of the cost with respect to the linear output (of current layer l)\n    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer\n\n    Returns:\n    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n    dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n    db -- Gradient of the cost with respect to b (current layer l), same shape as b\n    \"\"\"\n    A_prev, W, b = cache\n    m = A_prev.shape[1]\n\n    ### START CODE HERE ### (≈ 3 lines of code)\n    dW = 1 / m * np.dot(dZ, A_prev.T)\n    db = 1 / m * np.sum(dZ, axis=1, keepdims=True)\n    dA_prev = np.dot(W.T, dZ)\n    ### END CODE HERE ###\n    \n    assert (dA_prev.shape == A_prev.shape)\n    assert (dW.shape == W.shape)\n    assert (db.shape == b.shape)\n    \n    return dA_prev, dW, db\n\n# Set up some test inputs\ndZ, linear_cache = linear_backward_test_case()\n\ndA_prev, dW, db = linear_backward(dZ, linear_cache)\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))", "Expected Output: \n<table 
style=\"width:90%\">\n <tr>\n <td> **dA_prev** </td>\n <td > [[ 0.51822968 -0.19517421]\n [-0.40506361 0.15255393]\n [ 2.37496825 -0.89445391]] </td> \n </tr> \n\n <tr>\n <td> **dW** </td>\n <td > [[-0.10076895 1.40685096 1.64992505]] </td> \n </tr> \n\n <tr>\n <td> **db** </td>\n <td> [[ 0.50629448]] </td> \n </tr> \n\n</table>\n\n6.2 - Linear-Activation backward\nNext, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward. \nTo help you implement linear_activation_backward, we provided two backward functions:\n- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:\npython\ndZ = sigmoid_backward(dA, activation_cache)\n\nrelu_backward: Implements the backward propagation for RELU unit. You can call it as follows:\n\npython\ndZ = relu_backward(dA, activation_cache)\nIf $g(.)$ is the activation function, \nsigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \\tag{11}$$. 
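For ReLU, the derivative $g'(Z^{[l]})$ in equation (11) is 1 where $Z > 0$ and 0 elsewhere, so the backward step amounts to masking $dA^{[l]}$. The sketch below (the function name and the example values are made up for illustration; use the relu_backward provided by dnn_utils_v2 in the assignment) shows the idea:

```python
import numpy as np

def relu_backward_sketch(dA, Z):
    """Equation (11) for ReLU: g'(Z) is 1 where Z > 0 and 0 elsewhere,
    so dZ is dA with entries at non-positive Z zeroed out."""
    dZ = np.array(dA, copy=True)   # don't modify the caller's dA
    dZ[Z <= 0] = 0
    return dZ

# Arbitrary example values
Z = np.array([[ 1.5, -2.0],
              [-0.3,  0.7]])
dA = np.array([[ 0.4,  0.9],
               [ 0.2, -0.5]])
print(relu_backward_sketch(dA, Z))  # only the (0,0) and (1,1) gradients survive
```

The sigmoid case is analogous: with $s = \sigma(Z)$, equation (11) becomes $dZ = dA \cdot s \cdot (1 - s)$ elementwise.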
\nExercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.", "# GRADED FUNCTION: linear_activation_backward\n\ndef linear_activation_backward(dA, cache, activation):\n \"\"\"\n Implement the backward propagation for the LINEAR->ACTIVATION layer.\n \n Arguments:\n dA -- post-activation gradient for current layer l \n cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n \n Returns:\n dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n db -- Gradient of the cost with respect to b (current layer l), same shape as b\n \"\"\"\n linear_cache, activation_cache = cache\n \n if activation == \"relu\":\n ### START CODE HERE ### (≈ 2 lines of code)\n dZ = relu_backward(dA, activation_cache)\n dA_prev, dW, db = linear_backward(dZ, linear_cache)\n ### END CODE HERE ###\n \n elif activation == \"sigmoid\":\n ### START CODE HERE ### (≈ 2 lines of code)\n dZ = sigmoid_backward(dA, activation_cache)\n dA_prev, dW, db = linear_backward(dZ, linear_cache)\n ### END CODE HERE ###\n \n return dA_prev, dW, db\n\nAL, linear_activation_cache = linear_activation_backward_test_case()\n\ndA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = \"sigmoid\")\nprint (\"sigmoid:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db) + \"\\n\")\n\ndA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = \"relu\")\nprint (\"relu:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))", "Expected output with sigmoid:\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td >[[ 0.11017994 0.01105339]\n [ 0.09466817 
0.00949723]\n [-0.05743092 -0.00576154]] </td> \n\n </tr> \n\n <tr>\n <td > dW </td> \n <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> \n </tr> \n\n <tr>\n <td > db </td> \n <td > [[-0.05729622]] </td> \n </tr> \n</table>\n\nExpected output with relu\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td > [[ 0.44090989 0. ]\n [ 0.37883606 0. ]\n [-0.2298228 0. ]] </td> \n\n </tr> \n\n <tr>\n <td > dW </td> \n <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> \n </tr> \n\n <tr>\n <td > db </td> \n <td > [[-0.20837892]] </td> \n </tr> \n</table>\n\n6.3 - L-Model Backward\nNow you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. \n<img src=\"images/mn_backward.png\" style=\"width:450px;height:300px;\">\n<caption><center> Figure 5 : Backward pass </center></caption>\n Initializing backpropagation:\nTo backpropagate through this network, we know that the output is, \n$A^{[L]} = \\sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \\frac{\\partial \\mathcal{L}}{\\partial A^{[L]}}$.\nTo do so, use this formula (derived using calculus which you don't need in-depth knowledge of):\npython\ndAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL\nYou can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). 
After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : \n$$grads[\"dW\" + str(l)] = dW^{[l]}\\tag{15} $$\nFor example, for $l=3$ this would store $dW^{[l]}$ in grads[\"dW3\"].\nExercise: Implement backpropagation for the [LINEAR->RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID model.", "# GRADED FUNCTION: L_model_backward\n\ndef L_model_backward(AL, Y, caches):\n \"\"\"\n Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group\n \n Arguments:\n AL -- probability vector, output of the forward propagation (L_model_forward())\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat)\n caches -- list of caches containing:\n every cache of linear_activation_forward() with \"relu\" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)\n the cache of linear_activation_forward() with \"sigmoid\" (it's caches[L-1])\n \n Returns:\n grads -- A dictionary with the gradients\n grads[\"dA\" + str(l)] = ... \n grads[\"dW\" + str(l)] = ...\n grads[\"db\" + str(l)] = ... \n \"\"\"\n grads = {}\n L = len(caches) # the number of layers\n m = AL.shape[1]\n Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL\n \n # Initializing the backpropagation\n ### START CODE HERE ### (1 line of code)\n dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))\n ### END CODE HERE ###\n \n # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: \"AL, Y, caches\". Outputs: \"grads[\"dAL\"], grads[\"dWL\"], grads[\"dbL\"]\n ### START CODE HERE ### (approx. 2 lines)\n current_cache = caches[L - 1]\n grads[\"dA\" + str(L)], grads[\"dW\" + str(L)], grads[\"db\" + str(L)] = linear_activation_backward(dAL, current_cache, activation = \"sigmoid\")\n ### END CODE HERE ###\n \n for l in reversed(range(L-1)):\n # lth layer: (RELU -> LINEAR) gradients.\n # Inputs: \"grads[\"dA\" + str(l + 2)], caches\". 
Outputs: \"grads[\"dA\" + str(l + 1)] , grads[\"dW\" + str(l + 1)] , grads[\"db\" + str(l + 1)] \n ### START CODE HERE ### (approx. 5 lines)\n current_cache = caches[l]\n dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads[\"dA\" + str(l + 2)], current_cache, activation = \"relu\")\n grads[\"dA\" + str(l + 1)] = dA_prev_temp\n grads[\"dW\" + str(l + 1)] = dW_temp\n grads[\"db\" + str(l + 1)] = db_temp\n ### END CODE HERE ###\n\n return grads\n\nAL, Y_assess, caches = L_model_backward_test_case()\ngrads = L_model_backward(AL, Y_assess, caches)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"db1 = \"+ str(grads[\"db1\"]))\nprint (\"dA1 = \"+ str(grads[\"dA1\"]))", "Expected Output\n<table style=\"width:60%\">\n\n <tr>\n <td > dW1 </td> \n <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]\n [ 0. 0. 0. 0. ]\n [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> \n </tr> \n\n <tr>\n <td > db1 </td> \n <td > [[-0.22007063]\n [ 0. ]\n [-0.02835349]] </td> \n </tr> \n\n <tr>\n <td > dA1 </td> \n <td > [[ 0. 0.52257901]\n [ 0. -0.3269206 ]\n [ 0. -0.32070404]\n [ 0. -0.74079187]] </td> \n\n </tr> \n</table>\n\n6.4 - Update Parameters\nIn this section you will update the parameters of the model, using gradient descent: \n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{16}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{17}$$\nwhere $\\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. 
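On made-up numbers, a single gradient-descent step from equations (16)-(17) looks like this (the values below are arbitrary, not taken from the assignment's test cases):

```python
import numpy as np

# Hypothetical parameters and gradients for one layer
W = np.array([[1.0, -2.0]])
b = np.array([[0.5]])
dW = np.array([[0.2, 0.4]])
db = np.array([[-1.0]])
alpha = 0.1  # learning rate

# Equations (16)-(17): step each parameter against its gradient
W = W - alpha * dW
b = b - alpha * db
print(W)  # [[ 0.98 -2.04]]
print(b)  # [[0.6]]
```

Note that db is negative here, so b moves up: subtracting a negative gradient increases the parameter.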
\nExercise: Implement update_parameters() to update your parameters using gradient descent.\nInstructions:\nUpdate parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.", "# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients, output of L_model_backward\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n parameters[\"W\" + str(l)] = ... \n parameters[\"b\" + str(l)] = ...\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural network\n\n # Update rule for each parameter. Use a for loop.\n ### START CODE HERE ### (≈ 3 lines of code)\n for l in range(L):\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l+1)] - learning_rate * grads[\"dW\" + str(l+1)]\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l+1)] - learning_rate * grads[\"db\" + str(l+1)]\n ### END CODE HERE ###\n \n return parameters\n\nparameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads, 0.1)\n\nprint (\"W1 = \"+ str(parameters[\"W1\"]))\nprint (\"b1 = \"+ str(parameters[\"b1\"]))\nprint (\"W2 = \"+ str(parameters[\"W2\"]))\nprint (\"b2 = \"+ str(parameters[\"b2\"]))\n#print (\"W3 = \"+ str(parameters[\"W3\"]))\n#print (\"b3 = \"+ str(parameters[\"b3\"]))", "Expected Output:\n<table style=\"width:100%\"> \n <tr>\n <td > W1 </td> \n <td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n [-1.0535704 -0.86128581 0.68284052 2.20374577]] </td> \n </tr> \n\n <tr>\n <td > b1 </td> \n <td > [[-0.04659241]\n [-1.28888275]\n [ 0.53405496]] </td> \n </tr> \n <tr>\n <td > W2 </td> \n <td > [[-0.55569196 0.0354055 1.32964895]]</td> \n </tr> \n\n <tr>\n <td > b2 </td> \n <td > [[-0.84610769]] 
</td> \n </tr> \n</table>\n\n7 - Conclusion\nCongrats on implementing all the functions required for building a deep neural network! \nWe know it was a long assignment but going forward it will only get better. The next part of the assignment is easier. \nIn the next assignment you will put all these together to build two models:\n- A two-layer neural network\n- An L-layer neural network\nYou will in fact use these models to classify cat vs non-cat images!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mrcslws/nupic.research
projects/archive/dynamic_sparse/notebooks/mcaporale/2019-10-07--Experiment-Analysis-NonBinaryHeb.ipynb
agpl-3.0
[ "Experiment\nRun Hebbian pruning with non-binary activations.\nMotivation\nAttempt pruning given intuition offered up in \"Memory Aware Synapses\" paper:\n * The weights with higher coactivations computed as $x_i \\times x_j$\n have a greater effect on the L2 norm of the layers output. Here $x_i$ and $x_j$ are\n the input and output activations respectively.", "from IPython.display import Markdown, display\n%load_ext autoreload\n%autoreload 2\n\nimport sys\nimport itertools\nsys.path.append(\"../../\")\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport glob\nimport tabulate\nimport pprint\nimport click\nimport numpy as np\nimport pandas as pd\nfrom ray.tune.commands import *\nfrom nupic.research.frameworks.dynamic_sparse.common.browser import *\n\nbase = 'gsc-trials-2019-10-07'\nexp_names = [\n 'gsc-BaseModel',\n 'gsc-Static',\n 'gsc-Heb-nonbinary',\n 'gsc-WeightedMag-nonbinary',\n 'gsc-WeightedMag',\n 'gsc-SET',\n]\nexps = [\n os.path.join(base, exp) for exp in exp_names\n]\n \npaths = [os.path.expanduser(\"~/nta/results/{}\".format(e)) for e in exps]\nfor p in paths:\n print(os.path.exists(p), p)\ndf = load_many(paths)\n\n# remove nans where appropriate\ndf['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)\ndf['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)\n\n# distill certain values \ndf['on_perc'] = df['on_perc'].replace('None-None-0.1-None', 0.1, regex=True)\ndf['on_perc'] = df['on_perc'].replace('None-None-0.4-None', 0.4, regex=True)\ndf['on_perc'] = df['on_perc'].replace('None-None-0.02-None', 0.02, regex=True)\ndf['prune_methods'] = df['prune_methods'].replace('None-None-dynamic-linear-None', 'dynamic-linear', regex=True)\n\n# def model_name(row):\n# col = 'Experiment Name'\n \n# for exp in exp_names:\n# if exp in row[col]:\n# return exp\n\n# # if row[col] == 'DSNNWeightedMag':\n# # return 'DSNN-WM'\n\n# # 
elif row[col] == 'DSNNMixedHeb':\n# # if row['hebbian_prune_perc'] == 0.3:\n# # return 'SET'\n\n# # elif row['weight_prune_perc'] == 0.3:\n# # return 'DSNN-Heb'\n\n# # elif row[col] == 'SparseModel':\n# # return 'Static'\n \n# assert False, \"This should cover all cases. Got {}\".format(row[col])\n\n# df['model2'] = df.apply(model_name, axis=1) \n\ndf.iloc[34]\n\ndf.groupby('experiment_base_path')['experiment_base_path'].count()\n\n# Did anything fail?\ndf[df[\"epochs\"] < 30][\"epochs\"].count()\n\n# helper functions\ndef mean_and_std(s):\n return \"{:.3f} ± {:.3f}\".format(s.mean(), s.std())\n\ndef round_mean(s):\n return \"{:.0f}\".format(round(s.mean()))\n\nstats = ['min', 'max', 'mean', 'std']\n\ndef agg(columns, filter=None, round=3):\n if filter is None:\n return (df.groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n else:\n return (df[filter].groupby(columns)\n .agg({'val_acc_max_epoch': round_mean,\n 'val_acc_max': stats, \n 'model': ['count']})).round(round)\n\ntype(np.nan)\n\ndf['on_perc'][0] is nan", "Dense Model", "fltr = (df['experiment_base_path'] == 'gsc-BaseModel')\nagg(['model'], fltr)", "Static Sparse", "# 2% sparse\nfltr = (df['experiment_base_path'] == 'gsc-Static')\nagg(['model'], fltr)", "Weighted Magnitude", "# 2% sparse\n# 2% sparse \ncombos = {\n 'experiment_base_path': ['gsc-WeightedMag', 'gsc-WeightedMag-nonbinary'],\n 'hebbian_grow': [True, False],\n}\ncombos = [[(k, v_i) for v_i in v] for k, v in combos.items()]\ncombos = list(itertools.product(*combos))\n\nfor c in combos:\n fltr = None\n summary = []\n for restraint in c:\n \n rname = restraint[0]\n rcond = restraint[1]\n \n summary.append(\"{}={} \".format(rname, rcond))\n \n new_fltr = df[rname] == rcond\n if fltr is not None:\n fltr = fltr & new_fltr\n else:\n fltr = new_fltr\n \n summary = Markdown(\"### \" + \" / \".join(summary))\n display(summary)\n display(agg(['experiment_base_path'], fltr))\n 
print('\\n\\n\\n\\n')\n", "SET", "# 2% sparse \nfltr = (df['experiment_base_path'] == 'gsc-SET')\ndisplay(agg(['model'], fltr))", "Hebbian", "# 2% sparse \ncombos = {\n 'hebbian_grow': [True, False],\n 'moving_average_alpha': [0.6, 0.8, 1.0],\n 'reset_coactivations': [True, False],\n}\ncombos = [[(k, v_i) for v_i in v] for k, v in combos.items()]\ncombos = list(itertools.product(*combos))\n\nfor c in combos:\n fltr = None\n summary = []\n for restraint in c:\n \n rname = restraint[0]\n rcond = restraint[1]\n \n summary.append(\"{}={} \".format(rname, rcond))\n \n new_fltr = df[rname] == rcond\n if fltr is not None:\n fltr = fltr & new_fltr\n else:\n fltr = new_fltr\n \n summary = Markdown(\"### \" + \" / \".join(summary))\n display(summary)\n display(agg(['experiment_base_path'], fltr))\n print('\\n\\n\\n\\n')\n\n\nd = {'b':4}\n'b' in d" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ricklupton/sankeyview
docs/cookbook/forwards-backwards.ipynb
mit
[ "Forwards & backwards flows\nThis recipe demonstrates how forwards and backwards flows work.\nFor demonstration, the CSV data is written directly in the cell below -- in practice you would want to load data a file.", "import pandas as pd\nfrom io import StringIO\n\nflows = pd.read_csv(StringIO(\"\"\"\nsource,target,type,value\na,b,main,2\na,c,main,1\nc,d,main,3\nb,c,back,2\n\"\"\"))\n\nflows", "Here is one structure, with nodes b and c both in the same vertical slice:", "from floweaver import *\n\n# Set the default size to fit the documentation better.\nsize = dict(width=570, height=300)\n\nnodes = {\n 'a': ProcessGroup(['a']),\n 'b': ProcessGroup(['b']),\n 'c': ProcessGroup(['c']),\n 'd': ProcessGroup(['d']),\n 'back': Waypoint(direction='L'),\n}\n\nbundles = [\n Bundle('a', 'b'),\n Bundle('a', 'c'),\n Bundle('b', 'c', waypoints=['back']),\n Bundle('c', 'd'),\n Bundle('c', 'b'),\n]\n\nordering = [\n [['a'], []],\n [['b', 'c'], ['back']],\n [['d'], []],\n]\n\nsdd = SankeyDefinition(nodes, bundles, ordering)\n\nweave(sdd, flows).to_widget(**size)", "Alternatively, if b is moved to the right, extra hidden waypoints are automatically added to get the b--c flow back to the left of c:", "bundles = [\n Bundle('a', 'b'),\n Bundle('a', 'c'),\n Bundle('b', 'c'),\n Bundle('c', 'd'),\n Bundle('c', 'b'),\n]\n\nordering = [\n [['a'], []],\n [['c'], ['back']],\n [['b', 'd'], []],\n]\n\nsdd = SankeyDefinition(nodes, bundles, ordering)\n\nweave(sdd, flows).to_widget(**size)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Linguistics-DTU/DTU_7th_Sem_Project
Jupyter/Components/4_the_folk.ipynb
gpl-3.0
[ "This notebook represents the statistics related to the state\nThe datasets which we use in this notebook are\n\nEducation Level by Age and Sex\nGraduate and Above by Age and Sex\nHousehold Size\nNon Workers\nPopulation attending education institutions\nPopulation by religious community\nMain Worker, Marginal Worker and Non Worker\nAdolescent and Youth Population", "import os\nos.chdir(\"/home/archimedeas/wrkspc/anaconda/the-visual-verdict/visualizations/4_the_folk/datasets\")\nos.getcwd()\n", "BURTIN PLOT", "\"\"\"\nExample from the bokeh gallery\n\nhttp://bokeh.pydata.org/en/latest/docs/gallery/burtin.html\n\n\"\"\"\n\n\nfrom collections import OrderedDict\nfrom math import log, sqrt\n\nimport numpy as np\nimport pandas as pd\nfrom six.moves import cStringIO as StringIO\n\nfrom bokeh.plotting import figure, show, output_file\n\nantibiotics = \"\"\"\nbacteria, penicillin, streptomycin, neomycin, gram\nMycobacterium tuberculosis, 800, 5, 2, negative\nSalmonella schottmuelleri, 10, 0.8, 0.09, negative\nProteus vulgaris, 3, 0.1, 0.1, negative\nKlebsiella pneumoniae, 850, 1.2, 1, negative\nBrucella abortus, 1, 2, 0.02, negative\nPseudomonas aeruginosa, 850, 2, 0.4, negative\nEscherichia coli, 100, 0.4, 0.1, negative\nSalmonella (Eberthella) typhosa, 1, 0.4, 0.008, negative\nAerobacter aerogenes, 870, 1, 1.6, negative\nBrucella antracis, 0.001, 0.01, 0.007, positive\nStreptococcus fecalis, 1, 1, 0.1, positive\nStaphylococcus aureus, 0.03, 0.03, 0.001, positive\nStaphylococcus albus, 0.007, 0.1, 0.001, positive\nStreptococcus hemolyticus, 0.001, 14, 10, positive\nStreptococcus viridans, 0.005, 10, 40, positive\nDiplococcus pneumoniae, 0.005, 11, 10, positive\n\"\"\"\n\ndrug_color = OrderedDict([\n (\"Penicillin\", \"#0d3362\"),\n (\"Streptomycin\", \"#c64737\"),\n (\"Neomycin\", \"black\" ),\n])\n\ngram_color = {\n \"positive\" : \"#aeaeb8\",\n \"negative\" : \"#e69584\",\n}\n\ndf = pd.read_csv(StringIO(antibiotics),\n skiprows=1,\n skipinitialspace=True,\n 
engine='python')\n\nwidth = 800\nheight = 800\ninner_radius = 90\nouter_radius = 300 - 10\n\nminr = sqrt(log(.001 * 1E4))\nmaxr = sqrt(log(1000 * 1E4))\na = (outer_radius - inner_radius) / (minr - maxr)\nb = inner_radius - a * maxr\n\ndef rad(mic):\n return a * np.sqrt(np.log(mic * 1E4)) + b\n\nbig_angle = 2.0 * np.pi / (len(df) + 1)\nsmall_angle = big_angle / 7\n\nx = np.zeros(len(df))\ny = np.zeros(len(df))\n\noutput_file(\"burtin.html\", title=\"burtin.py example\")\n\np = figure(plot_width=width, plot_height=height, title=\"\",\n x_axis_type=None, y_axis_type=None,\n x_range=[-420, 420], y_range=[-420, 420],\n min_border=0, outline_line_color=\"black\",\n background_fill=\"#f0e1d2\", border_fill=\"#f0e1d2\")\n\np.line(x+1, y+1, alpha=0)\n\n# annular wedges\nangles = np.pi/2 - big_angle/2 - df.index.to_series()*big_angle\ncolors = [gram_color[gram] for gram in df.gram]\np.annular_wedge(\n x, y, inner_radius, outer_radius, -big_angle+angles, angles, color=colors,\n)\n\n# small wedges\np.annular_wedge(x, y, inner_radius, rad(df.penicillin),\n -big_angle+angles+5*small_angle, -big_angle+angles+6*small_angle,\n color=drug_color['Penicillin'])\np.annular_wedge(x, y, inner_radius, rad(df.streptomycin),\n -big_angle+angles+3*small_angle, -big_angle+angles+4*small_angle,\n color=drug_color['Streptomycin'])\np.annular_wedge(x, y, inner_radius, rad(df.neomycin),\n -big_angle+angles+1*small_angle, -big_angle+angles+2*small_angle,\n color=drug_color['Neomycin'])\n\n# circular axes and lables\nlabels = np.power(10.0, np.arange(-3, 4))\nradii = a * np.sqrt(np.log(labels * 1E4)) + b\np.circle(x, y, radius=radii, fill_color=None, line_color=\"white\")\np.text(x[:-1], radii[:-1], [str(r) for r in labels[:-1]],\n text_font_size=\"8pt\", text_align=\"center\", text_baseline=\"middle\")\n\n# radial axes\np.annular_wedge(x, y, inner_radius-10, outer_radius+10,\n -big_angle+angles, -big_angle+angles, color=\"black\")\n\n# bacteria labels\nxr = radii[0]*np.cos(np.array(-big_angle/2 + 
angles))\nyr = radii[0]*np.sin(np.array(-big_angle/2 + angles))\nlabel_angle=np.array(-big_angle/2+angles)\nlabel_angle[label_angle < -np.pi/2] += np.pi # easier to read labels on the left side\np.text(xr, yr, df.bacteria, angle=label_angle,\n text_font_size=\"9pt\", text_align=\"center\", text_baseline=\"middle\")\n\n# OK, these hand drawn legends are pretty clunky, will be improved in future release\np.circle([-40, -40], [-370, -390], color=list(gram_color.values()), radius=5)\np.text([-30, -30], [-370, -390], text=[\"Gram-\" + gr for gr in gram_color.keys()],\n text_font_size=\"7pt\", text_align=\"left\", text_baseline=\"middle\")\n\np.rect([-40, -40, -40], [18, 0, -18], width=30, height=13,\n color=list(drug_color.values()))\np.text([-15, -15, -15], [18, 0, -18], text=list(drug_color.keys()),\n text_font_size=\"9pt\", text_align=\"left\", text_baseline=\"middle\")\n\np.xgrid.grid_line_color = None\np.ygrid.grid_line_color = None\n\nshow(p)\n\n", "Donut Chart", "\"\"\"\n\nThe donut graph from bokeh gallery\n\nhttp://bokeh.pydata.org/en/latest/docs/gallery/donut_chart.html\n\n\"\"\"\n\n\nfrom collections import OrderedDict\n\nimport pandas as pd\n\nfrom bokeh.charts import Donut, show, output_file\nfrom bokeh.sampledata.olympics2014 import data\n\n# throw the data into a pandas data frame\ndf = pd.io.json.json_normalize(data['data'])\n\n# filter by countries with at least one medal and sort\ndf = df[df['medals.total'] > 8]\ndf = df.sort(\"medals.total\", ascending=False)\n\n# get the countries and we group the data by medal type\ncountries = df.abbr.values.tolist()\ngold = df['medals.gold'].astype(float).values\nsilver = df['medals.silver'].astype(float).values\nbronze = df['medals.bronze'].astype(float).values\n\n# build a dict containing the grouped data\nmedals = OrderedDict()\nmedals['bronze'] = bronze\nmedals['silver'] = silver\nmedals['gold'] = gold\n\n# any of the following commented are also valid Donut inputs\n#medals = list(medals.values())\n#medals = 
np.array(list(medals.values()))\n#medals = pd.DataFrame(medals)\n\noutput_file(\"donut.html\")\n\ndonut = Donut(medals, countries)\n\nshow(donut)\n\n", "Charts", "\n\"\"\"\nhttp://bokeh.pydata.org/en/latest/docs/reference/charts.html\n\n\"\"\"\n\n\nfrom collections import OrderedDict\nfrom bokeh.charts import HeatMap, output_file, show\n\n# (dict, OrderedDict, lists, arrays and DataFrames are valid inputs)\nxyvalues = OrderedDict()\nxyvalues['apples'] = [4,5,8]\nxyvalues['bananas'] = [1,2,4]\nxyvalues['pears'] = [6,5,4]\n\nhm = HeatMap(xyvalues, title='Fruits')\n\noutput_file('heatmap.html')\nshow(hm)\n\n################\n", "Image", "\n\"\"\"\n\nhttp://bokeh.pydata.org/en/latest/docs/gallery/image.html\n\"\"\"\n\nimport numpy as np\n\nfrom bokeh.plotting import figure, show, output_file\n\nN = 1000\n\nx = np.linspace(0, 10, N)\ny = np.linspace(0, 10, N)\nxx, yy = np.meshgrid(x, y)\nd = np.sin(xx)*np.cos(yy)\n\noutput_file(\"image.html\", title=\"image.py example\")\n\np = figure(x_range=[0, 10], y_range=[0, 10])\np.image(image=[d], x=[0], y=[0], dw=[10], dh=[10], palette=\"Spectral11\")\n\nshow(p) # open a browser\n\n\n################\n", "Les Miserable", "\n\"\"\"\nhttp://bokeh.pydata.org/en/latest/docs/gallery/les_mis.html\n\n\"\"\"\n\n\nfrom collections import OrderedDict\n\nimport numpy as np\n\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import HoverTool, ColumnDataSource\nfrom bokeh.sampledata.les_mis import data\n\nnodes = data['nodes']\nnames = [node['name'] for node in sorted(data['nodes'], key=lambda x: x['group'])]\n\nN = len(nodes)\ncounts = np.zeros((N, N))\nfor link in data['links']:\n counts[link['source'], link['target']] = link['value']\n counts[link['target'], link['source']] = link['value']\n\ncolormap = [\n \"#444444\", \"#a6cee3\", \"#1f78b4\", \"#b2df8a\", \"#33a02c\", \"#fb9a99\",\n \"#e31a1c\", \"#fdbf6f\", \"#ff7f00\", \"#cab2d6\", \"#6a3d9a\"\n]\n\nxname = []\nyname = []\ncolor = []\nalpha = []\nfor i, n1 in 
enumerate(nodes):\n for j, n2 in enumerate(nodes):\n xname.append(n1['name'])\n yname.append(n2['name'])\n\n a = min(counts[i,j]/4.0, 0.9) + 0.1\n alpha.append(a)\n\n if n1['group'] == n2['group']:\n color.append(colormap[n1['group']])\n else:\n color.append('lightgrey')\n\n\nsource = ColumnDataSource(\n data=dict(\n xname=xname,\n yname=yname,\n colors=color,\n alphas=alpha,\n count=counts.flatten(),\n )\n)\n\noutput_file(\"les_mis.html\")\n\np = figure(title=\"Les Mis Occurrences\",\n x_axis_location=\"above\", tools=\"resize,hover,save\",\n x_range=list(reversed(names)), y_range=names)\np.plot_width = 800\np.plot_height = 800\n\np.rect('xname', 'yname', 0.9, 0.9, source=source,\n color='colors', alpha='alphas', line_color=None)\n\np.grid.grid_line_color = None\np.axis.axis_line_color = None\np.axis.major_tick_line_color = None\np.axis.major_label_text_font_size = \"5pt\"\np.axis.major_label_standoff = 0\np.xaxis.major_label_orientation = np.pi/3\n\nhover = p.select(dict(type=HoverTool))\nhover.tooltips = OrderedDict([\n ('names', '@yname, @xname'),\n ('count', '@count'),\n])\n\nshow(p) # show the plot\n\n####################\n", "Stacked Bars", "\n\"\"\"\nhttp://bokeh.pydata.org/en/latest/docs/reference/charts.html\n\n\"\"\"\n\n\nfrom collections import OrderedDict\nfrom bokeh.charts import Bar, output_file, show\n\n# (dict, OrderedDict, lists, arrays and DataFrames are valid inputs)\nxyvalues = OrderedDict()\nxyvalues['python']=[-2, 5]\nxyvalues['pypy']=[12, 40]\nxyvalues['jython']=[22, 30]\n\ncat = ['1st', '2nd']\n\nbar = Bar(xyvalues, cat, title=\"Stacked bars\",\n xlabel=\"category\", ylabel=\"language\")\n\noutput_file(\"stacked_bar.html\")\nshow(bar)\n\nfrom collections import OrderedDict\n\nimport pandas as pd\n\nfrom bokeh._legacy_charts import Bar, output_file, show\nfrom bokeh.sampledata.olympics2014 import data\n\ndf = pd.io.json.json_normalize(data['data'])\n\n# filter by countries with at least one medal and sort\ndf = df[df['medals.total'] > 0]\ndf 
= df.sort(\"medals.total\", ascending=False)\n\n# get the countries and we group the data by medal type\ncountries = df.abbr.values.tolist()\ngold = df['medals.gold'].astype(float).values\nsilver = df['medals.silver'].astype(float).values\nbronze = df['medals.bronze'].astype(float).values\n\n# build a dict containing the grouped data\nmedals = OrderedDict(bronze=bronze, silver=silver, gold=gold)\n\n# any of the following commented are also valid Bar inputs\n#medals = pd.DataFrame(medals)\n#medals = list(medals.values())\n\noutput_file(\"stacked_bar.html\")\n\nbar = Bar(medals, countries, title=\"Stacked bars\", stacked=True)\n\nshow(bar)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Geosyntec/pycvc
examples/2 - Hydrologic Summaries.ipynb
bsd-3-clause
[ "CVC Data Summaries (with simple method hydrology)\nSetup the basic working environment", "%matplotlib inline\n\nimport os\nimport sys\nimport datetime\nimport warnings\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas\nimport seaborn\nseaborn.set(style='ticks', context='paper')\n\nimport wqio\nfrom wqio import utils\nimport pybmpdb\nimport pynsqd\n\nimport pycvc\n\nmin_precip = 1.9999\nbig_storm_date = datetime.date(2013, 7, 8)\n\npybmpdb.setMPLStyle()\nseaborn.set(style='ticks', rc={'text.usetex': False}, palette='deep')\n\nPOCs = [p['cvcname'] for p in filter(lambda p: p['include'], pycvc.info.POC_dicts)]\n\nif wqio.testing.checkdep_tex() is None:\n tex_msg = (\"LaTeX not found on system path. You will \"\n \"not be able to compile ISRs to PDF files\")\n warnings.warn(tex_msg, UserWarning)\n \nwarning_filter = \"ignore\" \nwarnings.simplefilter(warning_filter)", "Load tidy data\nData using the Simple Method hydrology is suffixed with _simple.\nYou could also use the SWMM Model hydrology with the _SWMM files.", "# simple method file\ntidy_file = \"output/tidy/hydro_simple.csv\"\n\n\n# # SWMM Files\n# tidy_file = \"output/tidy/hydro_swmm.csv\"\n\nhydro = pandas.read_csv(tidy_file, parse_dates=['start_date', 'end_date'])", "High-level summaries\nHydrologic info and stats\nDoes not include the July 8, 2013 storm event.\nFor LV-1 and LV-2, event durations are winsorized to replace outliers beyond the 97.5 percentile.\nFor more information, see:\n\nscipy.stats.mstats.winsorize\nwqio.utils.winsorize_dataframe", "def winsorize_duration(g): \n winsor_limits = {\n 'ED-1': (0.0, 0.0),\n 'LV-1': (0.2, 0.1),\n 'LV-2': (0.2, 0.3),\n 'LV-4': (0.0, 0.0),\n }\n return wqio.utils.winsorize_dataframe(g, duration_hours=winsor_limits[g.name])\n\nwith pandas.ExcelWriter(\"output/xlsx/CVCHydro_StormInfo_Simple.xlsx\") as xl_storminfo:\n \n hydro.to_excel(xl_storminfo, sheet_name='Storm Info', index=False)\n for timegroup in [None, 'year', 'season', 
'grouped_season']:\n stat_options = {\n 'minprecip': min_precip,\n 'groupby_col': timegroup,\n }\n\n (\n hydro.groupby('site')\n .apply(winsorize_duration)\n .pipe(pycvc.summary.remove_load_data_from_storms, [big_storm_date], 'start_date')\n .pipe(pycvc.summary.storm_stats, **stat_options)\n .to_excel(xl_storminfo, sheet_name='Storm Stats - {}'.format(timegroup), index=False)\n \n )", "Hydrologic Pairplots\nExpected failures due to lack of data:\n 1. LV-2, outflow\n 1. LV-4, grouped_season", "for site in ['ED-1', 'LV-2', 'LV-4']:\n for by in ['year', 'outflow', 'season', 'grouped_season']:\n try: \n pycvc.viz.hydro_pairplot(hydro, site, by=by)\n except: \n print('failed on {}, {}'.format(site, by))", "Hydrologic joint distribution plots", "sites = [\n {'name': 'ED-1', 'color': seaborn.color_palette()[0]},\n {'name': 'LV-1', 'color': seaborn.color_palette()[1]},\n {'name': 'LV-2', 'color': seaborn.color_palette()[4]},\n {'name': 'LV-4', 'color': seaborn.color_palette()[5]},\n]\nfor site in sites: \n pycvc.viz.hydro_jointplot(\n hydro=hydro, site=site['name'],\n xcol='total_precip_depth', \n ycol='outflow_mm', \n conditions=\"outflow_mm > 0\", \n one2one=True,\n color=site['color'],\n )\n\n pycvc.viz.hydro_jointplot(\n hydro=hydro, site=site['name'],\n xcol='antecedent_days', \n ycol='outflow_mm', \n conditions=\"outflow_mm > 0\", \n one2one=False,\n color=site['color'],\n )\n\n pycvc.viz.hydro_jointplot(\n hydro=hydro, site=site['name'],\n xcol='total_precip_depth', \n ycol='antecedent_days', \n conditions=\"outflow_mm == 0\", \n one2one=False,\n color=site['color'],\n )\n \n pycvc.viz.hydro_jointplot(\n hydro=hydro, site=site['name'],\n xcol='peak_precip_intensity', \n ycol='peak_outflow', \n conditions=None, \n one2one=False,\n color=site['color'],\n )\n \n plt.close('all')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
alecbrooks/notebooks
notebooks/YouTube Age.ipynb
mit
[ "The average channel in the top 1000 channels by most subscribers is about six years old. \nVidStatsX treats YouTube's automatically generated channels, like #Music, as real channels and many of them have enough subscribers to be in the top 1000. Since the YouTube API (and most people, probably) do not consider them real channels, they're not included. Without these channels, there are only 964 channels in the dataset.", "def median(l):\n l = sorted(l) #sort the list\n if len(l) % 2 == 1: #Odd number of items\n return float(l[len(l)/2])\n else:\n return float(l[len(l)/2]+l[(len(l)/2)-1])/2\n\naverage_age = sum(top_users_ages)/len(top_users_ages)\nmedian_age = median(top_users_ages)\nprint(\"Average (days): \" + str(average_age) + \"; Median: \" + str(median_age))\nprint(\"Average (years): \" + str(average_age/365.0) + \"; Median: \" + str(median_age/365.0))\nprint(\"Number of channels: \" + str(len(top_users_ages)))", "Educational channels are slightly older than the average channel, by about six months. 
Gaming channels are on average only slightly younger, but the median gaming channel is much younger than the median channel, by about seven months.", "edu_average_age = sum(edu_ages)/len(edu_ages)\nedu_median_age = median(edu_ages)\n\nprint(\"Educational Average: \" + str(edu_average_age/365.0) + \"; Median: \" + str(edu_median_age/365.0))\n\ngaming_average_age = sum(gaming_ages)/len(gaming_ages)\ngaming_median_age = median(gaming_ages)\n\nprint(\"Gaming Average: \" + str(gaming_average_age/365.0) + \"; Median: \" + str(gaming_median_age/365.0))\n", "Code to generate dataset\nThis initial code scrapes the top user lists from a given VidStatsX url.", "from bs4 import BeautifulSoup\nimport requests\nimport arrow\n\ndef get_users(url=\"http://vidstatsx.com/youtube-top-200-most-subscribed-channels\"):\n \"\"\"Get the users from a VidStatsX page.\"\"\"\n r = requests.get(url)\n soup = BeautifulSoup(r.text)\n return [x.get('id') for x in soup.find_all(\"td\") if x.get('id') is not None]\n\n", "Now that we have a function to get the users, we can ask YouTube for information about them, including the start dates. From there, we can convert those dates into ages using a third function.", "def get_start_dates(users):\n request_url = \"https://www.googleapis.com/youtube/v3/channels?part=snippet&forUsername=\"\n key = \"&key=AIzaSyCZx95H8pP-csC_6G8mF5tv-kW_U20HJKs\"\n responses = [ requests.get(request_url + x + key) for x in users] #Raw content from YouTube\n \n return [x.json()['items'][0]['snippet'].get('publishedAt') for x in responses if len(x.json()['items']) > 0]\n\ndef get_ages(users):\n start_dates = get_start_dates(users)\n \n return [int((arrow.now() - arrow.get(x)).days) for x in start_dates]\n ", "With all of our functions written, we can use them to find the dates of the top 1000 channels. 
\nNote that after the top 200, the pages start where the last one left off, so the top 500 most subscribed channels page includes only channels from 201 to 500.", "top_users_ages = get_ages(get_users() +\n get_users(\"http://vidstatsx.com/youtube-top-500-most-subscribed-channels\") + \n get_users(\"http://vidstatsx.com/youtube-top-750-most-subscribed-channels\") +\n get_users(\"http://vidstatsx.com/youtube-top-1000-most-subscribed-channels\"))", "VidStatsX also includes charts by category, so we can get the results by category, too.", "edu_ages = get_ages(get_users(\"http://vidstatsx.com/youtube-top-100-most-subscribed-education-channels\"))\ngaming_ages = get_ages(get_users(\"http://vidstatsx.com/youtube-top-100-most-subscribed-games-gaming-channels\"))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Benedicto/ML-Learning
Clustering_2_kmeans-with-text-data_blank.ipynb
gpl-3.0
[ "k-means with text data\nIn this assignment you will\n* Cluster Wikipedia documents using k-means\n* Explore the role of random initialization on the quality of the clustering\n* Explore how results differ after changing the number of clusters\n* Evaluate clustering, both quantitatively and qualitatively\nWhen properly executed, clustering uncovers valuable insights from a set of unlabeled documents.\nNote to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.\nImport necessary packages\nThe following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.", "import graphlab\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sys\nimport os\nfrom scipy.sparse import csr_matrix\n\n%matplotlib inline\n\n'''Check GraphLab Create version'''\nfrom distutils.version import StrictVersion\nassert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'", "Load data, extract features\nTo work with text data, we must first convert the documents into numerical features. As in the first assignment, let's extract TF-IDF features for each article.", "wiki = graphlab.SFrame('people_wiki.gl/')\n\nwiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])", "For the remainder of the assignment, we will use sparse matrices. Sparse matrices are matrices that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.\nWe first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format. 
We included plenty of comments for the curious; if you'd like, you may skip the next block and treat the function as a black box.", "def sframe_to_scipy(x, column_name):\n '''\n Convert a dictionary column of an SFrame into a sparse matrix format where\n each (row_id, column_id, value) triple corresponds to the value of\n x[row_id][column_id], where column_id is a key in the dictionary.\n \n Example\n >>> sparse_matrix, map_key_to_index = sframe_to_scipy(sframe, column_name)\n '''\n assert x[column_name].dtype() == dict, \\\n 'The chosen column must be dict type, representing sparse data.'\n \n # Create triples of (row_id, feature_id, count).\n # 1. Add a row number.\n x = x.add_row_number()\n # 2. Stack will transform x to have a row for each unique (row, key) pair.\n x = x.stack(column_name, ['feature', 'value'])\n\n # Map words into integers using a OneHotEncoder feature transformation.\n f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])\n # 1. Fit the transformer using the above data.\n f.fit(x)\n # 2. The transform takes 'feature' column and adds a new column 'feature_encoding'.\n x = f.transform(x)\n # 3. Get the feature mapping.\n mapping = f['feature_encoding']\n # 4. 
Get the feature id to use for each key.\n x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])\n\n # Create numpy arrays that contain the data for the sparse matrix.\n i = np.array(x['id'])\n j = np.array(x['feature_id'])\n v = np.array(x['value'])\n width = x['id'].max() + 1\n height = x['feature_id'].max() + 1\n\n # Create a sparse matrix.\n mat = csr_matrix((v, (i, j)), shape=(width, height))\n\n return mat, mapping\n\n# The conversion will take about a minute or two.\ntf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')\n\ntf_idf", "The above matrix contains a TF-IDF score for each of the 59071 pages in the data set and each of the 547979 unique words.\nNormalize all vectors\nAs discussed in the previous assignment, Euclidean distance can be a poor metric of similarity between documents, as it unfairly penalizes long articles. For a reasonable assessment of similarity, we should disregard the length information and use length-agnostic metrics, such as cosine distance.\nThe k-means algorithm does not directly work with cosine distance, so we take an alternative route to remove length information: we normalize all vectors to be unit length. It turns out that Euclidean distance closely mimics cosine distance when all vectors are unit length. In particular, the squared Euclidean distance between any two vectors of length one is directly proportional to their cosine distance.\nWe can prove this as follows. Let $\\mathbf{x}$ and $\\mathbf{y}$ be normalized vectors, i.e. unit vectors, so that $\\|\\mathbf{x}\\|=\\|\\mathbf{y}\\|=1$. 
Write the squared Euclidean distance as the dot product of $(\\mathbf{x} - \\mathbf{y})$ to itself:\n\\begin{align}\n\\|\\mathbf{x} - \\mathbf{y}\\|^2 &= (\\mathbf{x} - \\mathbf{y})^T(\\mathbf{x} - \\mathbf{y})\\\n &= (\\mathbf{x}^T \\mathbf{x}) - 2(\\mathbf{x}^T \\mathbf{y}) + (\\mathbf{y}^T \\mathbf{y})\\\n &= \\|\\mathbf{x}\\|^2 - 2(\\mathbf{x}^T \\mathbf{y}) + \\|\\mathbf{y}\\|^2\\\n &= 2 - 2(\\mathbf{x}^T \\mathbf{y})\\\n &= 2(1 - (\\mathbf{x}^T \\mathbf{y}))\\\n &= 2\\left(1 - \\frac{\\mathbf{x}^T \\mathbf{y}}{\\|\\mathbf{x}\\|\\|\\mathbf{y}\\|}\\right)\\\n &= 2\\left[\\text{cosine distance}\\right]\n\\end{align}\nThis tells us that two unit vectors that are close in Euclidean distance are also close in cosine distance. Thus, the k-means algorithm (which naturally uses Euclidean distances) on normalized vectors will produce the same results as clustering using cosine distance as a distance metric.\nWe import the normalize() function from scikit-learn to normalize all vectors to unit length.", "from sklearn.preprocessing import normalize\ntf_idf = normalize(tf_idf)", "Implement k-means\nLet us implement the k-means algorithm. First, we choose an initial set of centroids. A common practice is to choose randomly from the data points.\nNote: We specify a seed here, so that everyone gets the same answer. 
In practice, we highly recommend using different seeds every time (for instance, by using the current timestamp).", "def get_initial_centroids(data, k, seed=None):\n '''Randomly choose k data points as initial centroids'''\n if seed is not None: # useful for obtaining consistent results\n np.random.seed(seed)\n n = data.shape[0] # number of data points\n \n # Pick K indices from range [0, N). (Note: np.random.randint samples with\n # replacement, so duplicate indices -- and thus duplicate centroids -- are possible.)\n rand_indices = np.random.randint(0, n, k)\n \n # Keep centroids as dense format, as many entries will be nonzero due to averaging.\n # As long as at least one document in a cluster contains a word,\n # it will carry a nonzero weight in the TF-IDF vector of the centroid.\n centroids = data[rand_indices,:].toarray()\n \n return centroids", "After initialization, the k-means algorithm iterates between the following two steps:\n1. Assign each data point to the closest centroid.\n$$\nz_i \\gets \\mathrm{argmin}_j \\|\\mu_j - \\mathbf{x}_i\\|^2\n$$\n2. Revise centroids as the mean of the assigned data points.\n$$\n\\mu_j \\gets \\frac{1}{n_j}\\sum_{i:z_i=j} \\mathbf{x}_i\n$$\nIn pseudocode, we iteratively do the following:\ncluster_assignment = assign_clusters(data, centroids)\ncentroids = revise_centroids(data, k, cluster_assignment)\nAssigning clusters\nHow do we implement Step 1 of the main k-means loop above? First import the pairwise_distances function from scikit-learn, which calculates Euclidean distances between rows of given arrays. See this documentation for more information.\nFor the sake of demonstration, let's look at documents 100 and 101 as query documents and compute the distances between each of these documents and every other document in the corpus. 
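The indexing convention of such a pairwise-distance matrix can be checked on tiny arrays first. The sketch below is standalone and uses plain NumPy broadcasting instead of scikit-learn, purely to make the `dist[i, j]` semantics visible:

```python
import numpy as np

X = np.array([[0., 0.], [3., 4.]])
Y = np.array([[0., 0.], [0., 4.], [3., 0.]])

# dist[i, j] = Euclidean distance between X[i] and Y[j].
# Broadcasting: (2, 1, 2) - (1, 3, 2) -> (2, 3, 2), then reduce the last axis.
dist = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2))
print(dist)  # row i holds distances from X[i] to every row of Y
```

With these toy values, `dist` is `[[0, 4, 3], [5, 3, 4]]` (e.g., `X[1] = (3, 4)` is distance 5 from the origin), which is exactly the shape convention used by `pairwise_distances`.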
In the k-means algorithm, we will have to compute pairwise distances between the set of centroids and the set of documents.", "from sklearn.metrics import pairwise_distances\n\n# Get the TF-IDF vectors for documents 100 and 101.\nqueries = tf_idf[100:102,:]\n\n# Compute pairwise distances from every data point to each query vector.\ndist = pairwise_distances(tf_idf, queries, metric='euclidean')\n\nprint dist", "More formally, dist[i,j] is assigned the distance between the i-th row of the first argument (here, tf_idf[i,:]) and the j-th row of the second argument (here, queries[j,:]).\nCheckpoint: For a moment, suppose that we initialize three centroids with the first 3 rows of tf_idf. Write code to compute distances from each of the centroids to all data points in tf_idf. Then find the distance between row 430 of tf_idf and the second centroid and save it to dist.", "# Students should write code here\ncentroids = tf_idf[0:3, :]\ndistances = pairwise_distances(tf_idf, centroids, metric='euclidean')\ndist = distances[430, 1]\n\n'''Test cell'''\nif np.allclose(dist, pairwise_distances(tf_idf[430,:], tf_idf[1,:])):\n print('Pass')\nelse:\n print('Check your code again')", "Checkpoint: Next, given the pairwise distances, we take the minimum of the distances for each data point. Fittingly, NumPy provides an argmin function. See this documentation for details.\nRead the documentation and write code to produce a 1D array whose i-th entry indicates the centroid that is closest to the i-th data point. Use the array of distances from the previous checkpoint (the variable distances). The value 0 indicates closeness to the first centroid, 1 indicates closeness to the second centroid, and so forth. 
Save this array as closest_cluster.\nHint: the resulting array should be as long as the number of data points.", "# Students should write code here\nclosest_cluster = np.argmin(distances, axis=1)\nclosest_cluster\n\n'''Test cell'''\nreference = [list(row).index(min(row)) for row in distances]\nif np.allclose(closest_cluster, reference):\n print('Pass')\nelse:\n print('Check your code again')", "Checkpoint: Let's put these steps together. First, initialize three centroids with the first 3 rows of tf_idf. Then, compute distances from each of the centroids to all data points in tf_idf. Finally, use these distance calculations to compute cluster assignments and assign them to cluster_assignment.", "# Students should write code here\ncentroids = tf_idf[0:3, :]\ndistances = pairwise_distances(centroids, tf_idf, metric='euclidean')\ncluster_assignment = np.argmin(distances, axis=0)\ncluster_assignment\n\nif len(cluster_assignment)==59071 and \\\n np.array_equal(np.bincount(cluster_assignment), np.array([23061, 10086, 25924])):\n print('Pass') # count number of data points for each cluster\nelse:\n print('Check your code again.')", "Now we are ready to fill in the blanks in this function:", "def assign_clusters(data, centroids):\n \n # Compute distances between each data point and the set of centroids:\n # Fill in the blank (RHS only)\n distances_from_centroids = pairwise_distances(centroids, data)\n \n # Compute cluster assignments for each data point:\n # Fill in the blank (RHS only)\n cluster_assignment = np.argmin(distances_from_centroids, axis=0)\n \n return cluster_assignment", "Checkpoint. For the last time, let us check if Step 1 was implemented correctly. With rows 0, 2, 4, and 6 of tf_idf as an initial set of centroids, we assign cluster labels to rows 0, 10, 20, ..., and 90 of tf_idf. 
The resulting cluster labels should be [0, 1, 1, 0, 0, 2, 0, 2, 2, 1].", "if np.allclose(assign_clusters(tf_idf[0:100:10], tf_idf[0:8:2]), np.array([0, 1, 1, 0, 0, 2, 0, 2, 2, 1])):\n print('Pass')\nelse:\n print('Check your code again.')", "Revising clusters\nLet's turn to Step 2, where we compute the new centroids given the cluster assignments. \nSciPy and NumPy arrays allow for filtering via Boolean masks. For instance, we filter all data points that are assigned to cluster 0 by writing\ndata[cluster_assignment==0,:]\nTo develop intuition about filtering, let's look at a toy example consisting of 3 data points and 2 clusters.", "data = np.array([[1., 2., 0.],\n [0., 0., 0.],\n [2., 2., 0.]])\ncentroids = np.array([[0.5, 0.5, 0.],\n [0., -0.5, 0.]])", "Let's assign these data points to the closest centroid.", "cluster_assignment = assign_clusters(data, centroids)\nprint cluster_assignment", "The expression cluster_assignment==1 gives a list of Booleans that says whether each data point is assigned to cluster 1 or not:", "cluster_assignment==1", "Likewise for cluster 0:", "cluster_assignment==0", "In lieu of indices, we can put in the list of Booleans to pick and choose rows. Only the rows that correspond to a True entry will be retained.\nFirst, let's look at the data points (i.e., their values) assigned to cluster 1:", "data[cluster_assignment==1]", "This makes sense since [0 0 0] is closer to [0 -0.5 0] than to [0.5 0.5 0].\nNow let's look at the data points assigned to cluster 0:", "data[cluster_assignment==0]", "Again, this makes sense since these values are each closer to [0.5 0.5 0] than to [0 -0.5 0].\nGiven all the data points in a cluster, it only remains to compute the mean. Use np.mean(). By default, the function averages all elements in a 2D array. To compute row-wise or column-wise means, add the axis argument. See the linked documentation for details. 
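As a quick standalone check of the axis argument, here are the two rows that were assigned to cluster 0 above, averaged both ways (toy values copied from the example, so the snippet runs on its own):

```python
import numpy as np

# The two rows of the toy data set that were assigned to cluster 0.
members = np.array([[1., 2., 0.],
                    [2., 2., 0.]])

# axis=0 averages down each column -- this is the per-feature mean
# we want when revising a centroid.
print(members.mean(axis=0))  # column means: 1.5, 2.0, 0.0

# axis=1 would instead average across each row (not what we want here).
print(members.mean(axis=1))  # row means: 1.0, 1.333...
```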
\nUse this function to average the data points in cluster 0:", "data[cluster_assignment==0].mean(axis=0)", "We are now ready to complete this function:", "def revise_centroids(data, k, cluster_assignment):\n new_centroids = []\n for i in xrange(k):\n # Select all data points that belong to cluster i. Fill in the blank (RHS only)\n member_data_points = data[cluster_assignment == i]\n # Compute the mean of the data points. Fill in the blank (RHS only)\n centroid = member_data_points.mean(axis=0)\n \n # Convert numpy.matrix type to numpy.ndarray type\n centroid = centroid.A1\n new_centroids.append(centroid)\n new_centroids = np.array(new_centroids)\n \n return new_centroids", "Checkpoint. Let's check our Step 2 implementation. Taking rows 0, 10, ..., 90 of tf_idf as the data points and the cluster labels [0, 1, 1, 0, 0, 2, 0, 2, 2, 1], we compute the next set of centroids. Each centroid is given by the average of all member data points in the corresponding cluster.", "result = revise_centroids(tf_idf[0:100:10], 3, np.array([0, 1, 1, 0, 0, 2, 0, 2, 2, 1]))\nif np.allclose(result[0], np.mean(tf_idf[[0,30,40,60]].toarray(), axis=0)) and \\\n np.allclose(result[1], np.mean(tf_idf[[10,20,90]].toarray(), axis=0)) and \\\n np.allclose(result[2], np.mean(tf_idf[[50,70,80]].toarray(), axis=0)):\n print('Pass')\nelse:\n print('Check your code')", "Assessing convergence\nHow can we tell if the k-means algorithm is converging? We can look at the cluster assignments and see if they stabilize over time. In fact, we'll be running the algorithm until the cluster assignments stop changing at all. To be extra safe, and to assess the clustering performance, we'll be looking at an additional criterion: the sum of all squared distances between data points and centroids. This is defined as\n$$\nJ(\\mathcal{Z},\\mu) = \\sum_{j=1}^k \\sum_{i:z_i = j} \\|\\mathbf{x}_i - \\mu_j\\|^2.\n$$\nThe smaller the distances, the more homogeneous the clusters are. 
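As a standalone numeric check of this definition, we can evaluate the objective by hand on the small toy example from the filtering section (values re-created here so the snippet runs on its own):

```python
import numpy as np

# Toy data, centroids, and assignments from the filtering example above.
data = np.array([[1., 2., 0.],
                 [0., 0., 0.],
                 [2., 2., 0.]])
centroids = np.array([[0.5, 0.5, 0.],
                      [0., -0.5, 0.]])
assignment = np.array([0, 1, 0])  # points 0 and 2 -> centroid 0, point 1 -> centroid 1

# J = sum of squared distances from each point to its assigned centroid.
diffs = data - centroids[assignment]
J = np.sum(diffs ** 2)
print(J)  # 2.5 + 0.25 + 4.5 = 7.25
```

The same value should come out of the compute_heterogeneity function below when applied to this toy example.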
In other words, we'd like to have \"tight\" clusters.", "def compute_heterogeneity(data, k, centroids, cluster_assignment):\n \n heterogeneity = 0.0\n for i in xrange(k):\n \n # Select all data points that belong to cluster i. Fill in the blank (RHS only)\n member_data_points = data[cluster_assignment==i, :]\n \n if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty\n # Compute distances from centroid to data points (RHS only)\n distances = pairwise_distances(member_data_points, [centroids[i]], metric='euclidean')\n squared_distances = distances**2\n heterogeneity += np.sum(squared_distances)\n \n return heterogeneity", "Let's compute the cluster heterogeneity for the 2-cluster example we've been considering based on our current cluster assignments and centroids.", "compute_heterogeneity(data, 2, centroids, cluster_assignment)", "Combining into a single function\nOnce the two k-means steps have been implemented, as well as our heterogeneity metric we wish to monitor, it is only a matter of putting these functions together to write a k-means algorithm that\n\nRepeatedly performs Steps 1 and 2\nTracks convergence metrics\nStops if either no assignment changed or we reach a certain number of iterations.", "# Fill in the blanks\ndef kmeans(data, k, initial_centroids, maxiter, record_heterogeneity=None, verbose=False):\n '''This function runs k-means on given data and initial set of centroids.\n maxiter: maximum number of iterations to run.\n record_heterogeneity: (optional) a list, to store the history of heterogeneity as function of iterations\n if None, do not store the history.\n verbose: if True, print how many data points changed their cluster labels in each iteration'''\n centroids = initial_centroids[:]\n prev_cluster_assignment = None\n \n for itr in xrange(maxiter): \n if verbose:\n print(itr)\n \n # 1. Make cluster assignments using nearest centroids\n # YOUR CODE HERE\n cluster_assignment = assign_clusters(data, centroids)\n \n # 2. 
Compute a new centroid for each of the k clusters, averaging all data points assigned to that cluster.\n # YOUR CODE HERE\n centroids = revise_centroids(data, k, cluster_assignment)\n \n # Check for convergence: if none of the assignments changed, stop\n if prev_cluster_assignment is not None and \\\n (prev_cluster_assignment==cluster_assignment).all():\n break\n \n # Print number of new assignments \n if prev_cluster_assignment is not None:\n num_changed = np.sum(prev_cluster_assignment!=cluster_assignment)\n if verbose:\n print(' {0:5d} elements changed their cluster assignment.'.format(num_changed)) \n \n # Record heterogeneity convergence metric\n if record_heterogeneity is not None:\n # YOUR CODE HERE\n score = compute_heterogeneity(data, k, centroids, cluster_assignment)\n record_heterogeneity.append(score)\n \n prev_cluster_assignment = cluster_assignment[:]\n \n return centroids, cluster_assignment", "Plotting convergence metric\nWe can use the above function to plot the convergence metric across iterations.", "def plot_heterogeneity(heterogeneity, k):\n plt.figure(figsize=(7,4))\n plt.plot(heterogeneity, linewidth=4)\n plt.xlabel('# Iterations')\n plt.ylabel('Heterogeneity')\n plt.title('Heterogeneity of clustering over time, K={0:d}'.format(k))\n plt.rcParams.update({'font.size': 16})\n plt.tight_layout()", "Let's consider running k-means with K=3 clusters for a maximum of 400 iterations, recording cluster heterogeneity at every step. Then, let's plot the heterogeneity over iterations using the plotting function above.", "k = 3\nheterogeneity = []\ninitial_centroids = get_initial_centroids(tf_idf, k, seed=0)\ncentroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,\n record_heterogeneity=heterogeneity, verbose=True)\nplot_heterogeneity(heterogeneity, k)", "Quiz Question. (True/False) The clustering objective (heterogeneity) is non-increasing for this example.\nQuiz Question. Let's step back from this particular example. 
If the clustering objective (heterogeneity) would ever increase when running k-means, that would indicate: (choose one)\n\nk-means algorithm got stuck in a bad local minimum\nThere is a bug in the k-means code\nAll data points consist of exact duplicates\nNothing is wrong. The objective should generally go down sooner or later.\n\nQuiz Question. Which of the clusters contains the greatest number of data points in the end? Hint: Use np.bincount() to count occurrences of each cluster label.\n 1. Cluster #0\n 2. Cluster #1\n 3. Cluster #2", "np.bincount(cluster_assignment)", "Beware of local minima\nOne weakness of k-means is that it tends to get stuck in a local minimum. To see this, let us run k-means multiple times, with different initial centroids created using different random seeds.\nNote: Again, in practice, you should set different seeds for every run. We give you a list of seeds for this assignment so that everyone gets the same answer.\nThis may take several minutes to run.", "k = 10\nheterogeneity = {}\nimport time\nstart = time.time()\nfor seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]:\n initial_centroids = get_initial_centroids(tf_idf, k, seed)\n centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,\n record_heterogeneity=None, verbose=False)\n # To save time, compute heterogeneity only once in the end\n heterogeneity[seed] = compute_heterogeneity(tf_idf, k, centroids, cluster_assignment)\n print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity[seed]))\n cluster_size = np.bincount(cluster_assignment)\n biggest_cluster = np.argmax(cluster_size)\n print 'Biggest Cluster: {}, Size: {}'.format(biggest_cluster, cluster_size[biggest_cluster])\n sys.stdout.flush()\nend = time.time()\nprint(end-start)
Another way to capture the effect of changing initialization is to look at the distribution of cluster assignments. Add a line to the code above to compute the size (# of member data points) of clusters for each run of k-means. Look at the size of the largest cluster (most # of member data points) across multiple runs, with seeds 0, 20000, ..., 120000. How much does this measure vary across the runs? What are the minimum and maximum values this quantity takes?\nOne effective way to counter this tendency is to use k-means++ to provide a smart initialization. This method tries to spread out the initial set of centroids so that they are not too close together. It is known to improve the quality of local optima and lower average runtime.", "def smart_initialize(data, k, seed=None):\n '''Use k-means++ to initialize a good set of centroids'''\n if seed is not None: # useful for obtaining consistent results\n np.random.seed(seed)\n centroids = np.zeros((k, data.shape[1]))\n \n # Randomly choose the first centroid.\n # Since we have no prior knowledge, choose uniformly at random\n idx = np.random.randint(data.shape[0])\n centroids[0] = data[idx,:].toarray()\n # Compute distances from the first centroid chosen to all the other data points\n distances = pairwise_distances(data, centroids[0:1], metric='euclidean').flatten()\n \n for i in xrange(1, k):\n # Choose the next centroid randomly, so that the probability for each data point to be chosen\n # is proportional to its distance from the nearest centroid. (Canonical k-means++\n # uses the squared distance; either way, a new centroid tends to land far from\n # the existing centroids.)\n idx = np.random.choice(data.shape[0], 1, p=distances/sum(distances))\n centroids[i] = data[idx,:].toarray()\n # Now compute distances from the centroids to all data points\n distances = np.min(pairwise_distances(data, centroids[0:i+1], metric='euclidean'),axis=1)\n \n return centroids", "Let's now rerun k-means with 10 clusters using the same set of seeds, but 
always using k-means++ to initialize the algorithm.\nThis may take several minutes to run.", "k = 10\nheterogeneity_smart = {}\nstart = time.time()\nfor seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]:\n initial_centroids = smart_initialize(tf_idf, k, seed)\n centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,\n record_heterogeneity=None, verbose=False)\n # To save time, compute heterogeneity only once in the end\n heterogeneity_smart[seed] = compute_heterogeneity(tf_idf, k, centroids, cluster_assignment)\n print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity_smart[seed]))\n sys.stdout.flush()\nend = time.time()\nprint(end-start)", "Let's compare the set of cluster heterogeneities we got from our 7 restarts of k-means using random initialization compared to the 7 restarts of k-means using k-means++ as a smart initialization.\nThe following code produces a box plot for each of these methods, indicating the spread of values produced by each method.", "plt.figure(figsize=(8,5))\nplt.boxplot([heterogeneity.values(), heterogeneity_smart.values()], vert=False)\nplt.yticks([1, 2], ['k-means', 'k-means++'])\nplt.rcParams.update({'font.size': 16})\nplt.tight_layout()", "A few things to notice from the box plot:\n* Random initialization results in a worse clustering than k-means++ on average.\n* The best result of k-means++ is better than the best result of random initialization.\nIn general, you should run k-means at least a few times with different initializations and then return the run resulting in the lowest heterogeneity. Let us write a function that runs k-means multiple times and picks the best run that minimizes heterogeneity. 
The function accepts an optional list of seed values to be used for the multiple runs; if no such list is provided, the current UTC time is used as seed values.", "def kmeans_multiple_runs(data, k, maxiter, num_runs, seed_list=None, verbose=False):\n heterogeneity = {}\n \n min_heterogeneity_achieved = float('inf')\n best_seed = None\n final_centroids = None\n final_cluster_assignment = None\n \n for i in xrange(num_runs):\n \n # Use UTC time if no seeds are provided \n if seed_list is not None: \n seed = seed_list[i]\n np.random.seed(seed)\n else: \n seed = int(time.time())\n np.random.seed(seed)\n \n # Use k-means++ initialization\n # YOUR CODE HERE\n initial_centroids = smart_initialize(data, k, seed)\n \n # Run k-means\n # YOUR CODE HERE\n centroids, cluster_assignment = kmeans(data, k, initial_centroids, maxiter)\n \n # To save time, compute heterogeneity only once in the end\n # YOUR CODE HERE\n heterogeneity[seed] = compute_heterogeneity(data, k, centroids, cluster_assignment)\n \n if verbose:\n print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity[seed]))\n sys.stdout.flush()\n \n # if current measurement of heterogeneity is lower than previously seen,\n # update the minimum record of heterogeneity.\n if heterogeneity[seed] < min_heterogeneity_achieved:\n min_heterogeneity_achieved = heterogeneity[seed]\n best_seed = seed\n final_centroids = centroids\n final_cluster_assignment = cluster_assignment\n \n # Return the centroids and cluster assignments that minimize heterogeneity.\n return final_centroids, final_cluster_assignment", "How to choose K\nSince we are measuring the tightness of the clusters, a higher value of K reduces the possible heterogeneity metric by definition. For example, if we have N data points and set K=N clusters, then we could have 0 cluster heterogeneity by setting the N centroids equal to the values of the N data points. 
(Note: Not all runs for larger K will result in lower heterogeneity than a single run with smaller K due to local optima.) Let's explore this general trend for ourselves by performing the following analysis.\nUse the kmeans_multiple_runs function to run k-means with five different values of K. For each K, use k-means++ and multiple runs to pick the best solution. In what follows, we consider K=2,10,25,50,100 and 7 restarts for each setting.\nIMPORTANT: The code block below will take about one hour to finish. We highly suggest that you use the arrays that we have computed for you.\nSide note: In practice, a good implementation of k-means would utilize parallelism to run multiple runs of k-means at once. For an example, see scikit-learn's KMeans.", "#def plot_k_vs_heterogeneity(k_values, heterogeneity_values):\n# plt.figure(figsize=(7,4))\n# plt.plot(k_values, heterogeneity_values, linewidth=4)\n# plt.xlabel('K')\n# plt.ylabel('Heterogeneity')\n# plt.title('K vs. Heterogeneity')\n# plt.rcParams.update({'font.size': 16})\n# plt.tight_layout()\n\n#start = time.time()\n#centroids = {}\n#cluster_assignment = {}\n#heterogeneity_values = []\n#k_list = [2, 10, 25, 50, 100]\n#seed_list = [0, 20000, 40000, 60000, 80000, 100000, 120000]\n\n#for k in k_list:\n# heterogeneity = []\n# centroids[k], cluster_assignment[k] = kmeans_multiple_runs(tf_idf, k, maxiter=400,\n# num_runs=len(seed_list),\n# seed_list=seed_list,\n# verbose=True)\n# score = compute_heterogeneity(tf_idf, k, centroids[k], cluster_assignment[k])\n# heterogeneity_values.append(score)\n\n#plot_k_vs_heterogeneity(k_list, heterogeneity_values)\n\n#end = time.time()\n#print(end-start)", "To use the pre-computed NumPy arrays, first download kmeans-arrays.npz as mentioned in the reading for this assignment and load them with the following code. 
Make sure the downloaded file is in the same directory as this notebook.", "def plot_k_vs_heterogeneity(k_values, heterogeneity_values):\n plt.figure(figsize=(7,4))\n plt.plot(k_values, heterogeneity_values, linewidth=4)\n plt.xlabel('K')\n plt.ylabel('Heterogeneity')\n plt.title('K vs. Heterogeneity')\n plt.rcParams.update({'font.size': 16})\n plt.tight_layout()\n\nfilename = 'kmeans-arrays.npz'\n\nheterogeneity_values = []\nk_list = [2, 10, 25, 50, 100]\n\nif os.path.exists(filename):\n arrays = np.load(filename)\n centroids = {}\n cluster_assignment = {}\n for k in k_list:\n print k\n sys.stdout.flush()\n '''To save memory space, do not load the arrays from the file right away. We use\n a technique known as lazy evaluation, where some expressions are not evaluated\n until later. Any expression appearing inside a lambda function doesn't get\n evaluated until the function is called.\n Lazy evaluation is extremely important in memory-constrained setting, such as\n an Amazon EC2 t2.micro instance.'''\n centroids[k] = lambda k=k: arrays['centroids_{0:d}'.format(k)]\n cluster_assignment[k] = lambda k=k: arrays['cluster_assignment_{0:d}'.format(k)]\n score = compute_heterogeneity(tf_idf, k, centroids[k](), cluster_assignment[k]())\n heterogeneity_values.append(score)\n \n plot_k_vs_heterogeneity(k_list, heterogeneity_values)\n\nelse:\n print('File not found. Skipping.')", "In the above plot we show that heterogeneity goes down as we increase the number of clusters. Does this mean we should always favor a higher K? Not at all! As we will see in the following section, setting K too high may end up separating data points that are actually pretty alike. At the extreme, we can set individual data points to be their own clusters (K=N) and achieve zero heterogeneity, but separating each data point into its own cluster is hardly a desirable outcome. 
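The K = N extreme mentioned above is easy to verify on toy data (a standalone sketch using made-up points, not the Wikipedia data):

```python
import numpy as np

# Four toy points; with K = N, each point serves as its own centroid.
data = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
centroids = data.copy()
assignment = np.arange(data.shape[0])  # point i belongs to cluster i

# Heterogeneity: sum of squared distances to the assigned centroids.
J = np.sum((data - centroids[assignment]) ** 2)
print(J)  # 0.0 -- every point sits exactly on its own centroid
```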
In the following section, we will learn how to detect a K set \"too large\".\nVisualize clusters of documents\nLet's start visualizing some clustering results to see if we think the clustering makes sense. We can use such visualizations to help us assess whether we have set K too large or too small for a given application. Following the theme of this course, we will judge whether the clustering makes sense in the context of document analysis.\nWhat are we looking for in a good clustering of documents?\n* Documents in the same cluster should be similar.\n* Documents from different clusters should be less similar.\nSo a bad clustering exhibits either of two symptoms:\n* Documents in a cluster have mixed content.\n* Documents with similar content are divided up and put into different clusters.\nTo help visualize the clustering, we do the following:\n* Fetch nearest neighbors of each centroid from the set of documents assigned to that cluster. We will consider these documents as being representative of the cluster.\n* Print titles and first sentences of those nearest neighbors.\n* Print top 5 words that have highest tf-idf weights in each centroid.", "def visualize_document_clusters(wiki, tf_idf, centroids, cluster_assignment, k, map_index_to_word, display_content=True):\n '''wiki: original dataframe\n tf_idf: data matrix, sparse matrix format\n map_index_to_word: SFrame specifying the mapping betweeen words and column indices\n display_content: if True, display 8 nearest neighbors of each centroid'''\n \n print('==========================================================')\n\n # Visualize each cluster c\n for c in xrange(k):\n # Cluster heading\n print('Cluster {0:d} '.format(c)),\n # Print top 5 words with largest TF-IDF weights in the cluster\n idx = centroids[c].argsort()[::-1]\n for i in xrange(5): # Print each word along with the TF-IDF weight\n print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroids[c,idx[i]])),\n print('')\n \n if 
display_content:\n # Compute distances from the centroid to all data points in the cluster,\n # and compute nearest neighbors of the centroids within the cluster.\n distances = pairwise_distances(tf_idf, centroids[c].reshape(1, -1), metric='euclidean').flatten()\n distances[cluster_assignment!=c] = float('inf') # remove non-members from consideration\n nearest_neighbors = distances.argsort()\n # For 8 nearest neighbors, print the title as well as first 180 characters of text.\n # Wrap the text at 80-character mark.\n for i in xrange(8):\n text = ' '.join(wiki[nearest_neighbors[i]]['text'].split(None, 25)[0:25])\n print('\\n* {0:50s} {1:.5f}\\n {2:s}\\n {3:s}'.format(wiki[nearest_neighbors[i]]['name'],\n distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))\n print('==========================================================')", "Let us first look at the 2 cluster case (K=2).", "'''Notice the extra pairs of parentheses for centroids and cluster_assignment.\n The centroid and cluster_assignment are still inside the npz file,\n and we need to explicitly indicate when to load them into memory.'''\nvisualize_document_clusters(wiki, tf_idf, centroids[2](), cluster_assignment[2](), 2, map_index_to_word)", "Both clusters have mixed content, although cluster 1 is much purer than cluster 0:\n* Cluster 0: artists, songwriters, professors, politicians, writers, etc.\n* Cluster 1: baseball players, hockey players, football (soccer) players, etc.\nTop words of cluster 1 are all related to sports, whereas top words of cluster 0 show no clear pattern.\nRoughly speaking, the entire dataset was divided into athletes and non-athletes. It would be better if we sub-divided non-athletes into more categories. So let us use more clusters. 
How about K=10?", "k = 10\nvisualize_document_clusters(wiki, tf_idf, centroids[k](), cluster_assignment[k](), k, map_index_to_word)", "Clusters 0 and 2 appear to be still mixed, but others are quite consistent in content.\n* Cluster 0: artists, poets, writers, environmentalists\n* Cluster 1: film directors\n* Cluster 2: female figures from various fields\n* Cluster 3: politicians\n* Cluster 4: track and field athletes\n* Cluster 5: composers, songwriters, singers, music producers\n* Cluster 6: soccer (football) players\n* Cluster 7: baseball players\n* Cluster 8: professors, researchers, scholars\n* Cluster 9: lawyers, judges, legal scholars\nClusters are now more pure, but some are qualitatively \"bigger\" than others. For instance, the category of scholars is more general than the category of baseball players. Increasing the number of clusters may split larger clusters. Another way to look at the size of the clusters is to count the number of articles in each cluster.", "np.bincount(cluster_assignment[10]())", "Quiz Question. Which of the 10 clusters above contains the greatest number of articles?\n\nCluster 0: artists, poets, writers, environmentalists\nCluster 4: track and field athletes\nCluster 5: composers, songwriters, singers, music producers\nCluster 7: baseball players\nCluster 9: lawyers, judges, legal scholars\n\nQuiz Question. Which of the 10 clusters contains the least number of articles?\n\nCluster 1: film directors\nCluster 3: politicians\nCluster 6: soccer (football) players\nCluster 7: baseball players\nCluster 9: lawyers, judges, legal scholars\n\nThere appears to be at least some connection between the topical consistency of a cluster and the number of its member data points.\nLet us visualize the case for K=25. For the sake of brevity, we do not print the content of documents. 
It turns out that the top words with highest TF-IDF weights in each cluster are representative of the cluster.", "visualize_document_clusters(wiki, tf_idf, centroids[25](), cluster_assignment[25](), 25,\n map_index_to_word, display_content=False) # turn off text for brevity", "Looking at the representative examples and top words, we classify each cluster as follows. Notice the bolded items, which indicate the appearance of a new theme.\n* Cluster 0: composers, songwriters, singers, music producers\n* Cluster 1: poets\n* Cluster 2: rugby players\n* Cluster 3: baseball players\n* Cluster 4: government officials\n* Cluster 5: football players\n* Cluster 6: radio hosts\n* Cluster 7: actors, TV directors\n* Cluster 8: professors, researchers, scholars\n* Cluster 9: lawyers, judges, legal scholars\n* Cluster 10: track and field athletes\n* Cluster 11: (mixed; no clear theme)\n* Cluster 12: car racers\n* Cluster 13: priests, bishops, church leaders\n* Cluster 14: painters, sculptors, artists\n* Cluster 15: novelists\n* Cluster 16: American football players\n* Cluster 17: golfers\n* Cluster 18: American politicians\n* Cluster 19: basketball players\n* Cluster 20: generals of U.S. Air Force\n* Cluster 21: politicians\n* Cluster 22: female figures of various fields\n* Cluster 23: film directors\n* Cluster 24: music directors, composers, conductors\nIndeed, increasing K achieved the desired effect of breaking up large clusters. Depending on the application, this may or may not be preferable to the K=10 analysis.\nLet's take it to the extreme and set K=100. We have a suspicion that this value is too large. Let us look at the top words from each cluster:", "k=100\nvisualize_document_clusters(wiki, tf_idf, centroids[k](), cluster_assignment[k](), k,\n map_index_to_word, display_content=False)\n# turn off text for brevity -- turn it on if you are curious ;)", "The class of rugby players has been broken into two clusters (11 and 72). 
The same goes for soccer (football) players (clusters 6, 21, 40, and 87), although some may like the benefit of having a separate category for Australian Football League. The class of baseball players has also been broken into two clusters (18 and 95).\nA high value of K encourages pure clusters, but we cannot keep increasing K. For large enough K, related documents end up going to different clusters.\nThat said, the result for K=100 is not entirely bad. After all, it gives us separate clusters for such categories as Scotland, Brazil, LGBT, computer science and the Mormon Church. If we set K somewhere between 25 and 100, we should be able to avoid breaking up clusters while discovering new ones.\nAlso, we should ask ourselves how much granularity we want in our clustering. If we want a rough sketch of Wikipedia, we don't want too detailed clusters. On the other hand, having many clusters can be valuable when we are zooming into a certain part of Wikipedia.\nThere is no golden rule for choosing K. It all depends on the particular application and domain we are in.\nAnother heuristic people use that does not rely on so much visualization, which can be hard in many applications (including here!), is as follows. Track heterogeneity versus K and look for the \"elbow\" of the curve, where heterogeneity decreases rapidly before this value of K but only gradually for larger values of K. This naturally trades off minimizing heterogeneity against keeping model complexity low. In the heterogeneity versus K plot made above, we did not yet really see a flattening out of the heterogeneity, which might indicate that K=100 is indeed \"reasonable\" and we only see real overfitting for larger values of K (which are even harder to visualize using the methods we attempted above).\nQuiz Question. Another sign of too large a K is having lots of small clusters. Look at the distribution of cluster sizes (by number of member data points). 
How many of the 100 clusters have fewer than 236 articles, i.e. 0.4% of the dataset?\nHint: Use cluster_assignment[100](), with the extra pair of parentheses for delayed loading.", "(np.bincount(cluster_assignment[100]()) < 236).sum()", "Takeaway\nKeep in mind though that tiny clusters aren't necessarily bad. A tiny cluster of documents that really look like each other is definitely preferable to a medium-sized cluster of documents with mixed content. However, having too few articles in a cluster may cause overfitting by reading too much into a limited pool of training data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cdawei/digbeta
dchen/music/playlist_multilabel_example.ipynb
gpl-3.0
[ "A simple example of generating playlist by multilable learning", "%matplotlib inline\n\nimport os, sys, time\nimport pickle as pkl\nimport numpy as np\nimport pandas as pd\nimport sklearn as sk\nfrom sklearn.linear_model import LogisticRegression\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndata_dir = 'data'\nfaotm = os.path.join(data_dir, 'aotm-2011/aotm-2011-subset.pkl')\nfmap = os.path.join(data_dir, 'aotm-2011/map_song_track.pkl')\nftag = os.path.join(data_dir, 'msd/msd_tagtraum_cd2c.cls')", "Data loading\nLoad playlists.", "playlists = pkl.load(open(faotm, 'rb'))\n\nprint('#Playlists: %d' % len(playlists))\n\nplaylists[0]\n\nprint('#Songs: %d' % len({songID for p in playlists for songID in p['filtered_lists'][0]}))\n\nlengths = [len(p['filtered_lists'][0]) for p in playlists]\n#plt.hist(lengths, bins=20)\nprint('Average playlist length: %.1f' % np.mean(lengths))", "Load song_id --> track_id mapping: a song may correspond to multiple tracks.", "song2TrackID = pkl.load(open(fmap, 'rb'))\n\n{ k : song2TrackID[k] for k in list(song2TrackID.keys())[:10] }", "Load song tags, build track_id --> tag mapping.", "track2Tags = dict()\n\nwith open(ftag) as f:\n for line in f:\n if line[0] == '#': continue\n tid, tag = line.strip().split('\\t')\n #print(tid, tag)\n track2Tags[tid] = tag\n\nprint('#(Track, Tag): %d' % len(track2Tags))\n\n{ k : track2Tags[k] for k in list(track2Tags.keys())[:10] }", "Data cleaning\nUse the subset of playlist such that the first song (i.e. 
the seed song) in each playlist has tag(s).", "subset_ix = []\n\nseedSong2Tag = { }\nfor ix in range(len(playlists)):\n # the list of song IDs in the playlist\n songIDs = playlists[ix]['filtered_lists'][0]\n\n # seed song\n seedSongID = songIDs[0]\n seedTrackIDs = song2TrackID[seedSongID]\n \n # make sure that at least one track for the song has a corresponding tag\n flag = [ (trackID in track2Tags) for trackID in seedTrackIDs]\n if not np.any(flag):\n continue\n\n seedSong2Tag[playlists[ix]['mix_id']] = [ track2Tags[seedTrackIDs[i]] for i in range(0, len(flag)) if flag[i] == True ]\n\n subset_ix.append(ix)\n\n#seedSong2Tag\n\nplaylists_subset = [playlists[ix] for ix in subset_ix]\n\nprint('#Playlists used: %d' % len(subset_ix))", "The set of unique songs, in multilabel learning, we have a label for each song in this set.", "song_set = sorted({songID for p in playlists_subset for songID in p['filtered_lists'][0]})\n\nprint('#Songs used: %d' % len(song_set))\n\nprint(song_set[:10])", "Data analysis\nFor the most part, playlists contain less than 10 songs. 
The most common playlist length is 2 songs.", "playlist_lengths = [len(playlist['filtered_lists'][0]) for playlist in playlists_subset]\nplt.hist(playlist_lengths, bins=20)\nprint('Average playlist length: %.1f' % np.mean(playlist_lengths))", "Song_id --> Song_name mapping.", "songID2Name = {s[1]: s[0] for p in playlists_subset for s in p['playlist']}\n\n#songID2Name", "One-hot tag encoding\nIndicator of tags: tag --> index mapping.", "# the set of unique tags\ntag_set = sorted(set(track2Tags.values()))\n\nprint('#Tags: %d' % len(tag_set))\n\ntag_indicator = { tag: ix for ix, tag in enumerate(tag_set) }\n\ntag_indicator", "Feature extraction\nBuild features (1-hot encoding of tag) for a song given its song_id.", "def gen_features(song_id, song2TrackID = song2TrackID, tag_indicator = tag_indicator):\n \"\"\"\n Generate one-hot feature vector for a given song ID\n \"\"\"\n\n features = np.zeros(len(tag_set), dtype = np.float)\n trackIDs = song2TrackID[song_id]\n\n cnt = 0\n for trackID in trackIDs:\n if trackID in track2Tags:\n cnt += 1\n tag = track2Tags[trackID]\n tag_ix = tag_indicator[tag]\n features[tag_ix] = 1\n\n # must have at least one tag for the song, else useless\n assert(cnt >= 1)\n\n return features\n\ndef gen_feature_map(song_id, seed):\n \"\"\"\n Generate feature mapping for a given (label, query) pair\n \"\"\"\n \n #return gen_features(song_id) - gen_features(seed) # feature map\n return gen_features(seed) # a trivial feature map\n\ndef gen_training_set(label_ix, playlists = playlists_subset, song_set = song_set):\n \"\"\"\n Create the labelled dataset for a given song index\n \n Input:\n - label_ix: song index, number in { 0, ..., # songs }\n - playlists: which playlists to create features for\n \n Output:\n - (Feature, Label) pair (X, y), with # num playlists rows\n X comprises the features for each seed song and the given song\n y comprises the indicator of whether the given song is present in the respective playlist\n \"\"\"\n\n 
assert(label_ix >= 0)\n assert(label_ix < len(song_set))\n\n N = len(playlists)\n d = len(tag_set)\n\n X = np.zeros((N, d), dtype = np.float)\n y = np.zeros(N, dtype = np.float)\n \n whichSong = song_set[label_ix]\n \n for i in range(len(playlists)):\n playlist = playlists[i]['filtered_lists'][0]\n seed = playlist[0]\n\n X[i,:] = gen_feature_map(whichSong, seed)\n y[i] = int(whichSong in playlist)\n\n return X, y\n\ngen_feature_map(song_set[100], playlists_subset[0]['filtered_lists'][0][0])", "Training & Testing\nTrain a logistic regression model for each label.", "classifiers = [LogisticRegression(class_weight='balanced') for i in range(len(song_set))]\n\nallPreds = [ ]\nallTruths = [ ]\ncoefMat = [ ]\nlabelIndices = [ ]\n\nY = np.NAN * np.ones((len(playlists_subset), len(song_set)))\n\nfor label_ix in range(len(song_set)):\n X, y = gen_training_set(label_ix)\n Y[:,label_ix] = y\n \n # by fixing random seed, the same playlists will be in the test set each time\n X_train, X_test, y_train, y_test = sk.model_selection.train_test_split(X, y, \\\n test_size = 0.33, \\\n random_state = 31) \n \n if np.max(y_train) == 0.0: # or np.max(y_test) == 0.0:\n continue\n\n classifiers[label_ix].fit(X_train, y_train)\n \n allPreds.append(classifiers[label_ix].decision_function(X_test))\n allTruths.append(y_test) \n\n coefMat.append(classifiers[label_ix].coef_.reshape(-1))\n labelIndices.append(label_ix)\n #print(classifiers[label_ix].coef_)\n #print(classifiers[label_ix].intercept_)\n\nallPreds = np.array(allPreds).T\nallTruths = np.array(allTruths).T\n\nprint(allPreds.shape)\nprint(allTruths.shape)", "Evaluation\nCompute AUC.", "aucs = [ ]\nfor i in range(0,allPreds.shape[0]):\n pred = allPreds[i,:]\n truth = allTruths[i,:]\n \n if np.max(truth) == 0.0:\n continue\n \n aucs.append(sk.metrics.roc_auc_score(truth, pred))\n \nprint('Average AUC: %1.4f' % np.mean(aucs))\nplt.hist(aucs, bins = 10);", "Compute average precision.\nResult analysis\nCoefficient matrix (#Genres, 
#Songs).", "coefMat = np.array(coefMat).T\n\ncoefMat.shape\n\n#sns.heatmap(coefMat[:, :30])", "Top 10 songs of each genre (w.r.t.) the coefficients.", "labelIndices = np.array(labelIndices)\n\nTop10Songs_ix = [ ]\nfor i in range(coefMat.shape[0]):\n ix = np.argsort(coefMat[i, :])[::-1][:10]\n Top10Songs_ix.append(labelIndices[ix])\n \nBot10Songs_ix = [ ]\nfor i in range(coefMat.shape[0]):\n ix = np.argsort(coefMat[i, :])[:10]\n Bot10Songs_ix.append(labelIndices[ix])\n\n#Top10Songs_ix\n\n#np.array(song_set)[Top10Songs_ix[0]]\n\ncols = ['Genre.Count'] + ['Top %d' % k for k in range(1, 11)] + ['Bot %d' % k for k in range(1, 11)]\nTop10Songs = pd.DataFrame(np.zeros((len(tag_set), 21), dtype = object),\n index = tag_set, columns = cols)\n\n# number of appearances of playlists with each genre\nS = X.sum(axis = 0)\nidx = np.argsort(S)[::-1]\n#[(tag_set[i], S[i]) for i in idx]\n\n# number of appearances of each song in a playlist\nplt.hist(Y.sum(axis = 0));\nplt.xlabel('# of playlist appearances');\n\nfor i in range(len(tag_set)):\n row = tag_set[i]\n Top10Songs.loc[row, 'Genre.Count'] = S[i]\n for j in range(10):\n song_ix = Top10Songs_ix[i][j]\n songID = song_set[song_ix]\n songName = (songID, songID2Name[songID][0], songID2Name[songID][1])\n col = 'Top %d' % (j+1)\n Top10Songs.loc[row, col] = songName\n \n song_ix = Bot10Songs_ix[i][j]\n songID = song_set[song_ix]\n songName = (songID, songID2Name[songID][0], songID2Name[songID][1]) \n col = 'Bot %d' % (j+1)\n Top10Songs.loc[row, col] = songName \n \nTop10Songs = Top10Songs.sort_values(['Genre.Count'], ascending=False)\n\nTop10Songs.head(5)\n\nrapPlaylists = [ k for k in seedSong2Tag if 'Rap' in seedSong2Tag[k] ]\n\n[ p['playlist'] for p in playlists_subset if p['mix_id'] in rapPlaylists ]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mbakker7/ttim
pumpingtest_benchmarks/10_moench_test.ipynb
mit
[ "Test for anisotropic water-table aquifer\nThis test is taken from examples presented in MLU tutorial.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ttim import *\nimport pandas as pd", "Set basic parameters:", "b = 10 #aquifer thickness in m\nQ = 172.8 #constant discharge rate in m^3/d\nrw = 0.1 #well radius in m\nrc = 0.1 #casing radius in m", "Load datasets of observation wells:", "r1 = 3.16 \nr2 = 31.6\ndata0 = np.loadtxt('data/moench_pumped.txt', skiprows=1)\nt0 = data0[:, 0] / 60 / 60 / 24 #convert time from seconds to days\nh0 = -data0[:, 1]\ndata1 = np.loadtxt('data/moench_ps1.txt', skiprows=1)\nt1 = data1[:, 0] / 60 / 60 / 24 #convert time from seconds to days\nh1 = -data1[:, 1]\ndata2 = np.loadtxt('data/moench_pd1.txt', skiprows=1)\nt2 = data2[:, 0] / 60 / 60 / 24 #convert time from seconds to days\nh2 = -data2[:, 1]\ndata3 = np.loadtxt('data/moench_ps2.txt', skiprows=1)\nt3 = data3[:, 0] / 60 / 60 / 24 #convert time from seconds to days\nh3 = -data3[:, 1]\ndata4 = np.loadtxt('data/moench_pd2.txt', skiprows=1)\nt4 = data4[:, 0] / 60 / 60 / 24 #convert time from seconds to days\nh4 = -data4[:, 1]", "Check how well TTim can simulate drawdowns in a vertically anisotropic water-table aquifer:", "#Set kaq, Saq, Sy and kzoverkh as given in Moench (1997)\nkaq = 1e-4 * 60 * 60 * 24 #convert from m/s to m/d\nSy = 0.2\nSaq = 2e-5\nzh = 0.5 #kzoverkh\n\nml1 = Model3D(kaq=kaq, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[Sy, Saq, Saq, Saq], \\\n kzoverkh=zh, tmin=1e-5, tmax=3)\nw1 = Well(ml1, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)\nml1.solve()\n\nhm1 = ml1.head(r1, 0, t1, layers=1)[0]\nhm2 = ml1.head(r1, 0, t2, layers=3)[0]\nhm3 = ml1.head(r2, 0, t3, layers=1)[0]\nhm4 = ml1.head(r2, 0, t4, layers=3)[0]\nhm0 = ml1.head(0, 0, t0, layers=3)[0]\nplt.figure(figsize=(8, 5))\nplt.loglog(t0, -h0, '.', label='pumped well')\nplt.loglog(t0, -hm0, label='ttim pumped well')\nplt.loglog(t1, -h1, '.', label='PS1')\nplt.loglog(t1, -hm1, 
label='ttim PS1')\nplt.loglog(t2, -h2, '.', label='PD1')\nplt.loglog(t2, -hm2, label='ttim PD1')\nplt.loglog(t3, -h3, '.', label='PS2')\nplt.loglog(t3, -hm3, label='ttim PS2')\nplt.loglog(t4, -h4, '.', label='PD2')\nplt.loglog(t4, -hm4, label='ttim PD2')\nplt.legend();\n\nres1 = 0\nres2 = 0\nres3 = 0\nres4 = 0\nres0 = 0\nfor i in range(len(h1)):\n r = (h1[i] - hm1[i]) ** 2\n res1 = res1 + r\nfor i in range(len(h2)):\n r = (h2[i] - hm2[i]) ** 2\n res2 = res2 + r\nfor i in range(len(h3)):\n r = (h3[i] - hm3[i]) ** 2\n res3 = res3 + r\nfor i in range(len(h4)):\n r = (h4[i] - hm4[i]) ** 2\n res4 = res4 + r\nfor i in range(len(h0)):\n r = (h0[i] - hm0[i]) ** 2\n res0 = res0 + r\n \nn = len(h1) + len(h2) + len(h3) + len(h4) + len(h0)\nresiduals = res1 + res2 + res3 + res4 + res0\nrmse = np.sqrt(residuals/n)\nprint('RMSE:', rmse)", "Try calibrating model to find the parameters:", "ml2 = Model3D(kaq=1, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[0.1, 1e-4, 1e-4, 1e-4], \\\n kzoverkh=1, tmin=1e-5, tmax=3)\nw2 = Well(ml2, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)\nml2.solve()\n\nca2 = Calibrate(ml2)\nca2.set_parameter(name='kaq0_3', initial=1)\nca2.set_parameter(name='Saq0', initial=0.2)\nca2.set_parameter(name='Saq1_3', initial=1e-4, pmin=0)\nca2.set_parameter_by_reference(name='kzoverkh', parameter=ml2.aq.kzoverkh, \\\n initial=0.1, pmin=0)\nca2.series(name='pumped', x=0, y=0, t=t0, h=h0, layer=3)\nca2.series(name='PS1', x=r1, y=0, t=t1, h=h1, layer=1)\nca2.series(name='PD1', x=r1, y=0, t=t2, h=h2, layer=3)\nca2.series(name='PS2', x=r2, y=0, t=t3, h=h3, layer=1)\nca2.series(name='PD2', x=r2, y=0, t=t4, h=h4, layer=3)\nca2.fit()\n\ndisplay(ca2.parameters)\nprint('RMSE:', ca2.rmse())\n\nhm0_2 = ml2.head(0, 0, t0, layers=3)[0]\nhm1_2 = ml2.head(r1, 0, t1, layers=1)[0]\nhm2_2 = ml2.head(r1, 0, t2, layers=3)[0]\nhm3_2 = ml2.head(r2, 0, t3, layers=1)[0]\nhm4_2 = ml2.head(r2, 0, t4, layers=3)[0]\nplt.figure(figsize=(8, 5))\nplt.semilogx(t0, h0, '.', 
label='pumped')\nplt.semilogx(t0, hm0_2, label='ttim pumped')\nplt.semilogx(t1, h1, '.', label='PS1')\nplt.semilogx(t1, hm1_2, label='ttim PS1')\nplt.semilogx(t2, h2, '.', label='PD1')\nplt.semilogx(t2, hm2_2, label='ttim PD1')\nplt.semilogx(t3, h3, ',', label='PS2')\nplt.semilogx(t3, hm3_2, label='ttim PS2')\nplt.semilogx(t4, h4, '.', label='PD2')\nplt.semilogx(t4, hm4_2, label='ttim PD2')\nplt.legend();", "Try calibrating model with stratified kaq:", "ml3 = Model3D(kaq=1, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[0.1, 1e-4, 1e-4, 1e-4], \\\n kzoverkh=1, tmin=1e-5, tmax=3)\nw3 = Well(ml3, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)\nml3.solve()\n\nca3 = Calibrate(ml3)\nca3.set_parameter(name='kaq0', initial=1, pmin=0)\nca3.set_parameter(name='kaq1_3', initial=1)\nca3.set_parameter(name='Saq0', initial=0.2, pmin=0)\nca3.set_parameter(name='Saq1_3', initial=1e-4, pmin=0)\nca3.set_parameter_by_reference(name='kzoverkh', parameter=ml3.aq.kzoverkh, \\\n initial=0.1, pmin=0)\nca3.series(name='pumped', x=0, y=0, t=t0, h=h0, layer=3)\nca3.series(name='PS1', x=r1, y=0, t=t1, h=h1, layer=1)\nca3.series(name='PD1', x=r1, y=0, t=t2, h=h2, layer=3)\nca3.series(name='PS2', x=r2, y=0, t=t3, h=h3, layer=1)\nca3.series(name='PD2', x=r2, y=0, t=t4, h=h4, layer=3)\nca3.fit()\n\ndisplay(ca3.parameters)\nprint('RMSE:', ca3.rmse())\n\nhm0_3 = ml3.head(0, 0, t0, layers=3)[0]\nhm1_3 = ml3.head(r1, 0, t1, layers=1)[0]\nhm2_3 = ml3.head(r1, 0, t2, layers=3)[0]\nhm3_3 = ml3.head(r2, 0, t3, layers=1)[0]\nhm4_3 = ml3.head(r2, 0, t4, layers=3)[0]\nplt.figure(figsize=(8, 5))\nplt.semilogx(t0, h0, '.', label='pumped')\nplt.semilogx(t0, hm0_3, label='ttim pumped')\nplt.semilogx(t1, h1, '.', label='PS1')\nplt.semilogx(t1, hm1_3, label='ttim PS1')\nplt.semilogx(t2, h2, '.', label='PD1')\nplt.semilogx(t2, hm2_3, label='ttim PD1')\nplt.semilogx(t3, h3, ',', label='PS2')\nplt.semilogx(t3, hm3_3, label='ttim PS2')\nplt.semilogx(t4, h4, '.', label='PD2')\nplt.semilogx(t4, hm4_3, label='ttim PD2');", 
"Summary of calibrated values", "ca3.parameters['optimal'].values\n\nta = pd.DataFrame(columns=['Moench', 'TTim', 'TTim-stratified'],\\\n index=['k0[m/d]', 'k[m/d]', 'Sy[-]', 'Ss[1/m]', 'kz/kh'])\nta.loc[:, 'TTim-stratified'] = ca3.parameters['optimal'].values\nta.loc[1:, 'TTim'] = ca2.parameters['optimal'].values\nta.loc[1:, 'Moench'] = [8.640, 0.2, 2e-5, 0.5]\nta.loc['RMSE'] = [0.061318, ca2.rmse(), ca3.rmse()]\nta" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/zh-cn/agents/tutorials/5_replay_buffers_tutorial.ipynb
apache-2.0
[ "Copyright 2021 The TF-Agents Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "回放缓冲区\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://tensorflow.google.cn/agents/tutorials/5_replay_buffers_tutorial\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看</a>\n</td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/agents/tutorials/5_replay_buffers_tutorial.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行</a></td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/agents/tutorials/5_replay_buffers_tutorial.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 Github 上查看源代码</a>\n</td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/agents/tutorials/5_replay_buffers_tutorial.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载笔记本</a>\n</td>\n</table>\n\n简介\n强化学习算法使用回放缓冲区来存储在环境中执行策略时的经历轨迹。在训练过程中,将查询回放缓冲区中的轨迹子集(顺序子集或样本)以“回放”代理的经历。\n在本 Colab 中,我们将介绍两种回放缓冲区:Python 支持型和 Tensorflow 支持型,这两种类型采用共同的 API。在以下各部分中,我们将介绍 API、每种缓冲区实现以及如何在数据收集训练期间使用回放缓冲区。\n设置\n如果尚未安装 TF-Agents,请先安装。", "!pip install tf-agents\n\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport 
tensorflow as tf\nimport numpy as np\n\nfrom tf_agents import specs\nfrom tf_agents.agents.dqn import dqn_agent\nfrom tf_agents.drivers import dynamic_step_driver\nfrom tf_agents.environments import suite_gym\nfrom tf_agents.environments import tf_py_environment\nfrom tf_agents.networks import q_network\nfrom tf_agents.replay_buffers import py_uniform_replay_buffer\nfrom tf_agents.replay_buffers import tf_uniform_replay_buffer\nfrom tf_agents.specs import tensor_spec\nfrom tf_agents.trajectories import time_step", "Replay Buffer API\nThe replay buffer class has the following definition and methods:\n```python\nclass ReplayBuffer(tf.Module):\n \"\"\"Abstract base class for TF-Agents replay buffer.\"\"\"\n\n def __init__(self, data_spec, capacity):\n \"\"\"Initializes the replay buffer.\n\n Args:\n data_spec: A spec or a list/tuple/nest of specs describing\n a single item that can be stored in this buffer\n capacity: number of elements that the replay buffer can hold.\n \"\"\"\n\n @property\n def data_spec(self):\n \"\"\"Returns the spec for items in the replay buffer.\"\"\"\n\n @property\n def capacity(self):\n \"\"\"Returns the capacity of the replay buffer.\"\"\"\n\n def add_batch(self, items):\n \"\"\"Adds a batch of items to the replay buffer.\"\"\"\n\n def get_next(self,\n sample_batch_size=None,\n num_steps=None,\n time_stacked=True):\n \"\"\"Returns an item or batch of items from the buffer.\"\"\"\n\n def as_dataset(self,\n sample_batch_size=None,\n num_steps=None,\n num_parallel_calls=None):\n \"\"\"Creates and returns a dataset that returns entries from the buffer.\"\"\"\n\n def gather_all(self):\n \"\"\"Returns all the items in the buffer.\"\"\"\n return self._gather_all()\n\n def clear(self):\n \"\"\"Resets the contents of the replay buffer.\"\"\"\n```\nNote that when the replay buffer object is initialized, it requires the data_spec of the elements that it will store. This spec corresponds to the TensorSpec of the trajectory elements that will be added to the buffer. This spec is usually acquired by looking at an agent's agent.collect_data_spec, which defines the shapes, types, and structures expected by the agent when training (more on that later).\nTFUniformReplayBuffer\nTFUniformReplayBuffer is the most commonly used replay buffer in TF-Agents, so we will use it in this tutorial. In TFUniformReplayBuffer the backing buffer storage is done by Tensorflow variables and is thus part of the compute graph.\nThe buffer stores batches of elements, with a maximum capacity of max_length elements per batch segment. Thus, the total buffer capacity is batch_size x max_length elements. The elements stored in the buffer must all have a matching data spec. When the replay buffer is used for data collection, the spec is the agent's collect data spec.\nCreating the buffer:\nTo create a TFUniformReplayBuffer, we pass in:\n\nthe spec of the data elements that the buffer will store\nthe batch size corresponding to the batch size of the buffer\nthe number of elements per batch segment, max_length\n\nThe example below creates a TFUniformReplayBuffer with sample data specs, batch_size 32 and max_length 1000.", "data_spec = (\n tf.TensorSpec([3], tf.float32, 'action'),\n (\n tf.TensorSpec([5], tf.float32, 'lidar'),\n tf.TensorSpec([3, 2], tf.float32, 'camera')\n )\n)\n\nbatch_size = 32\nmax_length = 1000\n\nreplay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n data_spec,\n batch_size=batch_size,\n max_length=max_length)", "Writing to the buffer:\nTo add elements to the replay buffer, we use the add_batch(items) method, where items is a list/tuple/nest of tensors representing the batch of items to be added to the buffer. Each element of items must have an outer dimension equal to batch_size, and the remaining dimensions must conform to the data spec of the item (the same data spec that was passed to the replay buffer constructor).\nHere is an example of adding a batch of items", "action = tf.constant(1 * np.ones(\n data_spec[0].shape.as_list(), dtype=np.float32))\nlidar = tf.constant(\n 2 * np.ones(data_spec[1][0].shape.as_list(), dtype=np.float32))\ncamera = tf.constant(\n 3 * np.ones(data_spec[1][1].shape.as_list(), dtype=np.float32))\n \nvalues = (action, (lidar, camera))\nvalues_batched = tf.nest.map_structure(lambda t: tf.stack([t] * batch_size),\n values)\n \nreplay_buffer.add_batch(values_batched)", "Reading from the buffer\nThere are three ways to read data from the TFUniformReplayBuffer:\n\nget_next() - returns one sample from the buffer. The sample batch size and number of timesteps returned can be specified via arguments to this method.\nas_dataset() - returns the replay buffer as a tf.data.Dataset. One can then create a dataset iterator and iterate through samples of the items in the buffer.\ngather_all() - returns all the items in the buffer as a tensor with shape [batch, time, data_spec].\n\nBelow are examples of how to read from the replay buffer using each of these methods:", "# add more items to the buffer before reading\nfor _ in range(5):\n replay_buffer.add_batch(values_batched)\n\n# Get one sample from the replay buffer with batch size 10 and 1 timestep:\n\nsample = replay_buffer.get_next(sample_batch_size=10, num_steps=1)\n\n# Convert the replay buffer to a tf.data.Dataset and iterate through it\ndataset = replay_buffer.as_dataset(\n sample_batch_size=4,\n num_steps=2)\n\niterator = 
iter(dataset)\nprint(\"Iterator trajectories:\")\ntrajectories = []\nfor _ in range(3):\n t, _ = next(iterator)\n trajectories.append(t)\n \nprint(tf.nest.map_structure(lambda t: t.shape, trajectories))\n\n# Read all elements in the replay buffer:\ntrajectories = replay_buffer.gather_all()\n\nprint(\"Trajectories from gather all:\")\nprint(tf.nest.map_structure(lambda t: t.shape, trajectories))\n", "PyUniformReplayBuffer\nPyUniformReplayBuffer has the same functionality as the TFUniformReplayBuffer, but its data is stored in numpy arrays instead of TF variables. This buffer can be used for out-of-graph data collection. For some applications, having the backing storage in numpy may make data manipulation (such as indexing for updating priorities) easier, since it does not require Tensorflow variables. However, this implementation won't have the benefit of the graph optimizations that Tensorflow provides.\nBelow is an example of instantiating a PyUniformReplayBuffer from the agent's policy trajectory specs:", "replay_buffer_capacity = 1000*32 # same capacity as the TFUniformReplayBuffer\n\npy_replay_buffer = py_uniform_replay_buffer.PyUniformReplayBuffer(\n capacity=replay_buffer_capacity,\n data_spec=tensor_spec.to_nest_array_spec(data_spec))", "Using replay buffers during training\nNow that we know how to create a replay buffer, write items to it and read from it, we can use it to store trajectories during the training of our agents.\nData collection\nFirst, let's look at how to use the replay buffer during data collection.\nIn TF-Agents we use a Driver (see the Driver tutorial for more details) to collect experience in an environment. To use a Driver, we specify an Observer, which is a function the Driver executes when it receives a trajectory.\nThus, to add trajectory elements to the replay buffer, we add an observer that calls add_batch(items) to add the (batch of) items to the buffer.\nBelow is an example of this with TFUniformReplayBuffer. We first create an environment, a network and an agent. Then we create a TFUniformReplayBuffer. Note that the specs of the trajectory elements in the replay buffer are equal to the agent's collect data spec. We then set its add_batch method as the observer for the driver that will do the data collection during our training:", "env = suite_gym.load('CartPole-v0')\ntf_env = tf_py_environment.TFPyEnvironment(env)\n\nq_net = q_network.QNetwork(\n tf_env.time_step_spec().observation,\n tf_env.action_spec(),\n fc_layer_params=(100,))\n\nagent = dqn_agent.DqnAgent(\n tf_env.time_step_spec(),\n tf_env.action_spec(),\n q_network=q_net,\n optimizer=tf.compat.v1.train.AdamOptimizer(0.001))\n\nreplay_buffer_capacity = 1000\n\nreplay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n agent.collect_data_spec,\n batch_size=tf_env.batch_size,\n max_length=replay_buffer_capacity)\n\n# Add an observer that adds to the replay buffer:\nreplay_observer = 
[replay_buffer.add_batch]\n\ncollect_steps_per_iteration = 10\ncollect_op = dynamic_step_driver.DynamicStepDriver(\n tf_env,\n agent.collect_policy,\n observers=replay_observer,\n num_steps=collect_steps_per_iteration).run()", "Reading data for a train step\nAfter adding trajectory elements to the replay buffer, we can read batches of trajectories from it to use as input data for a train step.\nHere is an example of how to train on trajectories read from the replay buffer in a training loop:", "# Read the replay buffer as a Dataset,\n# read batches of 4 elements, each with 2 timesteps:\ndataset = replay_buffer.as_dataset(\n sample_batch_size=4,\n num_steps=2)\n\niterator = iter(dataset)\n\nnum_train_steps = 10\n\nfor _ in range(num_train_steps):\n trajectories, _ = next(iterator)\n loss = agent.train(experience=trajectories)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Hvass-Labs/TensorFlow-Tutorials
20_Natural_Language_Processing.ipynb
mit
[ "TensorFlow Tutorial #20\nNatural Language Processing\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nThis tutorial is about a basic form of Natural Language Processing (NLP) called Sentiment Analysis, in which we will try and classify a movie review as either positive or negative.\nConsider a simple example: \"This movie is not very good.\" This text ends with the words \"very good\" which indicates a very positive sentiment, but it is negated because it is preceded by the word \"not\", so the text should be classified as having a negative sentiment. How can we teach a Neural Network to do this classification?\nAnother problem is that neural networks cannot work directly on text-data, so we need to convert text into numbers that are compatible with a neural network.\nYet another problem is that a text may be arbitrarily long. The neural networks we have worked with in previous tutorials use fixed data-shapes - except for the first dimension of the data which varies with the batch-size. Now we need a type of neural network that can work on both short and long sequences of text.\nYou should be familiar with TensorFlow and Keras in general, see Tutorials #01 and #03-C.\nFlowchart\nTo solve this problem we need several processing steps. First we need to convert the raw text-words into so-called tokens which are integer values. These tokens are really just indices into a list of the entire vocabulary. Then we convert these integer-tokens into so-called embeddings which are real-valued vectors, whose mapping will be trained along with the neural network, so as to map words with similar meanings to similar embedding-vectors. Then we input these embedding-vectors to a Recurrent Neural Network which can take sequences of arbitrary length as input and output a kind of summary of what it has seen in the input. 
This output is then squashed using a Sigmoid-function to give us a value between 0.0 and 1.0, where 0.0 is taken to mean a negative sentiment and 1.0 means a positive sentiment. This whole process allows us to classify input-text as either having a negative or positive sentiment.\nThe flowchart of the algorithm is roughly:\n<img src=\"images/20_natural_language_flowchart.png\" alt=\"Flowchart NLP\" style=\"width: 300px;\"/>\nRecurrent Neural Network\nThe basic building block in a Recurrent Neural Network (RNN) is a Recurrent Unit (RU). There are many different variants of recurrent units such as the rather clunky LSTM (Long-Short-Term-Memory) and the somewhat simpler GRU (Gated Recurrent Unit) which we will use in this tutorial. Experiments in the literature suggest that the LSTM and GRU have roughly similar performance. Even simpler variants also exist and the literature suggests that they may perform even better than both LSTM and GRU, but they are not implemented in Keras which we will use in this tutorial.\nThe following figure shows the abstract idea of a recurrent unit, which has an internal state that is being updated every time the unit receives a new input. This internal state serves as a kind of memory. However, it is not a traditional kind of computer memory which stores bits that are either on or off. Instead the recurrent unit stores floating-point values in its memory-state, which are read and written using matrix-operations so the operations are all differentiable. This means the memory-state can store arbitrary floating-point values (although typically limited between -1.0 and 1.0) and the network can be trained like a normal neural network using Gradient Descent.\nThe new state-value depends on both the old state-value and the current input. 
For example, if the state-value has memorized that we have recently seen the word \"not\" and the current input is \"good\" then we need to store a new state-value that memorizes \"not good\" which indicates a negative sentiment.\nThe part of the recurrent unit that is responsible for mapping old state-values and inputs to the new state-value is called a gate, but it is really just a type of matrix-operation. There is another gate for calculating the output-values of the recurrent unit. The implementation of these gates vary for different types of recurrent units. This figure merely shows the abstract idea of a recurrent unit. The LSTM has more gates than the GRU but some of them are apparently redundant so they can be omitted.\nIn order to train the recurrent unit, we must gradually change the weight-matrices of the gates so the recurrent unit gives the desired output for an input sequence. This is done automatically in TensorFlow.\n\nUnrolled Network\nAnother way to visualize and understand a Recurrent Neural Network is to \"unroll\" the recursion. In this figure there is only a single recurrent unit denoted RU, which will receive a text-word from the input sequence in a series of time-steps.\nThe initial memory-state of the RU is reset to zero internally by Keras / TensorFlow every time a new sequence begins.\nIn the first time-step the word \"this\" is input to the RU which uses its internal state (initialized to zero) and its gate to calculate the new state. The RU also uses its other gate to calculate the output but it is ignored here because it is only needed at the end of the sequence to output a kind of summary.\nIn the second time-step the word \"is\" is input to the RU which now uses the internal state that was just updated from seeing the previous word \"this\".\nThere is not much meaning in the words \"this is\" so the RU probably doesn't save anything important in its internal state from seeing these words. 
But when it sees the third word \"not\" the RU has learned that it may be important for determining the overall sentiment of the input-text, so it needs to be stored in the memory-state of the RU, which can be used later when the RU sees the word \"good\" in time-step 6.\nFinally when the entire sequence has been processed, the RU outputs a vector of values that summarizes what it has seen in the input sequence. We then use a fully-connected layer with a Sigmoid activation to get a single value between 0.0 and 1.0 which we interpret as the sentiment either being negative (values close to 0.0) or positive (values close to 1.0).\nNote that for the sake of clarity, this figure doesn't show the mapping from text-words to integer-tokens and embedding-vectors, as well as the fully-connected Sigmoid layer on the output.\n\n3-Layer Unrolled Network\nIn this tutorial we will use a Recurrent Neural Network with 3 recurrent units (or layers) denoted RU1, RU2 and RU3 in the \"unrolled\" figure below.\nThe first layer is much like the unrolled figure above for a single-layer RNN. First the recurrent unit RU1 has its internal state initialized to zero by Keras / TensorFlow. Then the word \"this\" is input to RU1 and it updates its internal state. Then it processes the next word \"is\", and so forth. But instead of outputting a single summary value at the end of the sequence, we use the output of RU1 for every time-step. This creates a new sequence that can then be used as input for the next recurrent unit RU2. 
The same process is repeated for the second layer and this creates a new output sequence which is then input to the third layer's recurrent unit RU3, whose final output is passed to a fully-connected Sigmoid layer that outputs a value between 0.0 (negative sentiment) and 1.0 (positive sentiment).\nNote that for the sake of clarity, the mapping of text-words to integer-tokens and embedding-vectors has been omitted from this figure.\n\nExploding & Vanishing Gradients\nIn order to train the weights for the gates inside the recurrent unit, we need to minimize some loss-function which measures the difference between the actual output of the network and the desired output.\nFrom the \"unrolled\" figures above we see that the recurrent units are applied recursively for each word in the input sequence. This means the recurrent gate is applied once for each time-step. The gradient-signals have to flow back from the loss-function all the way to the first time the recurrent gate is used. If the gradient of the recurrent gate is multiplicative, then we essentially have an exponential function.\nIn this tutorial we will use texts that have more than 500 words. This means the RU's gate for updating its internal memory-state is applied recursively more than 500 times. If a gradient of just 1.01 is multiplied with itself 500 times then it gives a value of about 145. If a gradient of just 0.99 is multiplied with itself 500 times then it gives a value of about 0.007. These are called exploding and vanishing gradients. The only gradient values that can survive repeated multiplication without exploding or vanishing are 0 and 1.\nTo avoid these so-called exploding and vanishing gradients, care must be taken when designing the recurrent unit and its gates. 
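The numbers quoted above are easy to check directly:

```python
# A gradient slightly above 1.0 explodes over 500 time-steps,
# while a gradient slightly below 1.0 vanishes.
exploding = 1.01 ** 500
vanishing = 0.99 ** 500

print(exploding)  # roughly 145
print(vanishing)  # roughly 0.007
```

This is why small deviations from a gradient of 1.0 are so destructive when a gate is applied hundreds of times.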
That is why the actual implementation of the GRU is more complicated, because it tries to send the gradient back through the gates without this distortion.\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom scipy.spatial.distance import cdist", "We need to import several things from Keras.", "from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, GRU, Embedding\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences", "This was developed using Python 3.6 (Anaconda) and package versions:", "tf.__version__\n\ntf.keras.__version__", "Load Data\nWe will use a data-set consisting of 50000 reviews of movies from IMDB. Keras has a built-in function for downloading a similar data-set (but apparently half the size). However, Keras' version has already converted the text in the data-set to integer-tokens, which is a crucial part of working with natural languages that will also be demonstrated in this tutorial, so we download the actual text-data.\nNOTE: The data-set is 84 MB and will be downloaded automatically.", "import imdb", "Change this if you want the files saved in another directory.", "# imdb.data_dir = \"data/IMDB/\"", "Automatically download and extract the files.", "imdb.maybe_download_and_extract()", "Load the training- and test-sets.", "x_train_text, y_train = imdb.load_data(train=True)\nx_test_text, y_test = imdb.load_data(train=False)\n\n# Convert to numpy arrays.\ny_train = np.array(y_train)\ny_test = np.array(y_test)\n\nprint(\"Train-set size: \", len(x_train_text))\nprint(\"Test-set size: \", len(x_test_text))", "Combine into one data-set for some uses below.", "data_text = x_train_text + x_test_text", "Print an example from the training-set to see that the data looks correct.", "x_train_text[1]", "The true \"class\" is a sentiment of the 
movie-review. It is a value of 0.0 for a negative sentiment and 1.0 for a positive sentiment. In this case the review is positive.", "y_train[1]", "Tokenizer\nA neural network cannot work directly on text-strings so we must convert it somehow. There are two steps in this conversion, the first step is called the \"tokenizer\" which converts words to integers and is done on the data-set before it is input to the neural network. The second step is an integrated part of the neural network itself and is called the \"embedding\"-layer, which is described further below.\nWe may instruct the tokenizer to only use e.g. the 10000 most popular words from the data-set.", "num_words = 10000\n\ntokenizer = Tokenizer(num_words=num_words)", "The tokenizer can then be \"fitted\" to the data-set. This scans through all the text and strips it from unwanted characters such as punctuation, and also converts it to lower-case characters. The tokenizer then builds a vocabulary of all unique words along with various data-structures for accessing the data.\nNote that we fit the tokenizer on the entire data-set so it gathers words from both the training- and test-data. This is OK as we are merely building a vocabulary and want it to be as complete as possible. The actual neural network will of course only be trained on the training-set.", "%%time\ntokenizer.fit_on_texts(data_text)", "If you want to use the entire vocabulary then set num_words=None above, and then it will automatically be set to the vocabulary-size here. (This is because of Keras' somewhat awkward implementation.)", "if num_words is None:\n num_words = len(tokenizer.word_index)", "We can then inspect the vocabulary that has been gathered by the tokenizer. This is ordered by the number of occurrences of the words in the data-set. 
These integer-numbers are called word indices or \"tokens\" because they uniquely identify each word in the vocabulary.", "tokenizer.word_index", "We can then use the tokenizer to convert all texts in the training-set to lists of these tokens.", "x_train_tokens = tokenizer.texts_to_sequences(x_train_text)", "For example, here is a text from the training-set:", "x_train_text[1]", "This text corresponds to the following list of tokens:", "np.array(x_train_tokens[1])", "We also need to convert the texts in the test-set to tokens.", "x_test_tokens = tokenizer.texts_to_sequences(x_test_text)", "Padding and Truncating Data\nThe Recurrent Neural Network can take sequences of arbitrary length as input, but in order to use a whole batch of data, the sequences need to have the same length. There are two ways of achieving this: (A) Either we ensure that all sequences in the entire data-set have the same length, or (B) we write a custom data-generator that ensures the sequences have the same length within each batch.\nSolution (A) is simpler but if we use the length of the longest sequence in the data-set, then we are wasting a lot of memory. 
This is particularly important for larger data-sets.\nSo in order to make a compromise, we will use a sequence-length that covers most sequences in the data-set, and we will then truncate longer sequences and pad shorter sequences.\nFirst we count the number of tokens in all the sequences in the data-set.", "num_tokens = [len(tokens) for tokens in x_train_tokens + x_test_tokens]\nnum_tokens = np.array(num_tokens)", "The average number of tokens in a sequence is:", "np.mean(num_tokens)", "The maximum number of tokens in a sequence is:", "np.max(num_tokens)", "The max number of tokens we will allow is set to the average plus 2 standard deviations.", "max_tokens = np.mean(num_tokens) + 2 * np.std(num_tokens)\nmax_tokens = int(max_tokens)\nmax_tokens", "This covers about 95% of the data-set.", "np.sum(num_tokens < max_tokens) / len(num_tokens)", "When padding or truncating the sequences that have a different length, we need to determine if we want to do this padding or truncating 'pre' or 'post'. If a sequence is truncated, it means that a part of the sequence is simply thrown away. If a sequence is padded, it means that zeros are added to the sequence.\nSo the choice of 'pre' or 'post' can be important because it determines whether we throw away the first or last part of a sequence when truncating, and it determines whether we add zeros to the beginning or end of the sequence when padding. 
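As a plain-Python sketch of what 'pre' padding and truncating amounts to (the real work is done by Keras' `pad_sequences` below; this hypothetical `pad_pre` helper is only for illustration):

```python
def pad_pre(tokens, maxlen):
    # Truncate from the front if the sequence is too long,
    # otherwise prepend zeros until it has length maxlen.
    if len(tokens) > maxlen:
        return tokens[-maxlen:]
    return [0] * (maxlen - len(tokens)) + tokens

print(pad_pre([11, 12, 13], maxlen=5))        # [0, 0, 11, 12, 13]
print(pad_pre([1, 2, 3, 4, 5, 6], maxlen=5))  # [2, 3, 4, 5, 6]
```

With 'pre' mode the zeros go at the beginning and the end of the sequence is kept, whereas 'post' mode would keep the beginning and append zeros at the end.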
This may confuse the Recurrent Neural Network.", "pad = 'pre'\n\nx_train_pad = pad_sequences(x_train_tokens, maxlen=max_tokens,\n padding=pad, truncating=pad)\n\nx_test_pad = pad_sequences(x_test_tokens, maxlen=max_tokens,\n padding=pad, truncating=pad)", "We have now transformed the training-set into one big matrix of integers (tokens) with this shape:", "x_train_pad.shape", "The matrix for the test-set has the same shape:", "x_test_pad.shape", "For example, we had the following sequence of tokens above:", "np.array(x_train_tokens[1])", "This has simply been padded to create the following sequence. Note that when this is input to the Recurrent Neural Network, then it first inputs a lot of zeros. If we had padded 'post' then it would input the integer-tokens first and then a lot of zeros. This may confuse the Recurrent Neural Network.", "x_train_pad[1]", "Tokenizer Inverse Map\nFor some strange reason, the Keras implementation of a tokenizer does not seem to have the inverse mapping from integer-tokens back to words, which is needed to reconstruct text-strings from lists of tokens. So we make that mapping here.", "idx = tokenizer.word_index\ninverse_map = dict(zip(idx.values(), idx.keys()))", "Helper-function for converting a list of tokens back to a string of words.", "def tokens_to_string(tokens):\n # Map from tokens back to words.\n words = [inverse_map[token] for token in tokens if token != 0]\n \n # Concatenate all words.\n text = \" \".join(words)\n\n return text", "For example, this is the original text from the data-set:", "x_train_text[1]", "We can recreate this text except for punctuation and other symbols, by converting the list of tokens back to words:", "tokens_to_string(x_train_tokens[1])", "Create the Recurrent Neural Network\nWe are now ready to create the Recurrent Neural Network (RNN). We will use the Keras API for this because of its simplicity. 
See Tutorial #03-C for a tutorial on Keras.", "model = Sequential()", "The first layer in the RNN is a so-called Embedding-layer which converts each integer-token into a vector of values. This is necessary because the integer-tokens may take on values between 0 and 10000 for a vocabulary of 10000 words. The RNN cannot work on values in such a wide range. The embedding-layer is trained as a part of the RNN and will learn to map words with similar semantic meanings to similar embedding-vectors, as will be shown further below.\nFirst we define the size of the embedding-vector for each integer-token. In this case we have set it to 8, so that each integer-token will be converted to a vector of length 8. The values of the embedding-vector will generally fall roughly between -1.0 and 1.0, although they may exceed these values somewhat.\nThe size of the embedding-vector is typically selected between 100-300, but it seems to work reasonably well with small values for Sentiment Analysis.", "embedding_size = 8", "The embedding-layer also needs to know the number of words in the vocabulary (num_words) and the length of the padded token-sequences (max_tokens). We also give this layer a name because we need to retrieve its weights further below.", "model.add(Embedding(input_dim=num_words,\n output_dim=embedding_size,\n input_length=max_tokens,\n name='layer_embedding'))", "We can now add the first Gated Recurrent Unit (GRU) to the network. This will have 16 outputs. Because we will add a second GRU after this one, we need to return sequences of data because the next GRU expects sequences as its input.", "model.add(GRU(units=16, return_sequences=True))", "This adds the second GRU with 8 output units. This will be followed by another GRU so it must also return sequences.", "model.add(GRU(units=8, return_sequences=True))", "This adds the third and final GRU with 4 output units. 
This will be followed by a dense-layer, so it should only give the final output of the GRU and not a whole sequence of outputs.", "model.add(GRU(units=4))", "Add a fully-connected / dense layer which computes a value between 0.0 and 1.0 that will be used as the classification output.", "model.add(Dense(1, activation='sigmoid'))", "Use the Adam optimizer with the given learning-rate.", "optimizer = Adam(lr=1e-3)", "Compile the Keras model so it is ready for training.", "model.compile(loss='binary_crossentropy',\n optimizer=optimizer,\n metrics=['accuracy'])\n\nmodel.summary()", "Train the Recurrent Neural Network\nWe can now train the model. Note that we are using the data-set with the padded sequences. We use 5% of the training-set as a small validation-set, so we have a rough idea whether the model is generalizing well or if it is perhaps over-fitting to the training-set.", "%%time\nmodel.fit(x_train_pad, y_train,\n validation_split=0.05, epochs=3, batch_size=64)", "Performance on Test-Set\nNow that the model has been trained we can calculate its classification accuracy on the test-set.", "%%time\nresult = model.evaluate(x_test_pad, y_test)\n\nprint(\"Accuracy: {0:.2%}\".format(result[1]))", "Example of Mis-Classified Text\nIn order to show an example of mis-classified text, we first calculate the predicted sentiment for the first 1000 texts in the test-set.", "%%time\ny_pred = model.predict(x=x_test_pad[0:1000])\ny_pred = y_pred.T[0]", "These predicted numbers fall between 0.0 and 1.0. We use a cutoff / threshold and say that all values above 0.5 are taken to be 1.0 and all values below 0.5 are taken to be 0.0. 
This gives us a predicted \"class\" of either 0.0 or 1.0.", "cls_pred = np.array([1.0 if p>0.5 else 0.0 for p in y_pred])", "The true \"class\" for the first 1000 texts in the test-set are needed for comparison.", "cls_true = np.array(y_test[0:1000])", "We can then get indices for all the texts that were incorrectly classified by comparing all the \"classes\" of these two arrays.", "incorrect = np.where(cls_pred != cls_true)\nincorrect = incorrect[0]", "Of the 1000 texts used, how many were mis-classified?", "len(incorrect)", "Let us look at the first mis-classified text. We will use its index several times.", "idx = incorrect[0]\nidx", "The mis-classified text is:", "text = x_test_text[idx]\ntext", "These are the predicted and true classes for the text:", "y_pred[idx]\n\ncls_true[idx]", "New Data\nLet us try and classify new texts that we make up. Some of these are obvious, while others use negation and sarcasm to try and confuse the model into mis-classifying the text.", "text1 = \"This movie is fantastic! I really like it because it is so good!\"\ntext2 = \"Good movie!\"\ntext3 = \"Maybe I like this movie.\"\ntext4 = \"Meh ...\"\ntext5 = \"If I were a drunk teenager then this movie might be good.\"\ntext6 = \"Bad movie!\"\ntext7 = \"Not a good movie!\"\ntext8 = \"This movie really sucks! Can I get my money back please?\"\ntexts = [text1, text2, text3, text4, text5, text6, text7, text8]", "We first convert these texts to arrays of integer-tokens because that is needed by the model.", "tokens = tokenizer.texts_to_sequences(texts)", "To input texts with different lengths into the model, we also need to pad and truncate them.", "tokens_pad = pad_sequences(tokens, maxlen=max_tokens,\n padding=pad, truncating=pad)\ntokens_pad.shape", "We can now use the trained model to predict the sentiment for these texts.", "model.predict(tokens_pad)", "A value close to 0.0 means a negative sentiment and a value close to 1.0 means a positive sentiment. 
These numbers will vary every time you train the model.\nEmbeddings\nThe model cannot work on integer-tokens directly, because they are integer values that may range between 0 and the number of words in our vocabulary, e.g. 10000. So we need to convert the integer-tokens into vectors of values that are roughly between -1.0 and 1.0 which can be used as input to a neural network.\nThis mapping from integer-tokens to real-valued vectors is also called an \"embedding\". It is essentially just a matrix where each row contains the vector-mapping of a single token. This means we can quickly look up the mapping of each integer-token by simply using the token as an index into the matrix. The embeddings are learned along with the rest of the model during training.\nIdeally the embedding would learn a mapping where words that are similar in meaning also have similar embedding-values. Let us investigate if that has happened here.\nFirst we need to get the embedding-layer from the model:", "layer_embedding = model.get_layer('layer_embedding')", "We can then get the weights used for the mapping done by the embedding-layer.", "weights_embedding = layer_embedding.get_weights()[0]", "Note that the weights are actually just a matrix with the number of words in the vocabulary times the vector length for each embedding. That's because it is basically just a lookup-matrix.", "weights_embedding.shape", "Let us get the integer-token for the word 'good', which is just an index into the vocabulary.", "token_good = tokenizer.word_index['good']\ntoken_good", "Let us also get the integer-token for the word 'great'.", "token_great = tokenizer.word_index['great']\ntoken_great", "These integer-tokens may be far apart and will depend on the frequency of those words in the data-set.\nNow let us compare the vector-embeddings for the words 'good' and 'great'. Several of these values are similar, although some values are quite different. 
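Similarity between embedding-vectors is usually measured with a distance metric such as cosine distance. Here is a toy illustration with made-up vectors (these are not the trained weights, just an example of the metric used further below):

```python
import numpy as np

# Made-up example vectors, standing in for learned embeddings.
good = np.array([0.9, -0.2, 0.5, 0.1])
great = np.array([0.8, -0.3, 0.6, 0.2])
bad = np.array([-0.7, 0.4, -0.5, -0.1])

def cosine_distance(a, b):
    # 1 - cosine similarity: near 0.0 for similar directions,
    # near 2.0 for opposite directions.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance(good, great))  # small: similar direction
print(cosine_distance(good, bad))    # close to 2.0: opposite direction
```

The `print_sorted_words` helper further below uses the same metric via scipy's `cdist` on the real embedding-matrix.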
Note that these values will change every time you train the model.", "weights_embedding[token_good]\n\nweights_embedding[token_great]", "Similarly, we can compare the embeddings for the words 'bad' and 'horrible'.", "token_bad = tokenizer.word_index['bad']\ntoken_horrible = tokenizer.word_index['horrible']\n\nweights_embedding[token_bad]\n\nweights_embedding[token_horrible]", "Sorted Words\nWe can also sort all the words in the vocabulary according to their \"similarity\" in the embedding-space. We want to see if words that have similar embedding-vectors also have similar meanings.\nSimilarity of embedding-vectors can be measured by different metrics, e.g. Euclidean distance or cosine distance.\nWe have a helper-function for calculating these distances and printing the words in sorted order.", "def print_sorted_words(word, metric='cosine'):\n \"\"\"\n Print the words in the vocabulary sorted according to their\n embedding-distance to the given word.\n Different metrics can be used, e.g. 'cosine' or 'euclidean'.\n \"\"\"\n\n # Get the token (i.e. integer ID) for the given word.\n token = tokenizer.word_index[word]\n\n # Get the embedding for the given word. Note that the\n # embedding-weight-matrix is indexed by the word-tokens\n # which are integer IDs.\n embedding = weights_embedding[token]\n\n # Calculate the distance between the embeddings for\n # this word and all other words in the vocabulary.\n distances = cdist(weights_embedding, [embedding],\n metric=metric).T[0]\n \n # Get an index sorted according to the embedding-distances.\n # These are the tokens (integer IDs) for words in the vocabulary.\n sorted_index = np.argsort(distances)\n \n # Sort the embedding-distances.\n sorted_distances = distances[sorted_index]\n \n # Sort all the words in the vocabulary according to their\n # embedding-distance. 
This is a bit excessive because we\n # will only print the top and bottom words.\n sorted_words = [inverse_map[token] for token in sorted_index\n if token != 0]\n\n # Helper-function for printing words and embedding-distances.\n def _print_words(words, distances):\n for word, distance in zip(words, distances):\n print(\"{0:.3f} - {1}\".format(distance, word))\n\n # Number of words to print from the top and bottom of the list.\n k = 10\n\n print(\"Distance from '{0}':\".format(word))\n\n # Print the words with smallest embedding-distance.\n _print_words(sorted_words[0:k], sorted_distances[0:k])\n\n print(\"...\")\n\n # Print the words with highest embedding-distance.\n _print_words(sorted_words[-k:], sorted_distances[-k:])", "We can then print the words that are near and far from the word 'great' in terms of their vector-embeddings. Note that these may change each time you train the model.", "print_sorted_words('great', metric='cosine')", "Similarly, we can print the words that are near and far from the word 'worst' in terms of their vector-embeddings.", "print_sorted_words('worst', metric='cosine')", "Conclusion\nThis tutorial showed the basic methods for doing Natural Language Processing (NLP) using a Recurrent Neural Network with integer-tokens and an embedding layer. This was used to do sentiment analysis of movie reviews from IMDB. It works reasonably well if the hyper-parameters are chosen properly. But it is important to understand that this is not human-like comprehension of text. The system does not have any real understanding of the text. It is just a clever way of doing pattern-recognition.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nRun more training-epochs. 
Does it improve performance?\nIf your model overfits the training-data, try using dropout-layers and dropout inside the GRU.\nIncrease or decrease the number of words in the vocabulary. This is done when the Tokenizer is initialized. Does it affect performance?\nIncrease the size of the embedding-vectors to e.g. 200. Does it affect performance?\nTry varying all the different hyper-parameters for the Recurrent Neural Network.\nUse Bayesian Optimization from Tutorial #19 to find the best choice of hyper-parameters.\nUse 'post' for padding and truncating in pad_sequences(). Does it affect the performance?\nUse individual characters instead of tokenized words as the vocabulary. You can then use one-hot encoded vectors for each character instead of using the embedding-layer.\nUse model.fit_generator() instead of model.fit() and make your own data-generator, which creates a batch of data using a random subset of x_train_tokens. The sequences must be padded so they all match the length of the longest sequence.\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2018 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Mashimo/datascience
01-Regression/prob.ipynb
apache-2.0
[ "Probability with a Monte Carlo view\nStochastic vs deterministic numbers\nThe word stochastic is an adjective in English that describes something that was randomly determined.\nRandomness is the lack of pattern or predictability in events. A random sequence of events therefore has no order and does not follow an intelligible pattern.\nIndividual random events are by definition unpredictable, but in many cases the frequency of different outcomes over a large number of events is predictable.\nAnd this is what is interesting for us: if I throw a die with six faces thousands of times, what percentage of the time should I expect to see the face number six?\nWe generate a (pseudo) random number in Python using the random library:", "import random\n\ndef genEven():\n '''\n Returns a random even number x, where 0 <= x < 100\n '''\n return random.randrange(0,100,2)\n\ngenEven()\n\ndef stochasticNumber():\n '''\n Stochastically generates and returns a uniformly distributed even \n number between 9 and 21\n '''\n return random.randrange(10,21,2)\n\nstochasticNumber()", "Again:", "stochasticNumber()", "On the other hand, deterministic means that the outcome - given the same input - will always be the same. There is no unpredictability.\nIn applications such as security, hardware random-number generators are generally preferred over software algorithms. \nA pseudo-random algorithm, like the Python random library above, is called pseudo because it is not really unpredictable: the sequence of random numbers generated depends on the initial seed: using the same number as seed will generate the same sequence. 
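A quick sketch of that determinism: re-seeding with the same value reproduces the exact same "random" sequence.

```python
import random

# Using the same seed reproduces the same pseudo-random sequence.
random.seed(42)
first_run = [random.randrange(0, 100) for _ in range(5)]

random.seed(42)
second_run = [random.randrange(0, 100) for _ in range(5)]

print(first_run == second_run)  # True
```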
\nThis is very useful for debugging purposes, but it also means that one needs to be very careful when choosing the seed (for example, by using atmospheric noise or other physical sources of randomness).", "def deterministicNumber():\n '''\n Deterministically generates and returns an even number \n between 9 and 21\n '''\n random.seed(0) # Fixed seed, always the same.\n return random.randrange(10,21,2)\n\ndeterministicNumber()", "And again:", "deterministicNumber()", "The same number!\nBefore looking at the probability of an event, we define a function that simulates the roll of a six-faced die:", "def rollDie():\n \"\"\"returns a random int between 1 and 6\"\"\"\n #return random.choice([1,2,3,4,5,6])\n return random.randint(1,6)\n\nrollDie()", "Discrete probability\nIf E represents an event, then P(E) is the probability that E will occur.\nNaive definition of probability\nIf all outcomes are equally likely, the probability of an event E happening is: \nP(E) = number of outcomes favourable to E / total number of outcomes. \nBy definition, P is always between 0 (no favourable outcomes) and 1 (all favourable outcomes).\nExample: a die has six faces, therefore the space of possibilities is six (total number of possible outcomes).\nIf I want to calculate the probability of getting the event 6 when rolling the die one time, I need to consider that there is one face with six, therefore only one favourable outcome:\nP(6) = 1 / 6\nThis is the frequentist definition.\nProbability can be seen as the long-run relative frequency with which an event occurs over many repeated trials.\nEmpirically, let's say I throw this die 1000 times and the face number six comes up 167 times.\nThe observed frequency for face six is 167/1000 = 0.167, so I can estimate the probability of getting six as 0.167, which is close to the theoretical 1/6.\nAn alternative is Bayesianism, which defines probability as a degree of belief that an event will occur. 
It depends on one's own state of knowledge or on the evidence at hand, and is therefore more subjective than frequentist probability.\nBy the way, the probability that an event does NOT occur is:\nP(not A) = 1 - P(A) \nThe probability of NOT getting a three in a die roll is therefore 1 - 1/6 = 5/6\nMonte Carlo simulation for a die goal\nThe term Monte Carlo simulation was coined in 1949 by Stanislaw Ulam and Nicholas Metropolis, two mathematicians, in homage to the Monte Carlo casino in Monaco.\nMonte Carlo methods are a class of methods that can be applied to computationally ‘difficult’ problems to arrive at near-enough accurate answers. The general premise is remarkably simple:\n\nRandomly sample input(s) to the problem\nFor each sample, compute an output\nAggregate the outputs to approximate the solution\n\nWe now run a simulation of die rolls and look at the observed frequency for each face of the die:", "def simMCdie(numTrials):\n print (\"Running \", numTrials)\n counters = [0] * 6 # initialize the counters for each face\n for i in range(numTrials):\n roll = rollDie()\n counters[roll-1] += 1\n \n return counters\n\nimport matplotlib.pyplot as plt\n\nresults = simMCdie(10000)\n\nresults\n\nplt.bar(['1','2','3','4','5','6'], results);", "Independent events\nA and B are independent if knowing whether A occurred gives no information about whether B occurred.\nLet's define a function to model n rolls of a die, where each roll is independent from the others.", "def rollNdice(n):\n result = ''\n for i in range(n):\n result = result + str(rollDie())\n return result\n\nrollNdice(5)", "These are independent events.\nNow an interesting question would be:\nWhat is the probability that two independent events A and B both occur?\nP(A and B) = P(A) * P(B) \nFor example, the probability of getting two consecutive sixes in a die roll is therefore:\n1/6 * 1/6 = 1 / (6^2) = 1 / 36 \nWhich is quite low.\nThis also applies to more than two independent events.\nIn general, the probability of n independent 
events is: \n$ P(E_1 \\cap E_2 \\cap \\dots \\cap E_n) = \\prod_{i=1}^{n} P(E_i) $ \nFor a six-sided die, there are 6^5 possible sequences of length five.\nThe probability of getting five consecutive sixes is 1 / 6^5, a pretty low number: 1 out of 7776 possibilities.\nLet's look at a simulation to check this.", "def getTarget(goal):\n # Roll dice until we get the goal\n # goal: a string of N die results, for example five sixes: \"66666\"\n numRolls = len(goal)\n \n numTries = 0\n while True:\n numTries += 1\n result = rollNdice(numRolls)\n # if success then exit\n if result == goal:\n return numTries\n\ndef runDiceMC(goal, numTrials):\n print (\"Running ... trials: \", numTrials)\n total = 0\n for i in range(numTrials):\n total += getTarget(goal)\n \n print ('Average number of tries =', total/float(numTrials))\n\nrunDiceMC('66666', 100)", "Remember that the theory says it will take on average 7776 tries.\nPascal's problem\nA friend asked Pascal:\nwould it be profitable, given 24 rolls of a pair of dice, to bet against there being at least one double six?\nIn the 17th century this was a hard problem.\nNow we know it is:\nP(A=6 and B=6) = 1/6 * 1/6 = 1/36 (two independent events)\nP(not double six) = 1 - 1/36 = 35/36\nP(no double six in 24 rolls) = (35/36)^24", "(35.0 / 36.0)**24", "It is very close to one half!\nAgain, we can run a simulation to check it:", "def checkPascalMC(numTrials, roll, numRolls = 24, goal = 6):\n numSuccess = 0.0\n \n for i in range(numTrials):\n for j in range(numRolls):\n die1 = roll()\n die2 = roll()\n if die1 == goal and die2 == goal:\n numSuccess += 1\n break\n \n print ('Probability of losing =', 1.0 - numSuccess / numTrials)\n\ncheckPascalMC(10000, rollDie)", "In the function above, I am passing the die-rolling function as a parameter, to show what happens if the die is loaded so that face six has a higher probability:", "def rollLoadedDie():\n if random.random() < 1.0/5.5:\n return 6\n else:\n return random.choice([1,2,3,4,5])\n\ncheckPascalMC(10000, rollLoadedDie)", "One 
last one.\nWhat's the probability of getting at least one die showing one when rolled ten times?", "def atLeastOneOne(numRolls, numTrials):\n numSuccess = 0\n \n for i in range(numTrials):\n rolls = rollNdice(numRolls)\n if '1' in rolls:\n numSuccess += 1\n fracSuccess = numSuccess/float(numTrials)\n print (fracSuccess)\n\natLeastOneOne(10, 1000)", "Sampling table\nThe sampling table gives the number of possible samples of size k out of a population of size n, under various assumptions about how the sample is collected.\nOne example:\none ball will be drawn at random from a box containing: 3 green balls, 5 red balls, and 7 yellow balls.\nWhat is the probability that the ball will be green?", "green = 3\nred = 5\nyellow = 7\nballs = green+red+yellow\npGreen = green / balls\npGreen", "The population has size 15 and therefore has 15 possible samples of size 1; out of these 15 possible samples, only 3 of them will answer our question (ball is green).\nWe defined the variable pGreen as the probability of choosing a green ball from the box.\nWhat is the probability that the ball you draw from the box will NOT be green?", "1 - pGreen", "Sampling without replacement - generalized\nInstead of taking just one draw, consider taking two draws. You take the second draw without returning the first draw to the box. We call this sampling without replacement.\nWhat is the probability that the first draw is green and that the second draw is not green?", "# probability of choosing a green ball from the box on the first draw.\npGreen1 = green / balls\n# probability of NOT choosing a green ball on the second draw without replacement.\npGreen2 = (red + yellow) / (green -1 + red + yellow)\n\n# Calculate the probability that the first draw is green and the second draw is not green.\npGreen1 * pGreen2", "Sampling with replacement - generalized\nNow repeat the experiment, but this time, after taking the first draw and recording the color, return it back to the box and shake the box. 
We call this sampling with replacement.\nWhat is the probability that the first draw is green and that the second draw is not green?", "# probability of choosing a green ball from the box on the first draw.\n# same as above: pGreen1\n# probability of NOT choosing a green ball on the second draw with replacement\n\npGreen2r = (red + yellow) / balls\n\n\n# Calculate the probability that the first draw is green and the second draw is not green.\npGreen1 * pGreen2r", "Sampling with replacement - be careful\nSay you’ve drawn 5 balls from a box that has 3 green balls, 5 red balls, and 7 yellow balls - with replacement - and all have been yellow.\nWhat is the probability that the next one is yellow?", "# probability that a yellow ball is drawn from the box.\npYellow = yellow / balls\n\n# probability of drawing a yellow ball on the sixth draw.\npYellow", "Yes, it doesn't matter that the previous five draws were ALL yellow balls: the probability that the sixth ball is yellow is the same as for the first draw and all other draws. With replacement the population is always the same.\nA football match\nTwo teams, say Manchester United (M.Utd.) and AC Milan, are playing a seven-game series. Milan is the better team and has a 60% chance of winning each game.\nWhat is the probability that M.Utd. wins at least one game? Remember that they must win one of the first four games, or the series will be over!\nLet's assume the games are independent events (in reality losing one match may impact the team's morale for the next match):", "p_milan_wins = 0.6\n# probability that the Milan team will win the first four games of the series:\np_milan_win4 = p_milan_wins**4\n\n# probability that the M.Utd. wins at least one game in the first four games of the series.\n1 - p_milan_win4\n", "Here is the Monte Carlo simulation to confirm our answer to M.Utd. 
winning a game.", "import numpy as np\n\ndef RealWinsOneMC(numTrials, nGames=4):\n numSuccess = 0\n \n for i in range(numTrials):\n simulatedGames = np.random.choice([\"lose\",\"win\"], size=nGames, replace=True, p=[0.6,0.4])\n if 'win' in simulatedGames:\n numSuccess += 1\n \n return numSuccess / numTrials\n\nprint (RealWinsOneMC(1000))", "Winning the series - with Monte Carlo\nThe two teams are playing a seven-game championship series. The first to win four games wins the series. The teams are equally good, so they each have a 50-50 chance of winning each game.\nIf Milan lose the first game, what is the probability that they win the series?", "# Create a list that contains all possible outcomes for the remaining games.\npossibilities = [(g1,g2,g3,g4,g5,g6) for g1 in range(2) for g2 in range(2)\n for g3 in range(2) for g4 in range(2) for g5 in range(2)\n for g6 in range(2)]\n\n# Create a list that indicates whether each row in 'possibilities' \n# contains enough wins for Milan to win the series.\nsums = [sum(tup) for tup in possibilities]\nresults = [s >= 4 for s in sums]\n\n# Calculate the proportion of 'results' in which Milan win the series. \nnp.mean(results)", "Confirm the results of the previous question with a Monte Carlo simulation to estimate the probability of Milan winning the series.", "def MilanWinsSeriesMC(numTrials, nGames=6):\n numSuccess = 0\n \n for i in range(numTrials):\n simulatedGames = np.random.choice([0,1], size=nGames, replace=True)\n if sum(simulatedGames) >= 4:\n numSuccess += 1\n \n return numSuccess / numTrials\n\nMilanWinsSeriesMC(100)", "A and B play a series", "def noReplacementSimulation(numTrials):\n '''\n Runs numTrials trials of a Monte Carlo simulation\n of drawing 3 balls out of a bucket containing\n 3 red and 3 green balls. Balls are not replaced once\n drawn. 
Returns a decimal - the fraction of times 3 \n balls of the same color were drawn.\n '''\n sameColor = 0\n for i in range(numTrials):\n red = 3.0\n green = 3.0\n for j in range(3):\n if random.random() < red / (red + green):\n # this is red\n red -= 1\n else:\n green -= 1\n if red == 0 or green == 0:\n sameColor += 1\n \n return float(sameColor) / numTrials\n\nnoReplacementSimulation(100)\n\ndef oneTrial():\n '''\n Simulates one trial of drawing 3 balls out of a bucket containing\n 3 red and 3 green balls. Balls are not replaced once\n drawn. Returns True if all three balls are the same color,\n False otherwise.\n '''\n balls = ['r', 'r', 'r', 'g', 'g', 'g']\n chosenBalls = []\n for t in range(3):\n # For each of the three draws, pick a ball\n ball = random.choice(balls)\n # Remove the chosen ball from the set of balls\n balls.remove(ball)\n # and add it to a list of balls we picked\n chosenBalls.append(ball)\n # If the first ball is the same as the second AND the second is the same as the third,\n # we know all three must be the same color.\n if chosenBalls[0] == chosenBalls[1] and chosenBalls[1] == chosenBalls[2]:\n return True\n return False\n\noneTrial()\n\ndef noReplacementSimulationProfessor(numTrials):\n '''\n Runs numTrials trials of a Monte Carlo simulation\n of drawing 3 balls out of a bucket containing\n 3 red and 3 green balls. Balls are not replaced once\n drawn. Returns a decimal - the fraction of times 3 \n balls of the same color were drawn.\n '''\n numTrue = 0\n for trial in range(numTrials):\n if oneTrial():\n numTrue += 1\n\n return float(numTrue)/float(numTrials)\n\nnoReplacementSimulationProfessor(100)", "Write a function called sampleQuizzes() that implements a Monte Carlo simulation\n that estimates the probability of a student having a final score >= 70 and <= 75. 
\n Assume that 10,000 trials are sufficient to provide an accurate answer.", "def sampleQuizzes():\n yes = 0.0\n numTrials = 10000\n for trial in range(numTrials):\n midTerm1Vote = random.randint(50,80)\n midTerm2Vote = random.randint(60,90)\n finalExamVote = random.randint(55,95)\n finalVote = midTerm1Vote*0.25 + midTerm2Vote*0.25 + finalExamVote*0.5\n if finalVote >= 70 and finalVote <= 75:\n yes += 1\n return yes/numTrials\n\nsampleQuizzes()", "Estimate PI", "def throwNeedlesInCircle(numNeedles):\n '''\n Throw <numNeedles> needles uniformly at random in the unit square (area = 1),\n which contains a quarter circle of radius 1 (area = PI/4).\n Count how many of those needles landed inside the quarter circle.\n Return this estimated proportion: Quarter-Circle Area / Square Area\n ''' \n inCircle = 0 # number of needles inside the circle\n \n for needle in range(1, numNeedles + 1):\n \n x = random.random()\n y = random.random()\n \n if (x*x + y*y)**0.5 <= 1.0:\n inCircle += 1\n \n return (inCircle/float(numNeedles))\n\npiEstimation = throwNeedlesInCircle(100) * 4\npiEstimation", "The more needles you throw, the more precise the PI estimate will be.", "def getPiEstimate(numTrials, numNeedles):\n\n print((\"{t} trials, each has {n} Needles.\")\\\n .format(t= numTrials, n=numNeedles))\n \n estimates = []\n for t in range(numTrials):\n piGuess = 4*throwNeedlesInCircle(numNeedles)\n estimates.append(piGuess)\n \n stdDev = np.std(estimates)\n curEst = sum(estimates)/len(estimates)\n \n print ('PI Estimation = ' + str(curEst))\n print ('Std. dev. = ' + str(round(stdDev, 5)))\n \n return (curEst, stdDev)\n\ngetPiEstimate(20, 100);", "We can do better and go on - increasing the number of needles at each trial - until we reach the desired precision.", "def estimatePi(precision, numTrials, numNeedles = 1000):\n \n sDev = precision\n while sDev >= precision/2.0:\n curEst, sDev = getPiEstimate(numTrials, numNeedles)\n numNeedles *= 2\n print(\"---\")\n return curEst\n\nestimatePi(0.005, 100)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
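The dice notebook above checks every answer by simulation only, but each of the closed-form values it quotes (7776 expected tries for five sixes, (35/36)^24 for Pascal's bet, at least one one in ten rolls) follows directly from independence. A minimal analytic cross-check, not part of the original notebook:

```python
# Analytic cross-checks for the Monte Carlo experiments above.
# All three values follow from the independence of die rolls.

# 1. Five consecutive sixes: each length-5 sequence is equally likely,
#    so p = 1 / 6**5 and the expected number of tries until the first
#    success (geometric distribution) is 1 / p = 6**5 = 7776.
p_five_sixes = 1 / 6**5
expected_tries = 1 / p_five_sixes

# 2. Pascal's problem: probability of NO double six in 24 rolls of a pair.
p_no_double_six = (35 / 36) ** 24

# 3. At least one die showing one in ten rolls.
p_at_least_one_one = 1 - (5 / 6) ** 10

print(expected_tries)       # 7776.0
print(p_no_double_six)      # ~0.5086, so betting against it is barely profitable
print(p_at_least_one_one)   # ~0.8385
```

With these exact values in hand, the Monte Carlo estimates above should land close to them once the number of trials is large.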
zomansud/coursera
ml-clustering-and-retrieval/week-4/4_em-with-text-data_blank.ipynb
mit
[ "Fitting a diagonal covariance Gaussian mixture model to text data\nIn a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component.\n * Computational cost becomes prohibitive in high dimensions: score calculations have complexity cubic in the number of dimensions M if the Gaussian has a full covariance matrix.\n * A model with many parameters requires more data: observe that a full covariance matrix for an M-dimensional Gaussian will have M(M+1)/2 parameters to fit. With the number of parameters growing roughly as the square of the dimension, it may quickly become impossible to find a sufficient amount of data to make good inferences.\nBoth of these issues are avoided if we require the covariance matrix of each component to be diagonal, as then it has only M parameters to fit and the score computation decomposes into M univariate score calculations. Recall from the lecture that the M-step for the full covariance is:\n\\begin{align}\n\\hat{\\Sigma}_k &= \\frac{1}{N_k^{soft}} \\sum_{i=1}^N r_{ik} (x_i-\\hat{\\mu}_k)(x_i - \\hat{\\mu}_k)^T\n\\end{align}\nNote that this is a square matrix with M rows and M columns, and the above equation implies that the (v, w) element is computed by\n\\begin{align}\n\\hat{\\Sigma}_{k, v, w} &= \\frac{1}{N_k^{soft}} \\sum_{i=1}^N r_{ik} (x_{iv}-\\hat{\\mu}_{kv})(x_{iw} - \\hat{\\mu}_{kw})\n\\end{align}\nWhen we assume that this is a diagonal matrix, then the non-diagonal elements are assumed to be zero and we only need to compute each of the M elements along the diagonal independently using the following equation. 
\n\\begin{align}\n\\hat{\\sigma}^2_{k, v} &= \\hat{\\Sigma}_{k, v, v} \\\\\n&= \\frac{1}{N_k^{soft}} \\sum_{i=1}^N r_{ik} (x_{iv}-\\hat{\\mu}_{kv})^2\n\\end{align}\nIn this section, we will use an EM implementation to fit a Gaussian mixture model with diagonal covariances to a subset of the Wikipedia dataset. The implementation uses the above equation to compute each variance term. \nWe'll begin by importing the dataset and coming up with a useful representation for each article. After running our algorithm on the data, we will explore the output to see whether we can give a meaningful interpretation to the fitted parameters in our model.\nNote to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.\nImport necessary packages\nThe following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.", "import graphlab\n\n'''Check GraphLab Create version'''\nfrom distutils.version import StrictVersion\nassert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'", "We also have a Python file containing implementations for several functions that will be used during the course of this assignment.", "from em_utilities import *", "Load Wikipedia data and extract TF-IDF features\nLoad Wikipedia data and transform each of the first 5000 documents into a TF-IDF representation.", "wiki = graphlab.SFrame('people_wiki.gl/').head(5000)\nwiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])", "Using a utility we provide, we will create a sparse matrix representation of the documents. 
This is the same utility function you used during the previous assignment on k-means with text data.", "tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')", "As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.", "tf_idf = normalize(tf_idf)", "We can check that the length (Euclidean norm) of each row is now 1.0, as expected.", "for i in range(5):\n doc = tf_idf[i]\n print(np.linalg.norm(doc.todense()))", "EM in high dimensions\nEM for high-dimensional data requires some special treatment:\n * E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python.\n * All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by sparsity of data.\n * Initially, some words may be entirely absent from a cluster, causing the M step to produce zero mean and variance for those words. This means any data point with one of those words will have 0 probability of being assigned to that cluster since the cluster allows for no variability (0 variance) around that count being 0 (0 mean). Since there is a small chance for those words to later appear in the cluster, we instead assign a small positive variance (~1e-10). Doing so also prevents numerical overflow.\nWe provide the complete implementation for you in the file em_utilities.py. For those who are interested, you can read through the code to see how the sparse matrix implementation differs from the previous assignment. \nYou are expected to answer some quiz questions using the results of clustering.\nInitializing mean parameters using k-means\nRecall from the lectures that EM for Gaussian mixtures is very sensitive to the choice of initial means. With a bad initial set of means, EM may produce clusters that span a large area and are mostly overlapping. 
To eliminate such bad outcomes, we first produce a suitable set of initial means by using the cluster centers from running k-means. That is, we first run k-means and then take the final set of means from the converged solution as the initial means in our EM algorithm.", "from sklearn.cluster import KMeans\n\nnp.random.seed(5)\nnum_clusters = 25\n\n# Use scikit-learn's k-means to simplify workflow\n#kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1) # uncomment to use parallelism -- may break on your installation\nkmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=1)\nkmeans_model.fit(tf_idf)\ncentroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_\n\nmeans = [centroid for centroid in centroids]\n\nmeans", "Initializing cluster weights\nWe will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.", "cluster_assignment\n(cluster_assignment == 2).sum()\n\nnum_docs = tf_idf.shape[0]\nweights = []\nfor i in xrange(num_clusters):\n # Compute the number of data points assigned to cluster i:\n num_assigned = (cluster_assignment == i).sum()\n w = float(num_assigned) / num_docs\n weights.append(w)", "Initializing covariances\nTo initialize our covariance parameters, we compute $\\hat{\\sigma}_{k, j}^2 = \\frac{1}{N_k}\\sum_{i \\in \\text{cluster } k}(x_{i,j} - \\hat{\\mu}_{k, j})^2$ for each feature $j$, averaging over the $N_k$ documents that k-means assigned to cluster $k$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. 
We do this computation in a vectorized fashion in the following code block.", "covs = []\nfor i in xrange(num_clusters):\n member_rows = tf_idf[cluster_assignment==i]\n cov = (member_rows.multiply(member_rows) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \\\n + means[i]**2\n cov[cov < 1e-8] = 1e-8\n covs.append(cov)", "Running EM\nNow that we have initialized all of our parameters, run EM.", "out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10)\n\nout['loglik']\n\nout['means'][0][98817]\n\narr = np.argsort(out['means'][0])\nreversed_arr = arr[::-1]\nreversed_arr", "Interpret clustering results\nIn contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.\nWrite yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix). \nA sample output may be:\n```\n==========================================================\nCluster 0: Largest mean parameters in cluster \nWord Mean Variance \nfootball 1.08e-01 8.64e-03\nseason 5.80e-02 2.93e-03\nclub 4.48e-02 1.99e-03\nleague 3.94e-02 1.08e-03\nplayed 3.83e-02 8.45e-04\n...\n```", "# Fill in the blanks\ndef visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):\n print('')\n print('==========================================================')\n\n num_clusters = len(means)\n for c in xrange(num_clusters):\n print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))\n print('\\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance'))\n \n # The k'th element of sorted_word_ids should be the index of the word \n # that has the k'th-largest value in the cluster mean. 
Hint: Use np.argsort().\n arr = np.argsort(means[c])\n sorted_word_ids = arr[::-1]\n \n for i in sorted_word_ids[:5]:\n print '{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word['category'][i], \n means[c][i],\n covs[c][i])\n print '\\n=========================================================='\n\n'''By EM'''\nvisualize_EM_clusters(tf_idf, out['means'], out['covs'], map_index_to_word)", "Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]\nComparing to random initialization\nCreate variables for randomly initializing the EM algorithm. Complete the following code block.", "np.random.seed(5) # See the note below to see why we set seed=5.\nnum_clusters = len(means)\nnum_docs, num_words = tf_idf.shape\n\nrandom_means = []\nrandom_covs = []\nrandom_weights = []\n\nfor k in range(num_clusters):\n \n # Create a numpy array of length num_words with random normally distributed values.\n # Use the standard univariate normal distribution (mean 0, variance 1).\n # YOUR CODE HERE\n mean = np.random.normal(0, 1, num_words)\n \n # Create a numpy array of length num_words with random values uniformly distributed between 1 and 5.\n # YOUR CODE HERE\n cov = np.random.uniform(1,5,num_words)\n\n # Initially give each cluster equal weight.\n # YOUR CODE HERE\n weight = 1/float(num_clusters)\n \n random_means.append(mean)\n random_covs.append(cov)\n random_weights.append(weight)", "Quiz Question: Try fitting EM with the random initial parameters you created above. (Use cov_smoothing=1e-5.) Store the result to out_random_init. 
What is the final loglikelihood that the algorithm converges to?", "out_random_init = EM_for_high_dimension(tf_idf, random_means, random_covs, random_weights, cov_smoothing=1e-10)", "Quiz Question: Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means?", "print out_random_init['loglik']\nprint out['loglik']", "Quiz Question: For the above model, out_random_init, use the visualize_EM_clusters method you created above. Are the clusters more or less interpretable than the ones found after initializing using k-means?", "# YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word.\nvisualize_EM_clusters(tf_idf, out_random_init['means'], out_random_init['covs'], map_index_to_word)", "Note: Random initialization may sometimes produce a superior fit than k-means initialization. We do not claim that random initialization is always worse. However, this section does illustrate that random initialization often produces much worse clustering than k-means counterpart. This is the reason why we provide the particular random seed (np.random.seed(5)).\nTakeaway\nIn this assignment we were able to apply the EM algorithm to a mixture of Gaussians model of text data. This was made possible by modifying the model to assume a diagonal covariance for each cluster, and by modifying the implementation to use a sparse matrix representation. In the second part you explored the role of k-means initialization on the convergence of the model as well as the interpretability of the clusters." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
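The diagonal M-step variance update quoted at the top of the notebook above can be written out in a few lines of plain numpy. This is an illustrative sketch, not the assignment's em_utilities implementation (which operates on sparse matrices); the names X, resp, and mu are placeholders:

```python
import numpy as np

def m_step_diagonal_variance(X, resp, mu, min_var=1e-10):
    """Per-dimension variance update for a diagonal-covariance mixture:
    sigma2[k, v] = sum_i resp[i, k] * (X[i, v] - mu[k, v])**2 / N_k,
    where N_k = sum_i resp[i, k] is the soft count, with a small floor
    on each variance to avoid the zero-variance problem noted above.

    X    : (N, M) dense data matrix
    resp : (N, K) soft responsibilities r_ik
    mu   : (K, M) component means
    """
    Nk = resp.sum(axis=0)                        # soft counts, shape (K,)
    K, M = mu.shape
    sigma2 = np.empty((K, M))
    for k in range(K):
        diff = X - mu[k]                         # (N, M) deviations from mean k
        sigma2[k] = (resp[:, k][:, None] * diff**2).sum(axis=0) / Nk[k]
    return np.maximum(sigma2, min_var)           # floor tiny variances
```

With hard assignments (responsibilities of 0 or 1) this reduces to the per-cluster sample variance used for initialization in the notebook.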
smorton2/think-stats
code/chap02exmine.ipynb
gpl-3.0
[ "Examples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "from __future__ import print_function, division\n\n%matplotlib inline\n\nimport numpy as np\n\nimport nsfg\nimport first", "Given a list of values, there are several ways to count the frequency of each value.", "t = [1, 2, 2, 3, 5]", "You can use a Python dictionary:", "hist = {}\nfor x in t:\n hist[x] = hist.get(x, 0) + 1\n \nhist", "You can use a Counter (which is a dictionary with additional methods):", "from collections import Counter\ncounter = Counter(t)\ncounter", "Or you can use the Hist object provided by thinkstats2:", "import thinkstats2\nhist = thinkstats2.Hist([1, 2, 2, 3, 5])\nhist", "Hist provides Freq, which looks up the frequency of a value.", "hist.Freq(2)", "You can also use the bracket operator, which does the same thing.", "hist[2]", "If the value does not appear, it has frequency 0.", "hist[4]", "The Values method returns the values:", "hist.Values()", "So you can iterate the values and their frequencies like this:", "for val in sorted(hist.Values()):\n print(val, hist[val])", "Or you can use the Items method:", "for val, freq in hist.Items():\n print(val, freq)", "thinkplot is a wrapper for matplotlib that provides functions that work with the objects in thinkstats2.\nFor example Hist plots the values and their frequencies as a bar graph.\nConfig takes parameters that label the x and y axes, among other things.", "import thinkplot\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='value', ylabel='frequency')", "As an example, I'll replicate some of the figures from the book.\nFirst, I'll load the data from the pregnancy file and select the records for live births.", "preg = nsfg.ReadFemPreg()\nlive = preg[preg.outcome == 1]", "Here's the histogram of birth weights in pounds. Notice that Hist works with anything iterable, including a Pandas Series. 
The label attribute appears in the legend when you plot the Hist.", "hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')", "Before plotting the ages, I'll apply floor to round down:", "ages = np.floor(live.agepreg)\n\nhist = thinkstats2.Hist(ages, label='agepreg')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='years', ylabel='Count')", "As an exercise, plot the histogram of pregnancy lengths (column prglngth).", "\nthinkplot.Hist(thinkstats2.Hist(live.prglngth))", "Hist provides Smallest, which selects the lowest values and their frequencies.", "for weeks, freq in hist.Smallest(10):\n print(weeks, freq)", "Use Largest to display the longest pregnancy lengths.", "hist.Largest()", "From live births, we can select first babies and others using birthord, then compute histograms of pregnancy length for the two groups.", "firsts = live[live.birthord == 1]\nothers = live[live.birthord != 1]\n\nfirst_hist = thinkstats2.Hist(firsts.prglngth, label='first')\nother_hist = thinkstats2.Hist(others.prglngth, label='other')", "We can use width and align to plot two histograms side-by-side.", "width = 0.45\nthinkplot.PrePlot(2)\nthinkplot.Hist(first_hist, align='right', width=width)\nthinkplot.Hist(other_hist, align='left', width=width)\nthinkplot.Config(xlabel='weeks', ylabel='Count', xlim=[27, 46])", "Series provides methods to compute summary statistics:", "mean = live.prglngth.mean()\nvar = live.prglngth.var()\nstd = live.prglngth.std()", "Here are the mean and standard deviation:", "mean, std", "As an exercise, confirm that std is the square root of var:", "import math\nstd == math.sqrt(var)", "Here are the mean pregnancy lengths for first babies and others:", "firsts.prglngth.mean(), others.prglngth.mean()", "And here's the difference (in weeks):", "firsts.prglngth.mean() - others.prglngth.mean()", "This function computes the Cohen effect size, which is the difference in 
means expressed in number of standard deviations:", "def CohenEffectSize(group1, group2):\n \"\"\"Computes Cohen's effect size for two groups.\n \n group1: Series or DataFrame\n group2: Series or DataFrame\n \n returns: float if the arguments are Series;\n Series if the arguments are DataFrames\n \"\"\"\n diff = group1.mean() - group2.mean()\n\n var1 = group1.var()\n var2 = group2.var()\n n1, n2 = len(group1), len(group2)\n\n pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)\n d = diff / np.sqrt(pooled_var)\n return d", "Compute the Cohen effect size for the difference in pregnancy length for first babies and others.", "CohenEffectSize(firsts,others).prglngth", "Exercises\nUsing the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others. \nCompute Cohen’s effect size to quantify the difference between the groups. How does it compare to the difference in pregnancy length?", "print(firsts.totalwgt_lb.mean())\n\nprint(others.totalwgt_lb.mean())\n\nCohenEffectSize(firsts.totalwgt_lb, others.totalwgt_lb)", "For the next few exercises, we'll load the respondent file:", "resp = nsfg.ReadFemResp()", "Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.", "hist = thinkstats2.Hist(resp.totincr)\nthinkplot.Hist(hist)", "Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.", "# Solution goes here\nhist = thinkstats2.Hist(resp.age_r)\nthinkplot.Hist(hist)", "Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.", "hist = thinkstats2.Hist(resp.numfmhh)\nthinkplot.Hist(hist)", "Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. 
How would you describe this distribution?", "# Solution goes here\nhist = thinkstats2.Hist(resp.parity)\nthinkplot.Hist(hist)", "Use Hist.Largest to find the largest values of <tt>parity</tt>.", "print(hist.Largest())", "Let's investigate whether people with higher income have higher parity. Keep in mind that in this study, we are observing different people at different times during their lives, so this data is not the best choice for answering this question. But for now let's take it at face value.\nUse <tt>totincr</tt> to select the respondents with the highest income (level 14). Plot the histogram of <tt>parity</tt> for just the high income respondents.", "top = resp[resp.totincr == 14]\nhist = thinkstats2.Hist(top.parity)\nthinkplot.Hist(hist)", "Find the largest parities for high income respondents.", "# Solution goes here\nhist.Largest()", "Compare the mean <tt>parity</tt> for high income respondents and others.", "# Solution goes here\nbottom = resp[resp.totincr < 14]\nprint(top.parity.mean())\nprint(bottom.parity.mean())", "Compute the Cohen effect size for this difference. How does it compare with the difference in pregnancy length for first babies and others?", "# Solution goes here\nCohenEffectSize(top.parity, bottom.parity)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karthikrangarajan/intro-to-sklearn
00.Setup and Primers.ipynb
bsd-3-clause
[ "Tutorial Setup\nCheck your install", "import numpy\n\nimport matplotlib\n\nimport sklearn\n\nimport pandas", "Finding the location of an installed package and its version:", "numpy.__path__\n\nnumpy.__version__", "Or check it all at once: pip install version_information and check versions with a magic command.", "!pip install version_information\n\n%load_ext version_information\n%version_information numpy, scipy, matplotlib, pandas, tensorflow, sklearn, skflow", "A NumPy primer\nNumPy array dtypes and shapes", "import numpy as np\n\na = np.array([1, 2, 3])\n\na\n\nb = np.array([[0, 2, 4], [1, 3, 5]])\n\nb\n\nb.shape\n\nb.dtype\n\na.shape\n\na.dtype\n\nnp.zeros(5)\n\nnp.ones(shape=(3, 4), dtype=np.int32)", "Common array operations", "c = b * 0.5\n\nc\n\nc.shape\n\nc.dtype\n\na\n\nd = a + c\n\nd\n\nd[0]\n\nd[0, 0]\n\nd[:, 0]\n\nd.sum()\n\nd.mean()\n\nd.sum(axis=0)\n\nd.mean(axis=1)", "Reshaping and inplace update", "e = np.arange(12)\n\ne\n\nf = e.reshape(3, 4)\n\nf\n\ne\n\ne[5:] = 0\n\ne\n\nf", "Combining arrays", "a\n\nb\n\nd\n\nnp.concatenate([a, a, a])\n\nnp.vstack([a, b, d])\n\nnp.hstack([b, d])", "Also see this fun \"100 numpy exercises\" site\nA Matplotlib primer", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2, 10)\n\nx\n\nplt.plot(x, 'o-');\n\nplt.plot(x, x, 'o-', label='linear')\nplt.plot(x, x ** 2, 'x-', label='quadratic')\n\nplt.legend(loc='best')\nplt.title('Linear vs Quadratic progression')\nplt.xlabel('Input')\nplt.ylabel('Output');\n\nsamples = np.random.normal(loc=1.0, scale=0.5, size=1000)\n\nsamples.shape\n\nsamples.dtype\n\nsamples[:30]\n\nplt.hist(samples, bins=50);\n\nsamples_1 = np.random.normal(loc=1, scale=.5, size=10000)\nsamples_2 = np.random.standard_t(df=10, size=10000)\n\nbins = np.linspace(-3, 3, 50)\n_ = plt.hist(samples_1, bins=bins, alpha=0.5, label='samples 1')\n_ = plt.hist(samples_2, bins=bins, alpha=0.5, label='samples 2')\nplt.legend(loc='upper left');\n\nplt.scatter(samples_1, samples_2, 
alpha=0.1)\n\nsamples_3 = np.random.normal(loc=2, scale=.5, size=10000)\n\nfig = plt.figure()\nax1 = fig.add_subplot(111)\nax1.scatter(samples_1, samples_2, alpha=0.1, c='b', marker=\"s\", label='first')\nax1.scatter(samples_3, samples_2, alpha=0.1, c='r', marker=\"o\", label='second')\nplt.show()", "Credits\nMost of this material is adapted from Olivier Grisel's 2015 tutorial:\nhttps://github.com/ogrisel/parallel_ml_tutorial\nOriginal author:\n\nOlivier Grisel @ogrisel | http://ogrisel.com" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tebeka/pythonwise
First-Contact-With-Data.ipynb
bsd-3-clause
[ "First Contact with Data\nEvery time I encounter a new data file, there are a few initial \"looks\" that I take at it. This helps me understand whether I can load the whole set into memory and what the fields are. Since I'm command-line oriented, I use Linux command line utilities to do that (which are easily accessible from Jupyter with !), but it's easily done with Python as well.\nAs an example, we'll use a subset of the NYC taxi dataset. The file is called taxi.csv.\nFile Size", "# Command line\n!ls -lh taxi.csv\n\n# Python\nfrom os import path\nprint('%.2f KB' % (path.getsize('taxi.csv')/(1<<10)))\nprint('%.2f MB' % (path.getsize('taxi.csv')/(1<<20)))", "Number of Lines", "# Command line\n!wc -l taxi.csv\n\n# Python\nwith open('taxi.csv') as fp:\n print(sum(1 for _ in fp))", "Header", "# Command line\n!head -1 taxi.csv | tr , \\\\n\n!printf \"%d fields\" $(head -1 taxi.csv | tr , \\\\n | wc -l)\n\n# Python\nimport csv\nwith open('taxi.csv') as fp:\n fields = next(csv.reader(fp))\nprint('\\n'.join(fields))\nprint('%d fields' % len(fields))", "Sample Data", "# Command line\n!head -2 taxi.csv | tail -1 | tr , \\\\n\n!printf \"%d values\" $(head -2 taxi.csv | tail -1 | tr , \\\\n | wc -l)\n\n# Python\nwith open('taxi.csv') as fp:\n fp.readline() # Skip header\n values = next(csv.reader(fp))\nprint('\\n'.join(values))\nprint('%d values' % len(values))\n\n# Python (with field names)\nfrom itertools import zip_longest\nwith open('taxi.csv') as fp:\n reader = csv.reader(fp)\n header = next(reader)\n values = next(reader)\nfor col, val in zip_longest(header, values, fillvalue='???'):\n print('%-20s: %s' % (col, val))", "In both methods (with fields or without) we see that we have some extra empty fields at the end of each data row.\nLoading as DataFrame\nAfter the initial look, we know we can load the whole dataset into memory and have a good idea what to tell pandas for parsing it.", "import pandas as pd\nimport numpy as np\ndate_cols = ['lpep_pickup_datetime', 
'Lpep_dropoff_datetime']\nwith open('taxi.csv') as fp:\n header = next(csv.reader(fp))\n df = pd.read_csv(fp, names=header, usecols=np.arange(len(header)), parse_dates=date_cols)\ndf.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lilleswing/deepchem
examples/tutorials/15_Training_a_Generative_Adversarial_Network_on_MNIST.ipynb
mit
[ "Tutorial Part 15: Training a Generative Adversarial Network on MNIST\nIn this tutorial, we will train a Generative Adversarial Network (GAN) on the MNIST dataset. This is a large collection of 28x28 pixel images of handwritten digits. We will try to train a network to produce new images of handwritten digits.\nColab\nThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.\n\nSetup\nTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.", "!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py\nimport conda_installer\nconda_installer.install()\n!/root/miniconda/bin/conda info -e\n\n!pip install --pre deepchem\nimport deepchem\ndeepchem.__version__", "To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow).", "import deepchem as dc\nimport tensorflow as tf\nfrom deepchem.models.optimizers import ExponentialDecay\nfrom tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Reshape\nimport matplotlib.pyplot as plot\nimport matplotlib.gridspec as gridspec\n%matplotlib inline\n\nmnist = tf.keras.datasets.mnist.load_data(path='mnist.npz')\nimages = mnist[0][0].reshape((-1, 28, 28, 1))/255\ndataset = dc.data.NumpyDataset(images)", "Let's view some of the images to get an idea of what they look like.", "def plot_digits(im):\n plot.figure(figsize=(3, 3))\n grid = gridspec.GridSpec(4, 4, wspace=0.05, hspace=0.05)\n for i, g in enumerate(grid):\n ax = plot.subplot(g)\n ax.set_xticks([])\n ax.set_yticks([])\n ax.imshow(im[i,:,:,0], cmap='gray')\n\nplot_digits(images)", "Now we can create our GAN. 
Like in the last tutorial, it consists of two parts:\n\nThe generator takes random noise as its input and produces output that will hopefully resemble the training data.\nThe discriminator takes a set of samples as input (possibly training data, possibly created by the generator), and tries to determine which are which.\n\nThis time we will use a different style of GAN called a Wasserstein GAN (or WGAN for short). In many cases, they are found to produce better results than conventional GANs. The main difference between the two is in the discriminator (often called a \"critic\" in this context). Instead of outputting the probability of a sample being real training data, it tries to learn how to measure the distance between the training distribution and generated distribution. That measure can then be directly used as a loss function for training the generator.\nWe use a very simple model. The generator uses a dense layer to transform the input noise into a 7x7 image with eight channels. That is followed by two convolutional layers that upsample it first to 14x14, and finally to 28x28.\nThe discriminator does roughly the same thing in reverse. Two convolutional layers downsample the image first to 14x14, then to 7x7. A final dense layer produces a single number as output. In the last tutorial we used a sigmoid activation to produce a number between 0 and 1 that could be interpreted as a probability. Since this is a WGAN, we instead use a softplus activation. 
It produces an unbounded positive number that can be interpreted as a distance.", "class DigitGAN(dc.models.WGAN):\n\n def get_noise_input_shape(self):\n return (10,)\n\n def get_data_input_shapes(self):\n return [(28, 28, 1)]\n\n def create_generator(self):\n return tf.keras.Sequential([\n Dense(7*7*8, activation=tf.nn.relu),\n Reshape((7, 7, 8)),\n Conv2DTranspose(filters=16, kernel_size=5, strides=2, activation=tf.nn.relu, padding='same'),\n Conv2DTranspose(filters=1, kernel_size=5, strides=2, activation=tf.sigmoid, padding='same')\n ])\n\n def create_discriminator(self):\n return tf.keras.Sequential([\n Conv2D(filters=32, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'),\n Conv2D(filters=64, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'),\n Dense(1, activation=tf.math.softplus)\n ])\n\ngan = DigitGAN(learning_rate=ExponentialDecay(0.001, 0.9, 5000))", "Now to train it. As in the last tutorial, we write a generator to produce data. This time the data is coming from a dataset, which we loop over 100 times.\nOne other difference is worth noting. When training a conventional GAN, it is important to keep the generator and discriminator in balance throughout training. If either one gets too far ahead, it becomes very difficult for the other one to learn.\nWGANs do not have this problem. In fact, the better the discriminator gets, the cleaner a signal it provides and the easier it becomes for the generator to learn. We therefore specify generator_steps=0.2 so that it will only take one step of training the generator for every five steps of training the discriminator. 
This tends to produce faster training and better results.", "def iterbatches(epochs):\n for i in range(epochs):\n for batch in dataset.iterbatches(batch_size=gan.batch_size):\n yield {gan.data_inputs[0]: batch[0]}\n\ngan.fit_gan(iterbatches(100), generator_steps=0.2, checkpoint_interval=5000)", "Let's generate some data and see how the results look.", "plot_digits(gan.predict_gan_generator(batch_size=16))", "Not too bad. Many of the generated images look plausibly like handwritten digits. A larger model trained for a longer time can do much better, of course.\nCongratulations! Time to join the Community!\nCongratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\nStar DeepChem on GitHub\nThis helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\nJoin the DeepChem Gitter\nThe DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phungkh/phys202-2015-work
assignments/assignment10/ODEsEx02.ipynb
mit
[ "Ordinary Differential Equations Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed", "Lorenz system\nThe Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:\n$$ \\frac{dx}{dt} = \\sigma(y-x) $$\n$$ \\frac{dy}{dt} = x(\\rho-z) - y $$\n$$ \\frac{dz}{dt} = xy - \\beta z $$\nThe solution vector is $[x(t),y(t),z(t)]$ and $\\sigma$, $\\rho$, and $\\beta$ are parameters that govern the behavior of the solutions.\nWrite a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.", "def lorentz_derivs(yvec, t, sigma, rho, beta):\n \"\"\"Compute the the derivatives for the Lorentz system at yvec(t).\"\"\"\n x=yvec[0] \n y=yvec[1]\n z=yvec[2]\n dx= sigma*(y-x)\n dy= x*(rho-z)-y\n dz=x*y-beta*z\n return((dx,dy,dz))\n\nassert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])", "Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.", "def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Solve the Lorenz system for a single initial condition.\n \n Parameters\n ----------\n ic : array, list, tuple\n Initial conditions [x,y,z].\n max_time: float\n The max time to use. Integrate with 250 points per time unit.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \n Returns\n -------\n soln : np.ndarray\n The array of the solution. 
Each row will be the solution vector at that time.\n t : np.ndarray\n The array of time points used.\n \n \"\"\"\n t=np.linspace(0,max_time,max_time*250)\n soln=odeint(lorentz_derivs,ic,t, args=(sigma,rho,beta))\n \n return np.array(soln),np.array(t)\n\nassert True # leave this to grade solve_lorenz", "Write a function plot_lorentz that:\n\nSolves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.\nPlot $[x(t),z(t)]$ using a line to show each trajectory.\nColor each line using the hot colormap from Matplotlib.\nLabel your plot and choose an appropriate x and y limit.\n\nThe following cell shows how to generate colors that can be used for the lines:", "N = 5\ncolors = plt.cm.hot(np.linspace(0,1,N))\nfor i in range(N):\n # To use these colors with plt.plot, pass them as the color argument\n print(colors[i])\n\ndef plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Plot [x(t),z(t)] for the Lorenz system.\n \n Parameters\n ----------\n N : int\n Number of initial conditions and trajectories to plot.\n max_time: float\n Maximum time to use.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \"\"\"\n np.random.seed(1)\n \n \n plt.figure(figsize=(10,10))\n for i in range(N): #Uniform random samples\n x=np.random.uniform(-15,15)\n y=np.random.uniform(-15,15)\n z=np.random.uniform(-15,15)\n colors = plt.cm.hot(np.linspace(0,1,N))\n solutionarray, timearray=solve_lorentz((x,y,z),max_time,sigma,rho,beta)\n plt.plot(solutionarray[:,0], solutionarray[:,2], color=colors[i]) # plotting x column vs z column\n \n print(solutionarray)\n \n plt.xlabel('x(t)')\n plt.ylabel('z(t)')\n plt.set_cmap('hot') # whether or not I have plt.set_cmap doesn't seem to matter for some reason.. 
\n plt.title('Lorentz Solutions')\n plt.box(False)\n \n \n \n \n \n \n\nplot_lorentz()\n\nassert True # leave this to grade the plot_lorenz function", "Use interact to explore your plot_lorenz function with:\n\nmax_time an integer slider over the interval $[1,10]$.\nN an integer slider over the interval $[1,50]$.\nsigma a float slider over the interval $[0.0,50.0]$.\nrho a float slider over the interval $[0.0,50.0]$.\nbeta fixed at a value of $8/3$.", "interact(plot_lorentz, max_time=([1,10]), N=([1,50]), sigma=([0.0,50.0]), rho=([0.0,50.0]), beta=fixed(8/3))", "Describe the different behaviors you observe as you vary the parameters $\\sigma$, $\\rho$ and $\\beta$ of the system:\n-Increasing/decreasing $\\sigma$ appears to increase/decrease the overall distances from each point to another. So increasing $\\sigma$ expands our system while decreasing $\\sigma$ contracts our system.\n-Increasing/decreasing $\\rho$ appears to increase/decrease the amount of spiraling the system does.\n-$\\beta$ is fixed." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
a301-teaching/a301_code
notebooks/histograms.ipynb
mit
[ "Histograms\nWe're going to be using histograms throughout the course, starting with the Planck function, which is really just a histogram of photon energies. As a demo, let's plot a grade distribution. The cell below generates 1000 random samples of a normal distribution with mean 75 and standard deviation 20", "%matplotlib inline\nimport numpy.random as nr\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nthe_mean=75\nthe_sigma=20.\nnumpoints=1000", "Remove all grades below 0 or above 100", "outRandom=nr.normal(the_mean,the_sigma,[numpoints,])\noutRandom=outRandom[outRandom <= 100.]\noutRandom=outRandom[outRandom >= 0.]\n\n#\n# histogram these 1000 grades into 20 bins of uniform width\n#\nfig,ax = plt.subplots(1,1)\nbin_edges=np.linspace(0,100,21,endpoint=True)\nax.hist(outRandom,bins=bin_edges)\n_=ax.set(title='Grade distribution',xlabel='mark (%)',ylabel='Number in bin')", "Note that UBC has grade boundaries that narrow for higher marks. Just counting the number in each bin and plotting it is seriously misleading if you expect the area of the bin to be proportional to the number in the bin.", "#\n# make a dictionary to hold the grade boundaries\n#\nbounds={'a+':90,'a':85,'a-':80,'b+':76,'b':72,'b-':68,'c+':64,'c':60,'c-':55,'d':50}\nbounds=list(bounds.values())\nbounds.sort()\n#\n# add the high and low edges\n#\nbounds.insert(0,0)\nbounds.append(100)\n\nfig,ax = plt.subplots(1,1)\nout=ax.hist(outRandom,bins=bounds)\n_=ax.set(title='Grade distribution UBC',xlabel='mark (%)',ylabel='Number in bin')", "matplotlib accepts a \"normed=True\" flag that divides by the total number and the bin width, so that the histogram area integrates to 1.", "fig,ax = plt.subplots(1,1)\nout=ax.hist(outRandom,bins=bounds,normed=True)", "If we want to put numbers/(bin width) on the y axis, we need to first do the histogram then divide the counts in each bin by the bin width before plotting", "fig,ax = 
plt.subplots(1,1)\ncounts,edges=np.histogram(outRandom,bins=bounds)\nwidths=np.diff(edges)\ncounts_dens=counts/widths\nleft_edge = edges[:-1]\nout=ax.bar(left_edge,counts_dens,width=widths)\n_=ax.set(title='Grade density histogram UBC',xlabel='mark (%)',ylabel='Number in bin/(%)')", "An idealized satellite sensor\nSuppose I had a 1 $m^2$ sensor that could count photons for 1 second, in a series of wavelength bins spaced 0.1 $\\mu m$ apart. Each time a bin received 0.01 Joules, the sensor reports a count by writing out the wavelength of the bin that accumulated that energy. The data set consists of 46,000 wavelength values over 1 second, which means that the sensor received a flux of 46,000*0.01 = 460 $Joules/s/m^2 = 460\\ W\\,m^{-2}$.\n1) download the 46,000 measurements using the a301utils.a301_readfile module:", "from a301utils.a301_readfile import download\ndownload('photon_data.csv')", "2) read the file in using np.loadtxt", "bin_wavelengths = np.loadtxt('photon_data.csv')\n\nbin_wavelengths[:10]", "The total $W/m^2$ is just the number of measurements multiplied by 0.01 Joules for each measurement:", "total_counts = len(bin_wavelengths)\ntotal_flux = total_counts*0.01\n\ntotal_flux", "Now histogram the photon counts\nHistogram this so the area sums to 460 $W/m^2$", "fig,ax = plt.subplots(1,1)\n#\n# 51 edges from 0.1 to 60 microns\n#\nedges = np.linspace(0.1,60,51)\ncounts,edges=np.histogram(bin_wavelengths,bins=edges)\nwidths=np.diff(edges)\ncounts_dens=counts/widths/total_counts*total_flux\nleft_edge = edges[:-1]\nout=ax.bar(left_edge,counts_dens,width=widths)\n_=ax.set(title='Flux density ($W/m^2/\\mu m$)',xlabel='wavelength ($\\mu m$)',ylabel='$E_\\lambda\\ (W/m^2/\\mu m)$')\n\n#\n# put the Planck curve on top of this\n#\nfrom a301lib.radiation import planckwavelen\nTemp=300 #Kelvin\nElambda = planckwavelen(edges*1.e-6,Temp)*1.e-6 #convert from W/m^2/m to W/m^2/micron\nax.plot(edges,Elambda,linewidth=4,label='Planck curve')\n_=ax.legend()", "Summary: the Planck function describes 
the histogram of radiant flux emitted by a black body of temperature T" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eyaltrabelsi/my-notebooks
Lectures/elegent_exception_handling/Exceptions Handling Interactive.ipynb
mit
[ "Elegant Exception Handling\nEyal Trabelsi\nAbout Me 🙈\n\n\nSoftware Engineer at Salesforce 👷\n\n\nBig passion for python and data 🐍🤖\n\n\nOnline at medium | Twitter 🌐\n\n\nRestaurant Recommendation 🍔", "! pip install typeguard rollbar returns tenacity icontract > /dev/null 2>&1\n\nimport contextlib\nimport json\nimport icontract\nimport logging\nimport pathlib\nimport os\nfrom typing import Union\n\nimport requests\nfrom typeguard import typechecked\n\ndef get_relevant_restaurants(user):\n base_url = \"https://en.wikipedia.org/wiki\"\n return requests.get(f\"{base_url}/{user}\").content\n\ndef get_config(path):\n with open(path, 'r') as json_file:\n return json.load(json_file)\n\ndef pick_best_restaurants(restaurants):\n pass\n\ndef get_restaurant_recommendation(path):\n config = get_config(path)\n user = config[\"user\"]\n candidates = get_relevant_restaurants(user)\n pick_best_restaurants(candidates)", "# We Can Be Proud of Ourselves 💃\n\n\nImplemented restaurant recommendation 💪\n\n\nClean code 💄\n\n\nException Handling? Why?! 
🤨\n\n\nErrors are everywhere 🙈\n\n\nHardware can fail 🌲\n\n\nSoftware often fails 🚪\n\n\nHow complex systems fail 🧩\n\n\nUnexceptable 😡\nLesson 1: We want to build a fault tolerant system.\nException Handling to the Rescue 👨‍🚒\nExceptions Anatomy from a bird's eye view 🐦\n\n\nException message 💬\n\n\nException traceback 👻\n\n\nException type 🍅🍇🍆\n\n\n\nException Types 🍅🍇🍆\n\n\nHelps distinguish between different exceptions\n\n\nHierarchical nature\n\n\nDozens of built-in exceptions\n\n\nBuiltin and Custom exceptions\n\n\n<img src=\"https://github.com/eyaltrabelsi/my-notebooks/blob/master/Lectures/elegent_exception_handling/builtin_exceptions.png?raw=true\" width=\"700\"/>\nNaive Approach for Exception Handling 👶\n\n\nCatch all exceptions 🙈\n\n\nLog all exceptions 📝\n\n\n\"Clean and safe\" version 😈", "def get_restaurant_recommendation(path):\n try:\n config = get_config(path)\n user = config[\"user\"]\n candidates = get_relevant_restaurants(user)\n pick_best_restaurants(candidates)\n except BaseException:\n logging.error(\"VERY UNINFORMATIVE INFORMATION\")\n raise BaseException", "There are problems lurking around 🐲\n\n\nUnintentional exceptions being caught 😧\n\n\nKeyboardInterrupt as we want the user to be able to kill the program.\n\n\nSyntaxError as we want our code to be valid.\n\n\nExceptions are not distinguishable 😵\n\n\nNot safe 🔓\n\n\nThe invoker of this function can't really distinguish between the different types of errors, which would allow recovering from certain expected issues.\n\n\nFor example, if we have flaky internet I would like to retry, but if a file is actually missing I don't.\n\n\nGenerally it’s better for a program to fail fast and crash than to silence the error and continue running the program. \n\n\nThe bugs that inevitably happen later on will be harder to debug since they are far removed from the original cause. \n\n\nJust because programmers often ignore error messages doesn’t mean the program should stop emitting them. 
\n\n\nUnfortunately very common 😱\n\n\nThe naive approach for exception handling won't do.\nTake 2: Exception Handling 🎬\n\n\nShould not catch all exceptions ☝\n\n\nRecover when possible 🔧\n\n\nPropagated exceptions should be distinguishable 👯", "def get_restaurant_recommendation(path):\n try:\n config = get_config(path)\n user = config[\"user\"]\n except FileNotFoundError:\n logging.error(\"VERY UNINFORMATIVE INFORMATION\")\n raise\n except json.JSONDecodeError:\n logging.error(\"VERY UNINFORMATIVE INFORMATION\")\n raise\n except KeyError:\n user = \"default_user\"\n candidates = get_relevant_restaurants(user)\n pick_best_restaurants(candidates)", "Lesson 2: Catch relevant exceptions only.\nLesson 3: Different propagated exceptions should be distinguishable.\nA Bit of Makeup 💄", "def get_restaurant_recommendation(path):\n try:\n config = get_config(path)\n user = config[\"user\"]\n except FileNotFoundError:\n logging.error(\"VERY UNINFORMATIVE INFORMATION\")\n raise\n except json.JSONDecodeError:\n logging.error(\"VERY UNINFORMATIVE INFORMATION\")\n raise\n except KeyError:\n user = \"default_user\"\n candidates = get_relevant_restaurants(user)\n pick_best_restaurants(candidates)", "First, since we handle both FileNotFoundError and json.JSONDecodeError in the same manner, they can \"share the except block\",\n as an except clause may name multiple exceptions as a parenthesized tuple.\n\nSecondly, we can use the else clause, which runs when the try block executed and did not raise an exception.\n\n\nThirdly, we use the dictionary builtin function get, which allows us to define default values. 
\n\n\nLesson 4: Use python syntax to the fullest\nSuppressing Exceptions 🤫\n\nThere is another common flow for exception handling\nI want to cover, which is suppressing exceptions using suppress,\nsupported from python>=3.5", "def run_unstopable_animation():\n pass\n\ntry:\n os.remove('somefile.pyc')\nexcept FileNotFoundError:\n pass\n\ntry:\n run_unstopable_animation()\nexcept KeyboardInterrupt:\n pass\n\nfrom contextlib import suppress\n\nwith suppress(FileNotFoundError):\n os.remove('somefile.pyc')\n \nfrom contextlib import suppress\n\nwith suppress(KeyboardInterrupt):\n run_unstopable_animation() ", "Our code is not elegant 😭\n\n\nDominated by exception handling\n\n\nBusiness logic is not clear\n\n\nCode becomes hard to maintain\n\n\nLesson 5: Error handling should not obscure business logic\n- Error handling is important, but we should strive to make our job easier.\n\nAs the zen of python states, \"If the implementation is hard to explain, it's a bad idea.\"\n\nTake 3: Exception Handling 🎬\n\n\nSeparate business logic from exception handling code ✂\n\n\nHandle exceptions in another layer 📚\n\n\nThe \"perfect\" code:", "def get_restaurant_recommendation(path):\n config = get_config(path)\n user = config.get(\"user\", \"default_user\")\n candidates = get_relevant_restaurants(user)\n pick_best_restaurants(candidates) ", "", "def get_config(path):\n with open(path, 'r') as json_file:\n config = json.load(json_file)\n return config\n \ndef get_restaurant_recommendation(path):\n try:\n config = get_config(path)\n except (FileNotFoundError, json.JSONDecodeError):\n logging.error(\"VERY UNINFORMATIVE INFORMATION\")\n raise\n else:\n user = config.get(\"user\", \"default_user\")\n candidates = get_relevant_restaurants(user)\n pick_best_restaurants(candidates) ", "Lesson 6: Pick the right abstraction level to handle exceptions\nAre we completely safe now? 
👷\nSilent Errors 🔇\n\n\nDoes not crash code 😠\n\n\nDelivers incorrect results 😠😠\n\n\nMuch harder to detect, makes matters worse 🤬\n\n\nValidations 🆗\n\n\nOutput/Input types/values\n\n\nPostconditions/Preconditions\n\n\nSide-effects/Invariants\n\n\nLesson 7: Validate, and fail fast!\nTools for validation 🔨\n\n\nVanilla Exceptions 🍧\n\n\nType Hints 🔍\n\n\nContract Testing Libraries 📜\n\n\nVanilla Exceptions 🍧", "def get_user(path):\n if not isinstance(path, (str, pathlib.PurePath)):\n raise TypeError(f\"path has invalid type: {type(path).__name__}\")\n \n with open(path, 'r') as json_file:\n try:\n config = json.load(json_file)\n except (FileNotFoundError, json.JSONDecodeError):\n logging.error(\"VERY INFORMATIVE INFORMATION\")\n raise\n else:\n user = config.get(\"user\",\"default_user\")\n if not isinstance(user, str):\n raise TypeError(f\"user has invalid type: {type(user).__name__}\")\n return user", "Can validate everything ✅ \n\n\nOn runtime ✅ but not compile time ❌\n\n\nNot clean ❌\n\n\nWhy not assertions? 
❌\n\n\nRaises the wrong exception type 😮\n\n\nCan be compiled away 😥\n\n\nType Hints 🔍", "@typechecked\ndef get_user(path: Union[str, pathlib.PurePath]) -> str:\n with open(path, 'r') as json_file:\n try:\n data = json.load(json_file)\n except (FileNotFoundError, json.JSONDecodeError):\n logging.error(\"VERY INFORMATIVE INFORMATION\")\n raise\n else:\n user = data.get(\"user\",\"default_user\")\n return user", "Can validate input/output types ✅ But not other validation ❌\n\n\nOn runtime and compile time ✅\n\n\nClean and elegant ✅\n\n\nContract Testing Libraries 📜", "@icontract.require(lambda path: path.startswith(\"s3://\"), \"path must be valid s3 path\")\ndef get_user(path):\n with open(path, 'r') as json_file:\n try:\n data = json.load(json_file)\n except (FileNotFoundError, json.JSONDecodeError):\n logging.error(\"VERY INFORMATIVE INFORMATION\")\n raise\n else:\n user = data.get(\"user\",\"default_user\")\n return user", "All the validations are supported ✅ \n\n\nOn runtime ✅ but not compile time ❌\n\n\nClean and elegant ✅\n\n\nNo mature/maintained option ❌\n\n\nicontract - not mature 🍼\n\n\ncontracts - not maintained 🤕\n\n\nThere are still problems lurking 🐉", "def get_relevant_restaurants(user):\n base_url = \"cool_restaurants.com\"\n resp = requests.get(f\"{base_url}/{user}\")\n resp.raise_for_status()\n return resp.json()", "App might \"live\" in an Unstable Environment 🤪\n\n\nYour network might be down 😑\n\n\nThe server might be down 😣\n\n\nThe server might be too busy and you will face a timeout 😭", "def get_relevant_restaurants(user):\n base_url = \"cool_restaurants.com\"\n \n allowed_retries = 5\n for i in range(allowed_retries):\n try:\n resp = requests.get(f\"{base_url}/{user}\")\n resp.raise_for_status()\n except (requests.ConnectionError):\n if i == allowed_retries - 1: # last attempt, give up\n raise\n else:\n return resp.json() ", "There must be a better way 😇\n\n\nDecorators 🎊\n\n\nContext Managers 🌉\n\n\nCommon usecases already implemented 💪", "from functools import wraps\ndef 
retry(exceptions, allowed_retries=5):\n def callable(func):\n @wraps(func)\n def wrapped(*args, **kwargs):\n for i in range(allowed_retries):\n try:\n res = func(*args, **kwargs)\n except exceptions:\n continue\n else:\n return res \n return wrapped\n return callable\n\n@retry(exceptions=requests.ConnectionError)\ndef get_relevant_restaurants(user):\n base_url = \"cool_restaurants.com\"\n resp = requests.get(f\"{base_url}/{user}\")\n resp.raise_for_status()\n return resp.json()\n\nimport tenacity\n\n@tenacity.retry(retry=tenacity.retry_if_exception_type(ConnectionError))\ndef get_relevant_restaurants(user):\n base_url = \"cool_restaurants.com\"\n resp = requests.get(f\"{base_url}/{user}\")\n resp.raise_for_status()\n return resp.json()", "Useful usecases 🧠\n\nDecorator: ratelimit, Retry, logger.catch\n\nContext manager: Database Connections, Transactions, Temporary Files and Output Redirections\n\n\nImportant note: retry can be handled in the request itself by writing an adapter, but for the example's sake I won't use it. \n\n\nLesson 8: Use patterns for better code reuse\nWhat's next?! 🛸\nLet's dive into exception types 🐠\n\n\nHelps distinguish between different exceptions\n\n\nHelps emphasize our intent\n\n\nBuiltin and Custom exceptions\n\n\nWhen Builtin Exception Types? 🍅🍇🍆\n\n\nShould default to use builtin exceptions \n\n\nFamiliar\n\n\nWell documented, stackoverflow magic :)\n\n\nWhen Custom Exception Types? 🍅🍇🍆\n\n\nEmphasize our intent \n\n\nDistinguish between different exceptions.\n\n\nLet's say we have ValueError and we want to recover in a different way between TooBig/TooSmall.\n\n\nGroup different exceptions.\n\n\nWrapping third party apis.\n\n\nWhen we wrap a third party api we minimize our dependency on it. 
For example, upon recovery you shouldn't have to import exceptions from your dependencies, for example requests.exceptions\n\nAlso the users of your library do not need/want to know about the implementation details.\n\nWrapping third party 👀\n\n\nMinimize dependency\n\n\nget_restaurant_recommendation can raise requests.ReadTimeout\n\n\nRecovering in get_restaurant_recommendation", "def login(user):\n pass\n\nimport requests\n\ndef get_restaurant_recommendation(path):\n # ...\n try:\n candidates = get_relevant_restaurants(user)\n except requests.exceptions.ReadTimeout:\n login(user)\n # ...", "Exception cause 🤯\n\n\n__cause__ indicates the reason for the exception \n\n\nWe can override the cause to replace the exception", "try:\n 1/0\nexcept ZeroDivisionError:\n # Some amazing recovery mechanism \n raise", "Lesson 9: Pick the right exception types and messages.\nSensitive information 🕵\n\nExceptions will be spread far and wide 🇫🇷🇺🇸🇫🇷\n\nthrough logging, reporting, and monitoring software.\n\nPersonal data 🕵\n\nIn a world where regulation around personal data is constantly getting stricter, \n\n\nNever reveal your weaknesses, bad actors are everywhere 👺\n\n\nYou can never be too careful 🤓", "def login(user):\n raise CommonPasswordException(f\"password: {password} is too common\")", "Lesson 10: Don’t use sensitive information in your exceptions.\nPython hooks 🎣\n\n\nPython has builtin hooks for various events \n\n\nsys.excepthook for uncaught exceptions\n\n\nDoesn't require modifying existing code.\n\n\nsys's excepthook example! 
🎣\n\n\nAn uncaught exception prints its traceback to STDERR before closing\n\n\nUnacceptable in a production environment\n\n\nGraceful exit by notifying an incident system", "import sys\nimport rollbar\n\nrollbar.init(\"Super Secret Token\")\n\ndef rollbar_except_hook(exc_type, exc_value, traceback):\n rollbar.report_exc_info((exc_type, exc_value, traceback))\n sys.__excepthook__(exc_type, exc_value, traceback)\n \nsys.excepthook = rollbar_except_hook", "Useful usecases 🧠\n\nFormat Differently We can format the exceptions differently, to provide more/less information.\nRedirect To Incident System We can redirect Exceptions to an incident system like rollbar or pager-duty.\nMulti Threading Behaviour Since threading/multiprocessing have their own unhandled exception machinery.\n that is a bit customized so no unhandled exception exists at the top level.\n we might want to override it to support KeyboardInterrupt for example.\nSearch Stackoverflow 😛😛😛 Search Stackoverflow for the exception that was being raised\n\nLesson 13: Python has some useful builtin hooks\nCommon Gotchas 💀\nExcept block order ⚠\n\n\nExcept block order matters\n\n\nTop to bottom\n\n\nSpecific exceptions first", "try:\n raise ValueError\nexcept Exception:\n result = \"Exception\"\nexcept ValueError:\n result = \"ValueError\"\nresult", "NotImplemented vs NotImplementedError ⚠", "raise NotImplementedError\n\nraise NotImplemented", "Return in finally block ⚠", "def surprising_result():\n try:\n return \"Expected\"\n finally:\n return \"Surprising\"\n\nsurprising_result()", "Lesson 11: Avoid exception handling gotchas.\nThis sounds like a lot of work 🏋\nNot all programs are made equal 👯\n\n\nExtremely reliable ✈ ✨\n\n\nHighly reliable 🚘\n\n\nReliable 💳\n\n\nDodgy 📱\n\n\nCrap 💩\n\n\nLesson 1*: We want to build a fault-tolerant system to a certain degree\nStill not perfect 💯\n\n\nHard to tell what exceptions can be thrown \n\n\nHard to tell where exceptions will be handled\n\n\nNo static analysis\n\n\nFunctional Exception Handling for 
the rescue 🚔\n\n\nUse success/failure container values\n\n\nfunctions are typed, and safe\n\n\nRailway oriented programming\n\n\nreturns library\n\n\n\nLesson 12: Consider functional exception handling for complicated flows\nLessons: 👨‍🏫👩‍🏫\n\n\nLesson 1: We want to build fault tolerance to a certain degree.\n\n\nLesson 2: Should catch relevant exceptions only.\n\n\nLesson 3: Different exceptions should be distinguishable.\n\n\nLesson 4: Use Python syntax to the fullest.\n\n\nLesson 5: Error handling should not obscure business logic.\n\n\nLesson 6: Pick the right abstraction level for handling exceptions.\n\n\nLesson 7: Validate, and fail fast!\n\n\nLesson 8: Use patterns for better code reuse.\n\n\nLesson 9: Should pick the right exception types and messages.\n\n\nLesson 10: Don’t use sensitive information in exceptions.\n\n\nLesson 11: Avoid exception handling gotchas.\n\n\nLesson 12: Consider functional exception handling for complicated flows.\n\n\nLesson 13: Python has some useful builtin hooks.\n\n\nTopics I didn't cover 🥵\n\nError codes\nFunctional approach for exception handling\nAvoiding errors using DDD (domain driven design)\nArchitectural patterns for resilience\n\nAdditional Resources 📚\n\nIntro to Exception Handling\nExceptional Exceptions\nThe Do's and Don'ts of Error Handling\nException Chaining\nRailway oriented programming\nThe error model\n\n\nConcurrent/Parallel Exception handling 🎸🎺🎻🎷\n\nMulti-threading/processing 1, 2\nAsync 1,2, 3\n\nError codes 👾\nWhen\n\nWITHIN a program one should always use exceptions. \nAny time the error must leave the program you are left with error codes as exceptions can't propagate beyond a program. 
\nIf, however, I'm writing a piece of code which I must know the behaviour of in every possible situation, then I want error codes.\nIt's tedious and hard to write code that reacts appropriately to every situation, but that's because writing error-free code is tedious and hard, not because you're passing error codes\n\nPros\n\n\nThat being said, errors, whether in code form or simple error response, are a bit like getting a shot — unpleasant, but incredibly useful. Error codes are probably the most useful diagnostic element in the API space, and this is surprising, given how little attention we often pay them.\n\n\nIn general, the goal with error responses is to create a source of information to not only inform the user of a problem, but of the solution to that problem as well. Simply stating a problem does nothing to fix it – and the same is true of API failures.\n\n\n1,2,3,4,5\n\n\nRelease it 📪\n\nYou can always reboot the world by restarting every single server layer by layer; that's \nalmost always effective but takes a long time\n\nIt's like a doctor diagnosing a disease so they can treat the patient,\n\n\nCounter integration points with circuit breakers and decoupling middleware\n\nA cascading failure happens after something else has already gone wrong. A circuit breaker protects your system by avoiding calls out to the troubled integration point. Using timeouts ensures that you can come back from a call out to the troubled one\n\nRecoverability 🩹\n\nHow do I recover\n\nHow can you make sure all bad state is cleared away before a retry\n\n\nWhat is recoverable:\n\nnetwork flakiness\ndatabase out of connections\ndisk unavailable\nrecoverable database out of connections\n\n\n\nBugs Aren’t Recoverable Errors!\nA critical distinction we made early on is the difference between recoverable errors and bugs:\nA recoverable error is usually the result of programmatic data validation. Some code has examined the state of the world and deemed the situation unacceptable for progress. 
Maybe it’s some markup text being parsed, user input from a website, or a transient network connection failure. In these cases, programs are expected to recover. The developer who wrote this code must think about what to do in the event of failure because it will happen in well-constructed programs no matter what you do. The response might be to communicate the situation to an end-user, retry, or abandon the operation entirely, however it is a predictable and, frequently, planned situation, despite being called an “error.”\nA bug is a kind of error the programmer didn’t expect. Inputs weren’t validated correctly, logic was written wrong, or any host of problems have arisen. Such problems often aren’t even detected promptly; it takes a while until “secondary effects” are observed indirectly, at which point significant damage to the program’s state might have occurred. Because the developer didn’t expect this to happen, all bets are off. All data structures reachable by this code are now suspect. And because these problems aren’t necessarily detected promptly, in fact, a whole lot more is suspect. Depending on the isolation guarantees of your language, perhaps the entire process is tainted. \nReasons for errors\n\nThe obvious one is that something exceptional happened.\nAs a control flow mechanism.\nCan be triggered due to a bug in our code.\n\nTypes of errors\n\nerrors that can be detected at compile time\nerrors that can be detected at run time\nerrors that can be inferred\nreproducible errors\nnon-reproducible errors\n\nTypes of exception handling\n\nEAFP (it’s easier to ask for forgiveness than permission) \nLBYL (Look before you leap)\nEach has its own pros and cons (whether thread-safety or readability)\nbut both are legitimate in Python as opposed to other languages." ]
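The EAFP/LBYL distinction in the last cell can be made concrete with a short sketch (the `config`/`port` names are illustrative, not from the talk):

```python
config = {"host": "localhost"}  # "port" is deliberately missing

# LBYL: look before you leap — check the key first
if "port" in config:
    port_lbyl = config["port"]
else:
    port_lbyl = 8080

# EAFP: easier to ask forgiveness than permission — try, then handle the failure
try:
    port_eafp = config["port"]
except KeyError:
    port_eafp = 8080

print(port_lbyl, port_eafp)  # 8080 8080
```

EAFP also avoids the check-then-use race in concurrent code — the state can change between the `in` check and the lookup, but not between `try` and the indexing itself — which is one reason it is often preferred in Python.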
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
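The multi-threading bullet in the notebook above notes that threads have their own unhandled-exception machinery; since Python 3.8 that machinery can be customized via `threading.excepthook`. A minimal sketch:

```python
import threading

captured = []

def thread_hook(args):
    # args bundles exc_type, exc_value, exc_traceback and the thread object
    captured.append(args.exc_type.__name__)

# Unhandled exceptions in threads are routed through threading.excepthook (3.8+)
threading.excepthook = thread_hook

worker = threading.Thread(target=lambda: 1 / 0)
worker.start()
worker.join()
print(captured)  # ['ZeroDivisionError']
```

Note that `sys.excepthook` is never called for exceptions raised inside a `Thread.run`, which is exactly why overriding the thread-level hook matters.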
yunqu/PYNQ
boards/Pynq-Z1/base/notebooks/arduino/arduino_grove_gesture.ipynb
bsd-3-clause
[ "Grove Gesture Example\nThis example shows how to use the \nGrove gesture sensor on the board.\nThe gesture sensor can detect 10 gestures as follows:\n| Raw value read by sensor | Gesture |\n|--------------------------|--------------------|\n| 0 | No detection |\n| 1 | forward |\n| 2 | backward |\n| 3 | right |\n| 4 | left |\n| 5 | up |\n| 6 | down |\n| 7 | clockwise |\n| 8 | counter-clockwise |\n| 9 | wave |\nFor this notebook, a PYNQ Arduino shield is also required.\nThe grove gesture sensor is attached to the I2C interface on the shield. \nThis grove sensor should also work with PMOD interfaces on the board.", "from pynq.overlays.base import BaseOverlay\n\nbase = BaseOverlay(\"base.bit\")", "1. Instantiate the sensor object", "from pynq.lib.arduino import Grove_Gesture\nfrom pynq.lib.arduino import ARDUINO_GROVE_I2C\n\nsensor = Grove_Gesture(base.ARDUINO, ARDUINO_GROVE_I2C)", "2. Set speed\nThere are currently 2 modes available for users to use: far and near.\nThe corresponding fps are 120 and 240, respectively.\nFor more information, please refer to Grove gesture sensor.", "sensor.set_speed(240)", "3. Read gestures\nThe following code will read 10 gestures within 30 seconds. \nTry to change your gesture in front of the sensor and check the results.", "from time import sleep\n\nfor i in range(10):\n print(sensor.read_gesture())\n sleep(3)" ]
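A small sketch of how the raw values in the gesture table above might be mapped to names in user code (the `GESTURES` dict and `describe_gesture` helper are illustrative, not part of `pynq.lib.arduino`):

```python
# Mapping taken from the gesture table in this notebook
GESTURES = {
    0: "no detection",
    1: "forward",
    2: "backward",
    3: "right",
    4: "left",
    5: "up",
    6: "down",
    7: "clockwise",
    8: "counter-clockwise",
    9: "wave",
}

def describe_gesture(raw_value):
    # Unknown readings fall back to a safe label
    return GESTURES.get(raw_value, "unknown")

print(describe_gesture(7))  # clockwise
```

In the read loop, `print(describe_gesture(sensor.read_gesture()))` would then print gesture names instead of raw integers.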
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mlopatka/scoreLRgraphs
score_LR_demo_v3.ipynb
gpl-2.0
[ "Forensic Intelligence Applications\nIn this section we will explore the use of similar source, score-based, non-anchored LRs for forensic intelligence applications. Score-based LRs can be a valuable tool for relating forensic traces to each other in a manner that pertains to:\n\nThe relative rarity of a relationship between two traces\nThe different senses of similarity and their expression in terms of multivariable features\n\nWe will explore the use of likelihood ratios as a method for exploring the connectivity between forensic traces, as well as the interactive process of asking different kinds of forensic questions and interpreting the results.\nHow to work with the Jupyter notebook interface\nEach cell contains some code that is executed when you select any cell and:\n<br />\n<center>click on the execute icon<br />\nOR<br />\npress <code>CTRL + ENTER</code><br />\n</center>\n<br />\nIn between these code cells are text cells that give explanations about the input and output of these cells as well as information about the operations being performed. In this workshop segment we will work our way down this iPython3 notebook. All the code cells can be edited to change the parameters of our example. We will pause to discuss the output generated at various stages of this process.\nAccessing this notebook\nAfter this workshop is over, you will be able to download this notebook any time at:\nhttp://www.score_LR_demo.pendingassimilation.com\nLet's get started\nExecute the following code cell in order to import all of the Python3 libraries that will be required to run our example. 
<br/>\nThis code will also change the colour scheme of all code cells so that you can more easily tell the difference between <br/> instruction cells (black text on a white background) and code cells (syntax-coloured text on a black background).", "from IPython.core.display import HTML\nimport os\n#def css_styling():\n #response = urllib.request.urlopen('https://dl.dropboxusercontent.com/u/24373111/custom.css')\n #desktopFile = os.path.expanduser(\"~\\Desktop\\EAFS_LR_software\\ForensicIntelligence\\custom.css\")\n# styles = open(desktopFile, \"r\").read()\n# return HTML(styles)\n## importing nice mathy things\nimport numpy as np # numerical operators\nimport matplotlib.pyplot as plt # plotting operators\nimport matplotlib\nfrom numpy import ndarray \nimport random # for random data generation\nfrom matplotlib.mlab import PCA as mlabPCA # pca required for plotting higher dimensional data\nimport matplotlib.mlab as mlab\nfrom scipy.stats import norm\nimport networkx as nx # for generating and plotting graphs\nfrom scipy.spatial.distance import pdist, cdist, squareform # pairwise distance operators\nfrom scipy import sparse as sp\n#css_styling() # defines the style and colour scheme for the markdown and code cells.\n", "Define some characteristics of the dataset for our example\nWe will generate a dataset meant to emulate the kinds of intel available in pan-European drug seizure analysis. <br>\nThe variables below will define our dataset for this example, including the number of seizure batches, the number of samples per seizure batch and the relative variability observed in the samples. Some noise parameters are introduced as well and we assume a particular number of marker compounds is measured between seizures.<br>\nRemember! 
This is only example data to illustrate the technology; you can come back here and change the nature of the example data at any time", "NUMBER_OF_CLASSES = 20 \n# number of different classes \nSAMPLES_PER_CLASS = 400 \n# we assume balanced classes\nNOISINESS_INTRA_CLASS = 0.25 \n#expresses the spread of the classes (between 0.0 and 1.0 gives workable data)\nNOISINESS_INTER_CLASS = 2.5 \n# expresses the spaces in between the classes (between 5 and 10 times the value of NOISINESS_INTRA_CLASS is nice)\nDIMENSIONALITY = 10 \n# how many features are measured for this multivariate data\nFAKE_DIMENSIONS = 2 # these features are drawn from a different (non class specific) distribution so as to have no actual class discriminating power.\n\nif NUMBER_OF_CLASSES * SAMPLES_PER_CLASS > 10000:\n print('Too many samples requested, please limit simulated data to 10000 samples to avoid slowing down the server kernel... Using default values')\n print(NUMBER_OF_CLASSES)", "Generate a sample data set\nThe following code cell will generate a new simulated data set according to the parameters defined in the previous code cell", "##simulate interesting multiclass data \nmyData = np.empty((0,DIMENSIONALITY), int)\nnonInformativeFeats = np.array(random.sample(range(DIMENSIONALITY), FAKE_DIMENSIONS))\n\nlabels = np.repeat(range(NUMBER_OF_CLASSES), SAMPLES_PER_CLASS) # integer class labels\n#print labels\n\nfor x in range(0, NUMBER_OF_CLASSES):\n A = np.random.rand(DIMENSIONALITY,DIMENSIONALITY)\n cov = np.dot(A,A.transpose())*NOISINESS_INTRA_CLASS # ensure a positive semi-definite covariance matrix, but random relation between variables\n mean = np.random.uniform(-NOISINESS_INTER_CLASS, NOISINESS_INTER_CLASS, DIMENSIONALITY) # random n-dimensional mean in a space we can plot easily\n #print('random multivariate distribution mean for today is', mean)\n #print('random positive semi-definite matrix for today is', cov)\n\n x = np.random.multivariate_normal(mean,cov,SAMPLES_PER_CLASS)\n myData = 
np.append(myData, np.array(x), axis=0)\n # append this class's samples to the dataset\n\nx = np.random.multivariate_normal(np.zeros(FAKE_DIMENSIONS),mlab.identity(FAKE_DIMENSIONS),SAMPLES_PER_CLASS*NUMBER_OF_CLASSES)\nmyData[:, nonInformativeFeats.astype(int)] = x\n# substitute the noninformative dimensions with samples drawn from the same boring distribution\n \n#print myData ", "Examine the generated data\nThe following code cell will display a 2-dimensional projection of your simulated illicit drug seizure data set.<br>\nThe colours indicate similar source seizures. Remember, this is a projection of the higher-dimensional data for plotting purposes only.", "## plotting our dataset to see if it is what we expect\nnames = matplotlib.colors.cnames #colours for plotting\nnames_temp = names\n\ncol_means = myData.mean(axis=0,keepdims=True)\nmyData = myData - col_means # mean center the data before PCA\n\ncol_stds = myData.std(axis=0,keepdims=True)\nmyData = myData / col_stds # unit variance scaling \n\nresults = mlabPCA(myData)# PCA results into an ND array: scores, loadings\n%matplotlib inline\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.axis('equal');\nfor i in range(NUMBER_OF_CLASSES):\n plt.plot(results.Y[labels==i,0],results.Y[labels==i,1], 'o', markersize=7, color=names_temp.popitem()[1], alpha=0.5)\n# plot the classes after PCA just for a rough idea of their overlap.\n\nplt.xlabel('x_values')\nplt.ylabel('y_values')\nplt.xlim([-4,4])\nplt.ylim([-4,4])\nplt.title('Transformed samples with class labels from matplotlib.mlab.PCA()')\n \nplt.show()", "Defining a similarity model to generate score-based likelihood ratios\nHere we must define three parameters that will affect the shape of our graph.<br>\n* The distance metric will convert the high dimensional data into a set of univariate pair-wise scores between samples.\n* The distribution that will be used to model the scores in one dimension must be chosen here.\n* The holdout size determines the set of samples from our generated 
data set that will be removed before modelling the distributions.\nThe holdout samples will be used later to draw graphs. We can verify if useful intel is being provided based on the original seizure identities of the holdout samples. The holdout set is a random sample from the full data generated.", "DISTANCE_METRIC = 'canberra'\n# this can be any of: euclidean, minkowski, cityblock, seuclidean, sqeuclidean, cosine, correlation\n# hamming, jaccard, chebyshev, canberra, braycurtis, mahalanobis, yule\nDISTRIBUTION = 'normal'\nHOLDOUT_SIZE = 0.01\n\n# optional feature selection/masking for different questions\nsz = myData.shape\nRELEVANT_FEATURES = range(0,sz[1])\n\n#### Later feature selection occurs here ####\n#RELEVANT_FEATURES = [2,5,7] ####\n#### Selected features specifically relevant to precursor chemical composition\n#####################################\nmyData = myData[:,RELEVANT_FEATURES]", "Dividing the data\nThe following code cell executes the data division as defined above and reports on the size of your reference collection, holdout set, and the dimensionality of the data. 
Additionally a graphical sample of the data is displayed with variable magnitudes mapped to a cold->hot colour scale.", "# divide the dataset\nidx = np.random.choice(np.arange(NUMBER_OF_CLASSES*SAMPLES_PER_CLASS), int(NUMBER_OF_CLASSES*SAMPLES_PER_CLASS*HOLDOUT_SIZE), replace=False)\n# holdout set (a HOLDOUT_SIZE fraction) removed to demonstrate the LR values of new samples!\n\ntest_samples = myData[idx,:]\ntest_labels = labels[idx]\n\ntrain_samples = np.delete(myData, idx, 0)\ntrain_labels = np.delete(labels, idx)\n\nprint(train_samples.shape, 'is the size of the data used to model same-source and different-source distributions')\nprint(test_samples.shape, 'are the points we will evaluate to see LRs achieved')\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\naxes[0].imshow(train_samples[np.random.randint(DIMENSIONALITY,size=DIMENSIONALITY*2),:])\naxes[1].imshow(test_samples[np.random.randint(test_samples.shape[0],size=test_samples.shape[0]),:])\nplt.show()", "Modeling the general sense of similarity\nThe following code cell does a lot of the work necessary to get from multivariate data to univariate score-based likelihood ratios.\nIf you attended the lecture from Jacob about score-based LRs then this should be very familiar!\n\nFirst, pairwise comparisons are made where source identity is known (because the data was generated with a ground truth)\nThe comparisons are accumulated into groups based on the source identity: same or different batch\nThe accumulated score distributions are modeled using a probability density function\nThe parameters of those distributions and a graphical representation are displayed", "#Pairwise distance calculations are going in here\nsame_dists = np.empty((0,1))\ndiff_dists = np.empty((0,1))\n\nfor labInstance in np.unique(train_labels):\n dists = pdist(train_samples[train_labels==labInstance,:],DISTANCE_METRIC)\n # this is already the condensed-form (lower triangle) with no duplicate comparisons.\n same_dists = np.append(same_dists, np.array(dists))\n del dists\n \n 
dists = cdist(train_samples[train_labels==labInstance,:], train_samples[train_labels!=labInstance,:], DISTANCE_METRIC)\n #print dists.shape\n diff_dists = np.append(diff_dists, np.array(dists).flatten())\n#print same_dists.shape\n#print diff_dists.shape\n\nminval = min(np.min(diff_dists),np.min(same_dists))\nmaxval = max(np.max(diff_dists),np.max(same_dists))\n\n# plot the histograms to see the difference in distributions\n# Same source data \nmu_s, std_s = norm.fit(same_dists) # fit the intensities with a normal\nplt.hist(same_dists, np.arange(minval, maxval, abs(minval-maxval)/100), normed=1, facecolor='green', alpha=0.65)\ny_same = mlab.normpdf(np.arange(minval, maxval, abs(minval-maxval)/100), mu_s, std_s) # estimate the pdf over the plot range\nl=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_same, 'g--', linewidth=1)\n\n# Different source data\nmu_d, std_d = norm.fit(diff_dists) # fit the intensities with a normal\nplt.hist(diff_dists, np.arange(minval, maxval, abs(minval-maxval)/100), normed=1, facecolor='blue', alpha=0.65)\ny_diff = mlab.normpdf(np.arange(minval, maxval, abs(minval-maxval)/100), mu_d, std_d) # estimate the pdf over the plot range\nl=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_diff, 'b--', linewidth=1)\n\nprint('same source comparisons made: ', same_dists.shape[0])\nprint('diff source comparisons made: ', diff_dists.shape[0])", "Relating a score to a likelihood ratio\nThe following code cell compares the new seizures (remember we separated them into the holdout set early on) against the distributions that we modelled from our forensic reference collection to determine how rare a particular score between two illicit drug profiles is in the context of our cumulative knowledge of the market as defined by our seizure collection.\n* The distribution relating to pair-wise similarities between same source samples is plotted in green\n* The distribution relating to pair-wise 
similarities between different source samples is plotted in blue\n* The new recovered samples are compared to one another and plotted as dots along the top of the figure", "print(' mu same: ', mu_s, ' std same: ', std_s)\nprint(' mu diff: ', mu_d, ' std diff: ', std_d)\n\nnewDists = squareform(pdist((test_samples),DISTANCE_METRIC)) # new samples (unknown group membership)\nl=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_diff, 'b-', linewidth=1)\nl=plt.plot(np.arange(minval, maxval, abs(minval-maxval)/100), y_same, 'g-', linewidth=1)\n\nl=plt.scatter(squareform(newDists), np.ones(squareform(newDists).shape[0], dtype=np.int)*max(y_same))\n# plot the new distances compared to the distributions\n\nlr_holder = [];\n\nfor element in squareform(newDists):\n value = mlab.normpdf(element, mu_s, std_s)/mlab.normpdf(element, mu_d, std_d)\n lr_holder.append(value)\n #print value\n#lr_holder = np.true_divide(lr_holder,1) #inversion because now it will be used as a spring for network plotting\nnewDists[newDists==0] = 0.000001 # avoid divide by zero\nnewDists = np.true_divide(newDists,newDists.max()) \n", "Set thresholds for edges in your undirected graph\nIn the following code cell we must set the thresholds for drawing edges between seizures in a graph based on their similarity and the likelihood ratio of observing that similarity given they originate from the same source. The output figure at the bottom of the previous code cell can help you decide on a realistic threshold. 
<br>\nThe same points will be used for the graph based on similarity scores as for the graph based on likelihood ratios so that they can be compared", "# SET A THRESHOLD FOR EDGES:\nEDGE_THRESHOLD_DISTANCE = 0.75\nEDGE_THRESHOLD_LR = 1.0", "Generate a graph using the similarity between samples as a linkage function\nThe following code block examines the pairwise similarity between holdout samples and then compares that similarity to the threshold you defined.\nThe weight of the edges is determined by the magnitude of the similarity score (normalized to a range of 0.0-1.0).\n* Closer nodes are more similar to one another. \n* Edges not meeting the threshold criteria are removed.\n* Any unconnected nodes are removed.", "# Plot just the distance based graph\nG = nx.Graph(); # empty graph\nI,J,V = sp.find(newDists) # index pairs associated with distance\nG.add_weighted_edges_from(np.column_stack((I,J,V))) # distance as edge weight\n\n#pos=nx.spectral_layout(G) # automate layout in 2-d space\n#nx.draw(G, pos, node_size=200, edge_color='k', with_labels=True, linewidths=1,prog='neato') # draw\n#print G.number_of_edges()\nedge_ind_collector = []\n\nfor e in G.edges_iter(data=True):\n if G[e[0]][e[1]]['weight'] > EDGE_THRESHOLD_DISTANCE:\n edge_ind_collector.append(e) # remove edges that indicate weak linkage\nG.remove_edges_from(edge_ind_collector)\n\npos=nx.spring_layout(G) # automate layout in 2-d space\nG = nx.convert_node_labels_to_integers(G)\nnx.draw(G, pos, node_size=200, edge_color='k', with_labels=True, linewidths=1,prog='neato') # draw\nprint('node indices:', G.nodes())\nprint(' seizure ids:', list(test_labels))", "Generate a graph using the likelihood ratio as a linkage function\nThe following code block examines the likelihood ratio of observing a particular score between pairs of the holdout samples. The scores are then compared against the similarities modeled using our forensic reference samples to determine how rare the observation of such a score is. 
The weight of the edges is determined by the magnitude of the likelihood ratio of observing the score given the following competing hypotheses:\n* Hp: The samples originate from a common source\n* Hd: The samples originate from a different and arbitrary source\nAs with the graph based on similarity scores alone:\n* Edges not meeting the threshold criteria are removed.", "# plot the likelihood based graph\n#print lr_holder\nG2 = nx.Graph(); # empty graph\nI,J,V = sp.find(squareform(lr_holder)) # index pairs associated with distance\nG2.add_weighted_edges_from(np.column_stack((I,J,V))) # LR value as edge weight\n\nedge_ind_collector = []\n\nfor f in G2.edges_iter(data=True):\n if G2[f[0]][f[1]]['weight'] < EDGE_THRESHOLD_LR:\n #print(f)\n edge_ind_collector.append(f) # remove edges that indicate weak linkage\nG2.remove_edges_from(edge_ind_collector)\n#print G.number_of_edges()\n\npos=nx.spring_layout(G2) # automate layout in 2-d space\nG2 = nx.convert_node_labels_to_integers(G2)\nnx.draw(G2, pos, node_size=200, edge_color='k', with_labels=True, linewidths=1,prog='neato')\nprint('node indices:', G2.nodes())\nprint(' seizure ids:', list(test_labels))", "<center>\nYou can now go back to any of the code cells and experiment with changing parameters! After each change, scroll back down to the graphs to see the results.\nEach time you make a change click on \"Cell > Run All\" in the menu\nFeature selection for hypotheses investigation\nParticular variables may be known to relate to characteristics that are of particular investigative interest. For example some chemical characteristics quantified in illicit drug profiling may relate to residual composition of specific precursor compounds.\nWe can articulate a new source identity hypothesis in terms of the features we allow to be used in our model. 
For example the variables selected thus far are defined in an earlier cell.\n<br/>\n\nscroll back up to cell 9, which should contain the following text\n<code>\n#### Later feature selection occurs here ####\n# RELEVANT_FEATURES = [2,5,7] ####\n#### Selected features specifically relevant to precursor chemical composition\n#####################################\n</code>\n<br>\nNow make the following change:\n__remove this # symbol -----> <code># RELEVANT_FEATURES = [2,5,7] ####</code>\n\nIntel from graph analytics\nInvestigation of the characteristics of the graph using similar source, score-based, non-anchored LRs as a linkage function can yield interesting discoveries pertaining to particular nodes. In this case nodes may represent pieces of evidence, or case dossiers where multiple types of information are considered as features in a multivariate model.\n<br/>\nIn our example the nodes represent drug seizures and their features are impurity compounds resulting from chemical fingerprinting.\n<br/>\nSome interesting characteristics of the graph might be:\n\nThe most connected element (perhaps indicating a high similarity to the largest number of other seizures)\nThe general degree of connectivity of the graph (perhaps an indication of the interconnectivity of seizures in the market)\nBipartite projections of the graph (indicating the number of elements/element groupings likely to be independent of one another)\nConnectivity between pairs of nodes in the graph based on different edge thresholds; now that edges are directly related to likelihoods based on our forensic reference collection, more or less likely connections have a very intuitive meaning in the graph space.\n<br>\nThere are many more graph theory algorithms that can be deployed to analyze a large group of forensic traces; see:\n<code>\nhttps://networkx.github.io/documentation/latest/reference/algorithms.html\n</code>\n\nThank you for your attendance and your attention!" ]
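The graph characteristics listed in the closing cell can be computed directly. A minimal pure-Python sketch over a hypothetical edge list (in the notebook itself the networkx equivalents would be `G.degree` and `networkx.number_connected_components`):

```python
from collections import Counter

# Hypothetical LR-linkage edges between seizure nodes 0..5
edges = [(0, 1), (0, 2), (0, 3), (2, 3), (4, 5)]

# Degree of each node: the "most connected element" of the graph
degree = Counter()
adjacency = {}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
    adjacency.setdefault(u, set()).add(v)
    adjacency.setdefault(v, set()).add(u)
most_connected = degree.most_common(1)[0][0]

# Connected components via depth-first traversal: independent seizure groupings
seen, components = set(), 0
for node in adjacency:
    if node not in seen:
        components += 1
        stack = [node]
        while stack:
            current = stack.pop()
            if current not in seen:
                seen.add(current)
                stack.extend(adjacency[current] - seen)

print(most_connected, components)  # 0 2
```

Node 0 touches three edges, so it stands in for the seizure most similar to the largest number of others, while the two components correspond to seizure groupings likely to be independent of one another.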
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
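The score-to-LR step used in the notebook above reduces to evaluating two fitted normal densities at the observed score. A minimal sketch (the parameter values are illustrative, not fitted to real data):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    # Density of a normal distribution with mean mu and std sd, evaluated at x
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def score_lr(score, mu_same, sd_same, mu_diff, sd_diff):
    # LR = p(score | same source) / p(score | different source)
    return normal_pdf(score, mu_same, sd_same) / normal_pdf(score, mu_diff, sd_diff)

# Illustrative fits: same-source distances cluster near 1.0, different-source near 5.0
lr_low_distance = score_lr(1.5, 1.0, 0.5, 5.0, 1.5)
lr_high_distance = score_lr(5.0, 1.0, 0.5, 5.0, 1.5)
print(lr_low_distance > 1, lr_high_distance < 1)  # True True
```

An LR above 1 supports the same-source hypothesis and an LR below 1 the different-source hypothesis; an `EDGE_THRESHOLD_LR` of 1.0 (as in the notebook) draws edges exactly on this boundary.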
mne-tools/mne-tools.github.io
0.20/_downloads/3ca4dabff6bcea5b8da3afc9052669d2/plot_45_projectors_background.ipynb
bsd-3-clause
[ "%matplotlib inline", "Background on projectors and projections\nThis tutorial provides background information on projectors and Signal Space\nProjection (SSP), and covers loading and saving projectors, adding and removing\nprojectors from Raw objects, the difference between \"applied\" and \"unapplied\"\nprojectors, and at what stages MNE-Python applies projectors automatically.\n :depth: 2\nWe'll start by importing the Python modules we need; we'll also define a short\nfunction to make it easier to make several plots that look similar:", "import os\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D # noqa\nfrom scipy.linalg import svd\nimport mne\n\n\ndef setup_3d_axes():\n ax = plt.axes(projection='3d')\n ax.view_init(azim=-105, elev=20)\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_zlabel('z')\n ax.set_xlim(-1, 5)\n ax.set_ylim(-1, 5)\n ax.set_zlim(0, 5)\n return ax", "What is a projection?\nIn the most basic terms, a projection is an operation that converts one set\nof points into another set of points, where repeating the projection\noperation on the resulting points has no effect. To give a simple geometric\nexample, imagine the point $(3, 2, 5)$ in 3-dimensional space. 
A\nprojection of that point onto the $x, y$ plane looks a lot like a\nshadow cast by that point if the sun were directly above it:", "ax = setup_3d_axes()\n\n# plot the vector (3, 2, 5)\norigin = np.zeros((3, 1))\npoint = np.array([[3, 2, 5]]).T\nvector = np.hstack([origin, point])\nax.plot(*vector, color='k')\nax.plot(*point, color='k', marker='o')\n\n# project the vector onto the x,y plane and plot it\nxy_projection_matrix = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])\nprojected_point = xy_projection_matrix @ point\nprojected_vector = xy_projection_matrix @ vector\nax.plot(*projected_vector, color='C0')\nax.plot(*projected_point, color='C0', marker='o')\n\n# add dashed arrow showing projection\narrow_coords = np.concatenate([point, projected_point - point]).flatten()\nax.quiver3D(*arrow_coords, length=0.96, arrow_length_ratio=0.1, color='C1',\n linewidth=1, linestyle='dashed')", "<div class=\"alert alert-info\"><h4>Note</h4><p>The ``@`` symbol indicates matrix multiplication on NumPy arrays, and was\n introduced in Python 3.5 / NumPy 1.10. The notation ``plot(*point)`` uses\n Python `argument expansion`_ to \"unpack\" the elements of ``point`` into\n separate positional arguments to the function. 
In other words,\n ``plot(*point)`` expands to ``plot(3, 2, 5)``.</p></div>\n\nNotice that we used matrix multiplication to compute the projection of our\npoint $(3, 2, 5)$onto the $x, y$ plane:\n\\begin{align}\\left[\n \\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\end{matrix}\n \\right]\n \\left[ \\begin{matrix} 3 \\ 2 \\ 5 \\end{matrix} \\right] =\n \\left[ \\begin{matrix} 3 \\ 2 \\ 0 \\end{matrix} \\right]\\end{align}\n...and that applying the projection again to the result just gives back the\nresult again:\n\\begin{align}\\left[\n \\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\end{matrix}\n \\right]\n \\left[ \\begin{matrix} 3 \\ 2 \\ 0 \\end{matrix} \\right] =\n \\left[ \\begin{matrix} 3 \\ 2 \\ 0 \\end{matrix} \\right]\\end{align}\nFrom an information perspective, this projection has taken the point\n$x, y, z$ and removed the information about how far in the $z$\ndirection our point was located; all we know now is its position in the\n$x, y$ plane. Moreover, applying our projection matrix to any point\nin $x, y, z$ space will reduce it to a corresponding point on the\n$x, y$ plane. The term for this is a subspace: the projection matrix\nprojects points in the original space into a subspace of lower dimension\nthan the original. The reason our subspace is the $x,y$ plane (instead\nof, say, the $y,z$ plane) is a direct result of the particular values\nin our projection matrix.\nExample: projection as noise reduction\nAnother way to describe this \"loss of information\" or \"projection into a\nsubspace\" is to say that projection reduces the rank (or \"degrees of\nfreedom\") of the measurement — here, from 3 dimensions down to 2. 
On the\nother hand, if you know that measurement component in the $z$ direction\nis just noise due to your measurement method, and all you care about are the\n$x$ and $y$ components, then projecting your 3-dimensional\nmeasurement into the $x, y$ plane could be seen as a form of noise\nreduction.\nOf course, it would be very lucky indeed if all the measurement noise were\nconcentrated in the $z$ direction; you could just discard the $z$\ncomponent without bothering to construct a projection matrix or do the matrix\nmultiplication. Suppose instead that in order to take that measurement you\nhad to pull a trigger on a measurement device, and the act of pulling the\ntrigger causes the device to move a little. If you measure how\ntrigger-pulling affects measurement device position, you could then \"correct\"\nyour real measurements to \"project out\" the effect of the trigger pulling.\nHere we'll suppose that the average effect of the trigger is to move the\nmeasurement device by $(3, -1, 1)$:", "trigger_effect = np.array([[3, -1, 1]]).T", "Knowing that, we can compute a plane that is orthogonal to the effect of the\ntrigger (using the fact that a plane through the origin has equation\n$Ax + By + Cz = 0$ given a normal vector $(A, B, C)$), and\nproject our real measurements onto that plane.", "# compute the plane orthogonal to trigger_effect\nx, y = np.meshgrid(np.linspace(-1, 5, 61), np.linspace(-1, 5, 61))\nA, B, C = trigger_effect\nz = (-A * x - B * y) / C\n# cut off the plane below z=0 (just to make the plot nicer)\nmask = np.where(z >= 0)\nx = x[mask]\ny = y[mask]\nz = z[mask]", "Computing the projection matrix from the trigger_effect vector is done\nusing singular value decomposition &lt;svd_&gt;_ (SVD); interested readers may\nconsult the internet or a linear algebra textbook for details on this method.\nWith the projection matrix in place, we can project our original vector\n$(3, 2, 5)$ to remove the effect of the trigger, and then plot it:", "# compute the 
projection matrix\nU, S, V = svd(trigger_effect, full_matrices=False)\ntrigger_projection_matrix = np.eye(3) - U @ U.T\n\n# project the vector onto the orthogonal plane\nprojected_point = trigger_projection_matrix @ point\nprojected_vector = trigger_projection_matrix @ vector\n\n# plot the trigger effect and its orthogonal plane\nax = setup_3d_axes()\nax.plot_trisurf(x, y, z, color='C2', shade=False, alpha=0.25)\nax.quiver3D(*np.concatenate([origin, trigger_effect]).flatten(),\n arrow_length_ratio=0.1, color='C2', alpha=0.5)\n\n# plot the original vector\nax.plot(*vector, color='k')\nax.plot(*point, color='k', marker='o')\noffset = np.full((3, 1), 0.1)\nax.text(*(point + offset).flat, '({}, {}, {})'.format(*point.flat), color='k')\n\n# plot the projected vector\nax.plot(*projected_vector, color='C0')\nax.plot(*projected_point, color='C0', marker='o')\noffset = np.full((3, 1), -0.2)\nax.text(*(projected_point + offset).flat,\n '({}, {}, {})'.format(*np.round(projected_point.flat, 2)),\n color='C0', horizontalalignment='right')\n\n# add dashed arrow showing projection\narrow_coords = np.concatenate([point, projected_point - point]).flatten()\nax.quiver3D(*arrow_coords, length=0.96, arrow_length_ratio=0.1,\n color='C1', linewidth=1, linestyle='dashed')", "Just as before, the projection matrix will map any point in $x, y, z$\nspace onto that plane, and once a point has been projected onto that plane,\napplying the projection again will have no effect. For that reason, it should\nbe clear that although the projected points vary in all three $x$,\n$y$, and $z$ directions, the set of projected points have only\ntwo effective dimensions (i.e., they are constrained to a plane).\n.. 
sidebar:: Terminology\nIn MNE-Python, the matrix used to project a raw signal into a subspace is\nusually called a :term:`projector &lt;projector&gt;` or a *projection\noperator* — these terms are interchangeable with the term *projection\nmatrix* used above.\n\nProjections of EEG or MEG signals work in very much the same way: the point\n$x, y, z$ corresponds to the value of each sensor at a single time\npoint, and the projection matrix varies depending on what aspects of the\nsignal (i.e., what kind of noise) you are trying to project out. The only\nreal difference is that instead of a single 3-dimensional point $(x, y,\nz)$ you're dealing with a time series of $N$-dimensional \"points\" (one\nat each sampling time), where $N$ is usually in the tens or hundreds\n(depending on how many sensors your EEG/MEG system has). Fortunately, because\nprojection is a matrix operation, it can be done very quickly even on signals\nwith hundreds of dimensions and tens of thousands of time points.\nSignal-space projection (SSP)\nWe mentioned above that the projection matrix will vary depending on what\nkind of noise you are trying to project away. Signal-space projection (SSP)\n:footcite:UusitaloIlmoniemi1997 is a way of estimating what that projection\nmatrix should be, by\ncomparing measurements with and without the signal of interest. For example,\nyou can take additional \"empty room\" measurements that record activity at the\nsensors when no subject is present. By looking at the spatial pattern of\nactivity across MEG sensors in an empty room measurement, you can create one\nor more $N$-dimensional vector(s) giving the \"direction(s)\" of\nenvironmental noise in sensor space (analogous to the vector for \"effect of\nthe trigger\" in our example above). 
SSP is also often used for removing\nheartbeat and eye movement artifacts — in those cases, instead of empty room\nrecordings the direction of the noise is estimated by detecting the\nartifacts, extracting epochs around them, and averaging. See\ntut-artifact-ssp for examples.\nOnce you know the noise vectors, you can create a hyperplane that is\northogonal\nto them, and construct a projection matrix to project your experimental\nrecordings onto that hyperplane. In that way, the component of your\nmeasurements associated with environmental noise can be removed. Again, it\nshould be clear that the projection reduces the dimensionality of your data —\nyou'll still have the same number of sensor signals, but they won't all be\nlinearly independent — but typically there are tens or hundreds of sensors\nand the noise subspace that you are eliminating has only 3-5 dimensions, so\nthe loss of degrees of freedom is usually not problematic.\nProjectors in MNE-Python\nIn our example data, SSP &lt;ssp-tutorial&gt; has already been performed\nusing empty room recordings, but the :term:projectors &lt;projector&gt; are\nstored alongside the raw data and have not been applied yet (or,\nsynonymously, the projectors are not active yet). Here we'll load\nthe sample data &lt;sample-dataset&gt; and crop it to 60 seconds; you can\nsee the projectors in the output of :func:~mne.io.read_raw_fif below:", "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)\nraw.crop(tmax=60).load_data()", "In MNE-Python, the environmental noise vectors are computed using principal\ncomponent analysis &lt;pca_&gt;, usually abbreviated \"PCA\", which is why the SSP\nprojectors usually have names like \"PCA-v1\". 
(Incidentally, since the process\nof performing PCA uses singular value decomposition &lt;svd_&gt; under the hood,\nit is also common to see phrases like \"projectors were computed using SVD\" in\npublished papers.) The projectors are stored in the projs field of\nraw.info:", "print(raw.info['projs'])", "raw.info['projs'] is an ordinary Python :class:list of\n:class:~mne.Projection objects, so you can access individual projectors by\nindexing into it. The :class:~mne.Projection object itself is similar to a\nPython :class:dict, so you can use its .keys() method to see what\nfields it contains (normally you don't need to access its properties\ndirectly, but you can if necessary):", "first_projector = raw.info['projs'][0]\nprint(first_projector)\nprint(first_projector.keys())", "The :class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked\nobjects all have a boolean :attr:~mne.io.Raw.proj attribute that indicates\nwhether there are any unapplied / inactive projectors stored in the object.\nIn other words, the :attr:~mne.io.Raw.proj attribute is True if at\nleast one :term:projector is present and all of them are active. In\naddition, each individual projector also has a boolean active field:", "print(raw.proj)\nprint(first_projector['active'])", "Computing projectors\nIn MNE-Python, SSP vectors can be computed using general purpose functions\n:func:mne.compute_proj_raw, :func:mne.compute_proj_epochs, and\n:func:mne.compute_proj_evoked. The general assumption these functions make\nis that the data passed contains raw data, epochs or averages of the artifact\nyou want to repair via projection. In practice this typically involves\ncontinuous raw data of empty room recordings or averaged ECG or EOG\nartifacts. A second set of high-level convenience functions is provided to\ncompute projection vectors for typical use cases. 
This includes\n:func:mne.preprocessing.compute_proj_ecg and\n:func:mne.preprocessing.compute_proj_eog for computing the ECG and EOG\nrelated artifact components, respectively; see tut-artifact-ssp for\nexamples of these uses. For computing the EEG reference signal as a\nprojector, the function :func:mne.set_eeg_reference can be used; see\ntut-set-eeg-ref for more information.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>It is best to compute projectors only on channels that will be\n used (e.g., excluding bad channels). This ensures that\n projection vectors will remain ortho-normalized and that they\n properly capture the activity of interest.</p></div>\n\nVisualizing the effect of projectors\nYou can see the effect the projectors are having on the measured signal by\ncomparing plots with and without the projectors applied. By default,\nraw.plot() will apply the projectors in the background before plotting\n(without modifying the :class:~mne.io.Raw object); you can control this\nwith the boolean proj parameter as shown below, or you can turn them on\nand off interactively with the projectors interface, accessed via the\n:kbd:Proj button in the lower right corner of the plot window. Here we'll\nlook at just the magnetometers, and a 2-second sample from the beginning of\nthe file.", "mags = raw.copy().crop(tmax=2).pick_types(meg='mag')\nfor proj in (False, True):\n fig = mags.plot(butterfly=True, proj=proj)\n fig.subplots_adjust(top=0.9)\n fig.suptitle('proj={}'.format(proj), size='xx-large', weight='bold')", "Additional ways of visualizing projectors are covered in the tutorial\ntut-artifact-ssp.\nLoading and saving projectors\nSSP can be used for other types of signal cleaning besides just reduction of\nenvironmental noise. You probably noticed two large deflections in the\nmagnetometer signals in the previous plot that were not removed by the\nempty-room projectors — those are artifacts of the subject's heartbeat. 
SSP\ncan be used to remove those artifacts as well. The sample data includes\nprojectors for heartbeat noise reduction that were saved in a separate file\nfrom the raw data, which can be loaded with the :func:mne.read_proj\nfunction:", "ecg_proj_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_ecg-proj.fif')\necg_projs = mne.read_proj(ecg_proj_file)\nprint(ecg_projs)", "There is a corresponding :func:mne.write_proj function that can be used to\nsave projectors to disk in .fif format:\n.. code-block:: python3\nmne.write_proj('heartbeat-proj.fif', ecg_projs)\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>By convention, MNE-Python expects projectors to be saved with a filename\n ending in ``-proj.fif`` (or ``-proj.fif.gz``), and will issue a warning\n if you forgo this recommendation.</p></div>\n\nAdding and removing projectors\nAbove, when we printed the ecg_projs list that we loaded from a file, it\nshowed two projectors for gradiometers (the first two, marked \"planar\"), two\nfor magnetometers (the middle two, marked \"axial\"), and two for EEG sensors\n(the last two, marked \"eeg\"). We can add them to the :class:~mne.io.Raw\nobject using the :meth:~mne.io.Raw.add_proj method:", "raw.add_proj(ecg_projs)", "To remove projectors, there is a corresponding method\n:meth:~mne.io.Raw.del_proj that will remove projectors based on their index\nwithin the raw.info['projs'] list. 
For the special case of replacing the\nexisting projectors with new ones, use\nraw.add_proj(ecg_projs, remove_existing=True).\nTo see how the ECG projectors affect the measured signal, we can once again\nplot the data with and without the projectors applied (though remember that\nthe :meth:~mne.io.Raw.plot method only temporarily applies the projectors\nfor visualization, and does not permanently change the underlying data).\nWe'll compare the mags variable we created above, which had only the\nempty room SSP projectors, to the data with both empty room and ECG\nprojectors:", "mags_ecg = raw.copy().crop(tmax=2).pick_types(meg='mag')\nfor data, title in zip([mags, mags_ecg], ['Without', 'With']):\n fig = data.plot(butterfly=True, proj=True)\n fig.subplots_adjust(top=0.9)\n fig.suptitle('{} ECG projector'.format(title), size='xx-large',\n weight='bold')", "When are projectors \"applied\"?\nBy default, projectors are applied when creating :class:epoched\n&lt;mne.Epochs&gt; data from :class:~mne.io.Raw data, though application of the\nprojectors can be delayed by passing proj=False to the\n:class:~mne.Epochs constructor. However, even when projectors have not been\napplied, the :meth:mne.Epochs.get_data method will return data as if the\nprojectors had been applied (though the :class:~mne.Epochs object will be\nunchanged). Additionally, projectors cannot be applied if the data are not\npreloaded &lt;memory&gt;. If the data are memory-mapped_ (i.e., not\npreloaded), you can check the _projector attribute to see whether any\nprojectors will be applied once the data is loaded in memory.\nFinally, when performing inverse imaging (i.e., with\n:func:mne.minimum_norm.apply_inverse), the projectors will be\nautomatically applied. It is also possible to apply projectors manually when\nworking with :class:~mne.io.Raw, :class:~mne.Epochs or\n:class:~mne.Evoked objects via the object's :meth:~mne.io.Raw.apply_proj\nmethod. 
For all instance types, you can always copy the contents of\n:samp:{&lt;instance&gt;}.info['projs'] into a separate :class:list variable,\nuse :samp:{&lt;instance&gt;}.del_proj({&lt;index of proj(s) to remove&gt;}) to remove\none or more projectors, and then add them back later with\n:samp:{&lt;instance&gt;}.add_proj({&lt;list containing projs&gt;}) if desired.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>Remember that once a projector is applied, it can't be un-applied, so\n during interactive / exploratory analysis it's a good idea to use the\n object's :meth:`~mne.io.Raw.copy` method before applying projectors.</p></div>\n\nBest practices\nIn general, it is recommended to apply projectors when creating\n:class:~mne.Epochs from :class:~mne.io.Raw data. There are two reasons\nfor this recommendation:\n\n\nIt is computationally cheaper to apply projectors to data after the\n data have been reduced to just the segments of interest (the epochs)\n\n\nIf you are applying amplitude-based rejection criteria to epochs, it is\n preferable to reject based on the signal after projectors have been\n applied, because the projectors may reduce noise in some epochs to\n tolerable levels (thereby increasing the number of acceptable epochs and\n consequently increasing statistical power in any later analyses).\n\n\nReferences\n.. footbibliography::\n.. LINKS\nhttps://docs.python.org/3/tutorial/controlflow.html#tut-unpacking-arguments" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
geilerloui/deep-learning
embeddings/Skip-Gram_word2vec.ipynb
mit
[ "Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. 
This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.", "import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils", "Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. 
Then you can extract it and delete the archive file to save storage space.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n    last_block = 0\n\n    def hook(self, block_num=1, block_size=1, total_size=None):\n        self.total = total_size\n        self.update((block_num - self.last_block) * block_size)\n        self.last_block = block_num\n\nif not isfile(dataset_filename):\n    with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n        urlretrieve(\n            'http://mattmahoney.net/dc/text8.zip',\n            dataset_filename,\n            pbar.hook)\n\nif not isdir(dataset_folder_path):\n    with zipfile.ZipFile(dataset_filename) as zip_ref:\n        zip_ref.extractall(dataset_folder_path)\n\nwith open('data/text8') as f:\n    text = f.read()", "Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.", "words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))", "And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. 
The words are converted to integers and stored in the list int_words.", "vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]", "Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.", "from collections import Counter\nimport random\n\n\n## Your code here\n\n# f(w_i): frequency of word w_i in the total dataset.\n# t: threshold parameter\n\nthreshold = 1e-5\nwords_counts = Counter(int_words)\ntotal_count = len(int_words)\nfreqs = {word: count/total_count for word, count in words_counts.items()}\np_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in words_counts}\ntrain_words = [word for word in int_words if random.random() < (1 - p_drop[word])]\n", "Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. 
\nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.", "def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n R = np.random.randint(1, window_size+1)\n start = idx - R if (idx - R) > 0 else 0\n stop = idx + R\n target_words = set(words[start:idx] + words[idx+1:stop+1])\n \n return list(target_words)", "Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. 
This is a generator function, by the way; it helps save memory.", "def get_batches(words, batch_size, window_size=5):\n    ''' Create a generator of word batches as a tuple (inputs, targets) '''\n    \n    n_batches = len(words)//batch_size\n    \n    # only full batches\n    words = words[:n_batches*batch_size]\n    \n    for idx in range(0, len(words), batch_size):\n        x, y = [], []\n        batch = words[idx:idx+batch_size]\n        for ii in range(len(batch)):\n            batch_x = batch[ii]\n            batch_y = get_target(batch, ii, window_size)\n            y.extend(batch_y)\n            x.extend([batch_x]*len(batch_y))\n        yield x, y\n    ", "Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.", "train_graph = tf.Graph()\nwith train_graph.as_default():\n    inputs = tf.placeholder(tf.int32, shape=[None])\n    labels = tf.placeholder(tf.int32, shape=[None, None])", "Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. 
Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.", "n_vocab = len(int_to_vocab)\nn_embedding = 200 # Number of embedding features \nwith train_graph.as_default():\n    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here\n    embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output", "Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. 
Be sure to read the documentation to figure out how it works.", "# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here\n    softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here\n    \n    # Calculate the loss using negative sampling\n    loss = tf.nn.sampled_softmax_loss(softmax_w,\n                                      softmax_b,\n                                      labels,\n                                      embed,\n                                      n_sampled,\n                                      n_vocab)\n    \n    cost = tf.reduce_mean(loss)\n    optimizer = tf.train.AdamOptimizer().minimize(cost)", "Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.", "with train_graph.as_default():\n    ## From Thushan Ganegedara's implementation\n    valid_size = 16 # Random set of words to evaluate similarity on.\n    valid_window = 100\n    # pick 8 samples from each of the ranges (0,100) and (1000,1100). lower id implies more frequent \n    valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n    valid_examples = np.append(valid_examples, \n                               random.sample(range(1000,1000+valid_window), valid_size//2))\n\n    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n    \n    # We use the cosine distance:\n    norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n    normalized_embedding = embedding / norm\n    valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n    similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints", "Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. 
Every 1000 batches, it'll print out the validation words.", "epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n    saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n    iteration = 1\n    loss = 0\n    sess.run(tf.global_variables_initializer())\n\n    for e in range(1, epochs+1):\n        batches = get_batches(train_words, batch_size, window_size)\n        start = time.time()\n        for x, y in batches:\n            \n            feed = {inputs: x,\n                    labels: np.array(y)[:, None]}\n            train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n            \n            loss += train_loss\n            \n            if iteration % 100 == 0: \n                end = time.time()\n                print(\"Epoch {}/{}\".format(e, epochs),\n                      \"Iteration: {}\".format(iteration),\n                      \"Avg. Training loss: {:.4f}\".format(loss/100),\n                      \"{:.4f} sec/batch\".format((end-start)/100))\n                loss = 0\n                start = time.time()\n            \n            if iteration % 1000 == 0:\n                ## From Thushan Ganegedara's implementation\n                # note that this is expensive (~20% slowdown if computed every 500 steps)\n                sim = similarity.eval()\n                for i in range(valid_size):\n                    valid_word = int_to_vocab[valid_examples[i]]\n                    top_k = 8 # number of nearest neighbors\n                    nearest = (-sim[i, :]).argsort()[1:top_k+1]\n                    log = 'Nearest to %s:' % valid_word\n                    for k in range(top_k):\n                        close_word = int_to_vocab[nearest[k]]\n                        log = '%s %s,' % (log, close_word)\n                    print(log)\n            \n            iteration += 1\n    save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n    embed_mat = sess.run(normalized_embedding)", "Restore the trained network if you need to:", "with train_graph.as_default():\n    saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n    embed_mat = sess.run(embedding)", "Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. 
Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
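The claim at the top of the word2vec notebook above — that multiplying a one-hot vector by the weight matrix returns exactly one row, which is what the embedding lookup exploits — can be checked with plain NumPy. This is an illustrative sketch; the toy sizes below are made up and much smaller than the notebook's 10,000 × 300 matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vocab, n_embedding = 10, 4  # toy sizes for illustration
W = rng.uniform(-1, 1, size=(n_vocab, n_embedding))  # embedding matrix

word_idx = 7  # an integer-encoded word
one_hot = np.zeros(n_vocab)
one_hot[word_idx] = 1.0

# matrix multiplication with a one-hot vector...
via_matmul = one_hot @ W
# ...selects exactly one row, so a direct lookup is equivalent and cheaper
via_lookup = W[word_idx]

assert np.allclose(via_matmul, via_lookup)
```

This equivalence is what `tf.nn.embedding_lookup` in the notebook relies on: the lookup skips the wasteful multiply-by-mostly-zeros entirely.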
darko-itpro/training-python
demos/dates_timezones.ipynb
gpl-3.0
[ "Dates and timezones\nThe standard library does not define any timezones. It provides a timezone class, a simple specialization of tzinfo that is instantiated by passing a timedelta as a parameter. That offset must be strictly between -24 and +24 hours.", "import datetime as dt\n\nprint(\"Maintenant naif :\", dt.datetime.now())\n\ninstant = dt.datetime.now(dt.timezone(dt.timedelta(hours=2)))\n\nprint(\"Maintenant aware :\", instant)\nprint(\"Info timezone :\", instant.tzinfo)\n\ninstant = dt.datetime.now(dt.timezone(dt.timedelta(hours=2), name=\"France\"))\nprint(\"Maintenant aware :\", instant)\nprint(\"Info timezone :\", instant.tzinfo)\n\ninstant = dt.datetime.now(dt.timezone(dt.timedelta(hours=2), name=\"plus_2\"))\nautre_instant = dt.datetime.now(dt.timezone(dt.timedelta(hours=1), name=\"plus_1\"))\nprint(\"Maintenant aware 2 :\", instant)\nprint(\"Maintenant aware 1 :\", autre_instant)\n\nautre_instant - instant", "With the pytz library\nIn the next cell, we define a timezone for France", "import pytz\nparis_tz = pytz.timezone('Europe/Paris')\nprint(paris_tz)\nparis_tz", "The possible timezone values are provided by a constant.", "pytz.all_timezones", "We first create a naive datetime, then attach the timezone information to it.", "now_naive = dt.datetime.now()\nprint(\"Instant actuel naif:\", now_naive)\n\nh_paris_aware = paris_tz.localize(now_naive)\nprint(\"France, aware :\", h_paris_aware)", "We use this same datetime to create an aware datetime in New York.", "new_york_tz = pytz.timezone('America/New_York')\n\nprint(\"Instant actuel naif:\", now_naive)\n\nh_new_york_aware = new_york_tz.localize(now_naive)\nprint(\"New York, aware :\", h_new_york_aware)", "We convert the timezone of this US-aware datetime to France.", "h_new_york_in_paris = h_new_york_aware.astimezone(paris_tz)\nprint(\"France from US :\", h_new_york_in_paris)", "This new datetime does not display the same time, and it does not carry the same timezone either. 
We can check that it is still the same instant.", "h_new_york_aware == h_new_york_in_paris", "The two localized dates carry the same wall-clock information. We check that they are not the same instant and that there is an offset between them.", "print(\"US hour :\", h_new_york_aware.hour)\nprint(\"Fr hour :\", h_paris_aware.hour)\nprint(\"Difference :\", h_new_york_aware - h_paris_aware)", "Handling daylight saving time\nTimezone problems are not limited to the fixed offset of the zone; they also involve the shift caused by daylight saving time. Below, we define two midnight instants, one on a winter-time day and one on a summer-time day.", "winter_day = paris_tz.localize(dt.datetime(2019, 3, 30))\nsummer_day = paris_tz.localize(dt.datetime(2019, 4, 2))\n\nprint(\"Winter :\", winter_day)\nprint(\"Summer :\", summer_day)\n\nprint(\"Difference :\", summer_day - winter_day)\n\nafter_3_days = winter_day + dt.timedelta(days=3)\nprint(\"3 days later :\", after_3_days)\n\nprint(\"3 days later, relocalized :\", after_3_days.astimezone(paris_tz))", "Hence the good practice for handling dates, such as:", "utc_tz = pytz.timezone('UTC')\n\nmeeting = dt.datetime(2020, 10, 15, 15, 30)\n\nparis_tz = pytz.timezone('Europe/Paris')\nmeeting = meeting.astimezone(paris_tz).astimezone(utc_tz)\n\nprint(meeting)\n\nnew_york_tz = pytz.timezone('America/New_York')\nprint(f\"for New Yorkers : {meeting.astimezone(new_york_tz)}\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
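The aware-datetime comparisons in the notebook above can be reproduced with the standard library alone; below is a minimal sketch using fixed-offset `datetime.timezone` objects (the offsets stand in for Paris and New York and ignore DST, so they are illustrative only):

```python
from datetime import datetime, timedelta, timezone

# Fixed-offset stand-ins for Paris (UTC+2) and New York (UTC-4).
paris = timezone(timedelta(hours=2), name="UTC+02:00")
new_york = timezone(timedelta(hours=-4), name="UTC-04:00")

t_paris = datetime(2020, 10, 15, 15, 30, tzinfo=paris)
t_ny = t_paris.astimezone(new_york)

# astimezone preserves the instant: equality compares the underlying UTC time.
print(t_paris == t_ny)            # -> True (same instant)
print(t_paris.hour - t_ny.hour)   # -> 6 (wall clocks differ by the offset gap)
```

This is the same behavior pytz's `astimezone` shows in the notebook: the displayed hour changes, but the instant does not.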
Xero-Hige/Notebooks
Algoritmos I/2018-1C/Parcialito_1_Resolucion_Propuesta.ipynb
gpl-3.0
[ "Quiz 1 (Proposed solution)\nExercise 1\nStatement\n1) Given the following function:\n``` python\ndef mi_funcion(p,q):\n    contador_1 = contador_2 = p\n    while True:\n        if contador_2 > q:\n            contador_1 += 2\n            contador_2 = p\n            print()\n\n        if contador_1 > q:\n            break\n\n        print(contador_1 , end=\" \")\n        contador_2 += 2\n```\n A) Show the output of running mi_funcion with p=3 q=7\n B) Propose a better name for the parameters `p` and `q`\n C) Rewrite the function above using only `for` loops\nSolution\nItem A\n``` bash\n>>> mi_funcion(3,7)\n3 3 3\n5 5 5\n7 7 7\n```\nItem B\np = inicio ; q = final \nItem C", "def mi_otra_funcion(inicio,final):\n    for i in range(inicio,final+1,2):\n        for j in range(inicio,final+1,2):\n            print(i,end=\" \")\n        print()\n\nmi_otra_funcion(3,7)", "Exercise 2\nStatement\n2) A crossword is an nxm matrix of cells. Cells are two-element tuples of the form (<Color>, <content>). Each cell can be WHITE (BLANCA) or BLACK (NEGRA). The content is a string; if it is empty, the cell is empty. A crossword is not correctly filled if:\n * There is an empty WHITE cell\n * There is a filled BLACK cell\nWrite a function that, given a crossword, returns whether it is correctly filled.\nSolution", "def es_crucigrama_valido(crucigrama):\n    '''Receives a crossword and returns whether or not it is correctly filled.\n    A crossword is not correctly filled if:\n    - There is at least one empty WHITE (BLANCO) cell\n    - There is at least one filled BLACK (NEGRO) cell'''\n    \n    for fila in crucigrama:\n        for celda in fila:\n            color,contenido = celda\n\n            if color == BLANCO and not contenido:\n                return False\n            if color == NEGRO and contenido:\n                return False\n\n    return True", "Exercise 3\nStatement\n3) Write a function that receives a string and returns its encryption in rot13 format. 
To encrypt a string with rot13, replace each character with the character found 13 positions further along in the alphabet.\nIf the string contains digits, special characters or upper-case letters, an empty string must be returned. \n\nHint: use the constant ascii_lowercase from the string module, which contains “abcd...xyz” \n\nE.g.: \npython\n rot13(“zambia”) -> “mnzovn”\n rot13(“mnzovn”) -> “zambia”\n rot13(“z4mbi4”) -> “”\nSolution", "from string import ascii_lowercase\n\ndef encriptar(cadena):\n    encriptado = []\n    for c in cadena:\n        if c not in ascii_lowercase:\n            return \"\"\n\n        i = ascii_lowercase.index(c) + 13\n        i = i % len(ascii_lowercase)\n\n        encriptado.append(ascii_lowercase[i])\n\n    return \"\".join(encriptado)\n\nprint(encriptar(\"simonga\"))\nprint(encriptar(\"fvzbatn\"))\nprint(encriptar(\"Zambia\"))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
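The rot13 exercise above can also be solved with a translation table via `str.translate`; a compact alternative sketch (the function name `rot13` follows the statement's examples, while the solution in the notebook names it `encriptar`):

```python
from string import ascii_lowercase

# Table mapping each letter to the one 13 positions later, wrapping around.
_table = str.maketrans(ascii_lowercase,
                       ascii_lowercase[13:] + ascii_lowercase[:13])

def rot13(cadena):
    # Per the statement: digits, special characters or upper case -> ""
    if any(c not in ascii_lowercase for c in cadena):
        return ""
    return cadena.translate(_table)

print(rot13("zambia"))  # -> mnzovn
print(rot13("mnzovn"))  # -> zambia
print(rot13("z4mbi4"))  # -> "" (empty string)
```

Because rot13 is its own inverse, applying the function twice returns the original string, which is exactly what the statement's first two examples show.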
yoheimune-python-lecture/recommendation-for-movielens
recommend.ipynb
apache-2.0
[ "A sample recommender implementation using MovieLens\nThis notebook shows an example implementation of a recommender system.\n1. Fetching the data\nWe fetch the data from MovieLens. The downloaded archive is saved under the data/ directory and unzipped.", "import os\nfrom urllib.request import urlopen\n\n# Fetch the zip file from the MovieLens site and save it locally.\n# This takes a little while, so only run it if the file has not been downloaded yet.\nfile_name = \"data/ml-100k.zip\"\nif not os.path.exists(os.path.dirname(file_name)):\n    os.makedirs(os.path.dirname(file_name))\nif not os.path.exists(\"data/ml-100k.zip\"):\n    url = \"http://files.grouplens.org/datasets/movielens/ml-100k.zip\"\n    with urlopen(url) as res:\n        with open(\"data/ml-100k.zip\", \"wb\") as f:\n            f.write(res.read())\n    # Unzip the archive.\n    from shutil import unpack_archive\n    unpack_archive(\"data/ml-100k.zip\", \"data/\", \"zip\")", "2. Preprocessing the data\nOf the downloaded data we use the training file u1.base (70,000 of the 100,000 ratings).\nFirst, let's load the data as-is into a DataFrame.", "import numpy as np\nimport pandas as pd\nudata = pd.read_csv(\"data/ml-100k/u1.base\", delimiter=\"\\t\", names=(\"user\", \"movie\", \"rating\", \"timestamp\"))\nudata.tail()", "This format is hard to use for training a model, so we convert it into a matrix whose rows are movies and whose columns are users.\nWe use the pivot method for this.\nThis time we add a small twist: we only take in ratings >= 3 (i.e. favorable ratings).\nWe also discard the rating values themselves and store \"1\" whenever the rating is >= 3.\n(Whether to keep the raw ratings or mask them to \"1\" is simply a matter of which gives better accuracy.)", "# Extract the rows with a rating of 3 or higher.\nhigh_rate = udata.loc[udata[\"rating\"] >= 3]\n# Pivot with movie as rows and user as columns, then fill the missing values (NaN) with 0.\nraw = high_rate.pivot(index=\"movie\", columns=\"user\", values=\"rating\")\ndf = raw.fillna(0)\n# The where method is a bit confusing: below it fills everything that is NOT below 3 (i.e. >= 3) with 1\ndf = df.where(df < 3, 1)\n\ndf.head()", "(For reference)\nLet's check how many ratings are >= 3.", "# Number of ratings taken in as favorable\ndf.astype(bool).sum(axis=1).sum()", "Out of all 70,000 ratings, 66,103 are favorable (94%). For this dataset, filtering on ratings >= 3 may not matter much.\nStill, when working with rating data it is important to be aware of whether a rating is positive or negative.\n3. 
Computing movie-to-movie similarity\nNow let's build a recommendation model from the training data. \nAfter preprocessing, the DataFrame is a 1682 x 943 matrix (1682 movies, 943 users). \nHere we treat each movie as a vector with 943 features. \nHow, then, should we express the closeness (= similarity) of two such vectors (= two movies)?\nThere are many options, but here we use the cosine of the angle between the two vectors (the cosine distance) as the similarity. When the two vectors coincide (an angle of 0 degrees) the cosine is 1 and the similarity is maximal; when they are orthogonal the cosine is 0 and the similarity is 0.\nTo simplify, suppose we have the following two movies.", "item1 = np.array([1,1,0])\nitem2 = np.array([1,0,1])", "These are vectors with three features each, and their cosine distance (= similarity) is computed as follows.", "from scipy.spatial.distance import cosine\nsim = 1 - cosine(item1, item2)\nprint(sim)", "In the same way, computing the similarity between movie ID=1 and movie ID=2 gives the following.", "sim = 1 - cosine(df.iloc[0], df.iloc[1])\nprint(sim)", "The value 0.32 above is relative and has no meaning on its own, but by comparing it with other similarities we can find the more similar items.\nIn the same spirit, we now compute the similarity of every pair of items.\nHere we use scipy's pdist to do this conveniently.", "# In the same spirit as above, compute the distance between every pair of items.\nfrom scipy.spatial.distance import pdist\n\n# Similarity\nd = pdist(df, \"cosine\")\nd = 1 - d\n\n# Convert the result into a matrix (as a vector it is hard to read!!)\nfrom scipy.spatial.distance import squareform\nd = squareform(d)\n# NaNs can appear, so replace them with 0.\nd[np.isnan(d)] = 0\n\n# A small trick: set the similarity of an item with itself to -1, the minimum.\nd = d - np.eye(d.shape[0])\n\n# Show the result.\nprint(d)", "We have now computed the similarity of every pair of movies.\nUsing this similarity table, we build the items to recommend.\n4. 
Building the recommendation data\nFor example, let's list the movies similar to movie ID=1, in decreasing order of similarity. \nFor memory efficiency and speed we use Numpy.", "# Movie ID=1 (note that indexing starts at 0)\nmovie_id = 0\n\n# Sort in decreasing order of similarity.\n# Store the indices after sorting.\nid = d[movie_id].argsort()[::-1]\n\n# Show the first 5 entries.\nfor i in id[:5]:\n    print(\"{i:0>3d}: {v: .3f}\".format(i=i, v=d[movie_id, i]))", "With the code above we can find the movies similar to a given movie.\nBuilding on this, we implement a function that recommends 5 movies to a given user. The processing flow is as follows. \nSpecification of the function that recommends 5 movies to a given user\n* Get the list of movies the user rated from the training data\n* Get recommendation candidates for each of those movies (the code above does this)\n* Remove already-watched movies from the candidates\n* Return the top 5 candidates\nThe concrete implementation is shown below.", "# Function that outputs 5 items to recommend to the given user\ndef get_recommend_items(user_id):\n    # Get the list of movies the user rated highly\n    favorite = df.loc[:, user_id].nonzero()\n    # Take the highly rated rows from the similarity table\n    table = d[favorite]\n    # Sum the similarities column by column\n    table[np.where(table < 0)] = 0\n    indicator = table.sum(axis=0)\n    # Sort by decreasing similarity\n    sorted_id = indicator.argsort()[::-1]\n    # Get the list of already rated movies\n    reviewed = raw[raw.loc[:, user_id].notnull()].index.tolist()\n    # Drop the already rated movies\n    recommend_id = [i for i in sorted_id if i not in reviewed]\n    # Return only 5 items\n    return recommend_id[:5]\n\n# Try it for the user with user_id=100\nrecommends = get_recommend_items(100)\nprint(recommends)", "The recommendation logic is now implemented!!!\n5. 
Evaluating the recommendations\nLet's evaluate how good (or bad) the recommendation model built above really is.\nWe use the evaluation data (u1.test) for this.\n[Evaluation method]\n* For each user present in the test data, produce 5 recommendations.\n * The recommendations are generated with the get_recommend_items function defined above.\n* If at least 1 of the 5 recommended movies was actually watched in the test data, count it as a success.\n* Measure accuracy as successes / number of users.\nFirst, load the test data.", "utest = pd.read_csv(\"data/ml-100k/u1.test\", delimiter=\"\\t\", names=(\"user\", \"movie\", \"rating\", \"timestamp\"))\nutest.head()\n\n# Build the matrix (rows = movies, columns = users) restricted to favorable ratings.\nhigh_rate_test = utest.loc[utest[\"rating\"] >= 3]\nraw_test = high_rate_test.pivot(index=\"movie\", columns=\"user\", values=\"rating\")\ndf_test = raw_test.fillna(0)\ndf_test = df_test.where(df_test < 3, 1)\n\n### Try it for the user with userId=1.\nuser_id = 1\n# (1) Recommended items\nrecommends = set(get_recommend_items(user_id))\n# (2) Movies actually watched in the test data\nreal = set(df_test.loc[:, user_id].nonzero()[0])\n# Intersection of (1) and (2)\nreal & recommends", "The recommendation worked (what a relief!).\nLet's go on and evaluate the other users as well.", "# Get the list of users present in the test data.\nusers = df_test.columns\n\n# Total number of users\nall = len(users)\n\n# Number of successes\ngood = 0\n\n# For each user, decide success or not.\nfor user_id in users:\n    real = set(df_test.loc[:, user_id].nonzero()[0])\n    recommends = set(get_recommend_items(user_id))\n    matches = real & recommends\n    good += 1 if matches else 0\n\n# Show the result.\nprint(\"Total={0}, successes={1}, success rate={2}%\".format(all, good, good * 100 // all))", "In this case we were able to recommend movies that users would go on to watch with a 52% success rate.\nAnd they all lived happily ever after.\nPostscript\n\nThe algorithm used here is Amazon Item-to-Item Collaborative Filtering. If you are interested, do read the paper (in English)!\nHere we looked for movie-to-movie similarity, but another option is to look for user-to-user similarity (User to User Collaborative Filtering). In general, however, item similarity tends to give better accuracy than user similarity (it is less pulled around by individual users' tastes).\nItem-based recommendation can produce recommendations from a single item of viewing history, which makes it handy for the cold-start problem." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
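The similarity the notebook computes with scipy's `cosine` can be reproduced by hand; here is a dependency-free sketch using the same `item1 = [1,1,0]`, `item2 = [1,0,1]` example from the notebook (the helper name `cosine_similarity` is ours):

```python
from math import sqrt

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); returns 0.0 for a zero vector.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

item1 = [1, 1, 0]
item2 = [1, 0, 1]
print(cosine_similarity(item1, item2))  # -> 0.5, same as 1 - cosine(item1, item2)
```

The single shared feature contributes a dot product of 1, and each vector has norm sqrt(2), so the similarity is 1 / (sqrt(2) * sqrt(2)) = 0.5.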
cxxgtxy/tensorflow
tensorflow/compiler/xla/g3doc/tutorials/jit_compile.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Use XLA with tf.function\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/xla/tutorials/compile\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/compile.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/compile.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nThis tutorial trains a TensorFlow model to classify the MNIST dataset, where the training function is compiled using XLA.\nFirst, load TensorFlow and enable eager execution.", "# In TF 2.4 jit_compile is called experimental_compile\n!pip install tf-nightly\n\nimport tensorflow as tf\ntf.compat.v1.enable_eager_execution()", "Then define some necessary constants and prepare the MNIST dataset.", "# Size of each input image, 28 x 28 pixels\nIMAGE_SIZE = 28 * 28\n# Number of distinct number labels, [0..9]\nNUM_CLASSES = 10\n# Number of examples in each training batch 
(step)\nTRAIN_BATCH_SIZE = 100\n# Number of training steps to run\nTRAIN_STEPS = 1000\n\n# Loads MNIST dataset.\ntrain, test = tf.keras.datasets.mnist.load_data()\ntrain_ds = tf.data.Dataset.from_tensor_slices(train).batch(TRAIN_BATCH_SIZE).repeat()\n\n# Casting from raw data to the required datatypes.\ndef cast(images, labels):\n  images = tf.cast(\n      tf.reshape(images, [-1, IMAGE_SIZE]), tf.float32)\n  labels = tf.cast(labels, tf.int64)\n  return (images, labels)", "Finally, define the model and the optimizer. The model uses a single dense layer.", "layer = tf.keras.layers.Dense(NUM_CLASSES)\noptimizer = tf.keras.optimizers.Adam()", "Define the training function\nIn the training function, you get the predicted labels using the layer defined above, compute the loss, and then minimize it by applying the gradients with the optimizer. In order to compile the computation using XLA, place it inside tf.function with jit_compile=True.", "@tf.function(jit_compile=True)\ndef train_mnist(images, labels):\n    images, labels = cast(images, labels)\n\n    with tf.GradientTape() as tape:\n      predicted_labels = layer(images)\n      loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(\n          logits=predicted_labels, labels=labels\n      ))\n    layer_variables = layer.trainable_variables\n    grads = tape.gradient(loss, layer_variables)\n    optimizer.apply_gradients(zip(grads, layer_variables))", "Train and test the model\nOnce you have defined the training function, run the training loop.", "for images, labels in train_ds:\n  if optimizer.iterations > TRAIN_STEPS:\n    break\n  train_mnist(images, labels)", "And, finally, check the accuracy:", "images, labels = cast(test[0], test[1])\npredicted_labels = layer(images)\ncorrect_prediction = tf.equal(tf.argmax(predicted_labels, 1), labels)\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nprint(\"Prediction accuracy after training: %s\" % accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
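The tutorial's final accuracy check reduces to an argmax-and-compare; a plain-Python sketch of the same computation on toy logits (the values below are made up for illustration, not taken from the tutorial):

```python
# Toy logits for three examples over three classes (hypothetical values).
logits = [[0.1, 2.0, 0.3],
          [1.5, 0.2, 0.1],
          [0.0, 0.1, 3.0]]
labels = [1, 0, 0]

# Equivalent of tf.argmax(predicted_labels, 1): index of the max per row.
predicted = [row.index(max(row)) for row in logits]

# Equivalent of tf.reduce_mean(tf.cast(tf.equal(...), tf.float32)).
accuracy = sum(p == y for p, y in zip(predicted, labels)) / len(labels)
print(predicted)  # -> [1, 0, 2]
```

The third example is predicted as class 2 while its label is 0, so the accuracy comes out as 2/3.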
abulbasar/machine-learning
Scikit - 12 Neural Network using Numpy.ipynb
apache-2.0
[ "import numpy as np\nimport scipy\nimport scipy.misc\nimport scipy.ndimage\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import OneHotEncoder\nfrom datetime import datetime\n\nimport resource\n\n\nnp.set_printoptions(suppress=True, precision=5)\n\n\n\n%matplotlib inline\n\nclass Laptimer: \n def __init__(self):\n self.start = datetime.now()\n self.lap = 0\n \n def click(self, message):\n td = datetime.now() - self.start\n td = (td.days*86400000 + td.seconds*1000 + td.microseconds / 1000) / 1000\n memory = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / (1024 ** 2)\n print(\"[%d] %s, %.2fs, memory: %dmb\" % (self.lap, message, td, memory))\n self.start = datetime.now()\n self.lap = self.lap + 1\n return td\n \n def reset(self):\n self.__init__()\n \n def __call__(self, message = None):\n return self.click(message)\n \ntimer = Laptimer()\ntimer()\n\ndef normalize_fetures(X):\n return X * 0.98 / 255 + 0.01\n\ndef normalize_labels(y):\n y = OneHotEncoder(sparse=False).fit_transform(y)\n y[y == 0] = 0.01\n y[y == 1] = 0.99\n return y\n\nurl = \"https://raw.githubusercontent.com/makeyourownneuralnetwork/makeyourownneuralnetwork/master/mnist_dataset/mnist_train_100.csv\"\ntrain = pd.read_csv(url, header=None, dtype=\"float64\")\ntrain.sample(10)\n\nX_train = normalize_fetures(train.iloc[:, 1:].values)\ny_train = train.iloc[:, [0]].values.astype(\"int32\")\ny_train_ohe = normalize_labels(y_train)\n\nfig, _ = plt.subplots(5, 6, figsize = (15, 10))\nfor i, ax in enumerate(fig.axes):\n ax.imshow(X_train[i].reshape(28, 28), cmap=\"Greys\", interpolation=\"none\")\n ax.set_title(\"T: %d\" % y_train[i])\n\nplt.tight_layout()\n\nurl = \"https://raw.githubusercontent.com/makeyourownneuralnetwork/makeyourownneuralnetwork/master/mnist_dataset/mnist_test_10.csv\"\ntest = pd.read_csv(url, header=None, dtype=\"float64\")\ntest.sample(10)\n\nX_test = normalize_fetures(test.iloc[:, 
1:].values)\ny_test = test.iloc[:, 0].values.astype(\"int32\")", "Neural Networks Classifier\nAuthor: Abul Basar", "class NeuralNetwork:\n\n def __init__(self, layers, learning_rate, random_state = None):\n self.layers_ = layers\n self.num_features = layers[0]\n self.num_classes = layers[-1]\n self.hidden = layers[1:-1]\n self.learning_rate = learning_rate\n \n if not random_state:\n np.random.seed(random_state)\n \n self.W_sets = []\n for i in range(len(self.layers_) - 1):\n n_prev = layers[i]\n n_next = layers[i + 1]\n m = np.random.normal(0.0, pow(n_next, -0.5), (n_next, n_prev))\n self.W_sets.append(m)\n \n def activation_function(self, z):\n return 1 / (1 + np.exp(-z))\n \n def fit(self, training, targets):\n inputs0 = inputs = np.array(training, ndmin=2).T\n assert inputs.shape[0] == self.num_features, \\\n \"no of features {0}, it must be {1}\".format(inputs.shape[0], self.num_features)\n\n targets = np.array(targets, ndmin=2).T\n \n assert targets.shape[0] == self.num_classes, \\\n \"no of classes {0}, it must be {1}\".format(targets.shape[0], self.num_classes)\n\n \n outputs = []\n for i in range(len(self.layers_) - 1):\n W = self.W_sets[i]\n inputs = self.activation_function(W.dot(inputs))\n outputs.append(inputs)\n \n errors = [None] * (len(self.layers_) - 1)\n errors[-1] = targets - outputs[-1]\n #print(\"Last layer\", targets.shape, outputs[-1].shape, errors[-1].shape)\n #print(\"Last layer\", targets, outputs[-1])\n \n #Back propagation\n for i in range(len(self.layers_) - 1)[::-1]:\n W = self.W_sets[i]\n E = errors[i]\n O = outputs[i] \n I = outputs[i - 1] if i > 0 else inputs0\n #print(\"i: \", i, \", E: \", E.shape, \", O:\", O.shape, \", I: \", I.shape, \",W: \", W.shape)\n W += self.learning_rate * (E * O * (1 - O)).dot(I.T)\n if i > 0:\n errors[i-1] = W.T.dot(E)\n \n \n def predict(self, inputs, cls = False):\n inputs = np.array(inputs, ndmin=2).T \n assert inputs.shape[0] == self.num_features, \\\n \"no of features {0}, it must be 
{1}\".format(inputs.shape[0], self.num_features) \n \n for i in range(len(self.layers_) - 1):\n W = self.W_sets[i]\n input_next = W.dot(inputs)\n inputs = activated = self.activation_function(input_next)\n \n \n return np.argmax(activated.T, axis=1) if cls else activated.T \n \n def score(self, X_test, y_test):\n y_test = np.array(y_test).flatten()\n y_test_pred = nn.predict(X_test, cls=True)\n return np.sum(y_test_pred == y_test) / y_test.shape[0]\n\n\n", "Run neural net classifier on small dataset\nTraining set size: 100, testing set size 10", "nn = NeuralNetwork([784,100,10], 0.3, random_state=0)\nfor i in np.arange(X_train.shape[0]):\n nn.fit(X_train[i], y_train_ohe[i])\n \nnn.predict(X_train[2]), nn.predict(X_train[2], cls=True)\nprint(\"Testing accuracy: \", nn.score(X_test, y_test), \", training accuracy: \", nn.score(X_train, y_train))\n#list(zip(y_test_pred, y_test))", "Load full MNIST dataset.\nTraining set size 60,000 and test set size 10,000\nOriginal: http://yann.lecun.com/exdb/mnist/\nCSV version: \ntraining: https://pjreddie.com/media/files/mnist_train.csv\ntesting: https://pjreddie.com/media/files/mnist_test.csv", "train = pd.read_csv(\"../data/MNIST/mnist_train.csv\", header=None, dtype=\"float64\")\nX_train = normalize_fetures(train.iloc[:, 1:].values)\ny_train = train.iloc[:, [0]].values.astype(\"int32\")\ny_train_ohe = normalize_labels(y_train)\nprint(y_train.shape, y_train_ohe.shape)\n\ntest = pd.read_csv(\"../data/MNIST/mnist_test.csv\", header=None, dtype=\"float64\")\nX_test = normalize_fetures(test.iloc[:, 1:].values)\ny_test = test.iloc[:, 0].values.astype(\"int32\")\n", "Runt the Neural Network classifier and measure performance", "timer.reset()\nnn = NeuralNetwork([784,100,10], 0.3, random_state=0)\nfor i in range(X_train.shape[0]):\n nn.fit(X_train[i], y_train_ohe[i])\ntimer(\"training time\")\naccuracy = nn.score(X_test, y_test)\nprint(\"Testing accuracy: \", nn.score(X_test, y_test), \", Training accuracy: \", nn.score(X_train, 
y_train))", "Effect of learning rate", "params = 10 ** - np.linspace(0.01, 2, 10)\nscores_train = []\nscores_test = []\n\ntimer.reset()\nfor p in params:\n nn = NeuralNetwork([784,100,10], p, random_state = 0)\n for i in range(X_train.shape[0]):\n nn.fit(X_train[i], y_train_ohe[i])\n scores_train.append(nn.score(X_train, y_train))\n scores_test.append(nn.score(X_test, y_test))\n timer()\n \nplt.plot(params, scores_test, label = \"Test score\")\nplt.plot(params, scores_train, label = \"Training score\")\nplt.xlabel(\"Learning Rate\")\nplt.ylabel(\"Accuracy\")\nplt.legend()\nplt.title(\"Effect of learning rate\")\n\nprint(\"Accuracy scores\")\npd.DataFrame({\"learning_rate\": params, \"train\": scores_train, \"test\": scores_test})", "Effect of Epochs", "epochs = np.arange(20)\nlearning_rate = 0.077\nscores_train, scores_test = [], []\nnn = NeuralNetwork([784,100,10], learning_rate, random_state = 0)\nindices = np.arange(X_train.shape[0])\n\ntimer.reset()\nfor _ in epochs:\n np.random.shuffle(indices)\n for i in indices:\n nn.fit(X_train[i], y_train_ohe[i])\n scores_train.append(nn.score(X_train, y_train))\n scores_test.append(nn.score(X_test, y_test))\n timer(\"test score: %f, training score: %f\" % (scores_test[-1], scores_train[-1]))\n\nplt.plot(epochs, scores_test, label = \"Test score\")\nplt.plot(epochs, scores_train, label = \"Training score\")\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Accuracy\")\nplt.legend(loc = \"lower right\")\nplt.title(\"Effect of Epochs\")\n\nprint(\"Accuracy scores\")\npd.DataFrame({\"epochs\": epochs, \"train\": scores_train, \"test\": scores_test})", "Effect of size (num of nodes) of the single hidden layer", "num_layers = 50 * (np.arange(10) + 1)\nlearning_rate = 0.077\nscores_train, scores_test = [], []\n\ntimer.reset()\nfor p in num_layers:\n nn = NeuralNetwork([784, p,10], learning_rate, random_state = 0)\n indices = np.arange(X_train.shape[0])\n for i in indices:\n nn.fit(X_train[i], y_train_ohe[i])\n 
scores_train.append(nn.score(X_train, y_train))\n scores_test.append(nn.score(X_test, y_test))\n timer(\"size: %d, test score: %f, training score: %f\" % (p, scores_test[-1], scores_train[-1]))\n\nplt.plot(num_layers, scores_test, label = \"Test score\")\nplt.plot(num_layers, scores_train, label = \"Training score\")\nplt.xlabel(\"Hidden Layer Size\")\nplt.ylabel(\"Accuracy\")\nplt.legend(loc = \"lower right\")\nplt.title(\"Effect of size (num of nodes) of the hidden layer\")\n\nprint(\"Accuracy scores\")\npd.DataFrame({\"layer\": num_layers, \"train\": scores_train, \"test\": scores_test})", "Effect of using multiple hidden layers", "num_layers = np.arange(5) + 1\nlearning_rate = 0.077\nscores_train, scores_test = [], []\n\ntimer.reset()\nfor p in num_layers:\n layers = [100] * p\n layers.insert(0, 784)\n layers.append(10)\n \n nn = NeuralNetwork(layers, learning_rate, random_state = 0)\n indices = np.arange(X_train.shape[0])\n for i in indices:\n nn.fit(X_train[i], y_train_ohe[i])\n scores_train.append(nn.score(X_train, y_train))\n scores_test.append(nn.score(X_test, y_test))\n timer(\"size: %d, test score: %f, training score: %f\" % (p, scores_test[-1], scores_train[-1]))\n\nplt.plot(num_layers, scores_test, label = \"Test score\")\nplt.plot(num_layers, scores_train, label = \"Training score\")\nplt.xlabel(\"No of hidden layers\")\nplt.ylabel(\"Accuracy\")\nplt.legend(loc = \"upper right\")\nplt.title(\"Effect of using multiple hidden layers, \\nNodes per layer=100\")\n\nprint(\"Accuracy scores\")\npd.DataFrame({\"layer\": num_layers, \"train\": scores_train, \"test\": scores_test})", "Rotation", "img = scipy.ndimage.interpolation.rotate(X_train[110].reshape(28, 28), -10, reshape=False)\nprint(img.shape)\nplt.imshow(img, interpolation=None, cmap=\"Greys\")\n\nepochs = np.arange(10)\nlearning_rate = 0.077\nscores_train, scores_test = [], []\nnn = NeuralNetwork([784,250,10], learning_rate, random_state = 0)\nindices = 
np.arange(X_train.shape[0])\n\ntimer.reset()\nfor _ in epochs:\n np.random.shuffle(indices)\n for i in indices:\n for rotation in [-10, 0, 10]:\n img = scipy.ndimage.interpolation.rotate(X_train[i].reshape(28, 28), rotation, cval=0.01, order=1, reshape=False)\n nn.fit(img.flatten(), y_train_ohe[i])\n scores_train.append(nn.score(X_train, y_train))\n scores_test.append(nn.score(X_test, y_test))\n timer(\"test score: %f, training score: %f\" % (scores_test[-1], scores_train[-1]))\n\nplt.plot(epochs, scores_test, label = \"Test score\")\nplt.plot(epochs, scores_train, label = \"Training score\")\nplt.xlabel(\"Epochs\")\nplt.ylabel(\"Accuracy\")\nplt.legend(loc = \"lower right\")\nplt.title(\"Trained with rotation (+/- 10)\\n Hidden Nodes: 250, LR: 0.077\")\n\nprint(\"Accuracy scores\")\npd.DataFrame({\"epochs\": epochs, \"train\": scores_train, \"test\": scores_test})", "Which charaters NN was most wrong about?", "missed = y_test_pred != y_test\npd.Series(y_test[missed]).value_counts().plot(kind = \"bar\")\nplt.title(\"No of mis classification by digit\")\nplt.ylabel(\"No of misclassification\")\nplt.xlabel(\"Digit\")\n\nfig, _ = plt.subplots(6, 4, figsize = (15, 10))\nfor i, ax in enumerate(fig.axes):\n ax.imshow(X_test[missed][i].reshape(28, 28), interpolation=\"nearest\", cmap=\"Greys\")\n ax.set_title(\"T: %d, P: %d\" % (y_test[missed][i], y_test_pred[missed][i]))\nplt.tight_layout()\n\nimg = scipy.ndimage.imread(\"/Users/abulbasar/Downloads/9-03.png\", mode=\"L\")\nprint(\"Original size:\", img.shape)\nimg = normalize_fetures(scipy.misc.imresize(img, (28, 28)))\nimg = np.abs(img - 0.99)\nplt.imshow(img, cmap=\"Greys\", interpolation=\"none\")\nprint(\"Predicted value: \", nn.predict(img.flatten(), cls=True))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
chrlttv/Teaching
Session3/1.TextProcessing.ipynb
mit
[ "Text Mining\nText mining is the process of automatically extracting high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning.\nTypical text mining applications include:\n- Text classification (or text categorization),\n- Text clustering, \n- Sentiment analysis,\n- Named entity recognition, etc.\nIn this notebook:\n\nPreprocessing: textual normalization, simple tokenization\nStopword removal: its importance\nVerify Zipf Law with Oshumed medical abstract collection\n\n\nHow to use this notebook\nThis environment is called Jupyter Notebook.\nIt has two types of cells:\n * Markdown cells (like this one, where you can write notes)\n * Code cells\nRun code cells by pressing Shift+Enter. Let's try...", "# Run me: press Shift+Enter\nprint(\"Hello, world!!\")", "This is a hands on session, so this is time you write some of code. Let's try that.", "# Write code to print any string...\n\n# Then run the code.", "Preprocessing\nUpper case, Punctuations\nA computer does not require upper case letters and punctuations. \nNote: Python already provides a list of punctuations. 
We simply need to import it.", "from string import punctuation\n\ns = \"Hello, World!!\"\n\n# Write code to lower case the string\ns = ...\n\n# Write code to remove punctuations\n# HINT: for loop and for each punctuation use string replace() method\nfor ...\n s = ...\n\nprint(s)", "Tokenization : NLTK\nNatural Language Toolkit (NLTK) is a platform to work with human or natural language data using Python.\nAs usual, we will first convert everything to lowercase and remove punctuations.", "raw1 = \"Grenoble is a city in southeastern France, at the foot of the French Alps, on the banks of Isère.\"\nraw2 = \"Grenoble is the capital of the department of Isère and is an important scientific centre in France.\"\n\n# Write code here to convert everything in lower case and to remove punctuation.\n\n\nprint(raw1)\nprint(raw2)\n# Again, SHIFT+ENTER to run the code.", "NLTK already provides us with modules to easily tokenize the text. We will tokenize pieces of raw text using the word_tokenize function of the NLTK package.", "import nltk\n\n# Tokenization using NLTK\ntokens1 = nltk.word_tokenize(raw1)\ntokens2 = nltk.word_tokenize(raw2)\n\n# print the tokens\nprint(tokens1)\nprint(tokens2)", "We now build a NLTK Text object to store tokenized texts. One or more texts can then be merged to form a TextCollection. This provides many useful operations for statistically analyzing a collection of text.", "# Build NLTK Text objects\ntext1 = nltk.Text(tokens1)\ntext2 = nltk.Text(tokens2)\n\n# A list of Text objects\ntext_list = [text1, text2]\n\n# Build NLTK text collection\ntext_collection = nltk.text.TextCollection(text_list)", "The NLTK TextCollection object can be used to calculate basic statistics:\n 1. count the number of occurrences (or term frequency) of a word\n 2. 
obtain a frequency distribution of all the words in the text\nNote: The NLTK Text objects created in the intermediate steps can also be used to calculate similar statistics at document level.", "# Frequency of a word\nfreq = text_collection.count(\"grenoble\")\nprint(\"Frequency of word \\'grenoble\\' = \", freq)\n\n# Frequency distribution\nfreq_dist = nltk.FreqDist(text_collection)\nfreq_dist", "Let's automate: write a function\nUsing above steps, we will now write a function. We will call this function raw_to_text. This function will take a list of raw texts and will return a NLTK TextCollection objects, representing the list of input text.", "\"\"\"\nConverts a list of raw text to a NLTK TextCollection object.\nApplies lower-casing and punctuation removal.\nReturns:\ntext_collection - a NLTK TextCollection object\n\"\"\"\ndef raw_to_text(raw_list):\n text_list = []\n for raw in raw_list:\n # Write code for lower-casing and punctuation removal\n \n \n # Write code to tokenize and create NLTK Text object\n # Name the variable 'text' to store the Text object\n \n \n # storing the text in the list\n text_list.append(text) \n\n # Write code to create TextCollection from the list text_list\n text_collection = nltk.text.TextCollection(text_list) # TO DELETE\n \n # return text collection\n return text_collection", "Let's test the function with some sample data", "raw_list_sample = [\"The dog sat on the mat.\",\n \"The cat sat on the mat!\",\n \"We have a mat in our house.\"]\n\n# Call the above raw_to_text function for the sample text\ntext_collection_sample = ...", "Like before we can compute the frequency distribution for this collection.", "# Write code to compute the frequency 'mat' in the collection.\nfreq = ...\nprint(\"Frequency of word \\'mat\\' = \", freq)\n\n# Write code to compute and display the frequency distribution of text_collection_sample\n\n", "Something bigger\nWe will use DBPedia Ontology Classification Dataset. 
It includes first paragraphs of Wikipedia articles. Each paragraph is assigned one of 14 categories. Here is an example of an abstract under Written Work catgory:\n\nThe Regime: Evil Advances/Before They Were Left Behind is the second prequel novel in the Left Behind series written by Tim LaHaye and Jerry B. Jenkins. It was released on Tuesday November 15 2005. This book covers more events leading up to the first novel Left Behind. It takes place from 9 years to 14 months before the Rapture.\n\nIn this hands-on we will use 15,000 documents belonging to three categories, namely Album, Film and Written Work.\nThe file corpus.txt supplied here, contains 15,000 documents. Each line of the file is a document.\nNow we will:\n 1. Load the documents as a list\n 2. Create a NLTK TextCollection\n 3. Analyze different counts\nNote: Each line of the file corpus.txt is a document", "# Write code to load documents as a list\n\"\"\"\nHint 1: open the file using open()\nHint 2: use read() to load the content\nHint 3: use splitlines() to get separate documents \n\"\"\"\nraw_docs = ...\n\nprint(\"Loaded \" + str(len(raw_docs)) + \" documents.\")\n\n# Write code to create a NLTK TextCollection\n# Hint: use raw_to_text function\ntext_collection = ...\n\n# Print total number of words in these documents\nprint(\"Total number of words = \", len(text_collection))\nprint(\"Total number of unique words = \", len(set(text_collection)))", "Calculate the freq distribution for this text collection of documents. Then let's see the most common words.", "# Write code to compute frequency distribution of text_collection\nfreq_dist = ...\n\n# Let's see most common 10 words.\nfreq_dist.most_common(10)", "Something does not seem right!! Can you point out what?\nLet's try by visualizing it.", "# importing Python package for plotting \nimport matplotlib.pyplot as plt\n\n# To plot\nplt.subplots(figsize=(12,10))\nfreq_dist.plot(30, cumulative=True)", "Observations:\n 1. 
Just the 30 most frequent tokens make up around 260,000 out of 709,460 ($\\approx 36.5\\%$)\n 2. Most of these are very common words such as articles, pronouns, etc.\nStop word filtering\nStop words are words which are filtered out before or after processing of natural language data (text). There is no universal stop-word list. Often, stop-word lists include short function words, such as \"the\", \"is\", \"at\", \"which\", and \"on\". Removing stop words has been shown to increase the performance of different tasks like search. \nA file stop_words.txt is included. We will now:\n 1. Load the contents of the file 'stop_words.txt', where each line is a stop word, and create a stop-word list.\n 2. Modify the function raw_to_text to perform (a) stop-word removal and (b) numeric-word removal\nNote: Each line of the file stop_words.txt is a stop word.", "# Write code to load stop-word list from file 'stop_words.txt'\n# Hint: use the same strategy you used to load documents\nstopwords = set(...)\n\n\"\"\"\nVERSION 2\nConverts a list of raw text to an NLTK TextCollection object.\nApplies lower-casing, punctuation removal and stop-word removal.\nReturns:\ntext_collection: an NLTK TextCollection object\n\"\"\"\n# Write function \"raw_to_text_2\".\n\"\"\"\nHint 1: consult the above function \"raw_to_text\",\nHint 2: add a new block in the function for removing stop words\nHint 3: to remove stop words from a list of tokens - \n - create an empty list to store clean tokens\n - for each token in the token list:\n if the token is not in the stop-word list\n store it in the clean token list\n\"\"\"\n\n", "Retest our small sample with the new version.", "raw_list_sample = [\"The dog sat on the mat.\", \n \"The cat sat on the mat!\", \n \"We have a mat in our house.\"]\n\n# Write code to obtain and see freq_dist_sample with the new raw_to_text_2\n# Note: raw_to_text_2 takes two inputs/arguments\ntext_collection_sample = ...\nfreq_dist_sample = ...\n\nfreq_dist_sample", "Finally, rerun with the bigger 
document set and replot the cumulative word frequencies.\nRecall that we already have the documents loaded in the variable raw_docs", "# Write code to create an NLTK TextCollection with raw_to_text_2\ntext_collection = ...\n\n# Write code to compute frequency distribution of text_collection\nfreq_dist = ...\n\n# Write code to plot the frequencies again\n", "Zipf law\nVerify whether the dataset follows Zipf's law by plotting the data on a log-log graph, with the axes being log (rank order) and log (frequency). You expect to obtain an almost straight line.", "import numpy as np\nimport math\n\ncounts = np.array(list(freq_dist.values()))\ntokens = np.array(list(freq_dist.keys()))\nranks = np.arange(1, len(freq_dist)+1)\n\n# Obtaining indices that would sort the array in descending order\nindices = np.argsort(-counts)\nfrequencies = counts[indices]\n\n# Plotting the ranks vs frequencies\nplt.subplots(figsize=(12,10))\nplt.yscale('log')\nplt.xscale('log')\nplt.title(\"Zipf plot for our data\")\nplt.xlabel(\"Frequency rank of token\")\nplt.ylabel(\"Absolute frequency of token\")\nplt.grid()\nplt.plot(ranks, frequencies, 'o', markersize=0.9)\nfor n in list(np.logspace(-0.5, math.log10(len(counts)-1), 17).astype(int)):\n dummy = plt.text(ranks[n], frequencies[n], \" \" + tokens[indices[n]], \n verticalalignment=\"bottom\", horizontalalignment=\"left\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
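The tokenization and stop-word pipeline that the notebook above asks the reader to fill in can be sketched without NLTK, using only the standard library. This is a simplified sketch: a plain whitespace split stands in for `nltk.word_tokenize`, and the tiny stop-word set is illustrative rather than the course's `stop_words.txt`.

```python
import string
from collections import Counter

def raw_to_tokens(raw, stopwords):
    # Lower-case, strip punctuation, tokenize, then drop stop words and numeric tokens
    cleaned = raw.lower().translate(str.maketrans("", "", string.punctuation))
    return [t for t in cleaned.split() if t not in stopwords and not t.isdigit()]

docs = ["The dog sat on the mat.",
        "The cat sat on the mat!",
        "We have a mat in our house."]
stopwords = {"the", "on", "we", "have", "a", "in", "our"}

# Frequency distribution over the whole collection, like nltk.FreqDist
freq = Counter(t for d in docs for t in raw_to_tokens(d, stopwords))
print(freq.most_common(2))  # 'mat' appears three times
```

With NLTK installed, the same counts come from `nltk.FreqDist` over a `TextCollection` built as in the notebook.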
mattpitkin/corner.py
docs/_static/notebooks/sigmas.ipynb
bsd-2-clause
[ "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nfrom matplotlib import rcParams\nrcParams[\"font.size\"] = 16\nrcParams[\"font.family\"] = \"sans-serif\"\nrcParams[\"font.sans-serif\"] = [\"Computer Modern Sans\"]\nrcParams[\"text.usetex\"] = True\nrcParams[\"text.latex.preamble\"] = r\"\\usepackage{cmbright}\"\nrcParams[\"savefig.dpi\"] = 100", "A note about sigmas\nWe are regularly asked about the \"sigma\" levels in the 2D histograms. These are not the 68%, etc. values that we're used to for 1D distributions. In two dimensions, a Gaussian density is given by:\npdf(r) = exp(-(r/s)^2/2) / (2*pi*s^2)\n\nThe integral under this density (using polar coordinates and implicitly integrating out the angle) is:\ncdf(x) = Integral(r * exp(-(r/s)^2/2) / s^2, {r, 0, x})\n = 1 - exp(-(x/s)^2/2)\n\nThis means that within \"1-sigma\", the Gaussian contains 1-exp(-0.5) ~ 0.393 or 39.3% of the volume. Therefore the relevant 1-sigma level for a 2D histogram of samples is 39%, not 68%. If you must use 68% of the mass, use the levels keyword argument when you call corner.corner.\nWe can visualize the difference between sigma definitions:", "import corner\nimport numpy as np\nimport matplotlib.pyplot as pl\n\n# Generate some fake data from a Gaussian\nnp.random.seed(42)\nx = np.random.randn(50000, 2)", "First, plot this using the correct (default) 1-sigma level:", "fig = corner.corner(x, quantiles=(0.16, 0.84), levels=(1-np.exp(-0.5),))\nfig.suptitle(\"correct `one-sigma' level\");", "Compare this to the 68% mass level, and in particular to how the contour relates to the marginalized 68% quantile:", "fig = corner.corner(x, quantiles=(0.16, 0.84), levels=(0.68,))\nfig.suptitle(\"incorrect `one-sigma' level\");" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
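The 39.3% volume figure derived in the sigmas notebook above can be cross-checked by Monte Carlo with nothing but the standard library; this is an independent sketch, not part of corner itself:

```python
import math
import random

random.seed(42)
n = 200_000
# Count 2D standard-Gaussian samples whose radius is within 1 sigma
inside = sum(1 for _ in range(n)
             if math.hypot(random.gauss(0, 1), random.gauss(0, 1)) <= 1.0)
frac = inside / n
print(f"Monte Carlo: {frac:.3f}, analytic 1-exp(-0.5): {1 - math.exp(-0.5):.3f}")
```

Both numbers agree to a few parts in a thousand, confirming that a "1-sigma" contour in 2D encloses roughly 39% of the mass, not 68%.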
AutuanLiu/Python
nbs/class_object.ipynb
mit
[ "Classes and Objects", "# Enable multi-line output of results\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"", "Custom string formatting\n\nTo customize how a string is formatted, we need to define the __format__() method on the class", "_formats = {\n 'ymd' : '{d.year}-{d.month}-{d.day}',\n 'mdy' : '{d.month}/{d.day}/{d.year}',\n 'dmy' : '{d.day}/{d.month}/{d.year}'\n }\n\nclass Date:\n def __init__(self, year, month, day):\n self.year = year\n self.month = month\n self.day = day\n\n def __format__(self, code):\n if code == '':\n code = 'ymd'\n fmt = _formats[code]\n return fmt.format(d=self)\n\nd = Date(2012, 12, 21)\nd\nformat(d)\nformat(d, 'mdy')\n'The date is {:ymd}'.format(d)\n'The date is {:mdy}'.format(d)", "Supporting the context-management protocol\n\nTo make an object compatible with the with statement, you need to implement the __enter__() and __exit__() methods\nThe main idea behind writing a context manager is that your code runs inside a with block. When the with statement is reached, the object's __enter__() method is triggered, and its return value (if any) is assigned to the variable declared with as. Then the code inside the with block executes. Finally, the __exit__() method is triggered to perform cleanup\nThis control flow runs to completion no matter what happens inside the with block, even if an exception is raised\nEncapsulating attribute names in a class\n\nRather than relying on language features to encapsulate data, Python achieves encapsulation by following naming conventions for attributes and methods.\nAny name that starts with a single underscore _ should be treated as internal implementation (private)\nPython does not actually prevent anyone from accessing internal names, but doing so is certainly bad practice and can lead to fragile code\nThe leading-underscore convention also applies to module names and module-level functions\nNames that start with two underscores (__) are considered private and get renamed (name mangling)\nSometimes a variable you define clashes with a reserved keyword; in that case you can use a single underscore as a suffix", "class A:\n def __init__(self):\n self._internal = 0 # An internal attribute\n self.public = 1 # A public attribute\n \n# @classmethod\n def public_method(self):\n '''\n A public method\n '''\n print(2)\n\n def _internal_method(self):\n print(3)\n\na = A()\na.public\n# Python does not prevent access to internal attributes or methods, but it is not recommended\na._internal\na.public_method()\na._internal_method()\n\nclass B:\n def __init__(self):\n self.__private = 0\n\n def __private_method(self):\n pass\n\n def public_method(self):\n pass\n self.__private_method()", "Names that start with a double underscore are accessed in a mangled form. For example, in class B above, the private attributes are renamed to _B__private and _B__private_method respectively. You may ask what the purpose of this renaming is; the answer is inheritance: such attributes cannot be overridden through inheritance", "class C(B):\n def __init__(self):\n super().__init__()\n self.__private = 1 # Does not override B.__private\n\n # Does not override B.__private_method()\n def 
__private_method(self):\n pass", "Here, the private names __private and __private_method are renamed to _C__private and _C__private_method, which are completely different from the names in the parent class B", "# To avoid clashes with keywords\nlambda_ = 2.0 # Trailing _ to avoid clash with lambda keyword", "Creating managed attributes\n\nAdd extra processing to an instance attribute beyond plain access and modification, such as type checking or validation\nA simple way to customize an attribute is to define it as a property", "class Person:\n def __init__(self, first_name):\n self.first_name = first_name\n\n # Getter function\n @property\n def first_name(self):\n return self._first_name\n\n # Setter function\n @first_name.setter\n def first_name(self, value):\n if not isinstance(value, str):\n raise TypeError('Expected a string')\n self._first_name = value\n\n # Deleter function (optional)\n @first_name.deleter\n def first_name(self):\n raise AttributeError(\"Can't delete attribute\")", "The code above contains three related methods, all of which must have the same name\nThe first method is a getter function; it makes first_name a property. The other two methods attach setter and deleter functions to the first_name property. Note that the two decorators @first_name.setter and @first_name.deleter can only be defined after the first_name property has been created\nA key feature of a property is that it looks just like an ordinary attribute, but accessing it automatically triggers the getter, setter and deleter methods", "a = Person('autuanliu')\na.first_name\n\n# A type-checking method runs when the attribute is set\na.first_name = 43\n\na.first_name = \"autuan\"\n\ndel a.first_name", "Properties are also a way to define dynamically computed attributes. Attributes of this kind are not actually stored; they are computed on demand\nYou can turn methods into properties, so that after the constructor has run the methods can be accessed like attributes and are only computed when accessed", "import math\nclass Circle:\n def __init__(self, radius):\n self.radius = radius\n\n @property\n def area(self):\n return math.pi * self.radius ** 2\n\n @property\n def diameter(self):\n return self.radius * 2\n\n @property\n def perimeter(self):\n return 2 * math.pi * self.radius", "Using properties unifies all the access interfaces: the radius, diameter, perimeter and area are all accessed as attributes, just like simple attributes. Otherwise the code would mix simple attribute access with method calls", "# Instantiate a circle object\nxy = Circle(5.2)\n\n# The values can be accessed directly as attributes\nxy.area\nxy.diameter\nxy.perimeter", "Calling a parent-class method\n\nTo call a method of the parent class that has been overridden in a subclass, use the super() function\nA common use of super() is to make sure the parent class is properly initialized in the __init__() method\nAnother common use of super() appears in code that overrides Python's special methods", "class A:\n def spam(self):\n 
print('A.spam')\n\nclass B(A):\n def spam(self):\n print('B.spam')\n super().spam()\n\nb = B()\n\nb.spam()\n\nclass A:\n def __init__(self):\n self.x = 0\n\nclass B(A):\n def __init__(self):\n super().__init__()\n self.y = 1\n\nb = B()\nb.x\nb.y", "Using lazily computed properties\n\nYou want to define a read-only attribute as a property whose value is computed only when it is accessed. Once accessed, you want the result cached so it is not recomputed every time\nAn efficient way to define a lazy attribute is with a descriptor class\n\nSimplifying data-structure initialization\n\nYou write many classes that serve only as data structures and don't want to write lots of tedious __init__() functions; you can write one shared __init__() function in a base class and inherit from it\n\nDefining interfaces or abstract base classes\n\nYou want to define an interface or abstract class and use type checking to ensure that subclasses implement certain methods; the abc module makes it easy to define abstract base classes\nOne characteristic of an abstract class is that it cannot be instantiated directly\nThe purpose of an abstract class is to be inherited by other classes that implement its abstract methods\n@abstractmethod can also annotate static methods, class methods and properties. Just make sure the annotation sits immediately before the function definition", "from abc import ABCMeta, abstractmethod\n\n# Abstract class\nclass IStream(metaclass=ABCMeta):\n @abstractmethod\n def read(self, maxbytes=-1):\n pass\n\n @abstractmethod\n def write(self, data):\n pass\n\n# An abstract class cannot be instantiated directly\na = IStream()\n\nclass ss(IStream):\n def read(self, maxbytes=-1):\n print('read')\n \n def write(self, data):\n print('write')\n\naa = ss()\naa.read()\naa.write(data=1)\n\n# @abstractmethod can also annotate static methods, class methods and properties. Just keep the annotation immediately before the function definition\nclass A(metaclass=ABCMeta):\n @property\n @abstractmethod\n def name(self):\n pass\n\n @name.setter\n @abstractmethod\n def name(self, value):\n pass\n\n @classmethod\n @abstractmethod\n def method1(cls):\n pass\n\n @staticmethod\n @abstractmethod\n def method2():\n pass\n\n# Static method example\nimport math\nclass Circle1:\n def __init__(self, radius):\n self.radius = radius\n\n @staticmethod\n def area(self):\n return math.pi * self.radius ** 2\n\naaa = Circle1(3)\nCircle1.area(aaa)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
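The name-mangling behaviour described in the notebook above (private attributes renamed to `_B__private` etc., and therefore not overridden through inheritance) can be verified in a few lines:

```python
class B:
    def __init__(self):
        self.__private = 0  # stored on the instance as _B__private

class C(B):
    def __init__(self):
        super().__init__()
        self.__private = 1  # stored as _C__private, does not touch B's attribute

c = C()
# Both mangled names coexist on the same instance
print(c._B__private, c._C__private)
```

Because mangling only happens inside a class body, there is no attribute literally named `__private` on the instance.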
LeonhardFS/flightbbq
02_DataAcquisition_and_Preparation.ipynb
mit
[ "Data Acquisition and Preparation Process\nThis is our second process notebook. It describes our data acquisition process (sources and characteristics), the data preparation, and finally the creation of two specific subsets for special tasks of our analysis.\n\n1 Data Acquisition Process\n1.1 Flight Delay Central Database\n1.2 Aircraft Information Data\n1.3 Airport Information\n1.4 Weather Data\n\n\n2 Data Preparation Process\n2.1 Get the Main Delay Data for 2014 from Downloaded zip Files\n2.2 Combine Data with External Aircraft Data\n2.3 Combine Data with External Airport Location Data\n2.4 Save the Main Final Dataset\n\n\n3 Creation of Data Subsets for Weather Analysis and Predictive Models\n3.1 Create a Subset with External Weather Data for Selected Airports\n3.2 Creation of the Prediction Datasets\n\n\n\n1 Data Acquisition Process\nThe following sections will outline our data acquisition process. Furthermore, we will describe the most important characteristics and features of the datasets.\n<img src=\"images/DataSources.png\" align=\"left\" width=\"500\" height=\"500\">\n<dt>We used four different data sources</dt>\n1. The official flight database for every domestic flight in the US \n\nHistorical weather data\nAirport information with geodata and names (e.g. for visualization and interpretation of results) \nInformation about aircraft models\n\n1.1 Flight Delay Central Database\nOur main source of data was the Bureau of Transportation Statistics (BTS), which is a statistical agency of the US Department of Transportation (http://www.transtats.bts.gov/). Luckily, the BTS publishes detailed data for every domestic flight in the US (http://www.transtats.bts.gov/Tables.asp?DB_ID=120&DB_Name=Airline%20On-Time%20Performance%20Data&DB_Short_Name=On-Time). However, it is not possible to download the data over a specified period of time, i.e. one can only download data for a month in a given year. 
To get the data automatically, we developed a scraper tool in Python, which automatically performs the requests and downloads the data. Files addressing this issue can be found in the src folder. Scraping the data for ~25 years takes around 2-3 hours as requests are processed slowly on the server side. Furthermore, the BTS provides LookUp Tables for airline codes which have been downloaded manually.\nSince the uncompressed data for each month is around 250-300MB (comma separated), we needed to filter this dataset. A first step to do so is restricting the features. A description of all available columns is available at http://www.transtats.bts.gov/TableInfo.asp?Table_ID=236&DB_Short_Name=On-Time&Info_Only=0. In our analysis we will use a subset of 30 features that have been identified as relevant for the purpose of our analysis.\n1.2 Aircraft Information Data\nMany of us have experienced it before: a flight is delayed because there are some last-minute repairs or other problems with the aircraft. We are curious if the manufacturer or the age of the aircraft influences the probability of delays. Therefore we need more detailed data about the flight. From the delay dataset mentioned in 1.1 we get the tail number of the aircraft for every single flight. This tail number is comparable to car license plates and helps us to identify the manufacturer and age of the airplane. 
We downloaded the database from the following website:\nhttp://www.faa.gov/licenses_certificates/aircraft_certification/aircraft_registry/releasable_aircraft_download/.\n1.3 Airport Information\nIn addition, we need the exact geolocations for the airports in the dataset, for example to get good visualizations using Tableau Public or other maps. Furthermore, airport names would be helpful to interpret the data (the original dataset just contains the short IATA abbreviations). This data can be easily found online as CSVs (see http://openflights.org/data.html). It can be found in the folder data.\n1.4 Weather Data\nA natural cause for many delays seems to be the weather. We decided to additionally include weather data in our analysis. Therefore, we wanted to get historical weather information for the airports. Unfortunately, when it comes to weather data, we couldn't find any publicly available sources that provide a suitable (free) dataset. However, Wunderground provides a web interface that allows querying specific IATA / ICAO codes of airports (see e.g. http://www.wunderground.com/history/airport/EDDF/2005/10/3/DailyHistory.html?req_city=Frankfurt+%2F+Main&req_state=&req_statename=Germany&reqdb.zip=00000&reqdb.magic=5&reqdb.wmo=10637). Writing a script allowed us to get historic data for individual airports. \n<table>\n<tr><td>**events**</td><td>a list containing strings of weather events, i.e. 
\"Rain\", \"Fog\", \"Snow\"</td></tr>\n<tr><td>**humidity**</td><td>humidity measured in percent</td></tr>\n<tr><td>**precipitation**</td><td>precipitation measured in inches </td></tr>\n<tr><td>**sealevelpressure**</td><td>pressure at sea level in inches</td></tr>\n<tr><td>**snowdepth**</td><td>snow depth in inches</td></tr>\n<tr><td>**snowfall**</td><td>snowfall in inches</td></tr>\n<tr><td>**temperature**</td><td>temperature in degrees Fahrenheit</td></tr>\n<tr><td>**visibility**</td><td>visibility in miles</td></tr>\n<tr><td>**windspeed**</td><td>wind speed in miles per hour</td></tr>\n</table>\n\nOne drawback of this method is similar to getting the data from the BTS: the slow processing of requests from the server and the number of requests necessary to get data matching the huge dataset of the BTS (ca. 15 min for a single year and just one airport). Thus, we decided to focus on the weather at the John F. Kennedy International Airport (New York City) and at the Boston Logan International Airport (Boston) only. Although we just used these two airports, we could get some very valuable insights about weather's effects on delays (see process notebook for exploratory analysis). The detailed code for the web scraper can be found in the src-folder. It has not been included in this notebook, as it is not part of our actual analysis and would affect the readability of this notebook. 
The scraper creates a file weather_data.json in the data-folder that can be used for further analysis.\n2 Data Preparation Process", "# import required modules for data preparation tasks\nimport requests, zipfile, StringIO\nimport pandas as pd\nimport random\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\nimport re\nimport json\nimport os", "2.1 Get the Main Delay Data for 2014 from Downloaded zip Files\nFirst, we want to open and combine the zipped data files for each month of the delay data that has been downloaded according to the process outlined in the data aquisition section above. As we have more than 400,000 recorded flights each month, the dataset is extremely large. We therefore decided to focus on the subset of all flights in 2014 to get the most relevant flight information without missing certain months (this could be important when investigating seasonality effects).", "# reads all predefined months for a year and merge into one data frame\nrawData = pd.DataFrame()\nmonths = ['01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12']\nfor m in months:\n z = zipfile.ZipFile('cache/{y}{mo}.zip'.format(y=str(2014), mo = m))\n rawData = rawData.append(pd.read_csv(z.open(z.namelist()[0])))\n print \"Downloaded\", m\n# reset index of complete dataset for delays to prepare merging in next step\nrawData.reset_index(inplace=True)", "The columns we now have in the dataset are:", "rawData.columns", "However, we just need a subset of these columns for our analysis:", "selectedColumns = [u'index', u'FL_DATE', u'UNIQUE_CARRIER', u'TAIL_NUM', u'FL_NUM', \n u'ORIGIN', u'DEST', u'CRS_DEP_TIME', u'DEP_TIME', u'DEP_DELAY', u'TAXI_OUT', \n u'WHEELS_OFF', u'WHEELS_ON', u'TAXI_IN', u'CRS_ARR_TIME', u'ARR_TIME', u'ARR_DELAY', \n u'CANCELLED', u'DIVERTED', u'CANCELLATION_CODE', u'AIR_TIME', u'DISTANCE', \n u'CARRIER_DELAY', u'WEATHER_DELAY', u'NAS_DELAY', u'SECURITY_DELAY', u'LATE_AIRCRAFT_DELAY',\n u'ORIGIN_CITY_NAME', 
u'DEST_CITY_NAME']\nrawData = rawData[selectedColumns]", "2.2 Combine Data with External Aircraft Data\nWe also have two tables containing information about the aircraft and its manufacturer available as comma-separated text files in the data-folder as outlined in the section above. Both files will be loaded.", "z = zipfile.ZipFile('externalData/AircraftInformation.zip')\n# master table containing tail numbers of aircraft\ndf_master = pd.DataFrame.from_csv(z.open('MASTER.txt'))\n# detailed table containing information about manufacturer, age, etc.\ndf_aircrafts = pd.DataFrame.from_csv(z.open('ACFTREF.txt'))", "We can now join these two tables based on their common ID that is saved in the column MFR MDL CODE of the master table and in the index of the aircraft table respectively.", "master = df_master[['MFR MDL CODE', 'YEAR MFR']].reset_index()\naircrafts = df_aircrafts['MFR'].reset_index()\nmaster.columns = ['TAIL_NUM', 'CODE', 'YEAR']\naircrafts.columns = ['CODE', 'MFR']\njoined = pd.merge(master, aircrafts, how='left', on='CODE')", "We now join this aircraft information with our delay data and extend the original dataset with two new features: the year in which the aircraft was built (to determine the age) and the manufacturer.", "delayFinal = rawData[['TAIL_NUM','UNIQUE_CARRIER']]\ndelayFinal.TAIL_NUM = delayFinal.TAIL_NUM.str.strip('N')\ndelaymfr = pd.merge(delayFinal, joined, how='left', on=['TAIL_NUM'])\nrawData['AIRCRAFT_YEAR'] = delaymfr.YEAR\nrawData['AIRCRAFT_MFR'] = delaymfr.MFR", "2.3 Combine Data with External Airport Location Data\nNow we load an external dataset that contains the geolocations for each commercial airport in the world. 
We filter this to get only the airports in the US and then assign the respective geocode of the origin airport to our original delay dataset by merging both tables.", "airportLocation = pd.DataFrame.from_csv('externalData/airport_codes_with_geo_name_ids_and_nl_names-2008-04-14.csv', header=None)\nusAirports = airportLocation[airportLocation[4]=='US'].reset_index()\n# we just need a subset of the columns (origin, latitude and longitude)\nusAirports = usAirports[[0, 5, 6]]\nusAirports.columns = ['ORIGIN', 'LAT', 'LONG']\ncomplete2014Data = pd.merge(rawData, usAirports, how='left', on='ORIGIN')\n\n1.0*np.sum(complete2014Data.LAT.isnull())/complete2014Data.shape[0]", "Just 0.7% of all flight origins could not be located, so the merge was quite successful.\n2.4 Save the Main Final Dataset\nThe resulting dataframe complete2014Data will be locally stored as a CSV file.", "complete2014Data.to_csv('cache/complete2014Data.csv')", "3 Creation of Data Subsets for Weather Analysis and Predictive Models\n3.1 Create a Subset with External Weather Data for Selected Airports\nAs outlined in the previous section, we also scraped historical weather data for major US airports from the web. This data can be used as additional features for each flight to get information about the current weather conditions at the airport of departure. 
The script assumes that there is the weather_data.json file in the data-folder and that this file contains the respective weather information for the JFK airport in new york and the BOS airport in Boston for each day in 2014.", "# load the weather file\nweatherFile = os.path.join('data', 'weather_data.json')\nwith open(weatherFile) as infile:\n weatherDict = json.load(infile)\n\n# extract the weather data for new york and boston out of the json file and save it in weather_df\ndates = []\nframes = []\n\n# create df for weather in new york\nfor datapoint in weatherDict['JFK']:\n date = datapoint['date']\n frames.append(pd.DataFrame(datapoint['data'], index=['%s-%s-%s' % (date[0:4], date[4:6], date[6:8])]))\nweather_jfk = pd.concat(frames).reset_index()\n\n# create df for weather in boston\nfor datapoint in weatherDict['BOS']:\n date = datapoint['date']\n frames.append(pd.DataFrame(datapoint['data'], index=['%s-%s-%s' % (date[0:4], date[4:6], date[6:8])]))\nweather_bos = pd.concat(frames).reset_index()\n\n# get just the departures for the John F. Kennedy airport in New York City and Logan airport in Boston\njfk_delays = complete2014Data[complete2014Data.ORIGIN=='JFK']\nbos_delays = complete2014Data[complete2014Data.ORIGIN=='BOS']\n\n# merge delays with weather_df created above\njfk_dalayWeather = pd.merge(jfk_delays, weather_jfk, how='left', left_on='FL_DATE', right_on = 'index')\nbos_dalayWeather = pd.merge(bos_delays, weather_bos, how='left', left_on='FL_DATE', right_on = 'index')\n\njfk_bos_comparison = pd.concat([jfk_dalayWeather, bos_dalayWeather]).reset_index()\n\n# save everything in a csv\njfk_bos_comparison.to_csv('cache/jfk_bos_weather_2014.csv', encoding='UTF-8')", "3.2 Creation of the Prediction Datasets\nBefore evaluating any models on the data, we have to clean it a bit.\nPruning the data\nWhen cleaning the data set, we have to remove the following entries:\n\nflights that have been cancelled or diverted. We focus on predicting the delay. 
As a result, we also remove the columns associated with diverted flights.\ncolumns that give the answer. This is the case for many columns related to the arrival of the plane\nrows where a value is missing\n\nNote that data points have to be cleaned in this order because most flights have empty entries for the 'diverted' columns.", "#entries to be dropped in the analysis\nflight_data_dropped = ['QUARTER', 'DAY_OF_MONTH', 'AIRLINE_ID', 'CARRIER', 'FL_NUM', 'TAIL_NUM']\n\nlocation_data_dropped = ['ORIGIN_STATE_FIPS', 'ORIGIN_STATE_NM',\\\n 'ORIGIN_WAC', 'DEST_STATE_FIPS', \\\n 'DEST_STATE_NM', 'DEST_WAC']\n\ndeparture_data_dropped = ['DEP_TIME', 'DEP_DELAY', 'DEP_DELAY_NEW', 'DEP_DEL15', 'DEP_DELAY_GROUP',\\\n 'DEP_TIME_BLK', 'TAXI_OUT', 'WHEELS_OFF']\n\narrival_data_dropped = ['WHEELS_ON', 'TAXI_IN', 'ARR_TIME', 'ARR_DELAY_NEW',\\\n 'ARR_DELAY_GROUP', 'ARR_TIME_BLK']\n\ncancel_data_dropped = ['CANCELLED','CANCELLATION_CODE', 'DIVERTED']\n\nsummaries_dropped = ['CRS_ELAPSED_TIME', 'AIR_TIME', 'FLIGHTS']\n\ncause_delay_dropped = ['CARRIER_DELAY', 'WEATHER_DELAY', 'NAS_DELAY', 'SECURITY_DELAY', 'LATE_AIRCRAFT_DELAY']\n\ngate_return_dropped = ['FIRST_DEP_TIME', 'TOTAL_ADD_GTIME', 'LONGEST_ADD_GTIME']\n\ndiverted_data_dropped = ['DIV_AIRPORT_LANDINGS', 'DIV_REACHED_DEST', 'DIV_ACTUAL_ELAPSED_TIME', \\\n 'DIV_ARR_DELAY', 'DIV_DISTANCE', 'DIV1_AIRPORT', 'DIV1_WHEELS_ON', \\\n 'DIV1_TOTAL_GTIME', 'DIV1_LONGEST_GTIME', 'DIV1_WHEELS_OFF', \\\n 'DIV1_TAIL_NUM', 'DIV2_AIRPORT', 'DIV2_WHEELS_ON', \\\n 'DIV2_TOTAL_GTIME', 'DIV2_LONGEST_GTIME', 'DIV2_WHEELS_OFF', \\\n 'DIV2_TAIL_NUM', 'DIV3_AIRPORT', 'DIV3_WHEELS_ON', \\\n 'DIV3_TOTAL_GTIME', 'DIV3_LONGEST_GTIME', 'DIV3_WHEELS_OFF', 'DIV3_TAIL_NUM', \\\n 'DIV4_AIRPORT', 'DIV4_WHEELS_ON', 'DIV4_TOTAL_GTIME', 'DIV4_LONGEST_GTIME', \\\n 'DIV4_WHEELS_OFF', 'DIV4_TAIL_NUM', 'DIV5_AIRPORT', 'DIV5_WHEELS_ON', \\\n 'DIV5_TOTAL_GTIME', 'DIV5_LONGEST_GTIME', 'DIV5_WHEELS_OFF', 'DIV5_TAIL_NUM']\n\nother_dropped = ['Unnamed: 
93']\n\ncolumns_dropped = flight_data_dropped + location_data_dropped + departure_data_dropped + arrival_data_dropped \\\n + cancel_data_dropped + summaries_dropped + cause_delay_dropped + gate_return_dropped + diverted_data_dropped \\\n + other_dropped\n\ndef clean(data, list_col):\n ''' \n Creates a dataset by excluding undesirable columns\n\n Parameters:\n -----------\n\n data: pandas.DataFrame\n Flight dataframe \n\n list_col: <list 'string'>\n Columns to exclude from the data set\n '''\n\n # security check to drop only columns that exist\n list_col = list(set(list_col) & set(data.columns))\n \n res = data[(data.CANCELLED == 0) & (data.DIVERTED == 0)]\n res.drop(list_col, axis=1, inplace=True)\n res.dropna(axis = 0, inplace = True)\n return res\n\n%%time\ndata2014 = clean(complete2014Data, columns_dropped)\nprint data2014.columns", "Filtering the data for active airlines only\nAs we want to predict delay times, we throw out any flights that are operated by a shutdown airline.", "df_active_airlines = pd.read_csv('data/cur_airlines.txt', header=None)\ndf_active_airlines.columns = [['UNIQUE_CARRIER']];\ndf_active_airlines.head()\n\nfilteredData2014 = data2014.merge(df_active_airlines, on=['UNIQUE_CARRIER', 'UNIQUE_CARRIER'], how='inner')
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Migal/opt_ctrl_lab_1
lab_1/tests/lab_1_test.ipynb
bsd-3-clause
[ "print(__doc__)\n\n# Author: Ivan Migal ivan.migal@mail.ru\n# License: BSD 3 clause\n\nimport math\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\nfrom mpl_toolkits.mplot3d import Axes3D\n\n%matplotlib inline\nrcParams['figure.figsize'] = 12, 12\nplt.style.use('ggplot')\n# Font configuration\nfont = {'family' : 'DejaVu Sans',\n 'weight' : 'bold',\n 'size' : 16}\n\nmatplotlib.rc('font', **font)", "Helper functions", "# Density of heat sources\ndef func(s, t):\n #return 0.\n return s + t * 4.\n \n# Ambient temperature\ndef p(t):\n return math.cos(2 * t * math.pi)\n #return t \n\ndef array(f, numval, numdh):\n \"\"\"Create an N-dimensional array.\n \n param: f - a function that takes N arguments.\n param: numval - value ranges of the function parameters. List\n param: numdh - steps for the parameters. List\n \n \"\"\"\n def rec_for(f, numdim, numdh, current_l, l_i, arr):\n \"\"\"Recursive loop.\n \n param: f - a function that takes N arguments.\n param: numdim - dimensions of the output matrix. List\n param: numdh - steps for the parameters. List\n param: current_l - current recursion depth.\n param: l_i - intermediate list of indices. List\n param: arr - the matrix we are working with. np.array\n \n \"\"\"\n for i in range(numdim[current_l]):\n l_i.append(i)\n if current_l < len(numdim) - 1:\n rec_for(f, numdim, numdh, current_l + 1, l_i, arr)\n else:\n args = (np.array(l_i) * np.array(numdh))\n arr[tuple(l_i)] = f(*args)\n l_i.pop()\n return arr\n numdim = [int(numval[i] / numdh[i]) + 1 for i in range(len(numdh))]\n arr = np.zeros(numdim)\n arr = rec_for(f, numdim, numdh, 0, [], arr)\n # For plotting we need x - j, y - i, so we use transpose\n arr = np.transpose(arr)\n return arr\n\ndef TDMA(a, b, c, f):\n \"\"\"Thomas algorithm (tridiagonal matrix algorithm).\n \n param: a - lower subdiagonal. \n param: b - upper subdiagonal.\n param: c - main diagonal.\n param: f - right-hand side.\n \"\"\"\n #a, b, c, f = map(lambda k_list: map(float, k_list), (a, b, c, f))\n \n alpha = [0]\n beta = [0]\n n = len(f)\n x = [0] * n\n\n for i in range(n - 1):\n alpha.append(-b[i] / (a[i] * alpha[i] + c[i]))\n beta.append((f[i] - a[i] * beta[i]) / (a[i] * alpha[i] + c[i]))\n\n x[n - 1] = (f[n - 1] - a[n - 1] * beta[n - 1]) / (c[n - 1] + a[n - 1] * alpha[n - 1])\n\n for i in reversed(range(n - 1)):\n x[i] = alpha[i + 1] * x[i + 1] + beta[i + 1]\n\n return x", "Test for the Thomas algorithm\nsource: http://old.exponenta.ru/educat/class/courses/vvm/theme_5/example.asp\nAnswer: (-3, 1, 5, -8)", "a = [0, 1, 1, 1]\nc = [2, 10, -5, 4]\nb = [1, -5, 2, 0]\nf = [-5, -18, -40, -27]\nx = TDMA(a, b, c, f)\nx", "source: http://kontromat.ru/?page_id=4980 (The answer there is wrong, the minus sign on the 5 is missing)\nAnswer: (-10, 5, -2, -10)", "a = [0, -3, -5, -6, -5]\nc = [2, 8, 12, 18, 10]\nb = [-1, -1, 2, -4, 0]\nf = [-25, 72, -69, -156, 20]\nx = TDMA(a, b, c, f)\nx", "Tests for array creation", "X_ = np.arange(0., 1.01, .1)\nY_ = np.arange(0., 2.01, .01)\n#print(np.shape(X_))\nX_, Y_ = np.meshgrid(X_, Y_)\nprint(np.shape(X_), np.shape(Y_))\n\nX_\n\nY_\n\narr = array(func, [1., 2.], [.1, .01])\nprint(np.shape(arr))\narr\n\nZ = arr\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot_surface(X_, Y_, Z, color='r')\nplt.xlabel('s')\nplt.ylabel('t')\n\nplt.show()\n\narr = array(p, [1.], [.001])\narr", "Creating the model class\nLinks:\nThree-point derivatives", "# Model class for lab 1\nclass Lab1OptCtrlModel():\n \n def __init__(self, p_d):\n self.a, self.l, self.v, self.T = p_d['a'], p_d['l'], p_d['v'], p_d['T']\n self.p, self.f = p_d['p(t)'], p_d['f(s, t)']\n self.p_min, self.p_max, self.R = p_d['p_min'], p_d['p_max'], p_d['R']\n self.fi, self.y = p_d['fi(s)'], p_d['y(s)']\n \n self.dh, self.dt = p_d['dh'], p_d['dt']\n self.N, self.M = p_d['N'], p_d['M']\n \n self.p_arr = []\n 
self.p_arr.append(array(self.p, [p_d['T']], [p_d['dt']]))\n \n self.f_arr = array(f, [p_d['l'], p_d['T']], [p_d['dh'], p_d['dt']])\n \n self.x_arr = []\n self.x_arr.append(array(self.f, [p_d['l'], p_d['T']], [p_d['dh'], p_d['dt']]))\n self.x_arr[0][0,:] = array(self.fi, [p_d['l']], [p_d['dh']])\n \n def Solve(self, eps=10**-5):\n \n # Число уравнений\n eq_l = self.N - 1\n \n # Инициализация элементов для метода прогонки, которые постоянны\n a, b, c = [0. for i in range(eq_l)], [0. for i in range(eq_l)], [0. for i in range(eq_l)]\n f = [0. for i in range(eq_l)]\n \n a2_dt_dh2 = self.a ** 2 * self.dt / self.dh ** 2\n buf = 1. / (3. + 2. * self.dh * self.v)\n \n # a\n a[1:-1] = [a2_dt_dh2 for i in range(1, eq_l - 1)]\n # Эта часть зависит от апроксимации, которую мы используем, поэтому стоит ввести функцию\n a[-1] = a2_dt_dh2 * (1. - buf)\n \n # b\n # Эта часть зависит от апроксимации, которую мы используем, поэтому стоит ввести функцию\n b[0] = 2. / 3. * a2_dt_dh2\n b[1:-1] = [a2_dt_dh2 for i in range(1, eq_l - 1)]\n \n # c\n # Эта часть зависит от апроксимации, которую мы используем, поэтому стоит ввести функцию\n c[0] = -2. / 3. * a2_dt_dh2 - 1.\n c[1:-1] = [-1. - 2. * a2_dt_dh2 for i in range(1, eq_l - 1)]\n # Эта часть зависит от апроксимации, которую мы используем, поэтому стоит ввести функцию\n c[-1] = -1. + a2_dt_dh2 * (4. * buf - 2.)\n \n ind = 0\n # Решаем 1 задачу\n for j in range(0, self.M):\n \n # f\n f[0:-1] = [-self.x_arr[ind][j, i] - self.dt * self.f_arr[j, i] for i in range(1, eq_l)]\n # Эта часть зависит от апроксимации, которую мы используем, поэтому стоит ввести функцию\n f[-1] = -self.x_arr[ind][j, -2] - self.dt * self.f_arr[j, -2]\n f[-1] += -a2_dt_dh2 * 2. 
* self.dh * self.v * buf * self.p_arr[ind][j + 1]\n \n # Решаем задачу\n \n self.x_arr[ind][j + 1,1:1 + eq_l] = TDMA(a, b, c, f)\n \n # Вычисляем первый и последний элементы\n # Эта часть зависит от апроксимации, которую мы используем, поэтому стоит ввести функцию\n self.x_arr[ind][j + 1, 0] = 4. / 3. * self.x_arr[ind][j + 1, 1] - 1. / 3. * self.x_arr[ind][j + 1, 2]\n self.x_arr[ind][j + 1, -1] = 4 * buf * self.x_arr[ind][j + 1, -2]\n self.x_arr[ind][j + 1, -1] -= buf * self.x_arr[ind][j + 1, -3]\n self.x_arr[ind][j + 1, -1] += 2. * self.dh * self.v * buf * self.p_arr[ind][j + 1]\n \n return self.x_arr[ind]", "Тесты для 1 задачи", "# Словарь параметров\np_d = {}\n\n# Заданные положительные величины\np_d['a'], p_d['l'], p_d['v'], p_d['T'] = 10., 3., 4., 20.\n\n# Решение тестового примера\ndef x(s, t):\n return math.sin(t) + math.sin(s + math.pi / 2)\n\n# Плотность источников тепла\ndef f(s, t):\n return math.cos(t) + p_d['a'] ** 2 * math.sin(s + math.pi / 2)\n \n# Температура внешней среды\ndef p(t):\n return 1. 
/ p_d['v'] * math.cos(p_d['l'] + math.pi / 2) + math.sin(t) + math.sin(p_d['l'] + math.pi / 2)\n \n# Распределение температуры в начальный момент времени\ndef fi(s):\n return math.sin(s + math.pi / 2)\n\np_d['p(t)'] = p\n\np_d['f(s, t)'] = f\n\n# Заданные числа\np_d['p_min'], p_d['p_max'], p_d['R'] = -10., 10., 100.\n\np_d['fi(s)'] = fi\n\n# Желаемое распределение температуры\ndef y(s):\n return s\n\np_d['y(s)'] = y\n\n# Число точек на пространственной и временной сетке соответственно\np_d['N'], p_d['M'] = 10, 100\n\n# Шаг на пространственной и временной сетке соответственно\np_d['dh'], p_d['dt'] = p_d['l'] / p_d['N'], p_d['T'] / p_d['M']\np_d['l'], p_d['T'], p_d['dh'], p_d['dt']\n\nX_ = np.arange(0., p_d['l'] + p_d['dh'], p_d['dh'])\nY_ = np.arange(0., p_d['T'] + p_d['dt'], p_d['dt'])\nX_, Y_ = np.meshgrid(X_, Y_)\nprint(np.shape(X_), np.shape(Y_))\n\nmodel = Lab1OptCtrlModel(p_d)\nZ = model.x_arr[0]\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot_surface(X_, Y_, Z)\nplt.xlabel('s')\nplt.ylabel('t')\n\nplt.show()\n\nx_arr = model.Solve()\n\nx_arr_1 = array(x, [p_d['l'], p_d['T']], [p_d['dh'], p_d['dt']])\n\nabs(x_arr - x_arr_1)\n\nnp.max(abs(x_arr - x_arr_1))\n\nZ = x_arr_1\n\nfig = plt.figure()\nax = fig.add_subplot(211, projection='3d')\nax.plot_surface(X_, Y_, Z, color='b')\nZ = x_arr\nax.plot_surface(X_, Y_, Z, color='r')\nplt.xlabel('s')\nplt.ylabel('t')\n\nplt.show()\n\nZ = abs(x_arr - x_arr_1)\n\nfig = plt.figure()\nax = fig.add_subplot(211, projection='3d')\nax.plot_surface(X_, Y_, Z, color='b')\nplt.xlabel('s')\nplt.ylabel('t')\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mitdbg/modeldb
demos/webinar-2020-5-6/01-ad_hoc/01-train/NLP training.ipynb
mit
[ "NLP training example\nIn this example, we'll train an NLP model for sentiment analysis of tweets using spaCy.\nFirst we download spaCy language libraries.", "!python -m spacy download en_core_web_sm", "And import the boilerplate code.", "from __future__ import unicode_literals, print_function\n\nimport boto3\nimport json\nimport numpy as np\nimport pandas as pd\nimport spacy", "Data prep\nDownload the dataset from S3.", "S3_BUCKET = \"verta-strata\"\nS3_KEY = \"english-tweets.csv\"\nFILENAME = S3_KEY\n\nboto3.client('s3').download_file(S3_BUCKET, S3_KEY, FILENAME)", "Clean and load data using our library.", "import utils\n\ndata = pd.read_csv(FILENAME).sample(frac=1).reset_index(drop=True)\nutils.clean_data(data)\n\ndata.head()", "Train the model\nWe'll use a pre-trained model from spaCy and fine tune it in our new dataset.", "nlp = spacy.load('en_core_web_sm')", "Update the model with the current data using our library.", "import training\n\ntraining.train(nlp, data, n_iter=20)", "Now we save the model back into S3 to a well known location (make sure it's a location you can write to!) so that we can fetch it later.", "filename = \"/tmp/model.spacy\"\nwith open(filename, 'wb') as f:\n f.write(nlp.to_bytes())\n\nboto3.client('s3').upload_file(filename, S3_BUCKET, \"models/01/model.spacy\")\n\nfilename = \"/tmp/model_metadata.json\"\nwith open(filename, 'w') as f:\n f.write(json.dumps(nlp.meta))\n\nboto3.client('s3').upload_file(filename, S3_BUCKET, \"models/01/model_metadata.json\")", "Deployment\nGreat! Now you have a model that you can use to run predictions against. Follow the next step of this tutorial to see how to do it." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bryanwweber/PyKED
docs/rcm-example.ipynb
bsd-3-clause
[ "RCM modeling with varying reactor volume\nThis example is available as an ipynb (Jupyter Notebook) file in the main GitHub repository at https://github.com/pr-omethe-us/PyKED/blob/master/docs/rcm-example.ipynb\nThe ChemKED file that will be used in this example can be found in the\ntests directory of the PyKED\nrepository at https://github.com/pr-omethe-us/PyKED/blob/master/pyked/tests/testfile_rcm.yaml.\nExamining that file, we find the first section specifies the information about\nthe ChemKED file itself:\nyaml\nfile-authors:\n - name: Kyle E Niemeyer\n ORCID: 0000-0003-4425-7097\nfile-version: 0\nchemked-version: 0.4.0\nThen, we find the information regarding the article in the literature from which\nthis data was taken. In this case, the dataset comes from the work of\nMittal et al.:\nyaml\nreference:\n doi: 10.1002/kin.20180\n authors:\n - name: Gaurav Mittal\n - name: Chih-Jen Sung\n ORCID: 0000-0003-2046-8076\n - name: Richard A Yetter\n journal: International Journal of Chemical Kinetics\n year: 2006\n volume: 38\n pages: 516-529\n detail: Fig. 
6, open circle\nexperiment-type: ignition delay\napparatus:\n kind: rapid compression machine\n institution: Case Western Reserve University\n facility: CWRU RCM\nFinally, this file contains just a single datapoint, which describes the experimental\nignition delay, initial mixture composition, initial temperature, initial pressure,\ncompression time, ignition type, and volume history that specifies\nhow the volume of the reactor varies with time, for simulating the compression\nstroke and post-compression processes:\nyaml\ndatapoints:\n- temperature:\n - 297.4 kelvin\n ignition-delay:\n - 1.0 ms\n pressure:\n - 958.0 torr\n composition:\n kind: mole fraction\n species:\n - species-name: H2\n InChI: 1S/H2/h1H\n amount:\n - 0.12500\n - species-name: O2\n InChI: 1S/O2/c1-2\n amount:\n - 0.06250\n - species-name: N2\n InChI: 1S/N2/c1-2\n amount:\n - 0.18125\n - species-name: Ar\n InChI: 1S/Ar\n amount:\n - 0.63125\n ignition-type:\n target: pressure\n type: d/dt max\n rcm-data:\n compression-time:\n - 38.0 ms\n time-histories:\n - type: volume\n time:\n units: s\n column: 0\n volume:\n units: cm3\n column: 1\n values:\n - [0.00E+000, 5.47669375000E+002]\n - [1.00E-003, 5.46608789894E+002]\n - [2.00E-003, 5.43427034574E+002]\n ...\nThe values for the volume history in the time-histories key are truncated here to save space. One application of the\ndata stored in this file is to perform a simulation using Cantera to\ncalculate the ignition delay, including the facility-dependent effects represented in the volume\ntrace. All information required to perform this simulation is present in the ChemKED file, with the\nexception of a chemical kinetic model for H<sub>2</sub>/CO combustion.\nIn Python, additional functionality can be imported into a script or session by the import\nkeyword. Cantera, NumPy, and PyKED must be imported into the session so that we can work with the\ncode. 
In the case of Cantera and NumPy, we will use many functions from these libraries, so we\nassign them abbreviations (ct and np, respectively) for convenience. From PyKED, we\nwill only be using the ChemKED class, so this is all that is imported:", "import cantera as ct\nimport numpy as np\nfrom pyked import ChemKED", "Next, we have to load the ChemKED file and retrieve the first element of the datapoints\nlist. Although this file only encodes a single experiment, the datapoints attribute will\nalways be a list (in this case, of length 1). The elements of the\ndatapoints list are instances of the DataPoint class, which we store in the variable\ndp. To load the YAML file from the web, we also import and use the PyYAML package, and the built-in urllib package, and use the dict_input argument to ChemKED to read the information.", "from urllib.request import urlopen\nimport yaml\nrcm_link = 'https://raw.githubusercontent.com/pr-omethe-us/PyKED/master/pyked/tests/testfile_rcm.yaml'\nwith urlopen(rcm_link) as response:\n testfile_rcm = yaml.safe_load(response.read())\nck = ChemKED(dict_input=testfile_rcm)\ndp = ck.datapoints[0]", "The initial temperature, pressure, and mixture composition can be read from the\ninstance of the DataPoint class. PyKED uses instances of the Pint Quantity class to\nstore values with units, while Cantera expects a floating-point value in SI\nunits as input. Therefore, we use the built-in capabilities of Pint to convert\nthe units from those specified in the ChemKED file to SI units, and we use the magnitude\nattribute of the Quantity class to take only the numerical part. We also retrieve the\ninitial mixture mole fractions in a format Cantera will understand:", "T_initial = dp.temperature.to('K').magnitude\nP_initial = dp.pressure.to('Pa').magnitude\nX_initial = dp.get_cantera_mole_fraction()", "With these properties defined, we have to create the objects in Cantera that represent the physical\nstate of the system to be studied. 
In Cantera, the Solution class stores the thermodynamic,\nkinetic, and transport data from an input file in the CTI format. After the Solution object\nis created, we can set the initial temperature, pressure, and mole fractions using the TPX\nattribute of the Solution class. In this example, we will use the GRI-3.0 as the chemical kinetic mechanism for H<sub>2</sub>/CO combustion. GRI-3.0 is built-in to Cantera, so no other input files are needed.", "gas = ct.Solution('gri30.xml')\ngas.TPX = T_initial, P_initial, X_initial", "With the thermodynamic and kinetic data loaded and the initial conditions defined, we need to\ninstall the Solution instance into an IdealGasReactor which implements the equations\nfor mass, energy, and species conservation. In addition, we create a Reservoir to represent\nthe environment external to the reaction chamber. The input file used for the environment,\nair.xml, is also included with Cantera and represents an average composition of air.", "reac = ct.IdealGasReactor(gas)\nenv = ct.Reservoir(ct.Solution('air.xml'))", "To apply the effect of the volume trace to the IdealGasReactor, a Wall must be\ninstalled between the reactor and environment and assigned a velocity. The Wall allows the\nenvironment to do work on the reactor (or vice versa) and change the reactor's thermodynamic state;\nwe use a Reservoir for the environment because in Cantera, Reservoirs always have a\nconstant thermodynamic state and composition. Using a Reservoir accelerates the solution\ncompared to using two IdealGasReactors, since the composition and state of the environment\nare typically not necessary for the solution of autoignition problems. 
Although we do not show the\ndetails here, a reference implementation of a class that computes the wall velocity given the volume\nhistory of the reactor is available in CanSen, in the\ncansen.profiles.VolumeProfile class, which we import here:", "from cansen.profiles import VolumeProfile\nexp_time = dp.volume_history.time.magnitude\nexp_volume = dp.volume_history.volume.magnitude\nkeywords = {'vproTime': exp_time, 'vproVol': exp_volume}\nct.Wall(reac, env, velocity=VolumeProfile(keywords));", "Then, the IdealGasReactor is installed in a ReactorNet. The ReactorNet\nimplements the connection to the numerical solver (CVODES is\nused in Cantera) to solve the energy and species equations. For this example, it is best practice\nto set the maximum time step allowed in the solution to be the minimum time difference in the time array from the volume trace:", "netw = ct.ReactorNet([reac])\nnetw.set_max_time_step(np.min(np.diff(exp_time)))", "To calculate the ignition delay, we will follow the definition specified in the ChemKED file for\nthis experiment, where the experimentalists used the maximum of the time derivative of the pressure\nto define the ignition delay. To calculate this derivative, we need to store the state variables and the composition on each time step, so we initialize several Python lists to act as storage:", "time = []\ntemperature = []\npressure = []\nvolume = []\nmass_fractions = []", "Finally, the problem is integrated using the step method of the ReactorNet. The\nstep method takes one timestep forward on each call, with step size determined by the CVODES\nsolver (CVODES uses an adaptive time-stepping algorithm). On each step, we add the relevant variables\nto their respective lists. 
The problem is integrated until a user-specified end time, in this case\n50 ms, although in principle, the user could end the simulation on any condition\nthey choose:", "while netw.time < 0.05:\n time.append(netw.time)\n temperature.append(reac.T)\n pressure.append(reac.thermo.P)\n volume.append(reac.volume)\n mass_fractions.append(reac.Y)\n netw.step()", "At this point, the user would post-process the information in the pressure list to calculate\nthe derivative by whatever algorithm they choose. We will plot the pressure versus the time of the simulation using the Matplotlib library:", "%matplotlib notebook\nimport matplotlib.pyplot as plt\n\nplt.figure()\nplt.plot(time, pressure)\nplt.ylabel('Pressure [Pa]')\nplt.xlabel('Time [s]');", "We can also plot the volume trace and compare to the values derived from the ChemKED file.", "plt.figure()\n\nplt.plot(exp_time, exp_volume/exp_volume[0], label='Experimental volume', linestyle='--')\nplt.plot(time, volume, label='Simulated volume')\nplt.legend(loc='best')\nplt.ylabel('Volume [m^3]')\nplt.xlabel('Time [s]');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
penguinmenac3/ml-notebooks
Machine Learning MNIST with TF.ipynb
gpl-3.0
[ "Machine Learning MNIST with TF\nThis notebook is based upon the notebook published here https://github.com/random-forests/tutorials/blob/master/ep7.ipynb.\nI simply adopt it to the current Tensorflow Version (1.0.0).", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport tensorflow as tf\nlearn = tf.contrib.learn\ntf.logging.set_verbosity(tf.logging.ERROR)", "Import the dataset", "mnist = learn.datasets.load_dataset('mnist')\ndata = mnist.train.images\nlabels = np.asarray(mnist.train.labels, dtype=np.int32)\ntest_data = mnist.test.images\ntest_labels = np.asarray(mnist.test.labels, dtype=np.int32)", "There are 55k examples in train, and 10k in eval. You may wish to limit the size to experiment faster.", "max_examples = 10000\ndata = data[:max_examples]\nlabels = labels[:max_examples]", "Display some digits", "def display(i):\n img = test_data[i]\n plt.title('Example %d. Label: %d' % (i, test_labels[i]))\n plt.imshow(img.reshape((28,28)), cmap=plt.cm.gray_r) \n\ndisplay(0)\n\ndisplay(1)", "These digits are clearly drawn. Here's one that's not.", "display(8)", "Now let's take a look at how many features we have.", "print len(data[0])", "Fit a Linear Classifier\nOur goal here is to get about 90% accuracy with this simple classifier. 
For more details on how these work, see https://www.tensorflow.org/versions/r0.10/tutorials/mnist/beginners/index.html#mnist-for-ml-beginners", "feature_columns = learn.infer_real_valued_columns_from_input(data)\nclassifier = learn.LinearClassifier(feature_columns=feature_columns, n_classes=10)\nclassifier.fit(data, labels, batch_size=100, steps=1000)", "Evaluate accuracy", "classifier.evaluate(test_data, test_labels)\nprint classifier.evaluate(test_data, test_labels)[\"accuracy\"]", "Classify a few examples\nWe can make predictions on individual images using the predict method", "# here's one it gets right\nidx = [0]\npredictions = classifier.predict(x=np.array(test_data[idx]))\nfor i, p in enumerate(predictions):\n print(\"Predicted %d, Label: %d\" % (p, test_labels[idx[i]]))\n display(idx[i])\n\n# here's one it gets wrong\nidx = [8]\npredictions = classifier.predict(x=np.array(test_data[idx]))\nfor i, p in enumerate(predictions):\n print(\"Predicted %d, Label: %d\" % (p, test_labels[idx[i]]))\n display(idx[i])", "Visualize learned weights\nLet's see if we can reproduce the pictures of the weights in the TensorFlow Basic MNIST <a href=\"https://www.tensorflow.org/tutorials/mnist/beginners/index.html#mnist-for-ml-beginners\">tutorial</a>.", "weights = classifier.weights_\nf, axes = plt.subplots(2, 5, figsize=(10,4))\naxes = axes.reshape(-1)\nfor i in range(len(axes)):\n a = axes[i]\n a.imshow(weights.T[i].reshape(28, 28), cmap=plt.cm.seismic)\n a.set_title(i)\n a.set_xticks(()) # ticks be gone\n a.set_yticks(())\nplt.show()", "Next steps\n\nTensorFlow Docker images: https://hub.docker.com/r/tensorflow/tensorflow/ \nTF.Learn Quickstart: https://www.tensorflow.org/versions/r0.9/tutorials/tflearn/index.html\nMNIST tutorial: https://www.tensorflow.org/tutorials/mnist/beginners/index.html\nVisualizing MNIST: http://colah.github.io/posts/2014-10-Visualizing-MNIST/\nAdditional notebooks: 
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker/notebooks\nMore about linear classifiers: https://www.tensorflow.org/versions/r0.10/tutorials/linear/overview.html#large-scale-linear-models-with-tensorflow\nMuch more about linear classifiers: http://cs231n.github.io/linear-classify/\nAdditional TF.Learn samples: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/skflow" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jlawman/jlawman.github.io
content/sklearn/Walkthrough - Implementing the Random Forest Classifier for the First Time.ipynb
mit
[ "Implementing the Random Forest Classifier from sci-kit learn\n1. Import dataset\nThis tutorial uses the iris dataset (https://en.wikipedia.org/wiki/Iris_flower_data_set) which comes preloaded with sklearn.", "#Import dataset\nfrom sklearn.datasets import load_iris\niris = load_iris()", "2. Prepare training and testing data\nEach flower in this dataset contains the following features and labels\n* features - measurements of the flower petals and sepals\n* labels - the flower species (setosa, versicolor, or virginica) represented as a 0, 1, or 2.\nOur train_test_split function will seperate the data as follows\n* (features_train, labels_train) - 80% of the data prepared for training\n* (features_test, labels_test) - 20% of the data prepared for making our predictions and evaluating our model", "#Import train_test_split\nfrom sklearn.model_selection import train_test_split\n\nfeatures_train, features_test, labels_train, labels_test = train_test_split(iris.data,iris.target,test_size=0.2,random_state=1)", "3. Create and fit the Random Forest Classifier\nThis tutorial uses the RandomForestClassifier model for our predictions, but you can experiment with other classifiers. To do so, import another classifier and replace the relevant code in this section.", "#Import classifier\nfrom sklearn.ensemble import RandomForestClassifier\n\n#Create an instance of the RandomForestClassifier\nrfc = RandomForestClassifier()\n\n#Fit our model to the training features and labels\nrfc.fit(features_train,labels_train)", "4. 
Make Predictions using Random Forest Classifier", "rfc_predictions = rfc.predict(features_test)", "Understanding our predictions\nOur predictions will be an array of 0's, 1's, and 2's, depending on which flower our algorithm believes each set of measurements to represent.", "print(rfc_predictions)", "To interpret this, consider the first set of measurements in features_test:", "print(features_test[0])", "Our model believes that these measurements correspond to a setosa iris (label 0).", "print(rfc_predictions[0])", "In this case, our model is correct, since the true label indicates that this was a setosa iris (label 0).", "print(labels_test[0])", "5. Evaluate our model\nFor this section we will import two metrics from sklearn: confusion_matrix and classification_report. They will help us understand how well our model did.", "#Import pandas to create the confusion matrix dataframe\nimport pandas as pd\n\n#Import classification_report and confusion_matrix to evaluate our model\nfrom sklearn.metrics import classification_report, confusion_matrix", "As seen in the confusion matrix below, most predictions are accurate but our model misclassified one specimen of versicolor (our model thought that it was virginica).", "#Create a dataframe with the confusion matrix\nconfusion_df = pd.DataFrame(confusion_matrix(labels_test, rfc_predictions),\n columns=[\"Predicted \" + name for name in iris.target_names],\n index = iris.target_names)\n\nconfusion_df", "As seen in the classification report below, our model has 97% precision, recall, and accuracy.", "print(classification_report(labels_test,rfc_predictions))", "Note on the RandomForestClassifier from sklearn\nDocumentation with full explanation of parameters and use: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html.\nSome useful parameters to experiment with:\n- min_samples_leaf (the minimum number of samples that can be put into each leaf)\n- n_estimators (the number of decision trees)\n- max_features (the size of the subset of features to be examined at each split)\nAn optional feature to take advantage of:\n- oob_score (a way of seeing how well the estimator did by cross-validating on the \"out of bag\" data, i.e. the data\n for each tree that was not used in the sample). This would be useful if you didn't want to split your dataset into a training dataset and a test dataset.\nNote on metrics\nCheck out wikipedia if confusion matrices are new (https://en.wikipedia.org/wiki/Confusion_matrix) or if you want explanation on precision and recall (https://en.wikipedia.org/wiki/Precision_and_recall)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wcmckee/wcmckee.com
posts/redtube.ipynb
mit
[ "RedTube json Python", "import requests\nimport json\nimport random\n\nimport getpass\n#import couchdb\nimport pickle\nimport getpass\n#!flask/bin/python\n#from flask import Flask, jsonify\n\nmyusr = getpass.getuser()\n\nprint(myusr)\n\n#couch = couchdb.Server()\n\nwith open('/home/{}/prn.pickle'.format(myusr), 'rb') as handle:\n prnlis = pickle.load(handle)\n\n#db = couch.create('redtube') \n\n#db = couch['redtube']", "Requests and json are the two main modules used for this. Random can also be handy", "payload = {'output' : 'json', 'data' : 'redtube.Videos.searchVideos', 'page' : 1}\n\ngetprn = requests.get('http://api.redtube.com/', params = payload)\n\ndaprn = getprn.json()\n\nlevid = len(daprn['videos'])\n\nporndick = dict()\n\n#for lev in range(0, levid):\n# print(daprn['videos'][lev]['video'])\n# prntit = (daprn['videos'][lev]['video']['title'])\n# prnnow = prntit.replace(' ', '-')\n# prnlow = prnnow.lower()\n# print(prnlow)\n# try:\n# somelis = list()\n# for dapr in daprn['videos'][lev]['video']['tags']:\n# print(dapr['tag_name'])\n# somelis.append(dapr['tag_name'])\n# porndick.update({daprn['videos'][lev]['video']['video_id'] : {'tags' : \", \".join(str(x) for x in somelis)}})\n #db.save(porndick)\n #try:\n # db = couch.create(prnlow)\n #except PreconditionFailed:\n # db = couch[prnlow]\n #db.save({daprn['videos'][lev]['video']['video_id'] : {'tags' : \", \".join(str(x) for x in somelis)}})\n \n# except KeyError:\n# continue\n\n#for i in db:\n# print(i)\n\n#db.save(porndick)\n\n#for i in db:\n# print(db[i])\n\n#print(pornd['tags'])\n\n#loaPrn = json.loads(getPrn.text)\n#print loaUrl", "Convert it into readable text that you can work with", "lenvid = len(daprn[u'videos'])\n\nlenvid\n\n#aldic = dict()\n\nwith open('/home/{}/prn3.pickle'.format(myusr), 'rb') as handles:\n aldic = pickle.load(handles)\n\nimport shutil\n\nfor napn in range(0, lenvid):\n print(daprn[u'videos'][napn]['video']['url'])\n print(daprn[u'videos'][napn]['video']['title'])\n try:\n 
letae = len(daprn[u'videos'][napn]['video']['tags'])\n tagna = (daprn[u'videos'][napn]['video']['tags'])\n reqbru = requests.get('http://api.giphy.com/v1/gifs/translate?s={}&api_key=dc6zaTOxFJmzC'.format(tagna))\n brujsn = reqbru.json()\n print(brujsn['data']['images']['fixed_width']['url'])\n gurl = (brujsn['data']['images']['fixed_width']['url'])\n gslug = (brujsn['data']['slug'])\n #fislg = gslug.repl\n \n try:\n somelis = list()\n for dapr in daprn['videos'][lev]['video']['tags']:\n print(dapr['tag_name'])\n somelis.append(dapr['tag_name'])\n porndick.update({daprn['videos'][lev]['video']['video_id'] : {'tags' : \", \".join(str(x) for x in somelis)}})\n \n\n except KeyError:\n continue\n \n aldic.update({gslug : gurl})\n #print(gurl)\n '''\n with open('/home/pi/redtube/posts/{}.meta'.format(gslug), 'w') as blmet:\n blmet.write('.. title: ' + glug + ' \\n' + '.. slug: ' + nameofblogpost + ' \\n' + '.. date: ' + str(nowtime) + ' \\n' + '.. tags: ' + tagblog + '\\n' + '.. link:\\n.. description:\\n.. type: text')\n \n response = requests.get(gurl, stream=True)#\n response\n with open('/home/pi/redtube/galleries/{}.gif'.format(gslug), 'wb') as out_file:\n shutil.copyfileobj(response.raw, out_file)\n del response\n \n tan = tagna.replace(' ', '-')\n tanq = tan.lower()\n print(tanq)\n \n '''\n except KeyError:\n continue \n\nwith open('/home/{}/prn.pickle'.format(myusr), 'wb') as handle:\n pickle.dump(porndick, handle, protocol=pickle.HIGHEST_PROTOCOL)\n\nwith open('/home/{}/prn3.pickle'.format(myusr), 'wb') as handle:\n pickle.dump(aldic, handle, protocol=pickle.HIGHEST_PROTOCOL)\n\n#db.save(aldic)\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
ITAM-DS/analisis-numerico-computo-cientifico
libro_optimizacion/temas/1.computo_cientifico/1.5/Definicion_de_funcion_continuidad_derivada.ipynb
apache-2.0
[ "(FCD)=\n1.5 Definición de función, continuidad y derivada\n```{admonition} Notas para contenedor de docker:\nComando de docker para ejecución de la nota de forma local:\nnota: cambiar &lt;ruta a mi directorio&gt; por la ruta de directorio que se desea mapear a /datos dentro del contenedor de docker y &lt;versión imagen de docker&gt; por la versión más actualizada que se presenta en la documentación.\ndocker run --rm -v &lt;ruta a mi directorio&gt;:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:&lt;versión imagen de docker&gt;\npassword para jupyterlab: qwerty\nDetener el contenedor de docker:\ndocker stop jupyterlab_optimizacion\nDocumentación de la imagen de docker palmoreck/jupyterlab_optimizacion:&lt;versión imagen de docker&gt; en liga.\n```\n\nNota generada a partir de la liga1, liga2 e inicio de liga3.\n```{admonition} Al final de esta nota la comunidad lectora:\n:class: tip\n\n\nAprenderá las definiciones de función y derivada de una función en algunos casos de interés para el curso. 
Specifically, the case of the directional derivative is very important.\n\n\nYou will learn that the gradient and the Hessian of a function are, respectively, a vector of first derivatives (first-order information) and a matrix of second derivatives (second-order information).\n\n\nYou will learn some formulas used with the nabla differentiation operator.\n\n\nYou will learn the difference between algebraic or symbolic computation and numerical computation via the SymPy package.\n\n\n```\nFunction\n```{admonition} Definition\nA function, $f$, is a rule of correspondence between a set named the domain and another set named the codomain.\n```\nNotation\n$f: A \\rightarrow B$ is a function from a set $\\text{dom}f \\subseteq A$ into a set $B$.\n```{admonition} Remark\n:class: tip\n$\\text{dom}f$ (the domain of $f$) may be a proper subset of $A$, that is, some elements of $A$ are mapped to elements of $B$ and others are not.\n```\nIn what follows we consider the space $\\mathbb{R}^n$ and assume that sets and subsets lie in this space.\n(CACCI)=\nOpen set, closed set, closure and interior\n```{margin} \nA point $x$ is named a limit point of a set $X$ if there exists a sequence $\\{x_k\\} \\subset X$ that converges to $x$. The set of limit points is named the closure of $X$ and is denoted $\\text{cl}X$. \nA set $X$ is named closed if it is equal to its closure.\n```\n```{admonition} Definition\nThe interior of a set $X$ is the set of interior points: a point $x$ of a set $X$ is called interior if there exists a neighborhood of $x$ (an open set* containing $x$) contained in $X$.\n*A set $X$ is said to be open if $\\forall x \\in X$ there exists an open ball* centered at $x$ and contained in $X$. It is equivalent to say that $X$ is open if its complement $\\mathbb{R}^n \\setminus X$ is closed.\n*An open ball with radius $\\epsilon>0$ centered at $x$ is the set $B_\\epsilon(x) =\\{y \\in \\mathbb{R}^n : ||y-x|| < \\epsilon\\}$. See {ref}`Examples of graphs of norms in the plane <EGNP>` for examples of open balls in the plane.\n```\nIn what follows, $\\text{intdom}f$ is the interior of the domain of $f$. \nContinuity\n```{admonition} Definition\n$f: \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ is continuous at $x \\in \\text{dom}f$ if $\\forall \\epsilon >0 \\exists \\delta > 0$ such that:\n$$y \\in \\text{dom}f, ||y-x||_2 \\leq \\delta \\implies ||f(y)-f(x)||_2 \\leq \\epsilon$$\n```\n```{admonition} Comments\n\n\nIf $f$ is continuous at a point $x$ of the domain of $f$, then $f(y)$ is arbitrarily close to $f(x)$ for $y$ in the domain of $f$ close to $x$.\n\n\nAnother way to define that $f$ is continuous at $x \\in \\text{dom}f$ is with sequences and limits: if $\\{x_i\\}_{i \\in \\mathbb{N}} \\subseteq \\text{dom}f$ is a sequence of points in the domain of $f$ that converges to $x \\in \\text{dom}f$, $\\displaystyle \\lim_{i \\rightarrow \\infty}x_i = x$, and $f$ is continuous at $x$, then the sequence $\\{f(x_i)\\}_{i \\in \\mathbb{N}}$ converges to $f(x)$: $\\displaystyle \\lim_{i \\rightarrow \\infty}f(x_i) = f(x) = f \\left(\\displaystyle \\lim_{i \\rightarrow \\infty} x_i \\right )$.\n```\n\n\nNotation\n$\\mathcal{C}([a,b])=\\{\\text{functions } f:\\mathbb{R} \\rightarrow \\mathbb{R} \\text{ continuous on the interval [a,b]}\\}$ and $\\mathcal{C}(\\text{dom}f) = \\{\\text{functions } f:\\mathbb{R}^n \\rightarrow \\mathbb{R}^m \\text{ continuous on their domain}\\}$.\nDifferentiable Function\nCase $f: \\mathbb{R} \\rightarrow \\mathbb{R}$\n```{admonition} Definition\n$f$ is differentiable at $x_0 \\in (a,b)$ if $\\displaystyle \\lim_{x \\rightarrow x_0} \\frac{f(x)-f(x_0)}{x-x_0}$ exists, and we write:\n$$f^{(1)}(x_0) = \\displaystyle \\lim_{x \\rightarrow x_0} \\frac{f(x)-f(x_0)}{x-x_0}.$$\n```\n$f$ is differentiable on $[a,b]$ if it is differentiable at every point of $[a,b]$. Analogously, defining the variable $h=x-x_0$ we have:\n$f^{(1)}(x_0) = \\displaystyle \\lim_{h \\rightarrow 0} \\frac{f(x_0+h)-f(x_0)}{h}$, which is typically written as:\n$$f^{(1)}(x) = \\displaystyle \\lim_{h \\rightarrow 0} \\frac{f(x+h)-f(x)}{h}.$$\n```{admonition} Comment\nIf $f$ is differentiable at $x_0$ then $f(x) \\approx f(x_0) + f^{(1)}(x_0)(x-x_0)$. Graphically:\n<img src=\"https://dl.dropboxusercontent.com/s/3t13ku6pk1pjwxo/f_diferenciable.png?dl=0\" height=\"500\" width=\"500\">\n```\nSince derivatives are also functions, we have a notation for derivatives that are continuous:\nNotation\n$\\mathcal{C}^n([a,b])=\\{\\text{functions } f:\\mathbb{R} \\rightarrow \\mathbb{R} \\text{ with } n \\text{ continuous derivatives on the interval [a,b]}\\}$.\nIn Python we can use the SymPy package to compute limits and derivatives symbolically (see sympy/calculus), which is different from the numerical computation reviewed in {ref}`Taylor polynomials and numerical differentiation <PTDN>`.\nExample", "import sympy", "Limit of $\\frac{\\cos(x+h) - \\cos(x)}{h}$ as $h \\rightarrow 0$:", "x, h = sympy.symbols(\"x, h\")\n\nquotient = (sympy.cos(x+h) - sympy.cos(x))/h\n\nsympy.pprint(sympy.limit(quotient, h, 0))", "The above corresponds to the derivative of $\\cos(x)$:", "x = sympy.Symbol('x')\n\nsympy.pprint(sympy.cos(x).diff(x))", "If we want to evaluate the derivative we can use:", "sympy.pprint(sympy.cos(x).diff(x).subs(x,sympy.pi/2))\n\nsympy.pprint(sympy.Derivative(sympy.cos(x), x))\n\nsympy.pprint(sympy.Derivative(sympy.cos(x), x).doit_numerically(sympy.pi/2))", "Case $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$\n```{admonition} Definition\n$f$ is differentiable at $x \\in \\text{intdom}f$ if there exists a matrix $Df(x) \\in \\mathbb{R}^{m\\times n}$ such that:\n$$\\displaystyle \\lim_{z \\rightarrow x, z \\neq x} 
\frac{||f(z)-f(x)-Df(x)(z-x)||_2}{||z-x||_2} = 0, z \in \text{dom}f$$\nin this case $Df(x)$ is called the derivative of $f$ at $x$.\n```\n```{admonition} Remark\n:class: tip\nAt most one matrix can satisfy the limit above.\n```\n```{margin}\nAn affine function is of the form $h(x) = Ax+b$ with $A \in \mathbb{R}^{p \times n}$ and $b \in \mathbb{R}^p$. See Affine_transformation\n```\n```{admonition} Comments:\n\n\n$Df(x)$ is also called the Jacobian of $f$.\n\n\nWe say that $f$ is differentiable if $\text{dom}f$ is open and $f$ is differentiable at every point of $\text{dom}f.$\n\n\nThe function $f(x) + Df(x)(z-x)$ is affine and is called the first-order approximation of $f$ at $x$ (or also near $x$). For $z$ close to $x$ this approximation is close to $f(z)$.\n\n\n$Df(x)$ can be found with the limit definition above or with the partial derivatives: $Df(x)_{ij} = \frac{\partial f_i(x)}{\partial x_j}, i=1,\dots,m, j=1,\dots,n$ defined as:\n\n\n$$\frac{\partial f_i(x)}{\partial x_j} = \displaystyle \lim_{h \rightarrow 0} \frac{f_i(x+he_j)-f_i(x)}{h}$$\nwhere: $f_i : \mathbb{R}^n \rightarrow \mathbb{R}$, $i=1,\dots,m,j=1,\dots,n$ and $e_j$ is the $j$-th canonical vector, which has a $1$ in position $j$ and $0$ in the remaining entries.\n\nIf $f: \mathbb{R}^n \rightarrow \mathbb{R}, Df(x) \in \mathbb{R}^{1\times n}$, its transpose is called the gradient, is denoted $\nabla f(x)$, is a function $\nabla f : \mathbb{R}^n \rightarrow \mathbb{R}^n$ that receives a vector and returns a column vector, and its components are partial derivatives: \n\n$$\nabla f(x) = Df(x)^T = \n \left[ \begin{array}{c}\n \frac{\partial f(x)}{\partial x_1}\\\n \vdots\\\n \frac{\partial f(x)}{\partial x_n}\n \end{array}\n \right] = \left[ \n \begin{array}{c} \n \displaystyle \lim_{h \rightarrow 0} \frac{f(x+he_1) - f(x)}{h}\\\n \vdots\\\n \displaystyle \lim_{h \rightarrow 0} \frac{f(x+he_n) - f(x)}{h}\n \end{array}\n \right] \in \mathbb{R}^{n\times 1}.$$\n\nIn this context, the first-order approximation to $f$ at $x$ is: $f(x) + \nabla f(x)^T(z-x)$ for $z$ close to $x$.\n```\n\nNotation\n$\mathcal{C}^n(\text{dom}f) = \{\text{functions } f:\mathbb{R}^n \rightarrow \mathbb{R}^m \text{ with } n \text{ continuous derivatives on their domain}\}$.\nExample\n$f : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ given by:\n$$f(x) = \n\left [ \n\begin{array}{c}\nx_1x_2 + x_2^2\\\nx_1^2 + 2x_1x_2 + x_2^2\\\n\end{array}\n\right ]\n$$\nwith $x = (x_1, x_2)^T$. Compute the derivative of $f$.", "x1, x2 = sympy.symbols(\"x1, x2\")", "We define functions $f_1, f_2$, the components of the vector $f(x)$.", "f1 = x1*x2 + x2**2\n\nsympy.pprint(f1)\n\nf2 = x1**2 + x2**2 + 2*x1*x2\n\nsympy.pprint(f2)", "Partial derivatives:\nFor $f_1(x) = x_1x_2 + x_2^2$:\n```{margin}\nPartial derivative of $f_1$ with respect to $x_1$.\n```", "df1_x1 = f1.diff(x1)\n\nsympy.pprint(df1_x1)", "```{margin}\nPartial derivative of $f_1$ with respect to $x_2$.\n```", "df1_x2 = f1.diff(x2)\n\nsympy.pprint(df1_x2)", "For $f_2(x) = x_1^2 + 2x_1 x_2 + x_2^2$:\n```{margin}\nPartial derivative of $f_2$ with respect to $x_1$.\n```", "df2_x1 = f2.diff(x1)\n\nsympy.pprint(df2_x1)", "```{margin}\nPartial derivative of $f_2$ with respect to $x_2$.\n```", "df2_x2 = f2.diff(x2)\n\nsympy.pprint(df2_x2)", "Then the derivative is:\n$$Df(x) = \n\left [\n\begin{array}{cc}\nx_2 & x_1+2x_2\\\n2x_1 + 2x_2 & 2x_1+2x_2\n\end{array}\n\right ]\n$$\nAnother, easier option is to use Matrices:", "f = sympy.Matrix([f1, f2])\n\nsympy.pprint(f)", "```{margin} \nJacobian of $f$\n```", "sympy.pprint(f.jacobian([x1, x2]))", "To evaluate, for example, at $(x_1, x_2)^T = (0, 1)^T$:", "d = f.jacobian([x1, x2])\n\nsympy.pprint(d.subs([(x1, 0), (x2, 1)]))", "Chain rule\n```{admonition} Definition\nIf $f:\mathbb{R}^n \rightarrow \mathbb{R}^m$ is differentiable at $x\in \text{intdom}f$ and $g:\mathbb{R}^m \rightarrow \mathbb{R}^p$ is differentiable at $f(x)\in \text{intdom}g$, the composition $h:\mathbb{R}^n \rightarrow \mathbb{R}^p$ defined by $h(z) = g(f(z))$ is differentiable at $x$, with derivative:\n$$Dh(x)=Dg(f(x))Df(x)\in \mathbb{R}^{p\times n}.$$\n```\n(EJ1)=\nExample\nLet $f:\mathbb{R}^n \rightarrow \mathbb{R}$, $g:\mathbb{R} \rightarrow \mathbb{R}$, $h:\mathbb{R}^n \rightarrow \mathbb{R}$ with $h(z) = g(f(z))$; then: \n$$Dh(x) = Dg(f(x))Df(x) = \frac{dg(f(x))}{dx}\nabla f(x)^T \in \mathbb{R}^{1\times n}$$\nand the transpose of $Dh(x)$ is: $\nabla h(x) = Dh(x)^T = \frac{dg(f(x))}{dx} \nabla f(x) \in \mathbb{R}^{n\times 1}$.\nExample\n$f(x) = \cos(x), g(x)=\sin(x)$, so $h(x) = \sin(\cos(x))$. Compute the derivative of $h$.", "x = sympy.Symbol('x')\n\nf = sympy.cos(x)\n\nsympy.pprint(f)\n\ng = sympy.sin(x)\n\nsympy.pprint(g)\n\nh = g.subs(x, f)\n\nsympy.pprint(h)\n\nsympy.pprint(h.diff(x))", "Other ways to compute the derivative of the composition $h$:", "g = sympy.sin\n\nh = g(f)\n\nsympy.pprint(h.diff(x))\n\nh = sympy.sin(f)\n\nsympy.pprint(h.diff(x))", "Example\n$f(x) = x_1 + \frac{1}{x_2}, g(x) = e^x$, so $h(x) = e^{x_1 + \frac{1}{x_2}}$. Compute the derivative of $h$.", "x1, x2 = sympy.symbols(\"x1, x2\")\n\nf = x1 + 1/x2\n\nsympy.pprint(f)\n\ng = sympy.exp\n\nsympy.pprint(g)\n\nh = g(f)\n\nsympy.pprint(h)", "```{margin}\nPartial derivative of $h$ with respect to $x_1$.\n```", "sympy.pprint(h.diff(x1))", "```{margin}\nPartial derivative of $h$ with respect to $x_2$.\n```", "sympy.pprint(h.diff(x2))", "Another way to compute the gradient of $h$ (the derivative of $h$) is using how-to-get-the-gradient-and-hessian-sympy:", "from sympy.tensor.array import derive_by_array\n\nsympy.pprint(derive_by_array(h, (x1, x2)))", "(CP1)=\nParticular case\nLet:\n\n\n$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$, $f(x) = Ax +b$ with $A \in \mathbb{R}^{m\times n},b \in \mathbb{R}^m$,\n\n\n$g:\mathbb{R}^m \rightarrow \mathbb{R}^p$, \n\n\n$h: \mathbb{R}^n \rightarrow \mathbb{R}^p$, $h(x)=g(f(x))=g(Ax+b)$ with $\text{dom}h=\{z \in \mathbb{R}^n | Az+b \in \text{dom}g\}$; then:\n\n\n$$Dh(x) = Dg(f(x))Df(x)=Dg(Ax+b)A.$$\n```{admonition} Remark\n:class: tip\nIf $p=1$, $g: \mathbb{R}^m \rightarrow \mathbb{R}$, $h: \mathbb{R}^n \rightarrow \mathbb{R}$ we have:\n$$\nabla h(x) = Dh(x)^T = A^TDg(Ax+b)^T=A^T\nabla g(Ax+b) \in \mathbb{R}^{n\times 1}.$$\n```\n(EJRestriccionALinea)=\nExample\nThis particular case covers an important setting in which we have functions restricted to a line. If $f: \mathbb{R}^n \rightarrow \mathbb{R}$ and $g: \mathbb{R} \rightarrow \mathbb{R}$ is given by $g(t) = f(x+tv)$ with $x, v \in \mathbb{R}^n$ and $t \in \mathbb{R}$, then we say that $g$ is $f$ restricted to the line $x+tv$. The derivative of $g$ is:\n$$Dg(t) = \nabla f(x+tv)^T v.$$\nThe scalar $Dg(0) = \nabla f(x)^Tv$ is called the directional derivative of $f$ at $x$ in the direction $v$. 
A sketch in which we take $\Delta x: = v$:\n<img src=\"https://dl.dropboxusercontent.com/s/18udjmzmmd7drrz/line_search_backtracking_1.png?dl=0\" height=\"300\" width=\"300\">\nAs an example, consider $f(x) = x_1 ^2 + x_2^2$ with $x=(x_1, x_2)^T$ and $g(t) = f(x+tv)$ for a fixed vector $v=(v_1, v_2)^T$ and $t \in \mathbb{R}$. Compute $Dg(t)$.\nFirst option", "x1, x2 = sympy.symbols(\"x1, x2\")\n\nf = x1**2 + x2**2\n\nsympy.pprint(f)\n\nt = sympy.Symbol('t')\nv1, v2 = sympy.symbols(\"v1, v2\")\n\nnew_args_for_f_function = {\"x1\": x1+t*v1, \"x2\": x2 + t*v2}\n\ng = f.subs(new_args_for_f_function)\n\nsympy.pprint(g)", "```{margin} \nDerivative of $g$ with respect to $t$: $Dg(t)=\nabla f(x+tv)^T v$.\n```", "sympy.pprint(g.diff(t))", "Second option, computing the derivative using vectors:", "x = sympy.Matrix([x1, x2])\n\nsympy.pprint(x)\n\nv = sympy.Matrix([v1, v2])\n\nnew_arg_f_function = x+t*v\n\nsympy.pprint(new_arg_f_function)\n\nmapping_for_g_function = {\"x1\": new_arg_f_function[0], \n \"x2\": new_arg_f_function[1]}\n\ng = f.subs(mapping_for_g_function)\n\nsympy.pprint(g)", "```{margin} \nDerivative of $g$ with respect to $t$: $Dg(t)=\nabla f(x+tv)^T v$.\n```", "sympy.pprint(g.diff(t))", "Third option, defining the function $f$ from the symbol Matrix $x$:", "sympy.pprint(x)\n\nf = x[0]**2 + x[1]**2\n\nsympy.pprint(f)\n\nsympy.pprint(new_arg_f_function)\n\ng = f.subs({\"x1\": new_arg_f_function[0], \n \"x2\": new_arg_f_function[1]})\n\nsympy.pprint(g)", "```{margin} \nDerivative of $g$ with respect to $t$: $Dg(t)=\nabla f(x+tv)^T v$.\n```", "sympy.pprint(g.diff(t))", "In what follows derive-by_array, how-to-get-the-gradient-and-hessian-sympy are used to show how a dot product can be done with SymPy", "sympy.pprint(derive_by_array(f, x))\n\nsympy.pprint(derive_by_array(f, x).subs({\"x1\": new_arg_f_function[0], \n \"x2\": new_arg_f_function[1]}))\n\ngradient_f_new_arg = derive_by_array(f, x).subs({\"x1\": new_arg_f_function[0], \n \"x2\": new_arg_f_function[1]})\n\n\nsympy.pprint(v)", "```{margin} \nDerivative of $g$ with respect to $t$: $Dg(t)=\nabla f(x+tv)^T v = v^T \nabla f(x + tv)$.\n```", "sympy.pprint(v.dot(gradient_f_new_arg))", "(EJ2)=\nExample\nIf $h: \mathbb{R}^n \rightarrow \mathbb{R}$ is given by $h(x) = \log \left( \displaystyle \sum_{i=1}^m \exp(a_i^Tx+b_i) \right)$ with $x\in \mathbb{R}^n,a_i\in \mathbb{R}^n \forall i=1,\dots,m$ and $b_i \in \mathbb{R} \forall i=1,\dots,m$, then: \n$$\nDh(x)=\left(\displaystyle \sum_{i=1}^m\exp(a_i^Tx+b_i) \right)^{-1}\left[ \begin{array}{c}\n \exp(a_1^Tx+b_1)\\\n \vdots\\\n \exp(a_m^Tx+b_m)\n \end{array}\n \right]^TA=(1^Tz)^{-1}z^TA\n$$\nwhere: $A=(a_i)_{i=1}^m \in \mathbb{R}^{m\times n}, b \in \mathbb{R}^m$, $z=\left[ \begin{array}{c}\n \exp(a_1^Tx+b_1)\\\n \vdots\\\n \exp(a_m^Tx+b_m)\n \end{array}\right]$ and $1 \in \mathbb{R}^m$ is a vector with all entries equal to $1$. Therefore $\nabla h(x) = (1^Tz)^{-1}A^Tz$.\nIn this example $Dh(x) = Dg(f(x))Df(x)$ with:\n\n\n$h(x)=g(f(x))$,\n\n\n$g: \mathbb{R}^m \rightarrow \mathbb{R}$ given by $g(y)=\log \left( \displaystyle \sum_{i=1}^m \exp(y_i) \right )$,\n\n\n$f(x)=Ax+b.$ \n\n\nFor the following, the references liga1, liga2, liga3, liga4, liga5, liga6 were used.", "m = sympy.Symbol('m')\nn = sympy.Symbol('n')", "```{margin} \nSee indexed\n```", "y = sympy.IndexedBase('y')\n\ni = sympy.Symbol('i') #for index of sum\n\ng = sympy.log(sympy.Sum(sympy.exp(y[i]), (i, 1, m)))", "```{margin} \nThis is the function we want to differentiate.\n```", "sympy.pprint(g)", "For a case with $m=3$ in the function $g$ we have:", "y1, y2, y3 = sympy.symbols(\"y1, y2, y3\")\n\ng_m_3 = sympy.log(sympy.exp(y1) + sympy.exp(y2) + sympy.exp(y3))\n\nsympy.pprint(g_m_3)", "```{margin} \nSee derive-by_array\n```", "dg_m_3 = derive_by_array(g_m_3, [y1, y2, y3])", "```{margin} \nDerivative of $g$ with respect to $y_1, y_2, y_3$. \n```", "sympy.pprint(dg_m_3)", "```{margin} \nSee Kronecker delta\n```", "sympy.pprint(derive_by_array(g, [y[1], y[2], y[3]]))", "For the composition $h(x) = g(f(x))$ the following cells are used:\n```{margin} \nSee indexed\n```", "A = sympy.IndexedBase('A')\nx = sympy.IndexedBase('x')\n\nj = sympy.Symbol('j')\n\nb = sympy.IndexedBase('b')\n\n#we want something like:\nsympy.pprint(sympy.exp(sympy.Sum(A[i, j]*x[j], (j, 1, n)) + b[i]))\n\n#better if we split each step:\narg_sum = A[i, j]*x[j]\n\nsympy.pprint(arg_sum)\n\narg_exp = sympy.Sum(arg_sum, (j, 1, n)) + b[i]\n\nsympy.pprint(arg_exp)\n\nsympy.pprint(sympy.exp(arg_exp))\n\narg_2_sum = sympy.exp(arg_exp)\n\nsympy.pprint(sympy.Sum(arg_2_sum, (i, 1, m)))\n\nh = sympy.log(sympy.Sum(arg_2_sum, (i, 1, m))) \n#complex expression: sympy.log(sympy.Sum(sympy.exp(sympy.Sum(A[i, j]*x[j], (j, 1, n)) + b[i]), (i, 1, m)))\n\nsympy.pprint(h)", "```{margin} \nDerivative of $h$ with respect to $x_1$.\n```", "sympy.pprint(h.diff(x[1]))", "```{margin} \nSee Kronecker delta\n```", "sympy.pprint(derive_by_array(h, [x[1]])) #we can use also: derive_by_array(h, [x[1], x[2], x[3]]", "```{admonition} Question\n:class: tip\nCan this exercise be solved with Matrix Symbol?\n```\n```{admonition} Exercise\n:class: tip\nVerify that what is obtained with SymPy equals what was worked out \"by hand\" at the beginning of the {ref}`Example <EJ2>`\n```\nSecond derivative of a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$.\n```{admonition} Definition\nLet $f:\mathbb{R}^n \rightarrow \mathbb{R}$. 
The second derivative or Hessian matrix of $f$ at $x \in \text{intdom}f$ exists if $f$ is twice differentiable at $x$; it is denoted $\nabla^2f(x)$ and its components are second partial derivatives:\n$$\nabla^2f(x) = \left[\begin{array}{cccc}\n\frac{\partial^2f(x)}{\partial x_1^2} &\frac{\partial^2f(x)}{\partial x_2 \partial x_1}&\dots&\frac{\partial^2f(x)}{\partial x_n \partial x_1}\\\n\frac{\partial^2f(x)}{\partial x_1 \partial x_2} &\frac{\partial^2f(x)}{\partial x_2^2} &\dots&\frac{\partial^2f(x)}{\partial x_n \partial x_2}\\\n\vdots &\vdots& \ddots&\vdots\\\n\frac{\partial^2f(x)}{\partial x_1 \partial x_n} &\frac{\partial^2f(x)}{\partial x_2 \partial x_n}&\dots&\frac{\partial^2f(x)}{\partial x_n^2} \\\n\end{array}\n\right]\n$$\n```\n```{admonition} Comments:\n\nThe second-order approximation to $f$ at $x$ (or also for points close to $x$) is the quadratic function in the variable $z$:\n\n$$f(x) + \nabla f(x)^T(z-x)+\frac{1}{2}(z-x)^T\nabla^2f(x)(z-x)$$\n\nIt holds that:\n\n$$\displaystyle \lim_{z \rightarrow x, z \neq x} \frac{|f(z)-[f(x)+\nabla f(x)^T(z-x)+\frac{1}{2}(z-x)^T\nabla^2f(x)(z-x)]|}{||z-x||_2} = 0, z \in \text{dom}f$$\n\n\nWe have the following:\n\n\n$\nabla f$ is a function named the gradient mapping (or simply the gradient).\n\n\n$\nabla f:\mathbb{R}^n \rightarrow \mathbb{R}^n$ has rule of correspondence $\nabla f(x)$ (evaluate the matrix $Df(\cdot)^T$ at $x$).\n\n\nWe say that $f$ is twice differentiable on $\text{dom}f$ if $\text{dom}f$ is open and $f$ is twice differentiable at every point of $\text{dom}f$.\n\n\n$D\nabla f(x) = \nabla^2f(x)$ for $x \in \text{intdom}f$.\n\n\n$\nabla ^2 f(x) : \mathbb{R}^n \rightarrow \mathbb{R}^{n \times n}$.\n\n\nIf $f \in \mathcal{C}^2(\text{dom}f)$ then the Hessian is a symmetric matrix.\n\n\n\n\n```\nChain rule for the second derivative\n(CP2)=\nParticular case\nLet:\n\n\n$f:\mathbb{R}^n \rightarrow \mathbb{R}$, \n\n\n$g:\mathbb{R} \rightarrow \mathbb{R}$, \n\n\n$h:\mathbb{R}^n \rightarrow \mathbb{R}$ with $h(x) = g(f(x))$; then: \n\n\n$$\nabla^2h(x) = D\nabla h(x)$$ \n```{margin} \nSee {ref}`Example 1 of the chain rule <EJ1>` \n```\nand \n$$\nabla h(x)=Dh(x)^T = (Dg(f(x))Df(x))^T=\frac{dg(f(x))}{dx}\nabla f(x)$$\nso that:\n$$\n\begin{eqnarray}\n\nabla^2 h(x) &=& D\nabla h(x) \nonumber \\\n&=& D \left(\frac{dg(f(x))}{dx}\nabla f(x)\right) \nonumber \\\n&=& \frac{dg(f(x))}{dx}\nabla^2 f(x)+\left(\frac{d^2g(f(x))}{dx}\nabla f(x) \nabla f(x)^T \right)^T \nonumber \\\n&=& \frac{dg(f(x))}{dx}\nabla^2 f(x)+\frac{d^2g(f(x))}{dx} \nabla f(x) \nabla f(x)^T \nonumber\n\end{eqnarray}\n$$\n(CP3)=\nParticular case\nLet:\n\n\n$f:\mathbb{R}^n \rightarrow \mathbb{R}^m, f(x) = Ax+b$ with $A \in \mathbb{R}^{m\times n}$, $b \in \mathbb{R}^m$,\n\n\n$g:\mathbb{R}^m \rightarrow \mathbb{R}^p$,\n\n\n$h:\mathbb{R}^n \rightarrow \mathbb{R}^p$, $h(x) = g(f(x)) = g(Ax+b)$ with $\text{dom}h=\{z \in \mathbb{R}^n | Az+b \in \text{dom}g\}$; then:\n\n\n```{margin}\nSee {ref}`Particular case <CP1>` for the expression of the derivative.\n```\n$$Dh(x) = Dg(f(x))Df(x) = Dg(Ax+b)A.$$\n```{admonition} Remark\n:class: tip\nIf $p=1$, $g: \mathbb{R}^m \rightarrow \mathbb{R}$, $h: \mathbb{R}^n \rightarrow \mathbb{R}$ we have: \n$$\nabla^2h(x) = D \nabla h(x) = A^T \nabla^2g(Ax+b)A.$$\n```\nExample\n```{margin}\nSee {ref}`Example <EJRestriccionALinea>`\n```\nIf $f:\mathbb{R}^n \rightarrow \mathbb{R}$ and $g: \mathbb{R} \rightarrow \mathbb{R}$ is given by $g(t) = f(x+tv)$ with $x,v \in \mathbb{R}^n, t \in \mathbb{R}$, that is, $g$ is $f$ restricted to the line $\{x+tv|t \in \mathbb{R}\}$, then:\n$$Dg(t) = Df(x+tv)v = \nabla f(x+tv)^Tv$$\nTherefore:\n$$\nabla ^2g(t) = D\nabla f(x+tv)^Tv=v^T\nabla^2f(x+tv)v.$$\nExample\n```{margin}\nSee {ref}`Example <EJ2>`\n```\nIf $h: \mathbb{R}^n \rightarrow \mathbb{R}, h(x) = \log \left( \displaystyle \sum_{i=1}^m \exp(a_i^Tx+b_i)\right)$ with $x \in \mathbb{R}^n, a_i \in \mathbb{R}^n \forall i=1,\dots,m$ and $b_i \in \mathbb{R} \forall i=1,\dots,m$. \nAs developed earlier, $\nabla h(x) = (1^Tz)^{-1}A^Tz$ with $z=\left[ \begin{array}{c}\n \exp(a_1^Tx+b_1)\\\n \vdots\\\n \exp(a_m^Tx+b_m)\n \end{array}\right]$ and $A=(a_i)_{i=1}^m \in \mathbb{R}^{m\times n}.$\nTherefore \n$$\nabla^2 h(x) = D\nabla h(x) = A^T \nabla^2g(Ax+b)A$$ \n```{margin}\n$\nabla^2 g(y)$ is obtained according to {ref}`Particular case <CP2>` taking $\log:\mathbb{R} \rightarrow \mathbb{R}, \displaystyle \sum_{i=1}^m \exp(y_i): \mathbb{R}^m \rightarrow \mathbb{R}$\n```\nwhere: $\nabla^2g(y)=(1^Ty)^{-1}\text{diag}(y)-(1^Ty)^{-2}yy^T$.\n$$\therefore \nabla^2 h(x) = A^T\left[(1^Tz)^{-1}\text{diag}(z)-(1^Tz)^{-2}zz^T \right]A$$\nand $\text{diag}(c)$ is a diagonal matrix whose diagonal elements equal the entries of the vector $c$.\n```{admonition} Exercise\n:class: tip\nVerify with the SymPy package the expressions for the second derivative in the two previous examples.\n```\nA handy table of differentiation formulas with the $\nabla$ operator\nLet $f,g:\mathbb{R}^n \rightarrow \mathbb{R}$ with $f,g \in \mathcal{C}^2$ on their respective domains, and let $\alpha_1, \alpha_2 \in \mathbb{R}$, $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$ be fixed. Differentiating with respect to the variable $x \in \mathbb{R}^n$ we have:\n| | |\n|:--:|:--:|\n|linearity | $\nabla(\alpha_1 f(x) + \alpha_2 g(x)) = \alpha_1 \nabla f(x) + \alpha_2 \nabla g(x)$|\n|product | $\nabla(f(x)g(x)) = \nabla f(x) g(x) + f(x) \nabla g(x)$|\n|dot product|$\nabla(b^Tx) = b$ \n|quadratic|$\nabla(x^TAx) = (A+A^T)x$|\n|second derivative| $\nabla^2(x^TAx)=A+A^T$|\nA comment on symbolic or algebraic computation and numerical computation\nWhile symbolic or algebraic computation helps us obtain expressions for derivatives while avoiding the rounding-error problems reviewed in {ref}`Taylor polynomials and numerical differentiation <PTDN>`, the complexity of the expressions handled internally is inefficient compared with numerical computation; see Computer science aspects of computer algebra and GNU_Multiple_Precision_Arithmetic_Library.\nAs an example of the arbitrary precision that can be handled with symbolic or algebraic computation vs the {ref}`Floating point system <SPF>`, consider the following computation:", "eps = 1-3*(4/3-1)\n\nprint(\"{:0.16e}\".format(eps))\n\neps_sympy = 1-3*(sympy.Rational(4,3)-1)\n\nprint(\"{:0.16e}\".format(float(eps_sympy)))", "```{admonition} Exercises\n:class: tip\n1. Solve the exercises and questions in this notebook.\n```\nReferences\n\nS. P. Boyd, L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
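The closed-form gradient $\nabla h(x) = (1^Tz)^{-1}A^Tz$ derived in the notebook above can be sanity-checked numerically without SymPy. The sketch below uses only the standard library and central finite differences; the matrix `A`, vector `b`, and point `x` are arbitrary illustrative values, not taken from the original notebook.

```python
import math

def h(x, A, b):
    """log-sum-exp of affine forms: h(x) = log(sum_i exp(a_i^T x + b_i))."""
    return math.log(sum(math.exp(sum(A[i][j] * x[j] for j in range(len(x))) + b[i])
                        for i in range(len(A))))

def grad_h(x, A, b):
    """Closed form (1^T z)^{-1} A^T z with z_i = exp(a_i^T x + b_i)."""
    z = [math.exp(sum(A[i][j] * x[j] for j in range(len(x))) + b[i])
         for i in range(len(A))]
    s = sum(z)
    return [sum(A[i][j] * z[i] for i in range(len(A))) / s for j in range(len(x))]

def num_grad(f, x, eps=1e-6):
    """Central finite differences, one coordinate at a time."""
    g = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

A = [[1.0, 2.0], [0.5, -1.0], [2.0, 0.0]]   # illustrative values
b = [0.1, -0.2, 0.3]
x = [0.4, -0.7]
analytic = grad_h(x, A, b)
numeric = num_grad(lambda v: h(v, A, b), x)
```

The two gradients should agree to within the finite-difference error, which is one way to catch a wrong sign or index in a derivation like the one above.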
oblassers/fair-data-science
Task-3-Experiment.ipynb
mit
[ "Data Preservation Task 3\nThis experiment takes a dataset about divorces per year after marriage (link: https://www.data.gv.at/katalog/dataset/7fa00c8b-6189-42b8-af93-cc1ebff0a818) and plots the number of divorces per year from 1985 to 2014, for marriages that lasted between ten and eleven years.\nThe experiment consists of three steps:\n\nConnect to MongoDB\nFetch and transform data\nPlot results\n\nImports:", "import pymongo\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport json\nimport re\nfrom pymongo import MongoClient\n%matplotlib inline", "1. Connect to MongoDB\nEstablish a connection to the local MongoDB Docker container", "client = MongoClient('mongodb')\ndb = client.dp\ncollection = db.divorce", "2. Fetch & Transform\nPerform the following steps for the transformation:\n * For each entry, gather the number of divorces from the unnecessarily nested values object\n * The DURATION field contains a string of the form \"x to under y years\". Parse the first value x\n * Delete all other attributes of an entry, except REF_YEAR", "data = db.divorce.find()[0]['data']\nfor entry in data:\n entry['DIVORCES'] = entry['values'][0]['NUMBER']\n s = entry['DURATION']\n tmp = re.findall(r'\\d+', s)\n if (len(tmp) == 1):\n tmp[0] = 0\n del entry['values']\n del entry['NUTS1']\n del entry['NUTS2']\n entry['DURATION'] = int(tmp[0]) # cast to int so the later df.DURATION == 10 comparison is type-safe", "Transform to JSON for pandas import:", "data_json = json.dumps(data)", "Plot\nParse JSON into a pandas DataFrame object and plot as bar chart", "df = pd.read_json(data_json)\nfiltered = df[df.DURATION == 10].filter(items=['DIVORCES','REF_YEAR'])\nfiltered\n\nfiltered.plot.bar(x='REF_YEAR',y='DIVORCES')", "This figure depicts the number of divorces per year between 1985 and 2014, for all marriages that lasted at least ten, but less than eleven years." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
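The DURATION parsing step in the notebook above relies on `re.findall` returning both endpoints of the "x to under y years" label, with single-number labels mapped to 0. A small self-contained sketch of that logic (the sample strings are hypothetical, mirroring the dataset's format):

```python
import re

def parse_duration(label):
    """Return the lower endpoint of a duration label such as '10 to under 11 years'.

    Labels containing a single number (e.g. 'under 1 year') are mapped to 0,
    mirroring the transformation in the notebook above."""
    numbers = re.findall(r'\d+', label)
    if len(numbers) == 1:
        return 0
    return int(numbers[0])

print(parse_duration("10 to under 11 years"))  # -> 10
print(parse_duration("under 1 year"))          # -> 0
```

Returning an `int` in both branches keeps the resulting DataFrame column numeric, so a comparison like `df.DURATION == 10` behaves as expected.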
marioberges/F16-12-752
lectures/Lecture9.ipynb
gpl-3.0
[ "Lecture #9: Dynamic Time Warping", "import math\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n%matplotlib inline\n\n#x = np.array(np.random.normal(0,1,size=(1000,1))).reshape(-1, 1)\n#y = np.array(np.random.normal(0,1,size=(1000,1))).reshape(-1, 1)\nx = np.array([0, 0, 1, 1, 2, 4, 2, 1, 2, 0]).reshape(-1, 1)\ny = np.array([0,0,0,0,0,0,0,1, 1, 1, 2, 2, 2, 2, 3, 2, 0]).reshape(-1, 1)\n\nplt.plot(x,'b')\nplt.plot(y,'r')\nplt.show()\n\ndef dtw(x,y, d = lambda i,j: np.linalg.norm(i - j,ord=2)):\n M = len(x) # Number of elements in sequence x\n N = len(y) # Number of elements in sequence y\n C = np.zeros((M,N)) # The local cost matrix\n D = np.zeros((M,N)) # The accumulative cost matrix\n \n # First, let's fill out D (time complexity O(M*N)):\n for m in range(len(x)):\n for n in range(len(y)):\n if (m == 0 and n == 0):\n D[m][n] = C[m][n] = d(x[m],y[n])\n elif m == 0 and n > 0:\n C[m][n] = d(x[m],y[n])\n D[m][n] = C[m][n] + D[m][n-1]\n elif m > 0 and n == 0:\n C[m][n] = d(x[m],y[n])\n D[m][n] = C[m][n] + D[m-1][n]\n else:\n C[m][n] = d(x[m],y[n])\n D[m][n] = C[m][n] + np.min([D[m-1][n], D[m][n-1], D[m-1][n-1]]) \n\n # Then, using D we can easily find the optimal path, starting from the end\n\n p = [(M-1, N-1)] # This will store a list with the indexes of D for the optimal path\n m,n = p[-1] \n\n while (m, n) != (0, 0): # trace back until the start of both sequences is reached\n if m == 0: # first row: only horizontal moves remain\n p.append((m, n-1))\n elif n == 0: # first column: only vertical moves remain\n p.append((m-1, n))\n else:\n options = [[D[m-1][n], D[m][n-1], D[m-1][n-1]],\n [(m-1,n),(m,n-1),(m-1,n-1)]]\n p.append(options[1][np.argmin(options[0])])\n m,n = p[-1]\n \n pstar = np.asarray(p[::-1]) \n optimal_cost = D[-1][-1]\n \n return optimal_cost, pstar, C, D\n\noptimal_cost, pstar, local_cost, accumulative_cost = dtw(x,y)\n\nprint(\"The DTW distance is: {}\".format(optimal_cost))\nprint(\"The optimal path is: \\n{}\".format(pstar))", "Let's see what the path looks like on top of the accumulative cost matrix (and, because we can, let's also plot the local cost matrix):", "def plotWarping(D,C,pstar):\n fig1 = plt.figure()\n plt.imshow(D.T,origin='lower',cmap='gray',interpolation='nearest')\n plt.colorbar()\n plt.title('Accumulative Cost Matrix')\n plt.plot(pstar[:,0], pstar[:,1],'w-')\n plt.show()\n\n fig2 = plt.figure()\n plt.imshow(C.T,origin='lower',cmap='gray',interpolation='nearest')\n plt.colorbar()\n plt.title('Local Cost Matrix')\n plt.show()\n\n return fig1, fig2\n\nplotWarping(accumulative_cost,local_cost,pstar)", "Now let's have a bit of fun with this new function.", "import pickle\n\npkf = open('data/loadCurves.pkl','rb')\ndata,loadCurves = pickle.load(pkf)\npkf.close()", "First, let's try comparing the first and last day of the dataset.", "y = loadCurves.loc[1].values.reshape(-1,1)\nx = loadCurves.loc[365].values.reshape(-1,1)\n\nplt.plot(x,'r')\nplt.plot(y,'b')\nplt.show()\n\nDstar, Pstar, C, D = dtw(x,y)\nplotWarping(D,C,Pstar)\nprint(\"The DTW distance between them is: {}\".format(Dstar))", "But why don't we just calculate that distance across all possible pairs?", "#loadCurves = loadCurves.replace(np.inf,np.nan).fillna(0)\n\ndtwMatrix = np.zeros((365,365)) # must be initialized before the loop below fills it in\nfor i in range(1,31): # only the first 30 days here; the full 365x365 matrix takes a long time\n for j in range(1,365):\n x = loadCurves.loc[i].values.reshape(-1,1)\n y = loadCurves.loc[j].values.reshape(-1,1)\n \n dtwMatrix[i][j],_,_,_ = dtw(x,y)\n\nplt.imshow(dtwMatrix,origin='lower',cmap='gray')\nplt.colorbar()\n\ndtwMatrix[10][30:33]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
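The accumulative-cost recursion in the lecture above can be exercised without NumPy. Below is a minimal pure-Python sketch of the same O(M·N) dynamic program, reduced to the distance only (no path), using absolute difference as the local cost instead of the notebook's vector norm:

```python
def dtw_distance(x, y):
    """Dynamic time warping distance with local cost |x_m - y_n|."""
    M, N = len(x), len(y)
    INF = float('inf')
    # D[m][n] = accumulated cost of the best warping path ending at (m, n)
    D = [[INF] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            c = abs(x[m] - y[n])
            if m == 0 and n == 0:
                D[m][n] = c
            elif m == 0:
                D[m][n] = c + D[m][n - 1]      # only horizontal moves on the first row
            elif n == 0:
                D[m][n] = c + D[m - 1][n]      # only vertical moves on the first column
            else:
                D[m][n] = c + min(D[m - 1][n], D[m][n - 1], D[m - 1][n - 1])
    return D[M - 1][N - 1]

print(dtw_distance([0, 1, 2], [0, 1, 1, 2]))  # -> 0 (y merely repeats a value)
```

Repeating a value in one sequence costs nothing under DTW, which is exactly the time-axis elasticity the lecture's load-curve comparison exploits.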
n-witt/MachineLearningWithText_SS2017
tutorials/4 Introducing Scikit-Learn.ipynb
gpl-3.0
[ "Introducing Scikit-Learn\n\nScikit-Learn contains solid implementations of various machine learning algorithms\nClean and uniform API\nHelpful documentation \n\nRepresenting data in Scikit-Learn\nMachine learning is about building models for data. But what is a good way to represent data? Tables!\n* Rows: Individual elements of the dataset\n* Columns: quantities related to an individual element\nConsider the iris dataset.", "import seaborn as sns\niris = sns.load_dataset('iris')\niris.head()", "Each row is an observed flower. These rows are called samples and the number of rows is called n_samples.\nLikewise, each column contains a quantitative measure which is called feature, with the number of features called n_features.\nThe table is often called feature matrix and by convention it's often named X.\nScikit-Learn assumes the feature matrix to be of shape [n_samples, n_features].\nUsually there is also a target array of length n_samples often named y.", "%matplotlib inline\nimport seaborn as sns; sns.set()\nsns.pairplot(iris, hue='species', size=1.5);", "Let's split the data according to the convention:", "X_iris, y_iris = iris.drop('species', axis=1), iris['species']\nX_iris.shape, y_iris.shape", "To summarize, in order to use Scikit-Learn, the data layout should look like this:\n\nBasics of the API\nMost commonly, the steps in using the Scikit-Learn estimator API are as follows:\n\nChoose a class of model by importing the appropriate estimator class from Scikit-Learn.\nChoose model hyperparameters by instantiating this class with desired values.\nArrange data into a features matrix and target vector following the discussion above.\nFit the model to your data by calling the fit() method of the model instance.\nApply the Model to new data:\nFor supervised learning, often we predict labels for unknown data using the predict() method.\nFor unsupervised learning, we often transform or infer properties of the data using the transform() or predict() 
method.\n\n\n\nExample: Simple linear Regression\nHere is the data:", "import matplotlib.pyplot as plt\nimport numpy as np\n\nrng = np.random.RandomState(42)\n\nx = 10 * rng.rand(50)\ny = 2 * x - 1 + rng.randn(50)\n\nplt.scatter(x, y);", "1. Choose a class of model\nIn Scikit-Learn, every class of model is represented by a Python class. For linear regression we do:", "from sklearn.linear_model import LinearRegression", "2. Model instantiation with hyperparameters\nFor our linear regression example, we can instantiate the LinearRegression class and specify that we would like to fit the intercept using the fit_intercept hyperparameter:", "model = LinearRegression(fit_intercept=True)\nmodel", "Other models have different parameters. Refer to the documentation.\n3. Arrange data into a features matrix and target vector", "X = x[:, np.newaxis]\nX.shape", "4. Fit the model to your data (i.e. learning)", "model.fit(X, y)", "This fit() command causes a number of model-dependent internal computations to take place.\nThe results of these computations are stored in model-specific attributes that the user can explore.\nIn Scikit-Learn, by convention all model parameters that were learned during the fit() process have trailing underscores.\nFor this linear model, we have:", "model.coef_\n\nmodel.intercept_", "Comparing to the data definition, we see that they are very close to the input slope of 2 and intercept of -1.\n5. 
Predict labels for unknown data\nIn Scikit-Learn, the prediction can be done using the predict() method.\nFor the sake of this example, our \"new data\" will be a grid of x values, and we will ask what y values the model predicts:", "xfit = np.linspace(-1, 11)", "Again, we have to coerce our data into a [n_samples, n_features] feature matrix:", "Xfit = xfit[:, np.newaxis]\nyfit = model.predict(Xfit)", "Finally, let's visualize the results by plotting first the raw data, and then this model fit:", "plt.scatter(x, y)\nplt.plot(xfit, yfit);", "Training and Test Set\nOften the question is this: \ngiven a model trained on a portion of a given dataset, how well can we predict the remaining labels? \nWe would like to evaluate the model on data it has not seen before, and so we will split the data into a training set and a testing set. This could be done by hand, but it is more convenient to use the train_test_split utility function:", "from sklearn.model_selection import train_test_split\nXtrain, Xtest, ytrain, ytest = train_test_split(X_iris, y_iris, random_state=1)\nXtrain.shape, Xtest.shape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rnwatanabe/projectPR
ExampleNotebooks/AntidromicStimulationofMNandRC.ipynb
gpl-3.0
[ "This notebook simulates an antidromic stimulus reaching a pool of motoneurons and a renshaw cell.\nPablo Alejandro", "import sys\nsys.path.insert(0, '..')\nimport time\n\nimport matplotlib.pyplot as plt\n%matplotlib notebook\nplt.rcParams['figure.figsize']= 7,7\nimport numpy as np\n\nfrom Configuration import Configuration\nfrom MotorUnitPool import MotorUnitPool\nfrom InterneuronPool import InterneuronPool\nfrom SynapsesFactory import SynapsesFactory\n\nconf = Configuration('confAntidromicStimulationofMNandRC.rmto')\n\npools = dict()\npools[0] = MotorUnitPool(conf, 'SOL')\npools[1] = InterneuronPool(conf, 'RC', 'ext')\n\nfor i in xrange(0,len(pools[0].unit)):\n pools[0].unit[i].createStimulus()\n\nSyn = SynapsesFactory(conf, pools)\n\nt = np.arange(0.0, conf.simDuration_ms, conf.timeStep_ms)\n\nRC_mV = np.zeros_like(t)\nMN_mV = np.zeros_like(t)\n\ntic = time.clock()\nfor i in xrange(0, len(t)):\n pools[0].atualizeMotorUnitPool(t[i]) # MN pool\n pools[2].atualizePool(t[i]) # RC synaptic Noise\n pools[1].atualizeInterneuronPool(t[i]) # RC pool\n RC_mV[i] = pools[1].v_mV[0] \n MN_mV[i] = pools[0].v_mV[1] \ntoc = time.clock()\nprint str(toc - tic) + ' seconds'\n\nplt.figure()\nplt.plot(t, pools[0].unit[0].nerveStimulus_mA)", "The antidromic stimulus at the PTN.", "pools[0].listSpikes()\nplt.figure()\nplt.plot(pools[0].poolSomaSpikes[:, 0],\n pools[0].poolSomaSpikes[:, 1]+1, '.')", "The spike times of each MN along the simulation.", "plt.figure()\nplt.plot(t, pools[0].Muscle.force, '-')", "The force produced.", "plt.figure()\nplt.plot(t, MN_mV, '-')", "The membrande potential at the soma of the first motorneuron.", "plt.figure()\nplt.plot(t, RC_mV, '-')\nplt.xlim((90,145))", "The membrande potential at the soma of the Renshaw cell." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
analysiscenter/dataset
examples/experiments/weights_distributions/weights_distributions.ipynb
apache-2.0
[ "Distributions of weights in ResNet34 and ResNet50\nIn this notebook we will compare the distribution of weights from two almost identical architectures. More information about what different architectures you can read in this notebook.", "import sys\n\nsys.path.append('../../utils')\n\nimport pickle\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom utils import plot_weights", "First of all, load the weights that we saved in the tutorials.", "bottle_weights_path = 'path/with/saved_bottle_weights.pkl'\nres_weights_path = 'path/with/saved_res_weights.pkl'\n\nwith open(bottle_weights_path, 'rb') as f:\n bottle_names, bottle_weights, bottle_params = pickle.load(f)\nwith open(res_weights_path', 'rb') as f:\n res_names, res_weights, res_params = pickle.load(f)", "Below is drawn the distribution of weights of 0, 4th, 7th, 14th blocks from the ResNet50 model. Drawing function you can see in utils.", "plot_weights(bottle_names, bottle_weights, bottle_params, ['r', 'c', 'b', 'g'], [4, 4], [0, 4, 7, 14])", "It's not difficult to notice, that distribution of 1x1 convolutions has a larger variance than in 3x3 convolution. Therefore, they put a stronger influence on the output.\nBlack lines show the initial distribution of weights\n\nNow let's draw distribution of 0th, 3rd, 7th, 14th blocks from the ResNet34 model.", "plot_weights(res_names, res_weights, res_params, ['g', 'y', 'r'], [4, 3], [0, 3, 7, 14], bottleneck=False)", "It is not difficult to see that the distribution of the first and the second 3x3 convolutions are the same.\n\nNow, let's compare the distribution of the second layer of ResNet34 architecture and the 3х3 layer of ResNet50 from 3rd, 6th, 9th, 13th blocks. 
Will they be the same?", "indices = [i for i in range(len(bottle_names)) if 'conv' in bottle_names[i][:8]]\n_, ax = plt.subplots(2, 2, sharex='all', figsize=(23, 24))\nax = ax.reshape(-1)\nnum_plot = 0\nnum_blocks = [3, 6, 9, 13]\nres_layers = np.where(res_names == 'layer-4')[0][num_blocks]\nbottle_layers = np.where(bottle_names == 'layer-4')[0][num_blocks]\nfor i,j in zip(res_layers, bottle_layers):\n ax[num_plot].set_title('convolution layer with kernel 3x3 №{}'.format(num_blocks[num_plot]), fontsize=18)\n sns.distplot(res_weights[i].reshape(-1), ax=ax[num_plot], color='y', label='simple')\n sns.distplot(bottle_weights[j].reshape(-1), ax=ax[num_plot], color='c', label='bottleneck')\n ax[num_plot].legend()\n ax[num_plot].set_xlabel('value', fontsize=20)\n ax[num_plot].set_ylabel('quantity', fontsize=20)\n num_plot += 1\n if num_plot == ax.shape[0]:\n break\n ", "Graphs show, that its distributions are the same. Therefore the first 3x3 convolution layer from ResNet34 replaces the two 1x1 convolutions from ResNet50.\nIt's time to conclude:\n\nConvolutions of 1x1 size put a stronger influence on the output than 3x3.\nThe distribution of all layers with the 3x3 convolutions is the same.\n\nRead and apply another experiments:\n* previous experiment\n* return to the table of contents.\nIf you still have not completed our tutorial, you can fix it right now!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nick-youngblut/SIPSim
ipynb/bac_genome/fullCyc/Day1_fullDataset/.ipynb_checkpoints/rep3-checkpoint.ipynb
mit
[ "Goal\n\nSimulating fullCyc Day1 control gradients\nNot simulating incorporation (all 0% isotope incorp.)\nDon't know how much true incorporatation for emperical data\n\n\nUsing parameters inferred from TRIMMED emperical data (fullCyc Day1 seq data), or if not available, default SIPSim parameters\nDetermining whether simulated taxa show similar distribution to the emperical data\n\nInput parameters\n\nphyloseq.bulk file \ntaxon mapping file\nlist of genomes\nfragments simulated for all genomes\nbulk community richness\n\nworkflow\n\nCreating a community file from OTU abundances in bulk soil samples\nphyloseq.bulk --> OTU table --> filter to sample --> community table format\nFragment simulation\nsimulated_fragments --> parse out fragments for target OTUs \nsimulated_fragments --> parse out fragments from random genomes to obtain richness of interest\ncombine fragment python objects\nConvert fragment lists to kde object\nAdd diffusion\nMake incorp config file\nAdd isotope incorporation\nCalculating BD shift from isotope incorp\nSimulating gradient fractions\nSimulating OTU table\nSimulating PCR\nSubsampling from the OTU table\n\nInit", "import os\nimport glob\nimport re\nimport nestly\n\n%load_ext rpy2.ipython\n%load_ext pushnote\n\n%%R\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(gridExtra)\nlibrary(phyloseq)", "BD min/max", "%%R\n## min G+C cutoff\nmin_GC = 13.5\n## max G+C cutoff\nmax_GC = 80\n## max G+C shift\nmax_13C_shift_in_BD = 0.036\n\n\nmin_BD = min_GC/100.0 * 0.098 + 1.66 \nmax_BD = max_GC/100.0 * 0.098 + 1.66 \n\nmax_BD = max_BD + max_13C_shift_in_BD\n\ncat('Min BD:', min_BD, '\\n')\ncat('Max BD:', max_BD, '\\n')", "Nestly\n\nassuming fragments already simulated", "workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'\nbuildDir = os.path.join(workDir, 'rep3')\nR_dir = '/home/nick/notebook/SIPSim/lib/R/'\n\nfragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl'\n\nnreps = 3\n\n# building 
tree structure\nnest = nestly.Nest()\n\n# varying params\nnest.add('rep', [x + 1 for x in xrange(nreps)])\n\n\n## set params\nnest.add('abs', ['1e9'], create_dir=False)\nnest.add('percIncorp', [0], create_dir=False)\nnest.add('percTaxa', [0], create_dir=False)\nnest.add('np', [8], create_dir=False)\nnest.add('subsample_dist', ['lognormal'], create_dir=False)\nnest.add('subsample_mean', [9.432], create_dir=False)\nnest.add('subsample_scale', [0.5], create_dir=False)\nnest.add('subsample_min', [10000], create_dir=False)\nnest.add('subsample_max', [30000], create_dir=False)\n\n### input/output files\nnest.add('buildDir', [buildDir], create_dir=False)\nnest.add('R_dir', [R_dir], create_dir=False)\nnest.add('fragFile', [fragFile], create_dir=False)\nnest.add('physeqDir', [physeqDir], create_dir=False)\nnest.add('physeq_bulkCore', [physeq_bulkCore], create_dir=False)\nnest.add('bandwidth', [0.6], create_dir=False)\nnest.add('comm_params', ['mean:-7.6836085,sigma:0.9082843'], create_dir=False)\n\n# building directory tree\nnest.build(buildDir)\n\n# bash file to run\nbashFile = os.path.join(buildDir, 'SIPSimRun.sh')\n\n%%writefile $bashFile\n#!/bin/bash\n\nexport PATH={R_dir}:$PATH\n\necho '#-- SIPSim pipeline --#'\n\necho '# converting fragments to KDE'\nSIPSim fragment_KDE \\\n {fragFile} \\\n > fragsParsed_KDE.pkl\n \necho '# making a community file'\nSIPSim KDE_info \\\n -t fragsParsed_KDE.pkl \\\n > taxon_names.txt\nSIPSim communities \\\n --abund_dist_p {comm_params} \\\n taxon_names.txt \\\n > comm.txt\n \necho '# adding diffusion' \nSIPSim diffusion \\\n fragsParsed_KDE.pkl \\\n --bw {bandwidth} \\\n --np {np} \\\n > fragsParsed_KDE_dif.pkl \n\necho '# adding DBL contamination'\nSIPSim DBL \\\n fragsParsed_KDE_dif.pkl \\\n --bw {bandwidth} \\\n --np {np} \\\n > fragsParsed_KDE_dif_DBL.pkl\n \necho '# making incorp file'\nSIPSim incorpConfigExample \\\n --percTaxa {percTaxa} \\\n --percIncorpUnif {percIncorp} \\\n > {percTaxa}_{percIncorp}.config\n\necho '# adding 
isotope incorporation to BD distribution'\nSIPSim isotope_incorp \\\n fragsParsed_KDE_dif_DBL.pkl \\\n {percTaxa}_{percIncorp}.config \\\n --comm comm.txt \\\n --bw {bandwidth} \\\n --np {np} \\\n > fragsParsed_KDE_dif_DBL_inc.pkl\n\necho '# simulating gradient fractions'\nSIPSim gradient_fractions \\\n comm.txt \\\n > fracs.txt \n\necho '# simulating an OTU table'\nSIPSim OTU_table \\\n fragsParsed_KDE_dif_DBL_inc.pkl \\\n comm.txt \\\n fracs.txt \\\n --abs {abs} \\\n --np {np} \\\n > OTU_abs{abs}.txt\n \n#-- w/ PCR simulation --#\necho '# simulating PCR'\nSIPSim OTU_PCR \\\n OTU_abs{abs}.txt \\\n > OTU_abs{abs}_PCR.txt \n \necho '# subsampling from the OTU table (simulating sequencing of the DNA pool)'\nSIPSim OTU_subsample \\\n --dist {subsample_dist} \\\n --dist_params mean:{subsample_mean},sigma:{subsample_scale} \\\n --min_size {subsample_min} \\\n --max_size {subsample_max} \\\n OTU_abs{abs}_PCR.txt \\\n > OTU_abs{abs}_PCR_sub.txt\n \necho '# making a wide-formatted table'\nSIPSim OTU_wideLong -w \\\n OTU_abs{abs}_PCR_sub.txt \\\n > OTU_abs{abs}_PCR_sub_w.txt\n \necho '# making metadata (phyloseq: sample_data)'\nSIPSim OTU_sampleData \\\n OTU_abs{abs}_PCR_sub.txt \\\n > OTU_abs{abs}_PCR_sub_meta.txt\n \n\n#-- w/out PCR simulation --# \necho '# subsampling from the OTU table (simulating sequencing of the DNA pool)'\nSIPSim OTU_subsample \\\n --dist {subsample_dist} \\\n --dist_params mean:{subsample_mean},sigma:{subsample_scale} \\\n --min_size {subsample_min} \\\n --max_size {subsample_max} \\\n OTU_abs{abs}.txt \\\n > OTU_abs{abs}_sub.txt\n \necho '# making a wide-formatted table'\nSIPSim OTU_wideLong -w \\\n OTU_abs{abs}_sub.txt \\\n > OTU_abs{abs}_sub_w.txt\n \necho '# making metadata (phyloseq: sample_data)'\nSIPSim OTU_sampleData \\\n OTU_abs{abs}_sub.txt \\\n > OTU_abs{abs}_sub_meta.txt \n\n!chmod 777 $bashFile\n!cd $workDir; \\\n nestrun --template-file $bashFile -d rep3 --log-file log.txt -j 3\n\n%pushnote SIPsim rep3 complete", "BD min/max\n\nwhat 
is the min/max BD that we care about?", "%%R\n## min G+C cutoff\nmin_GC = 13.5\n## max G+C cutoff\nmax_GC = 80\n## max G+C shift\nmax_13C_shift_in_BD = 0.036\n\n\nmin_BD = min_GC/100.0 * 0.098 + 1.66 \nmax_BD = max_GC/100.0 * 0.098 + 1.66 \n\nmax_BD = max_BD + max_13C_shift_in_BD\n\ncat('Min BD:', min_BD, '\\n')\ncat('Max BD:', max_BD, '\\n')", "Loading non-PCR subsampled OTU tables", "OTU_files = !find $buildDir -name \"OTU_abs1e9_sub.txt\"\nOTU_files\n\n%%R -i OTU_files\n# loading files\n\ndf.SIM = list()\nfor (x in OTU_files){\n SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep3/', '', x)\n SIM_rep = gsub('/OTU_abs1e9_sub.txt', '', SIM_rep)\n df.SIM[[SIM_rep]] = read.delim(x, sep='\\t') \n }\ndf.SIM = do.call('rbind', df.SIM)\ndf.SIM$SIM_rep = gsub('\\\\.[0-9]+$', '', rownames(df.SIM))\nrownames(df.SIM) = 1:nrow(df.SIM)\ndf.SIM %>% head(n=3)", "BD range where an OTU is detected\n\nDo the simulated OTU BD distributions span the same BD range of the emperical data?", "comm_files = !find $buildDir -name \"comm.txt\"\ncomm_files\n\n%%R -i comm_files\n\ndf.SIM.comm = list()\nfor (x in comm_files){\n SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep3/', '', x)\n SIM_rep = gsub('/comm.txt', '', SIM_rep)\n df.SIM.comm[[SIM_rep]] = read.delim(x, sep='\\t') \n }\n\ndf.SIM.comm = do.call(rbind, df.SIM.comm)\ndf.SIM.comm$SIM_rep = gsub('\\\\.[0-9]+$', '', rownames(df.SIM.comm))\nrownames(df.SIM.comm) = 1:nrow(df.SIM.comm)\ndf.SIM.comm = df.SIM.comm %>%\n rename('bulk_abund' = rel_abund_perc) %>%\n mutate(bulk_abund = bulk_abund / 100)\ndf.SIM.comm %>% head(n=3)\n\n%%R -w 800 -h 400\n# Plotting the pre-fractionation abundances of each taxon\n\ndf.SIM.comm.s = df.SIM.comm %>%\n group_by(taxon_name) %>%\n summarize(median_rank = median(rank),\n mean_abund = mean(bulk_abund),\n sd_abund = sd(bulk_abund))\n\ndf.SIM.comm.s$taxon_name = reorder(df.SIM.comm.s$taxon_name, 
-df.SIM.comm.s$mean_abund)\n\nggplot(df.SIM.comm.s, aes(taxon_name, mean_abund, \n ymin=mean_abund-sd_abund,\n ymax=mean_abund+sd_abund)) +\n geom_linerange(alpha=0.4) +\n geom_point(alpha=0.6, size=1.2) +\n scale_y_log10() +\n labs(x='taxon', y='Relative abundance', title='Pre-fractionation abundance') +\n theme_bw() +\n theme(\n text = element_text(size=16),\n axis.text.x = element_blank()\n )\n\n%%R\n\n## joining SIP & comm (pre-fractionation)\ndf.SIM.j = inner_join(df.SIM, df.SIM.comm, c('library' = 'library',\n 'taxon' = 'taxon_name',\n 'SIM_rep' = 'SIM_rep')) %>%\n filter(BD_mid >= min_BD, \n BD_mid <= max_BD)\n \ndf.SIM.j %>% head(n=3)\n\n%%R\n# calculating BD range\ndf.SIM.j.f = df.SIM.j %>%\n filter(count > 0) %>%\n group_by(SIM_rep) %>%\n mutate(max_BD_range = max(BD_mid) - min(BD_mid)) %>%\n ungroup() %>%\n group_by(SIM_rep, taxon) %>%\n summarize(mean_bulk_abund = mean(bulk_abund),\n min_BD = min(BD_mid),\n max_BD = max(BD_mid),\n BD_range = max_BD - min_BD,\n BD_range_perc = BD_range / first(max_BD_range) * 100) %>%\n ungroup() \n \ndf.SIM.j.f %>% head(n=3) %>% as.data.frame\n\n%%R -h 300 -w 550\n## plotting\nggplot(df.SIM.j.f, aes(mean_bulk_abund, BD_range_perc, color=SIM_rep)) +\n geom_point(alpha=0.5, shape='O') +\n scale_x_log10() +\n scale_y_continuous() +\n labs(x='Pre-fractionation abundance', y='% of total BD range') +\n #geom_vline(xintercept=0.001, linetype='dashed', alpha=0.5) +\n theme_bw() +\n theme(\n text = element_text(size=16),\n panel.grid = element_blank(),\n legend.position = 'none'\n )", "Assessing diversity\nAsigning zeros", "%%R\n# giving value to missing abundances\nmin.pos.val = df.SIM.j %>%\n filter(rel_abund > 0) %>%\n group_by() %>%\n mutate(min_abund = min(rel_abund)) %>%\n ungroup() %>%\n filter(rel_abund == min_abund)\n\nmin.pos.val = min.pos.val[1,'rel_abund'] %>% as.numeric\nimp.val = min.pos.val / 10\n\n\n# convert numbers\ndf.SIM.j[df.SIM.j$rel_abund == 0, 'abundance'] = imp.val\n\n# another closure 
operation\ndf.SIM.j = df.SIM.j %>%\n group_by(SIM_rep, fraction) %>%\n mutate(rel_abund = rel_abund / sum(rel_abund))\n\n\n# status\ncat('Below detection level abundances converted to: ', imp.val, '\\n')", "Plotting Shannon diversity for each", "%%R\nshannon_index_long = function(df, abundance_col, ...){\n # calculating shannon diversity index from a 'long' formated table\n ## community_col = name of column defining communities\n ## abundance_col = name of column defining taxon abundances\n df = df %>% as.data.frame\n cmd = paste0(abundance_col, '/sum(', abundance_col, ')')\n df.s = df %>%\n group_by_(...) %>%\n mutate_(REL_abundance = cmd) %>%\n mutate(pi__ln_pi = REL_abundance * log(REL_abundance),\n shannon = -sum(pi__ln_pi, na.rm=TRUE)) %>%\n ungroup() %>% \n dplyr::select(-REL_abundance, -pi__ln_pi) %>%\n distinct_(...) \n return(df.s)\n}\n\n%%R -w 800 -h 300\n# calculating shannon\ndf.SIM.shan = shannon_index_long(df.SIM.j, 'count', 'library', 'fraction') %>%\n filter(BD_mid >= min_BD, \n BD_mid <= max_BD) \n\ndf.SIM.shan.s = df.SIM.shan %>%\n group_by(BD_bin = ntile(BD_mid, 24)) %>%\n summarize(mean_BD = mean(BD_mid),\n mean_shannon = mean(shannon),\n sd_shannon = sd(shannon))\n\n# plotting\np = ggplot(df.SIM.shan.s, aes(mean_BD, mean_shannon, \n ymin=mean_shannon-sd_shannon,\n ymax=mean_shannon+sd_shannon)) +\n geom_pointrange() +\n labs(x='Buoyant density', y='Shannon index') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np", "Plotting variance", "%%R -w 800 -h 350\ndf.SIM.j.var = df.SIM.j %>%\n group_by(SIM_rep, fraction) %>%\n mutate(variance = var(rel_abund)) %>%\n ungroup() %>%\n distinct(SIM_rep, fraction) %>%\n select(SIM_rep, fraction, variance, BD_mid)\n\nggplot(df.SIM.j.var, aes(BD_mid, variance, color=SIM_rep)) +\n geom_point() +\n geom_line() +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )", "Notes\n\nspikes at low & high G+C\nabsence of taxa or presence of taxa at those 
locations?\n\nPlotting absolute abundance distributions", "OTU_files = !find $buildDir -name \"OTU_abs1e9.txt\"\nOTU_files\n\n%%R -i OTU_files\n# loading files\n\ndf.abs = list()\nfor (x in OTU_files){\n SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep3/', '', x)\n SIM_rep = gsub('/OTU_abs1e9.txt', '', SIM_rep)\n df.abs[[SIM_rep]] = read.delim(x, sep='\\t') \n }\ndf.abs = do.call('rbind', df.abs)\ndf.abs$SIM_rep = gsub('\\\\.[0-9]+$', '', rownames(df.abs))\nrownames(df.abs) = 1:nrow(df.abs)\ndf.abs %>% head(n=3)\n\n%%R -w 800 \n\nggplot(df.abs, aes(BD_mid, count, fill=taxon)) +\n geom_area(stat='identity', position='dodge', alpha=0.5) +\n labs(x='Buoyant density', y='Subsampled community\\n(absolute abundance)') +\n facet_grid(SIM_rep ~ .) +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none',\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank()\n )\n\n%%R -w 800 \n\np1 = ggplot(df.abs %>% filter(BD_mid < 1.7), aes(BD_mid, count, fill=taxon, color=taxon)) +\n labs(x='Buoyant density', y='Subsampled community\\n(absolute abundance)') +\n facet_grid(SIM_rep ~ .) +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none',\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank()\n )\n\np2 = p1 + geom_line(alpha=0.25) + scale_y_log10()\np1 = p1 + geom_area(stat='identity', position='dodge', alpha=0.5) \n\ngrid.arrange(p1, p2, ncol=2)\n\n%%R -w 800 \n\np1 = ggplot(df.abs %>% filter(BD_mid > 1.72), aes(BD_mid, count, fill=taxon, color=taxon)) +\n labs(x='Buoyant density', y='Subsampled community\\n(absolute abundance)') +\n facet_grid(SIM_rep ~ .) 
+\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none',\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank()\n )\n\n\np2 = p1 + geom_line(alpha=0.25) + scale_y_log10()\np1 = p1 + geom_area(stat='identity', position='dodge', alpha=0.5) \n\ngrid.arrange(p1, p2, ncol=2)", "Conclusions\n\nDBL is a bit too permissive\nlow abundant taxa are spread out a bit more than emperical\nVariance spiking:\nabundance distributions are too tight\nemperical data variance suggests some extra unevenness in heavy fractions\nsome taxon DNA seems to be 'smeared' out into the heavy fractions\n\n\npossible fixes:\nmore abundant, high G+C genomes\nmore diffusion\nmore 'smearing' into the heavy fractions\n\n\nTODO:\ndetermine what's changing in emperical data between Days 1,3,6 & 14,30,48" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.20/_downloads/85b80d223414f32365a9175978a38cb4/plot_limo_data.ipynb
bsd-3-clause
[ "%matplotlib inline", "Single trial linear regression analysis with the LIMO dataset\nHere we explore the structure of the data contained in the\nLIMO dataset.\nThis example replicates and extends some of the main analysis\nand tools integrated in LIMO MEEG, a MATLAB toolbox originally designed\nto interface with EEGLAB_.\nIn summary, the example:\n\n\nFetches epoched data files for a single subject of the LIMO dataset [1]_.\n If the LIMO files are not found on disk, the\n fetcher :func:mne.datasets.limo.load_data() will automatically download\n the files from a remote repository.\n\n\nDuring import, information about the data (i.e., sampling rate, number of\n epochs per condition, number and name of EEG channels per subject, etc.) is\n extracted from the LIMO :file:.mat files stored on disk and added to the\n epochs structure as metadata.\n\n\nFits linear models on the single subject's data and visualizes inferential\n measures to evaluate the significance of the estimated effects.\n\n\nReferences\n.. [1] Guillaume, Rousselet. (2016). LIMO EEG Dataset, [dataset].\n University of Edinburgh, Centre for Clinical Brain Sciences.\n https://doi.org/10.7488/ds/1556.\n.. [2] Rousselet, G. A., Gaspar, C. M., Pernet, C. R., Husk, J. S.,\n Bennett, P. J., & Sekuler, A. B. (2010). Healthy aging delays scalp EEG\n sensitivity to noise in a face discrimination task.\n Frontiers in psychology, 1, 19. https://doi.org/10.3389/fpsyg.2010.00019\n.. [3] Rousselet, G. A., Pernet, C. R., Bennett, P. J., & Sekuler, A. B.\n (2008). Parametric study of EEG sensitivity to phase noise during face\n processing. BMC neuroscience, 9(1), 98.\n https://doi.org/10.1186/1471-2202-9-98", "# Authors: Jose C. 
Garcia Alanis <alanis.jcg@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom mne.datasets.limo import load_data\nfrom mne.stats import linear_regression\nfrom mne.viz import plot_events, plot_compare_evokeds\nfrom mne import combine_evoked\n\n\nprint(__doc__)\n\n# subject to use\nsubj = 1", "About the data\nIn the original LIMO experiment (see [2]), participants performed a\ntwo-alternative forced choice task, discriminating between two face stimuli.\nThe same two faces were used during the whole experiment,\nwith varying levels of noise added, making the faces more or less\ndiscernible to the observer (see Fig 1 in [3]_ for a similar approach).\nThe presented faces varied across a noise-signal (or phase-coherence)\ncontinuum spanning from 0 to 85% in increasing steps of 5%.\nIn other words, faces with high phase-coherence (e.g., 85%) were easy to\nidentify, while faces with low phase-coherence (e.g., 5%) were hard to\nidentify and by extension very hard to discriminate.\nLoad the data\nWe'll begin by loading the data from subject 1 of the LIMO dataset.", "# This step can take a little while if you're loading the data for the\n# first time.\nlimo_epochs = load_data(subject=subj)", "Note that the result of the loading process is an\n:class:mne.EpochsArray containing the data ready to interface\nwith MNE-Python.", "print(limo_epochs)", "Visualize events\nWe can visualise the distribution of the face events contained in the\nlimo_epochs structure. Events should appear clearly grouped, as the\nepochs are ordered by condition.", "fig = plot_events(limo_epochs.events, event_id=limo_epochs.event_id)\nfig.suptitle(\"Distribution of events in LIMO epochs\")", "As it can be seen above, conditions are coded as Face/A and Face/B.\nInformation about the phase-coherence of the presented faces is stored in the\nepochs metadata. These information can be easily accessed by calling\nlimo_epochs.metadata. 
As shown below, the epochs metadata also contains\ninformation about the presented faces for convenience.", "print(limo_epochs.metadata.head())", "Now let's take a closer look at the information in the epochs\nmetadata.", "# We want include all columns in the summary table\nepochs_summary = limo_epochs.metadata.describe(include='all').round(3)\nprint(epochs_summary)", "The first column of the summary table above provides more or less the same\ninformation as the print(limo_epochs) command we ran before. There are\n1055 faces (i.e., epochs), subdivided in 2 conditions (i.e., Face A and\nFace B) and, for this particular subject, there are more epochs for the\ncondition Face B.\nIn addition, we can see in the second column that the values for the\nphase-coherence variable range from -1.619 to 1.642. This is because the\nphase-coherence values are provided as a z-scored variable in the LIMO\ndataset. Note that they have a mean of zero and a standard deviation of 1.\nVisualize condition ERPs\nLet's plot the ERPs evoked by Face A and Face B, to see how similar they are.", "# only show -250 to 500 ms\nts_args = dict(xlim=(-0.25, 0.5))\n\n# plot evoked response for face A\nlimo_epochs['Face/A'].average().plot_joint(times=[0.15],\n title='Evoked response: Face A',\n ts_args=ts_args)\n# and face B\nlimo_epochs['Face/B'].average().plot_joint(times=[0.15],\n title='Evoked response: Face B',\n ts_args=ts_args)", "We can also compute the difference wave contrasting Face A and Face B.\nAlthough, looking at the evoked responses above, we shouldn't expect great\ndifferences among these face-stimuli.", "# Face A minus Face B\ndifference_wave = combine_evoked([limo_epochs['Face/A'].average(),\n -limo_epochs['Face/B'].average()],\n weights='equal')\n\n# plot difference wave\ndifference_wave.plot_joint(times=[0.15], title='Difference Face A - Face B')", "As expected, no clear pattern appears when contrasting\nFace A and Face B. 
However, we could narrow our search a little bit more.\nSince this is a \"visual paradigm\" it might be best to look at electrodes\nlocated over the occipital lobe, as differences between stimuli (if any)\nmight easier to spot over visual areas.", "# Create a dictionary containing the evoked responses\nconditions = [\"Face/A\", \"Face/B\"]\nevokeds = {condition: limo_epochs[condition].average()\n for condition in conditions}\n\n# concentrate analysis an occipital electrodes (e.g. B11)\npick = evokeds[\"Face/A\"].ch_names.index('B11')\n\n# compare evoked responses\nplot_compare_evokeds(evokeds, picks=pick, ylim=dict(eeg=(-15, 7.5)))", "We do see a difference between Face A and B, but it is pretty small.\nVisualize effect of stimulus phase-coherence\nSince phase-coherence\ndetermined whether a face stimulus could be easily identified,\none could expect that faces with high phase-coherence should evoke stronger\nactivation patterns along occipital electrodes.", "phase_coh = limo_epochs.metadata['phase-coherence']\n# get levels of phase coherence\nlevels = sorted(phase_coh.unique())\n# create labels for levels of phase coherence (i.e., 0 - 85%)\nlabels = [\"{0:.2f}\".format(i) for i in np.arange(0., 0.90, 0.05)]\n\n# create dict of evokeds for each level of phase-coherence\nevokeds = {label: limo_epochs[phase_coh == level].average()\n for level, label in zip(levels, labels)}\n\n# pick channel to plot\nelectrodes = ['C22', 'B11']\n# create figures\nfor electrode in electrodes:\n fig, ax = plt.subplots(figsize=(8, 4))\n plot_compare_evokeds(evokeds,\n axes=ax,\n ylim=dict(eeg=(-20, 15)),\n picks=electrode,\n cmap=(\"Phase coherence\", \"magma\"))", "As shown above, there are some considerable differences between the\nactivation patterns evoked by stimuli with low vs. 
high phase-coherence at\nthe chosen electrodes.\nPrepare data for linear regression analysis\nBefore we test the significance of these differences using linear\nregression, we'll interpolate missing channels that were\ndropped during preprocessing of the data.\nFurthermore, we'll drop the EOG channels (marked by the \"EXG\" prefix)\npresent in the data:", "limo_epochs.interpolate_bads(reset_bads=True)\nlimo_epochs.drop_channels(['EXG1', 'EXG2', 'EXG3', 'EXG4'])", "Define predictor variables and design matrix\nTo run the regression analysis,\nwe need to create a design matrix containing information about the\nvariables (i.e., predictors) we want to use for prediction of brain\nactivity patterns. For this purpose, we'll use the information we have in\nlimo_epochs.metadata: phase-coherence and Face A vs. Face B.", "# name of predictors + intercept\npredictor_vars = ['face a - face b', 'phase-coherence', 'intercept']\n\n# create design matrix\ndesign = limo_epochs.metadata[['phase-coherence', 'face']].copy()\ndesign['face a - face b'] = np.where(design['face'] == 'A', 1, -1)\ndesign['intercept'] = 1\ndesign = design[predictor_vars]", "Now we can set up the linear model to be used in the analysis using\nMNE-Python's func:~mne.stats.linear_regression function.", "reg = linear_regression(limo_epochs,\n design_matrix=design,\n names=predictor_vars)", "Extract regression coefficients\nThe results are stored within the object reg,\nwhich is a dictionary of evoked objects containing\nmultiple inferential measures for each predictor in the design matrix.", "print('predictors are:', list(reg))\nprint('fields are:', [field for field in getattr(reg['intercept'], '_fields')])", "Plot model results\nNow we can access and plot the results of the linear regression analysis by\ncalling :samp:reg['{&lt;name of predictor&gt;}'].{&lt;measure of interest&gt;} and\nusing the\n:meth:~mne.Evoked.plot_joint method just as we would do with any other\nevoked object.\nBelow we can see a clear 
effect of phase-coherence, with higher\nphase-coherence (i.e., better \"face visibility\") having a negative effect on\nthe activity measured at occipital electrodes around 200 to 250 ms following\nstimulus onset.", "reg['phase-coherence'].beta.plot_joint(ts_args=ts_args,\n title='Effect of Phase-coherence',\n times=[0.23])", "We can also plot the corresponding T values.", "# use unit=False and scale=1 to keep values at their original\n# scale (i.e., avoid conversion to micro-volt).\nts_args = dict(xlim=(-0.25, 0.5),\n unit=False)\ntopomap_args = dict(scalings=dict(eeg=1),\n average=0.05)\n\nfig = reg['phase-coherence'].t_val.plot_joint(ts_args=ts_args,\n topomap_args=topomap_args,\n times=[0.23])\nfig.axes[0].set_ylabel('T-value')", "Conversely, there appears to be no (or very small) systematic effects when\ncomparing Face A and Face B stimuli. This is largely consistent with the\ndifference wave approach presented above.", "ts_args = dict(xlim=(-0.25, 0.5))\n\nreg['face a - face b'].beta.plot_joint(ts_args=ts_args,\n title='Effect of Face A vs. Face B',\n times=[0.23])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dsacademybr/PythonFundamentos
Cap09/Mini-Projeto2/Mini-Projeto2 - Analise4.ipynb
gpl-3.0
[ "<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 9</font>\nDownload: http://github.com/dsacademybr\nMini-Projeto 2 - Análise Exploratória em Conjunto de Dados do Kaggle\nAnálise 4", "# Imports\nimport os\nimport subprocess\nimport stat\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\nsns.set(style = \"white\")\n%matplotlib inline\n\n# Dataset\nclean_data_path = \"dataset/autos.csv\"\ndf = pd.read_csv(clean_data_path,encoding = \"latin-1\")\n\n# Calcule a média de preço por marca e por veículo\n", "Preço médio de um veículo por marca, bem como tipo de veículo", "# Crie um Heatmap com Preço médio de um veículo por marca, bem como tipo de veículo\n\n\n# Salvando o plot\nfig.savefig(\"plots/Analise4/heatmap-price-brand-vehicleType.png\")", "Fim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
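The exercise in the record above asks for a heatmap of the mean price per brand and vehicle type, but the code cell is left blank. A minimal pandas sketch of the pivot step, using a toy stand-in for the `autos.csv` data (the column names `brand`, `vehicleType`, and `price` are assumptions about that dataset):

```python
import pandas as pd

# toy stand-in for the brand/vehicleType/price columns of autos.csv
df = pd.DataFrame({
    'brand': ['vw', 'vw', 'bmw', 'bmw'],
    'vehicleType': ['bus', 'limousine', 'bus', 'limousine'],
    'price': [5000, 7000, 9000, 12000],
})

# mean price on a brand x vehicleType grid -- the 2D input sns.heatmap expects
trellis = df.groupby(['brand', 'vehicleType'])['price'].mean().unstack()
print(trellis)

# the notebook would then plot and save it, e.g.:
# fig, ax = plt.subplots()
# sns.heatmap(trellis, annot=True, fmt='.0f', ax=ax)
# fig.savefig("plots/Analise4/heatmap-price-brand-vehicleType.png")
```

The `groupby(...).mean().unstack()` chain is the key step: it turns the long table into a brand-by-type matrix of averages that a heatmap can render directly.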
cstorm125/ladybug
notebook/.ipynb_checkpoints/capture_face-checkpoint.ipynb
mit
[ "Capture Faces from Scraped Pictures\nWe used haarcascade for frontal face from OpenCV to capture the frontal faces from the pictures scraped from My Ladyboy Date and Date in Asia, and cropped them to the 224 by 224 size for input into the model. Girl and Ladyboy pictures are only the first profile pictures on respective dating sites whereas Ladyboy Big are the pictures in the detail section.", "import cv2\nfrom PIL import Image\nimport math\nimport copy\n\n#the usual data science stuff\nimport os,sys\nimport glob\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n%matplotlib inline\n\nladyboy_big_input = '../data/ladyboy_big/'\nladyboy_big_output = '../data/processed/ladyboy_big/'\nladyboy_input = '../data/ladyboy/'\nladyboy_output = '../data/processed/ladyboy/'\ngirl_input = '../data/girl/'\ngirl_output = '../data/processed/girl/'\n\ncascade_file_src = \"haarcascade_frontalface_default.xml\"\nfaceCascade = cv2.CascadeClassifier(cascade_file_src)", "Ladyboy", "#i=0\nfor root, dirs, files in os.walk(ladyboy_input):\n for name in files:\n #print(i)\n #i+=1\n imagePath = os.path.join(root, name)\n\n # load image on gray scale :\n image = cv2.imread(imagePath)\n gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n # detect faces in the image :\n faces = faceCascade.detectMultiScale(gray, 1.2, 5)\n\n #skip if face not detected\n if(len(faces)==0):\n continue\n\n #open image\n im = Image.open(imagePath)\n\n #get box dimensions\n (x, y, w, h) = faces[0]\n center_x = x+w/2\n center_y = y+h/2\n b_dim = min(max(w,h)*1.2,im.width, im.height)\n box = (int(center_x-b_dim/2), int(center_y-b_dim/2), \n int(center_x+b_dim/2), int(center_y+b_dim/2))\n # Crop Image\n crpim = im.crop(box).resize((224,224))\n #plt.imshow(np.asarray(crpim))\n #save file\n crpim.save(ladyboy_output+name,format='JPEG')", "Ladyboy Big", "#i=0\nfor root, dirs, files in os.walk(ladyboy_big_input):\n for name in files:\n #print(i)\n #i+=1\n imagePath = os.path.join(root, name)\n\n # 
load image on gray scale :\n image = cv2.imread(imagePath)\n gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n # detect faces in the image :\n faces = faceCascade.detectMultiScale(gray, 1.2, 5)\n\n #skip if face not detected\n if(len(faces)==0):\n continue\n\n #open image\n im = Image.open(imagePath)\n\n #get box dimensions\n (x, y, w, h) = faces[0]\n center_x = x+w/2\n center_y = y+h/2\n b_dim = min(max(w,h)*1.2,im.width, im.height)\n box = (int(center_x-b_dim/2), int(center_y-b_dim/2), \n int(center_x+b_dim/2), int(center_y+b_dim/2))\n # Crop Image\n crpim = im.crop(box).resize((224,224))\n #plt.imshow(np.asarray(crpim))\n #save file\n crpim.save(ladyboy_big_output+name,format='JPEG')", "Girl", "#i=0\nfor root, dirs, files in os.walk(girl_input):\n for name in files:\n #print(i)\n #i+=1\n imagePath = os.path.join(root, name)\n\n # load image on gray scale :\n image = cv2.imread(imagePath)\n gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n # detect faces in the image :\n faces = faceCascade.detectMultiScale(gray, 1.2, 5)\n\n #skip if face not detected\n if(len(faces)==0):\n continue\n\n #open image\n im = Image.open(imagePath)\n\n #get box dimensions\n (x, y, w, h) = faces[0]\n center_x = x+w/2\n center_y = y+h/2\n b_dim = min(max(w,h)*1.2,im.width, im.height)\n box = (int(center_x-b_dim/2), int(center_y-b_dim/2), \n int(center_x+b_dim/2), int(center_y+b_dim/2))\n # Crop Image\n crpim = im.crop(box).resize((224,224))\n #plt.imshow(np.asarray(crpim))\n #save file\n crpim.save(girl_output+name,format='JPEG')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hektor-monteiro/python-notebooks
aula-8_derivadas.ipynb
gpl-2.0
[ "Derivadas numéricas\nA definição de derivada é:\n$$\\frac{df}{dx} = \\lim_{h \\to 0} {f(x+h)-f(x) \\over h}$$ \nO método mais simples para o cálculo de derivadas numéricas é o de diferenças finitas:\n$$f'(x) \\approx {f(x+h)-f(x) \\over h}$$ \nforward difference\n$$f'(x) \\approx {f(x)-f(x-h) \\over h}$$ \nbackward difference\nA escolha de um ou outro método vai depender do problema a ser resolvido — por exemplo, em casos onde se necessita da derivada na fronteira de um intervalo, dependendo da posição da fronteira somente um ou outro método será aplicável. De modo geral ambos dão aproximadamente o mesmo resultado.\nAo contrário das integrais, as derivadas numéricas são afetadas pelo erro de arredondamento assim como pelo erro de resolução devido à escolha do tamanho $h$ do intervalo.\nNo caso das derivadas, como estamos lidando com uma subtração de números que tendem a ser muito próximos, pois $h$ deve ser o menor possível, o erro de arredondamento fica importante.\nO livro texto mostra como obter o $h$ ideal para que não se cometam grandes erros de arredondamento.\nDe modo geral, usando expansão de Taylor para expressar a função $f(x)$ pode-se mostrar que:\n$$ h = \\sqrt{4C \\left|\\frac{f(x)}{f''(x)} \\right |} $$\n$$ \\epsilon = h|f''(x)| = ( 4C|f(x)f''(x)|)^{1/2} $$\nonde C é a precisão da máquina. Ou seja, o tamanho ideal seria da ordem de $\\sqrt{C}$, assim como o erro obtido.
No caso do Python este valor é da ordem de 1.0e-8.\nUma maneira de se obter um resultado melhor é usando a diferença central:\n$$f'(x) \\approx {f(x+h/2)-f(x-h/2) \\over h}$$ \ncentral or symmetric difference\nusando as expansões obtemos para o $h$ ideal e o erro:\n$$ h = \\left( 24C \\left|\\frac{f(x)}{f'''(x)} \\right | \\right)^{1/3} $$\n$$ \\epsilon = \\frac{1}{8}h^2|f'''(x)| = ( \\frac{9}{8}C^2[f(x)]^2|f'''(x)|)^{1/3} $$\nneste caso, usando a precisão típica do Python, temos um $h$ ideal de 1.0e-5, que leva a um erro da ordem de 1.0e-10.\nVeja abaixo a interpretação geométrica das 3 metodologias de cálculo:\nfonte: https://en.wikipedia.org/wiki/Finite_difference", "from IPython.display import SVG, display\ndisplay(SVG(url='https://upload.wikimedia.org/wikipedia/commons/9/90/Finite_difference_method.svg'))\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef f(x):\n    return -(0.1*x**4)-(0.15*x**3)-(0.5*x**2)-(0.25*x)+1.2\n    #return np.sin(x)\n\ndef df(x):\n    return (-0.4*x**3)-(0.45*x**2)-(1.0*x)-0.25\n    #return np.cos(x)\n\nx = np.linspace(0,1,100)\nx0 = 0.5\nn = 30\n\nh = np.zeros(n,float)\nerr = np.zeros(n,float)\n\nfor i in range(n):\n    h[i] = float(10**(-i))\n    dfnum = (f(x0+h[i]/2.)-f(x0-h[i]/2.))/(h[i])\n    dftrue = df(x0)\n    err[i] = np.abs(dfnum-dftrue)\n\nfig, ax = plt.subplots(2)\nax[0].plot(x,f(x))\nax[1].loglog(h,err)\n\nplt.scatter(x,f(x))\n\ndef f(x):\n    #return -(0.1*x**4)-(0.15*x**3)-(0.5*x**2)-(0.25*x)+1.2\n    return np.sin(x)\n\ndef df(x):\n    #return (-0.4*x**3)-(0.45*x**2)-(1.0*x)-0.25\n    return np.cos(x)\n\nh = 1.0e-1\n\na = 0; b = 2*np.pi\n\nx = np.arange(a,b+h,h)\ndfnum = np.zeros(x.size)\n\nfor i in range(x.size):\n    # diferença central nos pontos interiores; forward/backward nas bordas\n    if ((i > 0) & (i < x.size-1)):\n        dfnum[i] = (f(x[i]+h)-f(x[i]-h))/(2*h)\n    elif (i == 0):\n        dfnum[i] = (f(x[i]+h)-f(x[i]))/h\n    else:\n        dfnum[i] = (f(x[i])-f(x[i]-h))/h\n\nplt.plot(x,dfnum,'.r')\nplt.plot(x,f(x))\nplt.plot(x,df(x))\nplt.show()", "Derivada de 
dados tabulados\nMuitas vezes temos que lidar com o problema de obter derivadas de dados obtidos experimentalmente. Em geral para esses dados não temos a opção de calcular a função em pontos específicos e muitas vezes nem mesmo temos a função. Existem duas situações típicas: 1) dados amostrados regularmente; 2) dados sem regularidade na amostragem.\nNo primeiro caso usamos a fórmula das diferenças centrais. No entanto, como em geral não temos como gerar outros pontos, devemos adaptar o método. No caso de dados amostrados em intervalos regulares de tamanho h usamos:\n$$f'(x) \\approx {f(x+h)-f(x-h) \\over 2h}$$ \nOutra estratégia é calcular a derivada com a diferença central usual mas em um ponto no meio do intervalo. A desvantagem é que a derivada seria calculada em um ponto para o qual dados não foram obtidos e em algumas aplicações isso pode não ser útil.\nNo segundo caso, não há outra alternativa a não ser fazer interpolação para regularizar a amostragem dos dados. Veremos isso mais adiante no curso.\naproximações de ordem mais alta\nUma maneira de interpretar o procedimento de cálculo de uma derivada numérica é que estamos ajustando uma reta em dois pontos distantes h um do outro e tomando a inclinação dessa reta como nossa derivada. 
Nesse contexto podemos pensar em ajustar polinômios de ordens mais altas para calcular as derivadas, assim como fizemos com os métodos de Newton-Cotes de ordens maiores.\nNo livro texto é mostrada uma aproximação por polinômio de segundo grau, que leva à fórmula da diferença central, assim como uma tabela com coeficientes para aproximações de ordens maiores:\nhttp://www.umich.edu/~mejn/cp/chapters/int.pdf\nUsando o Python\nA função SciPy scipy.misc.derivative calcula derivadas usando a fórmula de diferença central.", "from scipy.misc import derivative\nimport numpy as np\n\nx = np.arange(0,5)\nder = derivative(np.exp,x,dx=0.1)\n\nprint(der)", "Segunda Derivada\nDerivadas Parciais", "interv[10]", "Exercícios:\n1 - Escreva um programa para calcular as derivadas parciais de funções polinomiais multivariadas como por exemplo $f(x,y) = x^2y + xy - y^2$. Explore graficamente a função e os erros cometidos em função da amostragem adotada." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
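Exercise 1 in the derivatives notebook above asks for numerical partial derivatives of a multivariate polynomial. A minimal central-difference sketch of the idea — the helper names `partial_x`/`partial_y` are illustrative, not from the notebook:

```python
def f(x, y):
    # example polynomial from the exercise: f(x, y) = x^2*y + x*y - y^2
    return x**2 * y + x * y - y**2

def partial_x(f, x, y, h=1e-5):
    # central difference in x, holding y fixed
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-5):
    # central difference in y, holding x fixed
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# analytic check at (1, 2): df/dx = 2xy + y = 6, df/dy = x^2 + x - 2y = -2
print(partial_x(f, 1.0, 2.0))  # ≈ 6.0
print(partial_y(f, 1.0, 2.0))  # ≈ -2.0
```

Each partial derivative is just the 1D central-difference formula from the notebook applied along one axis while the other variable is held fixed, so the same $h$-versus-rounding trade-off discussed there applies per axis.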
ES-DOC/esdoc-jupyterhub
notebooks/awi/cmip6/models/sandbox-2/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: AWI\nSource ID: SANDBOX-2\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:38\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'awi', 'sandbox-2', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
lseongjoo/learn-python
function.ipynb
mit
[ "from __future__ import print_function\n\ndef hello_world():\n msg = 'hello, world!'\n return msg\n\ngreeting = hello_world()\nprint(greeting)", "Default argument values", "def greetings(hour, lang='kr', extra_msg=None):\n # validate the hour value\n if hour < 0 or hour > 24:\n return\n \n # set up the messages for each language\n msgs = {'kr': [u'좋은', u'아침', u'오후', u'저녁', u'밤'],\n 'en': [u'Good', u'morning', u'afternoon', u'evening', \n u'night']}\n\n # if extra messages are given, apply them\n if not extra_msg is None:\n for key, value in extra_msg.items():\n msgs[key] = value\n \n # exit the function if the language is not available\n if not lang in msgs:\n return\n \n msg_prefix = msgs[lang][0]\n \n # build the message according to the hour\n msg = msg_prefix + ' '\n if 6 < hour < 12:\n msg += msgs[lang][1]\n elif 12<= hour < 18:\n msg += msgs[lang][2]\n elif 18<= hour < 21:\n msg += msgs[lang][3]\n else:\n msg += msgs[lang][4]\n return msg\n\nprint(greetings(9, lang='fr', \n extra_msg={'fr': \n ['bon', \n 'jour', 'soir', 'nuit']}))\nprint(greetings(13))\nprint(greetings(19))\nprint(greetings(22))\nprint(greetings(-2))\n\n# the effect of return\ndef many_exits(exit_no):\n if exit_no==1:\n return 'Exit 1'\n if exit_no==2:\n return 'Exit 2'\n if exit_no==3:\n return 'Exit 3'\n \n return '그런 출구는 없습니다.'\n\nprint(many_exits(1))\nprint(many_exits(2))\nprint(many_exits(9))", "Function arguments\nAn example for learning about function arguments", "def juicer(ingredient, customer_name):\n result = customer_name + u' 님, '\n\n # TODO: check whether this is an available menu item\n menu = [u'딸기', u'사과', u'망고']\n if not ingredient in menu:\n return None\n \n result += ingredient + u' 주스'\n if ingredient == u'딸기':\n price = 10\n elif ingredient == u'사과':\n price = 15\n elif ingredient == u'망고':\n price = 20\n \n return result, price\n\nresult = juicer(u'딸기', u'성주')\nif result is None:\n print('그런 메뉴 없습니다.')\nelse:\n msg, price = result # unpack the result tuple\n print(msg + u' 나왔습니다.')\n print('가격: ' + str(price))\n# example output: 성주 님, 딸기 주스 나왔습니다.\n# 딸기 (strawberry) juice is 10 won.\n# 사과 (apple) juice is 15 won.\n# 망고 (mango) juice is 20 won.", "Returning two or more results (return)", "def swap(x,y):\n return (y,x)\n\nprint(swap(1,2))\n\n# define a function\ndef foo(a,b):\n return a*2, b*2\n\n# call the function\naa, bb = foo(1,2)\nprint(aa,bb)\n\nresult = foo(1,2)\nprint(type(result))\nprint(result[0], result[1])", "Effects on arguments", "def double(x):\n x = x*2\n \nx = 1\ndouble(x)\nprint(x)\n\ndef square_not_safe(seq):\n for i, n in enumerate(seq):\n seq[i] = seq[i]**2\n\ndef square_safe(seq):\n # copy the values\n seq = list(seq[:])\n for i, n in enumerate(seq):\n seq[i] = seq[i]**2\n \n return seq\n\nnums = [1,2,3,4]\nresult = square_safe(nums)\nprint(nums)\nprint(result)\n\n# list --> tuple\nnums = tuple(nums)\nsquare_not_safe(nums)", "Variables declared in a function's scope are local variables", "def func(x):\n y = x\n \nfunc(1)\nprint(y)", "Generators (Generator)", "def countdown(n):\n print('카운트 다운 시작!')\n while n>0:\n yield n\n n -=1\n\nc = countdown(10)\n\nc.next()\n\nc.next()\n\nfor c in countdown(10):\n print(c, end=' ')", "Challenge\nDefine a function its_odd that takes a sequence such as a list or a string and returns only its odd-numbered elements.\nExample: \nresult = its_odd(u'파이썬')\nprint(result) # '파썬'\n\nresult = its_odd([1,2,3,4,5])\nprint(result) # [1,3,5]\n\na. Save the input and output to a file its_odd_result.txt in the following format.\n파이썬 --> 파썬\n[1,2,3,4,5] --> [1,3,5]", "def its_odd(seq):\n result = seq[::2]\n return result\n\nresult = its_odd(u'파이썬')\nprint(result)\n\nprint(its_odd([1,2,3,4,5]))\n\ntype(its_odd)\n\nits_odd(range(10))\n\nxx = its_odd\nxx(range(10))\n\ndef save_to_file(func, input_seq, filename, str_format=u'{} --> {}'):\n output_seq = func(input_seq)\n f = open(filename, 'w')\n # TODO: perform string encoding\n text_encoded = str_format.format(input_seq, output_seq).encode('utf-8')\n f.write(text_encoded)\n f.close()\n\nsave_to_file(its_odd, [1,2,3,4,5], 'its_odd_result.txt')\n\ndef its_even(seq):\n return seq[1::2]\n\nsave_to_file(its_even, [1,2,3,4,5], 'its_even_result.txt')", "But what happens when we try to save Unicode ... ?", "save_to_file(its_odd, u'파이썬', 'its_odd_result.txt')", "Challenge\nThe Fibonacci sequence is as follows.\n0 1 1 2 3 5 8 ... \na. Write a function generate_fibo that returns an arbitrary number n of Fibonacci numbers as a list.\nb. Allow the two starting numbers to be specified when calling generate_fibo. If the arguments are not set, the sequence starts with 0,1. If the count is not specified, generate 10 numbers by default.", "def generate_fibo(a=0,b=1,n=10):\n fibos = [a,b]\n # add elements until the requested count\n while len(fibos)<n:\n # create the next element, then swap the a, b values\n a,b = (b, a+b)\n # append the newly created value to the result\n fibos.append(b)\n \n return fibos\n\ngenerate_fibo()", "Challenge\nWrite a program that deals 5 cards from a 52-card poker deck to each game participant. The deck must be shuffled randomly before it is dealt. The number of participants can be 2-4.\na. After dealing, print all the poker cards each participant received.\nExample:\n이성주 : H2, D2, SJ, C10, S3\n김성주 : C3, D4, CK, SK, H9\nb. Print each participant's cards sorted in ascending order by rank.", "from __future__ import print_function\nimport random\n\n# 1. create the card deck\ndef generate_card():\n # build a 52-card deck\n suits = ['Heart', 'Diamond', 'Clover', 'Spade']\n ranks = range(2,11)+['J', 'Q', 'K', 'A']\n deck = []\n for s in suits:\n for r in ranks:\n card = s + str(r)\n deck.append(card)\n return deck\n\n# 2. deal the deck\ndef play_card_game(deck, players):\n # shuffle before dealing\n random.shuffle(deck)\n \n # now deal the cards ...\n for person in players:\n person['hand'] = deck[:5]\n deck = deck[5:]\n \n return\n\ndeck = generate_card()\nplayers = [{'name':'이성주'}, {'name':'김성주'}]\nplay_card_game(deck, players)\n\nprint(len(deck))\n\nfor person in players:\n print(person['name'], end=': ')\n print(person['hand'])", "Challenge\nThere is a deck of 52 poker cards. Use these cards to play a game of blackjack.\nIn blackjack, each participant first receives two cards.\nAdd up the values of the cards and check whether the total is 21.\n\nIf it is 21, blackjack! The game ends and the player wins.\nIf the total is less than 21, the player receives one more card.\nIf the total is greater than 21, the player loses the game.\n\na. Save the dealt hands to a file, for example:\n참가자: 이성주\n2015-07-08\nHJ, HK 패!\nHJ, S10 블랙잭!", "from __future__ import print_function\nimport random\n\n# create a 52-card deck\ndef gen_deck():\n ranks = list(range(2,11))+['J', 'Q', 'K', 'A']\n suits = ['Spade', 'Heart', 'Diamond', 'Clover']\n \n deck = [] # initialize the deck\n for s in suits:\n for r in ranks:\n deck.append((s, r))\n \n # shuffle well\n random.shuffle(deck)\n return deck\n\ndef get_card_value(hand):\n \"\"\"Sum of the rank values in a hand\"\"\"\n # add up the ranks of all cards in the current hand.\n value=0\n for card in hand:\n # rank value of the current card\n rank = card[1]\n if rank=='A':\n value = value + 14\n elif rank=='K':\n value = value + 13\n elif rank == 'Q':\n value = value + 12\n elif rank == 'J':\n value = value + 11\n else:\n # for numeric ranks, just add the number.\n value = value + rank\n \n return value\n\ndef play_blackjack(player, output_file=None):\n # start the blackjack game\n deck=gen_deck()\n\n # receive two cards\n player['hand'] = [deck.pop(), deck.pop()]\n\n while True:\n play_log = u'카드패: {}'.format(player['hand'])\n play_log += u'\\n'\n\n # total rank value of the hand\n hand_value = get_card_value(player['hand'])\n\n play_log += u'카드 숫자값= {}'.format(hand_value)\n play_log += u'\\n'\n if hand_value == 21:\n play_log += u'블랙잭!!!!!!!' \n play_log += u'\\n'\n print(play_log)\n break\n elif hand_value > 21:\n play_log += u'돈 잃었다...'\n play_log += '\\n'\n print(play_log)\n break\n elif hand_value < 21:\n # receive one more card.\n player['hand'].append(deck.pop())\n play_log += u'인생을 계속 살아봐야 아는 거지 ... 한 장 더'\n play_log += u'\\n'\n print(play_log)\n \n # write the game result to a file\n if output_file is not None:\n f = open(output_file, 'a')\n text_encoded = play_log.encode('utf-8')\n f.write(text_encoded)\n f.close()\n \nplay_blackjack({'name':u'이성주'}, output_file='blackjack_log.txt')\n\ndeck = gen_deck()\nhand = deck[:3]\nprint(hand)\nget_card_value(hand)", "Challenge\nGenerate a file containing 100 random hands of 5 cards each. Write a program that checks whether each hand in this file contains four of a kind (4 cards with the same rank).", "# joining the elements of a list into a string\ntext = ''\nfor c in ['a','b','c']:\n text += c\nprint(text)\n\n# joining the elements of a sequence into a string\n# using the Python standard library\nimport string\ntext = string.join(('Spade', str(6)), sep=' ')\nprint(text)\n\n# converting a string to a list\n'a,b,c'.split(',')\n\nimport string\n\ndef card_to_string(card):\n return string.join((str(card[0]), str(card[1])), sep=' ')\n\ndef gen_hands(filename = 'hand_100.txt', n=100):\n # create the hands file\n f = open(filename, 'w')\n for i in range(n):\n hand = gen_deck()[:5]\n # convert the hand information to a string\n for card in hand:\n card_str = card_to_string(card)\n f.write(card_str)\n f.write(',')\n f.write('\\n')\n \n f.close()\n\nfrom __future__ import print_function\n\ndef is_pocker(hand):\n \"\"\"Check whether a hand is four of a kind (contains 4 cards of the same rank)\"\"\"\n # TODO: detect four of a kind\n \"\"\"\n X O O O O\n O X O O O\n O O X O O\n O O O X O\n O O O O X\n \"\"\"\n # extract only the ranks from the hand\n ranks = []\n for card in hand:\n ranks.append(str(card[1]))\n\n #print(ranks) # debug output\n \n # an approach that changes the data structure\n # should probably find a better way ...\n if len(set(ranks)) == 2:\n return True\n \n return False\n\ndef to_list(hand_str_list):\n hand_list = []\n for l in hand_str_list:\n # convert the hand to a list\n hand_str = l.split(',')[:-1]\n # convert each element of the hand to a tuple\n hand = []\n for card in hand_str:\n hand.append(tuple(card.split()))\n hand_list.append(hand)\n return hand_list\n\ndef check_pockers(filename):\n \"\"\"Read the file and detect four-of-a-kind patterns in the hands\"\"\"\n f = open(filename)\n hand_list = to_list(f.readlines())\n f.close() # done reading the file\n \n # check each hand for four of a kind\n for hand in hand_list:\n if is_pocker(hand):\n print(hand, end=': ')\n print('포커!')\n\nis_pocker([('H',7), ('D',7), ('S',7), ('D',2), ('C',7)])\n\nfilename = 'hand_10000.txt'\ngen_hands(filename, n=10000)\ncheck_pockers(filename)", "Challenge\nA collection of cards is called a hand. Write a program that detects the following cases in a hand.\n1. Four cards with the same rank\n2. Five consecutive ranks\n3. Five cards of the same suit\n4. Pairs with the same rank" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
IEEE-NITK/DeepNLP
Project-Code/thrones2vec/GoT_vectors.ipynb
mit
[ "import codecs\nimport glob\nimport logging\nimport multiprocessing\nimport os\nimport pprint\nimport re\n\n\nimport nltk\nimport gensim.models.word2vec as w2v\nimport sklearn.manifold\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nfrom bhtsne import tsne\n\n# get pretrained word vectors trained on the entire GoT text using word2vec in gensim\n\nthrones2vec = w2v.Word2Vec.load(os.path.join(\"trained\", \"thrones2vec.w2v\"))\n\n# get all word vectors\nall_word_vecs = thrones2vec.syn0", "What patterns do we see if we average a set of similar/related words(names) and find the words with highest cosine similarity with our average vector?", "def best_avgs(words, all_vecs,k=10):\n \n from operator import itemgetter\n \n ## get word embeddings for the words in our input array\n embs = np.array([thrones2vec[word] for word in words])\n \n #calculate its average\n avg = np.sum(embs,axis=0)/len(words)\n \n # Cosine Similarity with every word vector in the corpus\n denom = np.sqrt(np.sum(all_vecs*all_vecs,axis=1,keepdims=True)) \\\n * np.sqrt(np.sum(avg*avg))\n \n similarity = all_vecs.dot(avg.T).reshape(all_vecs.shape[0],1) \\\n / denom\n similarity = similarity.reshape(1,all_vecs.shape[0])[0]\n \n # Finding the 10 largest words with highest similarity\n # Since we are averaging we might end up getting the input words themselves \n # among the top values\n # we need to make sure we get back len(words)+k closest words and then \n # remove all input words we supplied\n \n nClosest = k + len(words)\n \n # Get indices of the most similar word vectors to our avgvector\n ind = np.argpartition(similarity, -(nClosest))[-nClosest:]\n \n names = [thrones2vec.index2word[indx] for indx in ind]\n similarity = similarity[ind]\n uniq = [(person,similar) for person,similar in zip(names,similarity) if person not in words]\n \n \n return sorted(uniq,key=itemgetter(1),reverse=True)[:k]", "Let's see what we get\nWe will supply the names of all 
the children of Ned and Catelyn Stark and see what we get back as the best averages", "children = [\"Arya\",\"Robb\",\"Sansa\",\"Bran\",\"Jon\"]\n\nbest_avgs(children, all_word_vecs, 10)", "And the top two best averages? Their parents: Ned and Catelyn.\nMath is beautiful :)\nSee if we can get some context about two families from their best average vectors", "families = [\"Lannister\",\"Stark\"]\nbest_avgs(families, all_word_vecs, 10)", "Spoilers", "families = [\"Tully\",\"Stark\"]\nbest_avgs(families, all_word_vecs, 10)", "The model correctly predicted the relationship between the two families", "families = [\"Lannister\",\"Baratheon\"]\nbest_avgs(families, all_word_vecs, 10)", "Who's the usurper? A person who takes a position of power or importance illegally or by force.", "thrones2vec.most_similar(\"usurper\")", "Here we obtain words that are used in the same context as usurper or that have some similarity of usage with it. So the model is able to capture this kind of relationship as well.", "thrones2vec.most_similar(\"Tyrion\")\n\nthrones2vec.most_similar(\"Dothraki\")\n\ndef nearest_similarity_cosmul(start1, end1, end2):\n    similarities = thrones2vec.most_similar_cosmul(\n        positive=[end2, start1],\n        negative=[end1]\n    )\n    start2 = similarities[0][0]\n    print(\"{start1} is related to {end1}, as {start2} is related to {end2}\".format(**locals()))\n    return start2\n\nnearest_similarity_cosmul(\"woman\",\"man\",\"king\")\n\nnearest_similarity_cosmul(\"Jaime\",\"Lannister\",\"Stark\")\n\nthrones2vec.most_similar(\"Jaime\")", "Arya - Jon + Ghost = ?", "thrones2vec.most_similar(positive=['Ghost', 'Arya'], negative=['Jon'])", "Dimensionality reduction using t-SNE", "Y = tsne(all_word_vecs.astype('float64'))\n\npoints = pd.DataFrame(\n    [\n        (word, coords[0], coords[1])\n        for word, coords in [\n            (word, Y[thrones2vec.vocab[word].index])\n            for word in thrones2vec.vocab\n        ]\n    ],\n    columns=[\"word\", \"x\", \"y\"]\n)\n\npoints.head(10)\n\nsns.set_context(\"poster\")\n\n%pylab
inline\n\npoints.plot.scatter(\"x\", \"y\", s=10, figsize=(20, 12))\n\n\ndef plot_region(x_bounds, y_bounds):\n slice = points[\n (x_bounds[0] <= points.x) &\n (points.x <= x_bounds[1]) & \n (y_bounds[0] <= points.y) &\n (points.y <= y_bounds[1])\n ]\n inwords=[]\n ax = slice.plot.scatter(\"x\", \"y\", s=35, figsize=(10, 8))\n for i, point in slice.iterrows():\n inwords.append(point.word)\n ax.text(point.x + 0.005, point.y + 0.005, point.word, fontsize=11)\n print(\", \".join(inwords))\n\nplot_region(x_bounds=(-8.0,-6.0), y_bounds=(-29.0, -26.0))\n\npoints.loc[points[\"word\"]==\"Jaime\",:]\n\nplot_region(x_bounds=(28,34), y_bounds=(-5.0,-2.0))\n\ndef coords(word):\n coord = points.loc[points[\"word\"]==word,:].values[0]\n return coord[1],coord[2]\n\ncoords(\"Jon\")\n\ndef plot_close_to(word):\n x,y = coords(word)\n plot_region(x_bounds=(x-1.0,x+1.0), y_bounds=(y-1.0,y+1.0))\n \n \n\nplot_close_to(\"apples\")\n\nplot_close_to(\"Winterfell\")\n\nplot_close_to(\"Payne\")\n\nfor i in [\"king\",\"queen\",\"man\",\"woman\"]:\n print(coords(i))\n\nplot_close_to(\"Needle\")" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/cas/cmip6/models/fgoals-g3/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: CAS\nSource ID: FGOALS-G3\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cas', 'fgoals-g3', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. 
Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. 
Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. 
Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum conservation properties of the model\n10.1. 
Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. 
Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. 
Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
suriyan/ethnicolr
ethnicolr/examples/ethnicolr_app_contrib20xx-census_ln.ipynb
mit
[ "Application: 2000/2010 Political Campaign Contributions by Race\nUsing ethnicolr, we look to answer three basic questions:\n<ol>\n<li>What proportion of contributions were made by blacks, whites, Hispanics, and Asians? \n<li>What proportion of unique contributors were blacks, whites, Hispanics, and Asians?\n<li>What proportion of total donations were given by blacks, whites, Hispanics, and Asians?\n</ol>", "import pandas as pd\n\ndf = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv', nrows=100)\ndf.columns\n\nfrom ethnicolr import census_ln", "Load and Subset on Individual Contributors", "df = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv', usecols=['amount', 'contributor_type', 'contributor_lname', 'contributor_fname', 'contributor_name'])\nsdf = df[df.contributor_type=='I'].copy()\nrdf2000 = census_ln(sdf, 'contributor_lname', 2000)\nrdf2000['year'] = 2000\n\ndf = pd.read_csv('/opt/names/fec_contrib/contribDB_2010.csv.zip', usecols=['amount', 'contributor_type', 'contributor_lname', 'contributor_fname', 'contributor_name'])\nsdf = df[df.contributor_type=='I'].copy()\nrdf2010 = census_ln(sdf, 'contributor_lname', 2010)\nrdf2010['year'] = 2010\n\nrdf = pd.concat([rdf2000, rdf2010])\nrdf.head(20)\n\nrdf.replace('(S)', 0, inplace=True)\n\nrdf[['pctwhite', 'pctblack', 'pctapi', 'pctaian', 'pct2prace', 'pcthispanic']] = rdf[['pctwhite', 'pctblack', 'pctapi', 'pctaian', 'pct2prace', 'pcthispanic']].astype(float)", "What proportion of contributions were made by blacks, whites, Hispanics, and Asians?", "rdf['white'] = rdf.pctwhite / 100.0\nrdf['black'] = rdf.pctblack / 100.0\nrdf['api'] = rdf.pctapi / 100.0\nrdf['hispanic'] = rdf.pcthispanic / 100.0\ngdf = rdf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'})\ngdf.apply(lambda r: r / r.sum(), axis=1).style.format(\"{:.2%}\")", "What proportion of the donors were blacks, whites, Hispanics, and Asians?", 
"udf = rdf.drop_duplicates(subset=['contributor_name']).copy()\nudf['white'] = udf.pctwhite / 100.0\nudf['black'] = udf.pctblack / 100.0\nudf['api'] = udf.pctapi / 100.0\nudf['hispanic'] = udf.pcthispanic / 100.0\ngdf = udf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'})\ngdf.apply(lambda r: r / r.sum(), axis=1).style.format(\"{:.2%}\")", "What proportion of the total donation was given by blacks, whites, Hispanics, and Asians?", "rdf['white'] = rdf.amount * rdf.pctwhite / 100.0\nrdf['black'] = rdf.amount * rdf.pctblack / 100.0\nrdf['api'] = rdf.amount * rdf.pctapi / 100.0\nrdf['hispanic'] = rdf.amount * rdf.pcthispanic / 100.0\ngdf = rdf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'}) / 10e6\ngdf.style.format(\"{:0.2f}\")\n\n\ngdf.apply(lambda r: r / r.sum(), axis=1).style.format(\"{:.2%}\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
dev/_downloads/2455121b46e43615a45b660a36d0ad93/30_epochs_metadata.ipynb
bsd-3-clause
[ "%matplotlib inline", "Working with Epoch metadata\nThis tutorial shows how to add metadata to ~mne.Epochs objects, and\nhow to use Pandas query strings &lt;pandas:indexing.query&gt; to select and\nplot epochs based on metadata properties.\nFor this tutorial we'll use a different dataset than usual: the\nkiloword-dataset, which contains EEG data averaged across 75 subjects\nwho were performing a lexical decision (word/non-word) task. The data is in\n~mne.Epochs format, with each epoch representing the response to a\ndifferent stimulus (word). As usual we'll start by importing the modules we\nneed and loading the data:", "import os\nimport numpy as np\nimport pandas as pd\nimport mne\n\nkiloword_data_folder = mne.datasets.kiloword.data_path()\nkiloword_data_file = os.path.join(kiloword_data_folder,\n 'kword_metadata-epo.fif')\nepochs = mne.read_epochs(kiloword_data_file)", "Viewing Epochs metadata\n.. sidebar:: Restrictions on metadata DataFrames\nMetadata dataframes are less flexible than typical\n :class:Pandas DataFrames &lt;pandas.DataFrame&gt;. For example, the allowed\n data types are restricted to strings, floats, integers, or booleans;\n and the row labels are always integers corresponding to epoch numbers.\n Other capabilities of :class:DataFrames &lt;pandas.DataFrame&gt; such as\n :class:hierarchical indexing &lt;pandas.MultiIndex&gt; are possible while the\n ~mne.Epochs object is in memory, but will not survive saving and\n reloading the ~mne.Epochs object to/from disk.\nThe metadata attached to ~mne.Epochs objects is stored as a\n:class:pandas.DataFrame containing one row for each epoch. 
The columns of\nthis :class:~pandas.DataFrame can contain just about any information you\nwant to store about each epoch; in this case, the metadata encodes\ninformation about the stimulus seen on each trial, including properties of\nthe visual word form itself (e.g., NumberOfLetters, VisualComplexity)\nas well as properties of what the word means (e.g., its Concreteness) and\nits prominence in the English lexicon (e.g., WordFrequency). Here are all\nthe variables; note that in a Jupyter notebook, viewing a\n:class:pandas.DataFrame gets rendered as an HTML table instead of the\nnormal Python output block:", "epochs.metadata", "Viewing the metadata values for a given epoch and metadata variable is done\nusing any of the Pandas indexing &lt;pandas:/reference/indexing.rst&gt;\nmethods such as :obj:~pandas.DataFrame.loc,\n:obj:~pandas.DataFrame.iloc, :obj:~pandas.DataFrame.at,\nand :obj:~pandas.DataFrame.iat. Because the\nindex of the dataframe is the integer epoch number, the name- and index-based\nselection methods will work similarly for selecting rows, except that\nname-based selection (with :obj:~pandas.DataFrame.loc) is inclusive of the\nendpoint:", "print('Name-based selection with .loc')\nprint(epochs.metadata.loc[2:4])\n\nprint('\\nIndex-based selection with .iloc')\nprint(epochs.metadata.iloc[2:4])", "Modifying the metadata\nLike any :class:pandas.DataFrame, you can modify the data or add columns as\nneeded. 
Here we convert the NumberOfLetters column from :class:float to\n:class:integer &lt;int&gt; data type, and add a :class:boolean &lt;bool&gt; column\nthat arbitrarily divides the variable VisualComplexity into high and low\ngroups.", "epochs.metadata['NumberOfLetters'] = \\\n epochs.metadata['NumberOfLetters'].map(int)\n\nepochs.metadata['HighComplexity'] = epochs.metadata['VisualComplexity'] > 65\nepochs.metadata.head()", "Selecting epochs using metadata queries\nAll ~mne.Epochs objects can be subselected by event name, index, or\n:term:slice (see tut-section-subselect-epochs). But\n~mne.Epochs objects with metadata can also be queried using\nPandas query strings &lt;pandas:indexing.query&gt; by passing the query\nstring just as you would normally pass an event name. For example:", "print(epochs['WORD.str.startswith(\"dis\")'])", "This capability uses the :meth:pandas.DataFrame.query method under the\nhood, so you can check out the documentation of that method to learn how to\nformat query strings. Here's another example:", "print(epochs['Concreteness > 6 and WordFrequency < 1'])", "Note also that traditional epochs subselection by condition name still works;\nMNE-Python will try the traditional method first before falling back on rich\nmetadata querying.", "epochs['solenoid'].plot_psd()", "One use of the Pandas query string approach is to select specific words for\nplotting:", "words = ['typhoon', 'bungalow', 'colossus', 'drudgery', 'linguist', 'solenoid']\nepochs['WORD in {}'.format(words)].plot(n_channels=29)", "Notice that in this dataset, each \"condition\" (A.K.A., each word) occurs only\nonce, whereas with the sample-dataset dataset each condition (e.g.,\n\"auditory/left\", \"visual/right\", etc) occurred dozens of times. This makes\nthe Pandas querying methods especially useful when you want to aggregate\nepochs that have different condition names but that share similar stimulus\nproperties. 
For example, here we group epochs based on the number of letters\nin the stimulus word, and compare the average signal at electrode Pz for\neach group:", "evokeds = dict()\nquery = 'NumberOfLetters == {}'\nfor n_letters in epochs.metadata['NumberOfLetters'].unique():\n evokeds[str(n_letters)] = epochs[query.format(n_letters)].average()\n\nmne.viz.plot_compare_evokeds(evokeds, cmap=('word length', 'viridis'),\n picks='Pz')", "Metadata can also be useful for sorting the epochs in an image plot. For\nexample, here we order the epochs based on word frequency to see if there's a\npattern to the latency or intensity of the response:", "sort_order = np.argsort(epochs.metadata['WordFrequency'])\nepochs.plot_image(order=sort_order, picks='Pz')", "Although there's no obvious relationship in this case, such analyses may be\nuseful for metadata variables that more directly index the time course of\nstimulus processing (such as reaction time).\nAdding metadata to an Epochs object\nYou can add a metadata :class:~pandas.DataFrame to any\n~mne.Epochs object (or replace existing metadata) simply by\nassigning to the :attr:~mne.Epochs.metadata attribute:", "new_metadata = pd.DataFrame(data=['foo'] * len(epochs), columns=['bar'],\n index=range(len(epochs)))\nepochs.metadata = new_metadata\nepochs.metadata.head()", "You can remove metadata from an ~mne.Epochs object by setting its\nmetadata to None:", "epochs.metadata = None" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
feststelltaste/software-analytics
courses/20191014_ML-Summit/Parsing and Analysing vmstat Data the Easy Way (Demo Notebook).ipynb
gpl-3.0
[ "Idea\nUsing the vmstat command line utility to quickly determine the root cause of performance problems.", "%less ../datasets/vmstat_loadtest.log", "Data Input\nIn this version, we use a helper library that I've built to read data sources into pandas' DataFrame.", "from ozapfdis.linux import vmstat\n\nstats = vmstat.read_logfile(\"../datasets/vmstat_loadtest.log\")\nstats.head()", "Data Selection", "cpu_data = stats.iloc[:, -5:]\ncpu_data.head()", "Visualization", "%matplotlib inline\ncpu_data.plot.area();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
steven-murray/halomod
devel/HMcode.ipynb
mit
[ "Test against HMcode", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom scipy.interpolate import InterpolatedUnivariateSpline as spline\nfrom pathlib import Path\n\n%load_ext autoreload\n%autoreload 2", "Run HMcode\nFirst, we run HMCode. We use ihm=2, which is the simple halo model calculation (needs to be set in-code and recompiled). The precise version (git hash) we use is a53bb0ea574bc19d45467e6847cb0753eca49e93 (Nov 6 2019). \nLine numbers / function names used throughout refer to this version.", "hmcode_dir = Path(\"/home/steven/Documents/Projects/halos/HALOMOD/other-codes/HMcode\")\n\ndef read_power(fname):\n # Each column is the power at a different redshift.\n with open(fname, 'r') as fl:\n line = fl.readline().split(\"#####\")[-1].split(' ')[1:]\n redshifts = [float(x) for x in line]\n\n data = np.genfromtxt(fname, skip_header=1)\n k = data[:, 0]\n return k, redshifts, data[:, 1:]\n\nk, redshifts, data = read_power(hmcode_dir / 'power.dat')", "Make halomod model", "from halomod import DMHaloModel\n\nhm = DMHaloModel(\n exclusion_model=None,\n sd_bias_model=None,\n transfer_model=\"EH_BAO\",\n cosmo_params={\n \"Tcmb0\":2.725, # Line 596\n 'Om0': 0.3, # Line 587\n 'Ob0': 0.05, # Line 589\n 'H0': 70.0 # Line 591\n },\n hc_spectrum=\"linear\",\n halo_concentration_model=\"Bullock01\",\n halo_concentration_params={\"K\":4, \"F\":0.01}, # Line 376\n hmf_model=\"SMT\",\n sigma_8 = 0.8, # Line 593\n n = 0.96, # Line 594 \n Mmin=2, # Line 795\n Mmax=18, # Line 796,\n lnk_min = np.log(1e-3), # Line 50\n lnk_max = np.log(1e2), # Line 51\n dlnk = np.log(1e2/1e-3) / 129, # Line 52\n dlog10m = 16 / 256,\n mdef_model='SOMean',\n disable_mass_conversion=True\n)", "The big picture (1h+2h)", "fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={\"xscale\":'log',\"xlim\":(1e-3,1e4)}, figsize=(13, 7))\nax[0].plot(hm.k,hm.power_auto_matter * hm.k**3/ (2*np.pi**2),label=\"tot\")\nax[0].plot(hm.k,hm.power_1h_auto_matter * hm.k**3/ 
(2*np.pi**2),label='1h')\nax[0].plot(hm.k,hm.power_2h_auto_matter * hm.k**3/ (2*np.pi**2),label=\"2h\")\nax[0].plot(k,data[:,0],label=\"hmcode\")\nax[0].set_yscale('log')\n\nax[0].set_ylim((1e-16,1e5))\nax[0].legend(loc=0)\n\nspl = spline(np.log(hm.k),np.log(hm.power_auto_matter*hm.k**3/(2*np.pi**2)))\nax[1].plot(k,np.exp(spl(np.log(k)))/data[:,0] -1)\nax[1].grid(True)", "Intermediate Products\nSince we are using the forked version of HMCode which prints out intermediate results, we can compare.", "hmc_intermediate = np.genfromtxt(\"/home/steven/Documents/Projects/halos/HALOMOD/other-codes/HMcode/mass_data.dat\")\n\nm = hmc_intermediate[0]\n\ndef get(thing, iz=0):\n things = ['rv', 'nu', 'rr', 'sig', 'sigf', 'zc', 'c', 'gnu']\n indx = iz * len(things) + 1 + things.index(thing)\n return hmc_intermediate[indx]", "Mass to Radius", "# Redshift 0\nr = spline(hm.m, hm.radii)(m)\nplt.plot(m, r / get('rr')-1)\nplt.xscale('log')", "sigma", "sig = spline(hm.m, hm.sigma)(m)\nplt.plot(m, sig/get('sig') -1)\n# There's an \"rsplit\" parameter of 1e-2 in HMcode where the integral is treated differently\n# Maybe it corresponds to the break?\nplt.axvline(m[np.where(r > 1e-2)[0][0]]) \nplt.xscale('log')", "We see that $\\sigma$ is quite different at low masses. HMCode uses a \"rapidising\" function (Line 1839) to make the integral faster -- potentially that is causing problems at low mass. On the other hand, maybe hmf is doing the wrong thing?\nIf we update the k-bounds for hmf (the thing we're integrating over here), let's see what happens:", "hm.update(lnk_max=np.log(1e5))\n\nsig = spline(hm.m, hm.sigma)(m)\nplt.plot(m, sig/get('sig') -1)\n# There's an \"rsplit\" parameter of 1e-2 in HMcode where the integral is treated differently\n# Maybe it corresponds to the break?\nplt.axvline(m[np.where(r > 1e-2)[0][0]]) \nplt.xscale('log')", "This fixes the issue. 
So even though HMCode seems to be able to get away with using a smaller k-range, hmf cannot.\nnu", "nu = np.sqrt(spline(hm.m, hm.nu)(m))\nplt.plot(m, nu/get('nu')-1)\nplt.xscale('log')", "Growth Function", "hmc_growth = np.genfromtxt(\"/home/steven/Documents/Projects/halos/HALOMOD/other-codes/HMcode/growth_data.dat\")\n\nzz = 1/hmc_growth[:,0] -1\n\nhm_growth_fn = hm.growth.growth_factor_fn()\n\nhm.update(growth_params={\"dlna\": 0.01, \"amin\": 1e-12})\n\nfrom scipy.integrate import quad\n\nintg = lambda z: (1 + z)/hm.cosmo.H(z).value**3\n\ng0 = quad(intg, 0, np.inf)[0]\n\ngrowths = []\nfor i, z in enumerate(zz):\n growths.append(quad(intg, z, np.inf)[0] * hm.cosmo.H(z).value/(g0 * hm.cosmo.H(0).value))\n\nhm_growths = np.array([hm.growth.growth_factor(z) for z in zz])\n\nplt.plot(zz, hm_growths/hmc_growth[:, 1] -1)\n#plt.plot(zz, [hm.growth.growth_factor(z) for z in zz])\n#plt.plot(zz, growths)\nplt.xscale('log')\n#plt.yscale('log')", "Though there is some discrepancy for large redshifts (>100), the discrepancy seems to be in favour of hmf (when compared to pure quadrature integration) and anyway, discrepancy is well under 5% for redshifts actually used (as collapse redshift).", "plt.plot(zz, hm_growths/hmc_growth[:, 1] -1)\n#plt.plot(zz, [hm.growth.growth_factor(z) for z in zz])\n#plt.plot(zz, growths)\nplt.xscale('log')\nplt.xlim(1e-2, 10)\nplt.ylim(-0.02, 0.02)\n#plt.yscale('log')", "Sig-f", "nuf = 1.686 / get('sigf')\nr = hm.halo_concentration.filter.mass_to_radius(hm.halo_concentration.params[\"F\"] * m, hm.halo_concentration.mean_density0)\nhm_nu = hm.halo_concentration.filter.nu(r, 1.686)\n\nhm_nu = np.sqrt(spline(hm.m, hm_nu)(m))\n\nplt.plot(m, hm_nu/nuf - 1)\nplt.xscale('log')", "Collapse Redshift", "zc = spline(hm.m, hm.halo_concentration.zc(hm.m))(m)\nprint(\"Maximum collapse redshift: \", zc.max())\nplt.plot(m, zc/get('zc')-1)\nplt.xscale('log')\nplt.ylim(-0.01,0.01)", "Concentration", "c = spline(hm.m, hm.cmz_relation)(m)\nplt.plot(m, 
c/get('c')-1)\nplt.xscale('log')", "2-halo\nThe 2-halo term in HMcode, with imead=0, is just the linear power, which means we should get it exactly if our transfer function is correct, and normalisation as well.", "k, _, data_lin = read_power(hmcode_dir / 'power_linear.dat')\n\nspl = spline(hm.k,hm.delta_k)\nplt.plot(k, np.abs(spl(k)/data_lin[:,0] -1) )\n\nplt.xscale('log')\nplt.yscale('log')", "I think we can be pretty confident that our linear power spectrum is lining up, to within 0.06%", "k, _, data_2h = read_power(hmcode_dir / 'power_2halo.dat')\n\nspl = spline(hm.k, hm.power_2h_auto_matter * hm.k**3 / (2*np.pi**2))\nplt.plot(k,np.abs(spl(k)/data_2h[:, 0] -1 ) )\nplt.xscale('log')\nplt.yscale('log')", "As we would hope, this is precisely the same plot as for the linear power. There are some weird things around the scale of the BAO peak, which may even come from our spline interpolation, but things are pretty close overall.\nMass Function", "gnu = spline(hm.m, hm.fsigma / np.sqrt(hm.nu))(m)\nplt.plot(m, gnu /get('gnu') - 1)\nplt.xscale('log')\nplt.ylim(-.01,.01)", "Virial Radius", "plt.plot(hm.m, hm.halo_profile._halo_mass_to_radius(m)/get('rv') - 1)\nplt.xscale('log')", "Halo Profile (u)", "ukm = np.genfromtxt(hmcode_dir / 'ukm.dat')\nwith open(hmcode_dir / '1h_integrand.dat') as fl:\n kk = float(fl.readline().split('=')[-1].strip())\n\n\nhm_ukm = hm.halo_profile.u(kk, m)\nplt.plot(m, hm_ukm/ukm[0] - 1)\nplt.xscale('log')", "1-halo", "k, _, data_1h = read_power(hmcode_dir / 'power_1halo.dat')", "Now, the way that HMCode is written is a bit confusing on the face of it, but that's because it integrates over $\\nu$ instead of $m$ directly. It turns out this is actually a little easier than integrating over $m$. 
Here's the math for posterity:\nAn integral of any function over mass with a factor of the mass function can be written:\n\\begin{equation}\n I = \\int \\frac{dn}{dm} g(m) dm.\n\\end{equation}\nNow,\n\\begin{align}\n \\frac{dn}{dm} &= - \\frac{\\bar{\\rho}}{m^2} \\nu f(\\nu) \\frac{d\\ln\\sigma}{d\\ln m} \\\n &= - \\frac{\\bar{\\rho}}{m \\sigma} \\nu f(\\nu) \\frac{d\\sigma}{d m} \\\n &= - \\frac{\\bar{\\rho}}{m \\delta_c} \\nu^2 f(\\nu) \\frac{d \\sigma}{d\\nu} \\frac{d \\nu}{dm} \\\n &= \\frac{\\bar{\\rho}}{m} f(\\nu) \\frac{d \\nu}{dm}\n\\end{align}\nSo we have\n\\begin{equation}\n I = \\int \\frac{\\bar{\\rho}}{m} f(\\nu) g(m) d\\nu.\n\\end{equation}", "spl = spline(hm.k,hm.power_1h_auto_matter * hm.k**3 / (2*np.pi**2))\nplt.plot(k,np.abs(spl(k)/data_1h[:, 0]-1))\n\nplt.xscale('log')\nplt.yscale('log')", "There is a 60% difference at small scales here... !!??\nFull Picture (with increased k range)", "fig,ax = plt.subplots(2,1,sharex=True,subplot_kw={\"xscale\":'log',\"xlim\":(1e-3,1e4)}, figsize=(13, 7))\nax[0].plot(hm.k,hm.power_auto_matter * hm.k**3/ (2*np.pi**2),label=\"tot\")\nax[0].plot(hm.k,hm.power_1h_auto_matter * hm.k**3/ (2*np.pi**2),label='1h')\nax[0].plot(hm.k,hm.power_2h_auto_matter * hm.k**3/ (2*np.pi**2),label=\"2h\")\nax[0].plot(k,data[:,0],label=\"hmcode\")\nax[0].set_yscale('log')\n\nax[0].set_ylim((1e-16,1e5))\nax[0].legend(loc=0)\n\nspl = spline(np.log(hm.k),np.log(hm.power_auto_matter*hm.k**3/(2*np.pi**2)))\nax[1].plot(k,np.exp(spl(np.log(k)))/data[:,0] -1)\nax[1].grid(True)", "High Redshift", "hm.update(z=4)\n\nfig,ax = plt.subplots(2,1,sharex=True,subplot_kw={\"xscale\":'log',\"xlim\":(1e-3,1e4)}, figsize=(13, 7))\nax[0].plot(hm.k,hm.power_auto_matter * hm.k**3/ (2*np.pi**2),label=\"tot\")\nax[0].plot(hm.k,hm.power_1h_auto_matter * hm.k**3/ (2*np.pi**2),label='1h')\nax[0].plot(hm.k,hm.power_2h_auto_matter * hm.k**3/ 
(2*np.pi**2),label=\"2h\")\n\nax[0].plot(k,data[:,-1],label=\"hmcode\")\nax[0].set_yscale('log')\n\nax[0].set_ylim((1e-16,1e5))\nax[0].legend(loc=0)\n\nspl = spline(np.log(hm.k),np.log(hm.power_auto_matter*hm.k**3/(2*np.pi**2)))\nax[1].plot(k,np.exp(spl(np.log(k)))/data[:,-1] -1)\nax[1].grid(True)", "Seems to be just the 1-halo term which hasn't evolved properly. Let's look at the bits again.\nRadii", "r = spline(hm.m, hm.radii)(m)\nplt.plot(m, r / get('rr', iz=15)-1)\nplt.xscale('log')", "Sigma", "sig = spline(hm.m, hm.sigma)(m)\nplt.plot(m, sig/get('sig',iz=15) -1)\nplt.xscale('log')", "This is obviously off by a fraction of a percent... not sure if that's worth worrying about.\nnu", "nu = np.sqrt(spline(hm.m, hm.nu)(m))\nplt.plot(m, nu/get('nu', iz=15)-1)\nplt.xscale('log')", "Sig-f", "nuf = 1.686 / get('sigf', iz=15)\nr = hm.halo_concentration.filter.mass_to_radius(hm.halo_concentration.params[\"F\"] * m, hm.halo_concentration.mean_density0)\nhm_nu = hm.halo_concentration.filter.nu(r, 1.686) / hm.growth_factor**2\n\nhm_nu = np.sqrt(spline(hm.m, hm_nu)(m))\n\nplt.plot(m, hm_nu)\nplt.plot(m, nuf)\nplt.xscale('log')\n\nplt.plot(m, hm_nu/nuf - 1)\nplt.xscale('log')", "Collapse Redshift", "zc = hm.halo_concentration.zc(m, z=4)\nprint(\"Maximum collapse redshift: \", zc.max())\nplt.plot(m, zc/get('zc',iz=15)-1)\nplt.xscale('log')\n#plt.ylim(-0.05,0.05)", "Concentration", "c = spline(hm.m, hm.cmz_relation)(m)\nplt.plot(m, c/get('c', iz=15)-1)\nplt.xscale('log')", "2-halo", "spl = spline(hm.k, hm.power_2h_auto_matter * hm.k**3 / (2*np.pi**2))\nplt.plot(k,np.abs(spl(k)/data_2h[:, -1] -1 ) )\nplt.xscale('log')\nplt.yscale('log')", "Mass Function", "gnu = spline(hm.m, hm.fsigma / np.sqrt(hm.nu))(m)\nplt.plot(m, gnu /get('gnu',iz=15) - 1)\nplt.xscale('log')\nplt.ylim(-.05,.05)", "The difference here seems to be due to slight differences in $\\nu$ (of 0.5%) which get blown up in the exponential of the mass function:\n\\begin{equation}\n \\exp(-q \\nu^2 / 2)/\\exp(-q \\nu^2 
(1 + \\delta)^2 / 2) \\approx \\exp(-q \\nu^2 / 2 ( 1 - (1 + \\delta)^2)) \\approx \\exp(q \\nu^2 \\delta ).\n\\end{equation}\nWith $\\delta \\approx 0.005$ and $q = 0.707$, $\\nu$ does not need to be very large before differences of a few percent will arise.\nOn the other hand, at these higher redshifts, we don't expect these larger masses to contribute very significantly to the 1-halo integral.\nVirial Radius", "plt.plot(hm.m, hm.halo_profile._halo_mass_to_radius(m)/get('rv',iz=15) - 1)\nplt.xscale('log')", "Ukm", "hm_ukm = hm.halo_profile.u(kk, m)\nplt.plot(m, hm_ukm/ukm[-1] - 1)\nplt.xscale('log')", "1-halo", "spl = spline(hm.k,hm.power_1h_auto_matter * hm.k**3 / (2*np.pi**2))\nplt.plot(k,np.abs(spl(k)/data_1h[:, -1]-1))\n\nplt.xscale('log')\nplt.yscale('log')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DominikDitoIvosevic/Uni
STRUCE/.ipynb_checkpoints/SU-2019-LAB01-Regresija-checkpoint.ipynb
mit
[ "Sveučilište u Zagrebu\nFakultet elektrotehnike i računarstva \nStrojno učenje 2019/2020\nhttp://www.fer.unizg.hr/predmet/su\n\nLaboratorijska vježba 1: Regresija\nVerzija: 1.2\nZadnji put ažurirano: 27. rujna 2019.\n(c) 2015-2019 Jan Šnajder, Domagoj Alagić \nObjavljeno: 30. rujna 2019.\nRok za predaju: 21. listopada 2019. u 07:00h\n\nUpute\nPrva laboratorijska vježba sastoji se od deset zadataka. U nastavku slijedite upute navedene u ćelijama s tekstom. Rješavanje vježbe svodi se na dopunjavanje ove bilježnice: umetanja ćelije ili više njih ispod teksta zadatka, pisanja odgovarajućeg kôda te evaluiranja ćelija. \nOsigurajte da u potpunosti razumijete kôd koji ste napisali. Kod predaje vježbe, morate biti u stanju na zahtjev asistenta (ili demonstratora) preinačiti i ponovno evaluirati Vaš kôd. Nadalje, morate razumjeti teorijske osnove onoga što radite, u okvirima onoga što smo obradili na predavanju. Ispod nekih zadataka možete naći i pitanja koja služe kao smjernice za bolje razumijevanje gradiva (nemojte pisati odgovore na pitanja u bilježnicu). Stoga se nemojte ograničiti samo na to da riješite zadatak, nego slobodno eksperimentirajte. To upravo i jest svrha ovih vježbi.\nVježbe trebate raditi samostalno. Možete se konzultirati s drugima o načelnom načinu rješavanja, ali u konačnici morate sami odraditi vježbu. U protivnome vježba nema smisla.", "# Učitaj osnovne biblioteke...\nimport numpy as np\nimport sklearn\nimport matplotlib.pyplot as plt\n%pylab inline", "Zadatci\n1. Jednostavna regresija\nZadan je skup primjera $\\mathcal{D}={(x^{(i)},y^{(i)})}_{i=1}^4 = {(0,4),(1,1),(2,2),(4,5)}$. 
Primjere predstavite matricom $\\mathbf{X}$ dimenzija $N\\times n$ (u ovom slučaju $4\\times 1$) i vektorom oznaka $\\textbf{y}$, dimenzija $N\\times 1$ (u ovom slučaju $4\\times 1$), na sljedeći način:", "X = np.array([[0],[1],[2],[4]])\ny = np.array([4,1,2,5])\n\nX1 = X\ny1 = y", "(a)\nProučite funkciju PolynomialFeatures iz biblioteke sklearn i upotrijebite je za generiranje matrice dizajna $\\mathbf{\\Phi}$ koja ne koristi preslikavanje u prostor više dimenzije (samo će svakom primjeru biti dodane dummy jedinice; $m=n+1$).", "from sklearn.preprocessing import PolynomialFeatures\n\npoly = PolynomialFeatures(1)\nphi = poly.fit_transform(X)\nprint(phi)\n\n# Vaš kôd ovdje", "(b)\nUpoznajte se s modulom linalg. Izračunajte težine $\\mathbf{w}$ modela linearne regresije kao $\\mathbf{w}=(\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi})^{-1}\\mathbf{\\Phi}^\\intercal\\mathbf{y}$. Zatim se uvjerite da isti rezultat možete dobiti izračunom pseudoinverza $\\mathbf{\\Phi}^+$ matrice dizajna, tj. $\\mathbf{w}=\\mathbf{\\Phi}^+\\mathbf{y}$, korištenjem funkcije pinv.", "from numpy import linalg\n\npinverse1 = pinv(phi)\npinverse2 = matmul(inv(matmul(transpose(phi), phi)), transpose(phi))\n\n#print(pinverse1)\n#print(pinverse2)\n\nw = matmul(pinverse2, y)\nprint(w)\n \n# Vaš kôd ovdje", "Radi jasnoće, u nastavku je vektor $\\mathbf{x}$ s dodanom dummy jedinicom $x_0=1$ označen kao $\\tilde{\\mathbf{x}}$.\n(c)\nPrikažite primjere iz $\\mathcal{D}$ i funkciju $h(\\tilde{\\mathbf{x}})=\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}$. Izračunajte pogrešku učenja prema izrazu $E(h|\\mathcal{D})=\\frac{1}{2}\\sum_{i=1}^N(\\tilde{\\mathbf{y}}^{(i)} - h(\\tilde{\\mathbf{x}}))^2$. Možete koristiti funkciju srednje kvadratne pogreške mean_squared_error iz modula sklearn.metrics.\nQ: Gore definirana funkcija pogreške $E(h|\\mathcal{D})$ i funkcija srednje kvadratne pogreške nisu posve identične. U čemu je razlika? 
Koja je \"realnija\"?", "import sklearn.metrics as mt\n\nwt = w #(np.array([w]))\n\nprint(wt)\nprint(phi)\n\nhx = np.dot(phi, w)\n\nE = mt.mean_squared_error(hx, y)\nprint(E)\n\n# Vaš kôd ovdje", "(d)\nUvjerite se da za primjere iz $\\mathcal{D}$ težine $\\mathbf{w}$ ne možemo naći rješavanjem sustava $\\mathbf{w}=\\mathbf{\\Phi}^{-1}\\mathbf{y}$, već da nam doista treba pseudoinverz.\nQ: Zašto je to slučaj? Bi li se problem mogao riješiti preslikavanjem primjera u višu dimenziju? Ako da, bi li to uvijek funkcioniralo, neovisno o skupu primjera $\\mathcal{D}$? Pokažite na primjeru.", "# Vaš kôd ovdje\n\nw = matmul(inv(phi), y)\nprint(w)", "(e)\nProučite klasu LinearRegression iz modula sklearn.linear_model. Uvjerite se da su težine koje izračunava ta funkcija (dostupne pomoću atributa coef_ i intercept_) jednake onima koje ste izračunali gore. Izračunajte predikcije modela (metoda predict) i uvjerite se da je pogreška učenja identična onoj koju ste ranije izračunali.", "from sklearn.linear_model import LinearRegression\n\n# Vaš kôd ovdje\nlr = LinearRegression().fit(X, y)\n#print(lr.score(X, y))\n#print(lr.coef_)\n#print(lr.intercept_)\nprint([lr.intercept_, lr.coef_[0]])\n\nprint(wt)\n\npr = lr.predict(X)\nE = mt.mean_squared_error(pr, y)\nprint(E)", "2. Polinomijalna regresija i utjecaj šuma\n(a)\nRazmotrimo sada regresiju na većem broju primjera. Definirajte funkciju make_labels(X, f, noise=0) koja uzima matricu neoznačenih primjera $\\mathbf{X}{N\\times n}$ te generira vektor njihovih oznaka $\\mathbf{y}{N\\times 1}$. Oznake se generiraju kao $y^{(i)} = f(x^{(i)})+\\mathcal{N}(0,\\sigma^2)$, gdje je $f:\\mathbb{R}^n\\to\\mathbb{R}$ stvarna funkcija koja je generirala podatke (koja nam je u stvarnosti nepoznata), a $\\sigma$ je standardna devijacija Gaussovog šuma, definirana parametrom noise. Za generiranje šuma možete koristiti funkciju numpy.random.normal. 
\nGenerirajte skup za učenje od $N=50$ primjera uniformno distribuiranih u intervalu $[-5,5]$ pomoću funkcije $f(x) = 5 + x -2 x^2 -5 x^3$ uz šum $\\sigma=200$:", "from numpy.random import normal\n\ndef make_labels(X, f, noise=0) :\n # Vaš kôd ovdje\n N = numpy.random.normal\n fx = f(X)\n #nois = [N(0, noise) for _ in range(X.shape[0])]\n #print(nois)\n #y = f(X) + nois\n y = [ f(x) + N(0, noise) for x in X ]\n \n return y\n\n\ndef make_instances(x1, x2, N) :\n return np.array([np.array([x]) for x in np.linspace(x1,x2,N)])", "Prikažite taj skup funkcijom scatter.", "# Vaš kôd ovdje\nN = 50\ndef f(x):\n return 5 + x - 2*x*x - 5*x*x*x\nnoise = 200\n\nX2 = make_instances(-5, 5, N)\ny2 = make_labels(X2, f, noise)\n\n#print(X)\n#print(y)\n\ns = scatter(X2, y2)", "(b)\nTrenirajte model polinomijalne regresije stupnja $d=3$. Na istom grafikonu prikažite naučeni model $h(\\mathbf{x})=\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}$ i primjere za učenje. Izračunajte pogrešku učenja modela.", "# Vaš kôd ovdje\nimport sklearn.linear_model as lm\n\ndef polyX(d):\n\n p3 = PolynomialFeatures(d).fit_transform(X2)\n l2 = LinearRegression().fit(p3, y2)\n h2 = l2.predict(p3)\n\n E = mt.mean_squared_error(h2, y2)\n print('d: ' + str(d) + ' E: ' + str(E))\n #print(p3)\n plot(X2, h2, label = str(d))\n\nscatter(X2, y2)\npolyX(3)\n", "3. Odabir modela\n(a)\nNa skupu podataka iz zadatka 2 trenirajte pet modela linearne regresije $\\mathcal{H}_d$ različite složenosti, gdje je $d$ stupanj polinoma, $d\\in{1,3,5,10,20}$. Prikažite na istome grafikonu skup za učenje i funkcije $h_d(\\mathbf{x})$ za svih pet modela (preporučujemo koristiti plot unutar for petlje). 
Izračunajte pogrešku učenja svakog od modela.\nQ: Koji model ima najmanju pogrešku učenja i zašto?", "# Vaš kôd ovdje\nfigure(figsize=(15,10))\nscatter(X2, y2)\npolyX(1)\npolyX(3)\npolyX(5)\npolyX(10)\npolyX(20)\n\ns = plt.legend(loc=\"center right\")\n\n", "(b)\nRazdvojite skup primjera iz zadatka 2 pomoću funkcije model_selection.train_test_split na skup za učenja i skup za ispitivanje u omjeru 1:1. Prikažite na jednom grafikonu pogrešku učenja i ispitnu pogrešku za modele polinomijalne regresije $\\mathcal{H}_d$, sa stupnjem polinoma $d$ u rasponu $d\\in [1,2,\\ldots,20]$. Budući da kvadratna pogreška brzo raste za veće stupnjeve polinoma, umjesto da iscrtate izravno iznose pogrešaka, iscrtajte njihove logaritme.\nNB: Podjela na skupa za učenje i skup za ispitivanje mora za svih pet modela biti identična.\nQ: Je li rezultat u skladu s očekivanjima? Koji biste model odabrali i zašto?\nQ: Pokrenite iscrtavanje više puta. U čemu je problem? Bi li problem bio jednako izražen kad bismo imali više primjera? Zašto?", "from sklearn.model_selection import train_test_split\n\n# Vaš kôd ovdje\n\nxTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)\n\ntestError = []\n\nfor d in range(1,33):\n\n polyXTrain = PolynomialFeatures(d).fit_transform(xTr) \n polyXTest = PolynomialFeatures(d).fit_transform(xTest)\n\n l2 = LinearRegression().fit(polyXTrain, yTr)\n h2 = l2.predict(polyXTest)\n\n E = mt.mean_squared_error(h2, yTest)\n print('d: ' + str(d) + ' E: ' + str(E))\n testError.append(E)\n #print(p3)\n #plot(polyXTest, h2, label = str(d))\n\nplot(numpy.log(numpy.array(testError)))", "(c)\nTočnost modela ovisi o (1) njegovoj složenosti (stupanj $d$ polinoma), (2) broju primjera $N$, i (3) količini šuma. Kako biste to analizirali, nacrtajte grafikone pogrešaka kao u 3b, ali za sve kombinacija broja primjera $N\\in{100,200,1000}$ i količine šuma $\\sigma\\in{100,200,500}$ (ukupno 9 grafikona). 
Upotrijebite funkciju subplots kako biste pregledno posložili grafikone u tablicu $3\\times 3$. Podatci se generiraju na isti način kao u zadatku 2.\nNB: Pobrinite se da svi grafikoni budu generirani nad usporedivim skupovima podataka, na sljedeći način. Generirajte najprije svih 1000 primjera, podijelite ih na skupove za učenje i skupove za ispitivanje (dva skupa od po 500 primjera). Zatim i od skupa za učenje i od skupa za ispitivanje načinite tri različite verzije, svaka s drugačijom količinom šuma (ukupno 2x3=6 verzija podataka). Kako bi simulirali veličinu skupa podataka, od tih dobivenih 6 skupova podataka uzorkujte trećinu, dvije trećine i sve podatke. Time ste dobili 18 skupova podataka -- skup za učenje i za testiranje za svaki od devet grafova.", "# Vaš kôd ovdje\n\n# Vaš kôd ovdje\nfigure(figsize=(15,15))\n\nN = 1000\ndef f(x):\n return 5 + x - 2*x*x - 5*x*x*x\n\nX3 = make_instances(-5, 5, N)\n\nxAllTrain, xAllTest = train_test_split(X3, test_size=0.5)\ni = 0\nj = 0\n\nfor N in [100, 200, 1000]:\n for noise in [100, 200, 500]:\n j += 1\n \n xTrain = xAllTrain[:N]\n xTest = xAllTest[:N]\n yTrain = make_labels(xTrain, f, noise)\n yTest = make_labels(xTest, f, noise)\n\n trainError = []\n testError = []\n\n for d in range(1,21):\n\n polyXTrain = PolynomialFeatures(d).fit_transform(xTrain) \n polyXTest = PolynomialFeatures(d).fit_transform(xTest)\n\n l2 = LinearRegression().fit(polyXTrain, yTrain)\n h2 = l2.predict(polyXTest)\n\n testE = mt.mean_squared_error(h2, yTest)\n testError.append(testE)\n \n h2 = l2.predict(polyXTrain)\n trainE = mt.mean_squared_error(h2, yTrain)\n trainError.append(trainE)\n #print('d: ' + str(d) + ' E: ' + str(E))\n #print(p3)\n #plot(polyXTest, h2, label = str(d))\n\n subplot(3,3,j, title = \"N: \" + str(N) + \" noise: \" + str(noise))\n plot(numpy.log(numpy.array(trainError)), label = 'train') \n plot(numpy.log(numpy.array(testError)), label = 'test')\n plt.legend(loc=\"center right\")\n \n\n\n\n\n#print(X)\n#print(y)\n\n#s = 
scatter(X2, y2)", "Q: Jesu li rezultati očekivani? Obrazložite.\n4. Regularizirana regresija\n(a)\nU gornjim eksperimentima nismo koristili regularizaciju. Vratimo se najprije na primjer iz zadatka 1. Na primjerima iz tog zadatka izračunajte težine $\\mathbf{w}$ za polinomijalni regresijski model stupnja $d=3$ uz L2-regularizaciju (tzv. ridge regression), prema izrazu $\\mathbf{w}=(\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I})^{-1}\\mathbf{\\Phi}^\\intercal\\mathbf{y}$. Napravite izračun težina za regularizacijske faktore $\\lambda=0$, $\\lambda=1$ i $\\lambda=10$ te usporedite dobivene težine.\nQ: Kojih je dimenzija matrica koju treba invertirati?\nQ: Po čemu se razlikuju dobivene težine i je li ta razlika očekivana? Obrazložite.", "# Vaš kôd ovdje\n\ndef reg2(lambd):\n phi4 = PolynomialFeatures(3).fit_transform(X1)\n w = matmul( matmul(inv( matmul(transpose(phi4), phi4) + lambd * identity(len(phi4))), transpose(phi4)), y1)\n print(w)\n \nreg2(0)\nreg2(1)\nreg2(10)", "(b)\nProučite klasu Ridge iz modula sklearn.linear_model, koja implementira L2-regularizirani regresijski model. Parametar $\\alpha$ odgovara parametru $\\lambda$. Primijenite model na istim primjerima kao u prethodnom zadatku i ispišite težine $\\mathbf{w}$ (atributi coef_ i intercept_).\nQ: Jesu li težine identične onima iz zadatka 4a? Ako nisu, objasnite zašto je to tako i kako biste to popravili.", "\nfrom sklearn.linear_model import Ridge\n\nphi4 = PolynomialFeatures(3).fit_transform(X1)\nr = Ridge(0).fit(phi4, y1)\nprint(r.coef_)\nprint(r.intercept_)\n\n# Vaš kôd ovdje", "5. Regularizirana polinomijalna regresija\n(a)\nVratimo se na slučaj $N=50$ slučajno generiranih primjera iz zadatka 2. Trenirajte modele polinomijalne regresije $\\mathcal{H}_{\\lambda,d}$ za $\\lambda\\in{0,100}$ i $d\\in{2,10}$ (ukupno četiri modela). 
Skicirajte pripadne funkcije $h(\\mathbf{x})$ i primjere (na jednom grafikonu; preporučujemo koristiti plot unutar for petlje).\nQ: Jesu li rezultati očekivani? Obrazložite.", "# Vaš kôd ovdje\n\nN = 50\n\nfigure(figsize = (15, 15))\nx123 = scatter(X2, y2)\n\nfor lambd in [0, 100]:\n for d in [2, 10]:\n phi2 = PolynomialFeatures(d).fit_transform(X2)\n r = Ridge(lambd).fit(phi2, y2)\n h2 = r.predict(phi2)\n #print(d)\n plot(X2, h2, label=\"lambda \" + str(lambd) + \" d \" + str(d))\n \nx321 = plt.legend(loc=\"center right\")\n", "(b)\nKao u zadataku 3b, razdvojite primjere na skup za učenje i skup za ispitivanje u omjeru 1:1. Prikažite krivulje logaritama pogreške učenja i ispitne pogreške u ovisnosti za model $\\mathcal{H}_{d=10,\\lambda}$, podešavajući faktor regularizacije $\\lambda$ u rasponu $\\lambda\\in{0,1,\\dots,50}$.\nQ: Kojoj strani na grafikonu odgovara područje prenaučenosti, a kojoj podnaučenosti? Zašto?\nQ: Koju biste vrijednosti za $\\lambda$ izabrali na temelju ovih grafikona i zašto?", "# Vaš kôd ovdje\n\n\nxTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)\nfigure(figsize=(10,10))\ntrainError = []\ntestError = []\n\nfor lambd in range(0,51):\n\n polyXTrain = PolynomialFeatures(10).fit_transform(xTr) \n polyXTest = PolynomialFeatures(10).fit_transform(xTest)\n\n l2 = Ridge(lambd).fit(polyXTrain, yTr)\n h2 = l2.predict(polyXTest)\n\n E = mt.mean_squared_error(h2, yTest)\n #print('d: ' + str(d) + ' E: ' + str(E))\n testError.append(log( E))\n \n h2 = l2.predict(polyXTrain)\n E = mt.mean_squared_error(h2, yTr)\n trainError.append(log(E))\n #print(p3)\n #plot(polyXTest, h2, label = str(d))\n\nplot(numpy.log(numpy.array(testError)), label=\"test\")\nplot(numpy.log(numpy.array(trainError)), label=\"train\")\nlegend()", "6. L1-regularizacija i L2-regularizacija\nSvrha regularizacije jest potiskivanje težina modela $\\mathbf{w}$ prema nuli, kako bi model bio što jednostavniji. 
Složenost modela može se okarakterizirati normom pripadnog vektora težina $\\mathbf{w}$, i to tipično L2-normom ili L1-normom. Za jednom trenirani model možemo izračunati i broj ne-nul značajki, ili L0-normu, pomoću sljedeće funkcije koja prima vektor težina $\\mathbf{w}$:", "def nonzeroes(coef, tol=1e-6): \n return len(coef) - len(coef[np.isclose(0, coef, atol=tol)])", "(a)\nZa ovaj zadatak upotrijebite skup za učenje i skup za testiranje iz zadatka 3b. Trenirajte modele L2-regularizirane polinomijalne regresije stupnja $d=10$, mijenjajući hiperparametar $\\lambda$ u rasponu ${1,2,\\dots,100}$. Za svaki od treniranih modela izračunajte L{0,1,2}-norme vektora težina $\\mathbf{w}$ te ih prikažite kao funkciju od $\\lambda$. Pripazite što točno šaljete u funkciju za izračun normi.\nQ: Objasnite oblik obiju krivulja. Hoće li krivulja za $\\|\\mathbf{w}\\|_2$ doseći nulu? Zašto? Je li to problem? Zašto?\nQ: Za $\\lambda=100$, koliki je postotak težina modela jednak nuli, odnosno koliko je model rijedak?", "# Vaš kôd ovdje\nd = 10\n\nl0 = []\nl1 = []\nl2 = []\n\n\nxTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)\n\nfor lambd in range(0,101):\n\n polyXTrain = PolynomialFeatures(10).fit_transform(xTr) \n polyXTest = PolynomialFeatures(10).fit_transform(xTest)\n\n r = Ridge(lambd).fit(polyXTrain, yTr)\n \n r.coef_[0] = r.intercept_\n \n l0.append(nonzeroes(r.coef_))\n #print(r.coef_)\n l1.append(numpy.linalg.norm(r.coef_, ord=1))\n l2.append(numpy.linalg.norm(r.coef_, ord=2))\n \n\nfigure(figsize=(10,10))\nplot(l0, label=\"l0\")\nlegend()\n\nfigure(figsize=(10,10))\nplot(l1, label=\"l1\")\nlegend()\n\nfigure(figsize=(10,10))\nplot(l2, label=\"l2\")\nlegend()", "(b)\nGlavna prednost L1-regularizirane regresije (ili LASSO regression) nad L2-regulariziranom regresijom jest u tome što L1-regularizirana regresija rezultira rijetkim modelima (engl. sparse models), odnosno modelima kod kojih su mnoge težine pritegnute na nulu. 
Pokažite da je to doista tako, ponovivši gornji eksperiment s L1-regulariziranom regresijom, implementiranom u klasi Lasso u modulu sklearn.linear_model. Zanemarite upozorenja.", "# Vaš kôd ovdje", "7. Značajke različitih skala\nČesto se u praksi možemo susreti sa podatcima u kojima sve značajke nisu jednakih magnituda. Primjer jednog takvog skupa je regresijski skup podataka grades u kojem se predviđa prosjek ocjena studenta na studiju (1--5) na temelju dvije značajke: bodova na prijamnom ispitu (1--3000) i prosjeka ocjena u srednjoj školi. Prosjek ocjena na studiju izračunat je kao težinska suma ove dvije značajke uz dodani šum.\nKoristite sljedeći kôd kako biste generirali ovaj skup podataka.", "n_data_points = 500\nnp.random.seed(69)\n\n# Generiraj podatke o bodovima na prijamnom ispitu koristeći normalnu razdiobu i ograniči ih na interval [1, 3000].\nexam_score = np.random.normal(loc=1500.0, scale = 500.0, size = n_data_points) \nexam_score = np.round(exam_score)\nexam_score[exam_score > 3000] = 3000\nexam_score[exam_score < 0] = 0\n\n# Generiraj podatke o ocjenama iz srednje škole koristeći normalnu razdiobu i ograniči ih na interval [1, 5].\ngrade_in_highschool = np.random.normal(loc=3, scale = 2.0, size = n_data_points)\ngrade_in_highschool[grade_in_highschool > 5] = 5\ngrade_in_highschool[grade_in_highschool < 1] = 1\n\n# Matrica dizajna.\ngrades_X = np.array([exam_score,grade_in_highschool]).T\n\n# Završno, generiraj izlazne vrijednosti.\nrand_noise = np.random.normal(loc=0.0, scale = 0.5, size = n_data_points)\nexam_influence = 0.9\ngrades_y = ((exam_score / 3000.0) * (exam_influence) + (grade_in_highschool / 5.0) \\\n * (1.0 - exam_influence)) * 5.0 + rand_noise\ngrades_y[grades_y < 1] = 1\ngrades_y[grades_y > 5] = 5", "a) Iscrtajte ovisnost ciljne vrijednosti (y-os) o prvoj i o drugoj značajki (x-os). 
Iscrtajte dva odvojena grafa.", "# Vaš kôd ovdje", "b) Naučite model L2-regularizirane regresije ($\\lambda = 0.01$), na podacima grades_X i grades_y:", "# Vaš kôd ovdje", "Sada ponovite gornji eksperiment, ali prvo skalirajte podatke grades_X i grades_y i spremite ih u varijable grades_X_fixed i grades_y_fixed. Za tu svrhu, koristite StandardScaler.", "from sklearn.preprocessing import StandardScaler\n\n# Vaš kôd ovdje", "Q: Gledajući grafikone iz podzadatka (a), koja značajka bi trebala imati veću magnitudu, odnosno važnost pri predikciji prosjeka na studiju? Odgovaraju li težine Vašoj intuiciji? Objasnite. \n8. Multikolinearnost i kondicija matrice\na) Izradite skup podataka grades_X_fixed_colinear tako što ćete u skupu grades_X_fixed iz\nzadatka 7b duplicirati zadnji stupac (ocjenu iz srednje škole). Time smo efektivno uveli savršenu multikolinearnost.", "# Vaš kôd ovdje", "Ponovno, naučite na ovom skupu L2-regularizirani model regresije ($\\lambda = 0.01$).", "# Vaš kôd ovdje", "Q: Usporedite iznose težina s onima koje ste dobili u zadatku 7b. Što se dogodilo?\nb) Slučajno uzorkujte 50% elemenata iz skupa grades_X_fixed_colinear i naučite dva modela L2-regularizirane regresije, jedan s $\\lambda=0.01$ i jedan s $\\lambda=1000$). Ponovite ovaj pokus 10 puta (svaki put s drugim podskupom od 50% elemenata). Za svaki model, ispišite dobiveni vektor težina u svih 10 ponavljanja te ispišite standardnu devijaciju vrijednosti svake od težina (ukupno šest standardnih devijacija, svaka dobivena nad 10 vrijednosti).", "# Vaš kôd ovdje", "Q: Kako regularizacija utječe na stabilnost težina?\nQ: Jesu li koeficijenti jednakih magnituda kao u prethodnom pokusu? Objasnite zašto.\nc) Koristeći numpy.linalg.cond izračunajte kondicijski broj matrice $\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I}$, gdje je $\\mathbf{\\Phi}$ matrica dizajna (grades_X_fixed_colinear). 
Ponovite i za $\\lambda=0.01$ i za $\\lambda=10$.", "# Vaš kôd ovdje", "Q: Kako regularizacija utječe na kondicijski broj matrice $\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I}$?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bambinos/bambi
docs/notebooks/negative_binomial.ipynb
mit
[ "Negative Binomial Regression (Students absence example)\nNegative binomial distribution review\nI always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I first learned, and the one I like the most, goes as follows: The negative binomial distribution is the distribution of a random variable that is defined as the number of independent Bernoulli trials until the k-th \"success\". In short, we repeat a Bernoulli experiment until we observe k successes and record the number of trials it required.\n$$\nY \\sim \\text{NB}(k, p)\n$$\nwhere $0 \\le p \\le 1$ is the probability of success in each Bernoulli trial, $k > 0$, usually integer, and $y \\in \\{k, k + 1, \\cdots\\}$.\nThe probability mass function (pmf) is \n$$\np(y | k, p) = \\binom{y - 1}{y - k}(1 - p)^{y - k}p^k\n$$\nIf you, like me, find it hard to remember whether $y$ starts at $0$, $1$, or $k$, try to think twice about the definition of the variable. But how? First, recall we aim to have $k$ successes. And success is one of the two possible outcomes of a trial, so the number of trials can never be smaller than the number of successes. Thus, we can confidently say that $y \\ge k$.\nBut this is not the only way of defining the negative binomial distribution, there are plenty of options! One of the most interesting, and the one you see in PyMC3, the library we use in Bambi for the backend, is as a continuous mixture. The negative binomial distribution describes a Poisson random variable whose rate is also a random variable (not a fixed constant!) following a gamma distribution. 
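This mixture view is easy to check by simulation (a sketch; `mu` and `alpha` here play the roles of the parameters in the pmf below, and the specific values are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu, alpha = 4.0, 2.0

# draw gamma-distributed rates with shape alpha and mean mu,
# then one Poisson count for each rate
rates = rng.gamma(shape=alpha, scale=mu / alpha, size=200_000)
samples = rng.poisson(rates)

# the same distribution, written directly as a negative binomial
nb = stats.nbinom(n=alpha, p=alpha / (mu + alpha))
print(samples.mean(), nb.mean())  # both close to mu
print(samples.var(), nb.var())    # both close to mu + mu**2 / alpha
```

The simulated mean and variance agree with the closed-form negative binomial moments, which is exactly the gamma-Poisson mixture described above.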
Or in other words, conditional on a gamma-distributed variable $\\mu$, the variable $Y$ has a Poisson distribution with mean $\\mu$.\nUnder this alternative definition, the pmf is\n$$\n\\displaystyle p(y | \\mu, \\alpha) = \\binom{y + \\alpha - 1}{y} \\left(\\frac{\\alpha}{\\mu + \\alpha}\\right)^\\alpha\\left(\\frac{\\mu}{\\mu + \\alpha}\\right)^y\n$$\nwhere $\\mu$ is the parameter of the Poisson distribution (the mean, and variance too!) and $\\alpha$ is the shape parameter of the gamma.", "import arviz as az\nimport bambi as bmb\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom scipy.stats import nbinom\n\naz.style.use(\"arviz-darkgrid\")\n\nimport warnings\nwarnings.simplefilter(action='ignore', category=FutureWarning)", "In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k successes and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes.", "y = np.arange(0, 30)\nk = 3\np1 = 0.5\np2 = 0.3\n\nfig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)\n\nax[0].bar(y, nbinom.pmf(y, k, p1))\nax[0].set_xticks(np.linspace(0, 30, num=11))\nax[0].set_title(f\"k = {k}, p = {p1}\")\n\nax[1].bar(y, nbinom.pmf(y, k, p2))\nax[1].set_xticks(np.linspace(0, 30, num=11))\nax[1].set_title(f\"k = {k}, p = {p2}\")\n\nfig.suptitle(\"Y = Number of failures until k successes\", fontsize=16);", "For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.", "print(nbinom.pmf(y, k, p1)[0])\nprint(nbinom.pmf(y, k, p1)[3])", "Finally, if one wants to show this probability mass function as if we are following the first definition of negative binomial distribution we introduced, we just need to shift the 
whole thing to the right by adding $k$ to the $y$ values.", "fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)\n\nax[0].bar(y + k, nbinom.pmf(y, k, p1))\nax[0].set_xticks(np.linspace(3, 30, num=10))\nax[0].set_title(f\"k = {k}, p = {p1}\")\n\nax[1].bar(y + k, nbinom.pmf(y, k, p2))\nax[1].set_xticks(np.linspace(3, 30, num=10))\nax[1].set_title(f\"k = {k}, p = {p2}\")\n\nfig.suptitle(\"Y = Number of trials until k successes\", fontsize=16);", "Negative binomial in GLM\nThe negative binomial distribution belongs to the exponential family, and the canonical link function is \n$$\ng(\\mu_i) = \\log\\left(\\frac{\\mu_i}{k + \\mu_i}\\right) = -\\log\\left(\\frac{k}{\\mu_i} + 1\\right)\n$$\nbut it is difficult to interpret. The log link is usually preferred because of the analogy with the Poisson model, and it also tends to give better results.\nLoad and explore Students data\nThis example is based on this UCLA example.\nSchool administrators study the attendance behavior of high school juniors at two schools. Predictors of the number of days of absence include the type of program in which the student is enrolled and a standardized test in math. We have attendance data on 314 high school juniors.\nThe variables of interest in the dataset are\n\ndaysabs: The number of days of absence. It is our response variable.\nprog: The type of program. Can be one of 'General', 'Academic', or 'Vocational'.\nmath: Score in a standardized math test.", "data = pd.read_stata(\"https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta\")\n\ndata.head()", "We assign categories to the values 1, 2, and 3 of our \"prog\" variable.", "data[\"prog\"] = data[\"prog\"].map({1: \"General\", 2: \"Academic\", 3: \"Vocational\"})\ndata.head()", "The Academic program is the most popular program (167/314) and General is the least popular one (40/314).", "data[\"prog\"].value_counts()", "Let's explore the distributions of math score and days of absence for each of the three programs listed above. 
The vertical lines indicate the mean values.", "fig, ax = plt.subplots(3, 2, figsize=(8, 6), sharex=\"col\")\nprograms = list(data[\"prog\"].unique())\nprograms.sort()\n\nfor idx, program in enumerate(programs):\n # Histogram\n ax[idx, 0].hist(data[data[\"prog\"] == program][\"math\"], edgecolor='black', alpha=0.9)\n ax[idx, 0].axvline(data[data[\"prog\"] == program][\"math\"].mean(), color=\"C1\")\n \n # Barplot\n days = data[data[\"prog\"] == program][\"daysabs\"]\n days_mean = days.mean()\n days_counts = days.value_counts()\n values = list(days_counts.index)\n count = days_counts.values\n ax[idx, 1].bar(values, count, edgecolor='black', alpha=0.9)\n ax[idx, 1].axvline(days_mean, color=\"C1\")\n \n # Titles\n ax[idx, 0].set_title(program)\n ax[idx, 1].set_title(program)\n\nplt.setp(ax[-1, 0], xlabel=\"Math score\")\nplt.setp(ax[-1, 1], xlabel=\"Days of absence\");", "The first impression we have is that the distribution of math scores is not equal for any of the programs. It looks right-skewed for students under the Academic program, left-skewed for students under the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly, those in the Vocational program have the highest mean math score.\nOn the other hand, the distribution of the days of absence is right-skewed in all cases. Students in the General program have the highest mean number of absences, while the Vocational group misses the fewest classes on average.\nModels\nWe are interested in measuring the association of the type of program and the math score with the days of absence. It's also of interest to see if the association between math score and days of absence is different in each type of program. \nIn order to answer our questions, we are going to fit and compare two models. The first model uses the type of the program and the math score as predictors. 
The second model also includes the interaction between these two variables. The score in the math test is going to be standardized in both cases to make things easier for the sampler and save some seconds. A good idea to follow along is to run these models without scaling math and compare how long they take to fit.\nWe are going to use a negative binomial likelihood to model the days of absence. But let's stop here and think about why we use this likelihood. Earlier, we said that the negative binomial distribution arises when our variable represents the number of trials until we get $k$ successes. However, the number of trials is fixed, i.e. the number of school days in a given year is not a random variable. So if we stick to the definition, we could think of two alternative views for this problem\n\nEach of the $n$ days is a trial, and we record whether the student is absent ($y=1$) or not ($y=0$). This corresponds to a binary regression setting, where we could think of logistic regression or something similar. A problem here is that we have the sum of $y$ for a student, but not the $n$.\nThe whole school year represents the space where events occur and we count how many absences we see in that space for each student. This gives us a Poisson regression setting (count of an event in a given space or time).\n\nWe also know that when $n$ is large and $p$ is small, the Binomial distribution can be approximated with a Poisson distribution with $\\lambda = n * p$. We don't know exactly $n$ in this scenario, but we know it is around 180, and we do know that $p$ is small because you can't skip classes all the time. So both modeling approaches should give similar results.\nBut then, why negative binomial? Can't we just use a Poisson likelihood?\nYes, we can. However, using a Poisson likelihood implies that the mean is equal to the variance, and that is usually an unrealistic assumption. 
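A quick way to probe that assumption is to compare the sample mean and variance of the response; with the notebook data you would compare data['daysabs'].mean() against data['daysabs'].var(). The sketch below uses synthetic counts with made-up parameters so it runs on its own:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic absence counts with the kind of overdispersion such data shows
counts = stats.nbinom(n=1.0, p=1.0 / 7.0).rvs(size=314, random_state=rng)

print('mean:    ', counts.mean())
print('variance:', counts.var(ddof=1))
# a variance far above the mean is evidence against a Poisson likelihood
```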
If it turns out the variance is either substantially smaller or greater than the mean, the Poisson regression model results in a poor fit. Alternatively, if we use a negative binomial likelihood, the variance is not forced to be equal to the mean: there is more flexibility to handle a given dataset, and consequently the fit tends to be better.\nModel 1\n$$\n\\log{Y_i} = \\beta_1 \\text{Academic}_i + \\beta_2 \\text{General}_i + \\beta_3 \\text{Vocational}_i + \\beta_4 \\text{Math\\_std}_i\n$$\nModel 2\n$$\n\\log{Y_i} = \\beta_1 \\text{Academic}_i + \\beta_2 \\text{General}_i + \\beta_3 \\text{Vocational}_i + \\beta_4 \\text{Math\\_std}_i\n + \\beta_5 \\text{General}_i \\cdot \\text{Math\\_std}_i + \\beta_6 \\text{Vocational}_i \\cdot \\text{Math\\_std}_i\n$$\nIn both cases we have the following dummy variables\n$$\\text{Academic}_i = \n\\left\\{ \n \\begin{array}{ll}\n 1 & \\textrm{if student is under Academic program} \\\\\n 0 & \\textrm{other case} \n \\end{array}\n\\right.\n$$\n$$\\text{General}_i = \n\\left\\{ \n \\begin{array}{ll}\n 1 & \\textrm{if student is under General program} \\\\\n 0 & \\textrm{other case} \n \\end{array}\n\\right.\n$$\n$$\\text{Vocational}_i = \n\\left\\{ \n \\begin{array}{ll}\n 1 & \\textrm{if student is under Vocational program} \\\\\n 0 & \\textrm{other case} \n \\end{array}\n\\right.\n$$\nand $Y$ represents the days of absence.\nSo, for example, the first model for a student under the Vocational program reduces to\n$$\n\\log{Y_i} = \\beta_3 + \\beta_4 \\text{Math\\_std}_i\n$$\nAnd one last thing to note is that we've decided not to include an intercept term, which is why you don't see any $\\beta_0$ above. This choice allows us to represent the effect of each program directly with $\\beta_1$, $\\beta_2$, and $\\beta_3$.\nModel fit\nIt's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. 
The 0 on the right hand side of ~ simply means we don't want to have the intercept term that is added by default. scale(math) tells Bambi we want to standardize math before it is included in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here.\nModel 1", "model_additive = bmb.Model(\"daysabs ~ 0 + prog + scale(math)\", data, family=\"negativebinomial\")\nidata_additive = model_additive.fit()", "Model 2\nFor this second model we just add prog:scale(math) to indicate the interaction. A shorthand would be to use y ~ 0 + prog*scale(math), which uses the full interaction operator. In other words, it just means we want to include the interaction between prog and scale(math) as well as their main effects.", "model_interaction = bmb.Model(\"daysabs ~ 0 + prog + scale(math) + prog:scale(math)\", data, family=\"negativebinomial\")\nidata_interaction = model_interaction.fit()", "Explore models\nThe first thing we do is call az.summary(). Here we pass the InferenceData object that .fit() returned. This prints information about the marginal posteriors for each parameter in the model as well as convergence diagnostics.", "az.summary(idata_additive)\n\naz.summary(idata_interaction)", "The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with plot_forest(). There we simply pass a list containing the InferenceData objects of the models we want to compare.", "az.plot_forest(\n [idata_additive, idata_interaction],\n model_names=[\"Additive\", \"Interaction\"],\n var_names=[\"prog\", \"scale(math)\"],\n combined=True,\n figsize=(8, 4)\n);", "One of the first things one can note when seeing this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of scale(math) is slightly lower in the model that considers the interaction, but the difference is not significant. 
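Because the model uses a log link, each coefficient acts multiplicatively on the expected count once exponentiated. A small sketch of reading a coefficient this way (the value of `beta_math` below is made up for illustration, not taken from the fitted posterior):

```python
import numpy as np

beta_math = -0.15  # hypothetical posterior mean for the scale(math) coefficient

# exp(beta) is the factor by which the expected number of absences
# changes per one standard deviation increase in the math score
rate_ratio = np.exp(beta_math)
print(rate_ratio)
```

A negative coefficient maps to a ratio below 1, i.e. fewer expected absences.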
\nWe can also draw conclusions about the association of the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we saw when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude that students in the General group tend to miss more classes.\nIn addition, the marginal posterior for math shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes. Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs).", "az.plot_forest(idata_interaction, var_names=[\"prog:scale(math)\"], combined=True, figsize=(8, 4))\nplt.axvline(0);", "Plot predicted mean response\nWe finish this example by showing how we can get predictions for new data and plot the mean response for each program together with confidence intervals.", "math_score = np.arange(1, 100)\n\n# This function takes a model and an InferenceData object.\n# It returns a list of length 3 with predictions for each type of program.\ndef predict(model, idata):\n predictions = []\n for program in programs:\n new_data = pd.DataFrame({\"math\": math_score, \"prog\": [program] * len(math_score)})\n new_idata = model.predict(\n idata, \n data=new_data,\n inplace=False\n )\n prediction = new_idata.posterior.stack(sample=[\"chain\", \"draw\"])[\"daysabs_mean\"].values\n predictions.append(prediction)\n \n return predictions\n\nprediction_additive = predict(model_additive, idata_additive)\nprediction_interaction = predict(model_interaction, idata_interaction)\n\nmu_additive = 
[prediction.mean(1) for prediction in prediction_additive]\nmu_interaction = [prediction.mean(1) for prediction in prediction_interaction]\n\nfig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize = (10, 4))\n\nfor idx, program in enumerate(programs):\n ax[0].plot(math_score, mu_additive[idx], label=f\"{program}\", color=f\"C{idx}\", lw=2)\n az.plot_hdi(math_score, prediction_additive[idx].T, color=f\"C{idx}\", ax=ax[0])\n\n ax[1].plot(math_score, mu_interaction[idx], label=f\"{program}\", color=f\"C{idx}\", lw=2)\n az.plot_hdi(math_score, prediction_interaction[idx].T, color=f\"C{idx}\", ax=ax[1])\n\nax[0].set_title(\"Additive\");\nax[1].set_title(\"Interaction\");\nax[0].set_xlabel(\"Math score\")\nax[1].set_xlabel(\"Math score\")\nax[0].set_ylim(0, 25)\nax[0].legend(loc=\"upper right\");", "As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0.\nIf you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use az.compare() to compare the fit of the two models. What do you expect before seeing the plot? Why? Is there anything else you could do to improve the fit of the model?\nAlso, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace family=\"negativebinomial\" with family=\"poisson\" and then you're ready to compare results!", "%load_ext watermark\n%watermark -n -u -v -iv -w" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PrairieLearn/PrairieLearn
exampleCourse/questions/demo/annotated/MarkovChainGroupActivity/MarkovChains-Intro/workspace/Markov-Chains-1.ipynb
agpl-3.0
[ "import numpy as np\nimport numpy.linalg as la\nimport matplotlib.pyplot as plt", "Introduction to Markov Chains\nA Markov chain is a mathematical model used to describe a set of states and the probability of transitioning between them. In this simple example, we use Markov chain to model the weather. We have two states to represent the possible weather for a day: Sunny and Snowy. After collecting weather data for many years, you observed that the chance of a snowy day occurring after one snowy day is 90% and that the chance of a snowy day after one sunny day is 70%.\nWe can see this visually with the following graph. Do you understand how we were able to obtain the other numbers? Recall that we are dealing with probabilities that should sum up to 100%.\n<img src=\"weather_graph.png\" width=446px></img>\nThis is a directed graph because edges have direction. We can represent this (unsurprisingly) using a matrix, similarly to how we created the adjacency matrix, using the following notation: the columns of the matrix represent outgoing edges, while the rows represent incoming edges:\n<img src=\"weather_matrix.png\" width=305px></img>\nhence each entry of the matrix is given by:\n$$ M_{ij} = \\text{probability of moving from } j \\text{ to } i $$\nThe matrix above is called the Markov matrix, which has the following properties:\n\n\n$M_{ij}$ entry of a transition matrix has the probability of transitioning from state $j$ to state $i$\n\n\nSince the entries are probabilities, they are always non-negative real numbers, and the columns should sum to 1.\n\n\nTry this!\nWrite the matrix above as a 2d numpy array. Define it as the variable M.\nNow that we have created the model, we can use it to calculate various probabilities. 
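One consistent way to fill in this matrix from the stated probabilities (a sketch of the exercise; the state order is [Sunny, Snowy], as in the graph above):

```python
import numpy as np

# M[i, j] = probability of moving from state j to state i
# column 0: from Sunny, column 1: from Snowy
M = np.array([[0.3, 0.1],
              [0.7, 0.9]])

# sanity check: every column of probabilities sums to 1
print(M.sum(axis=0))
```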
Let's say that today was a sunny day, which we can represent by a vector that is 100% sunny and 0% snowy.\nTry this!\nWrite this initial vector as a 1d numpy array, where the first entry corresponds to Sunny and the second entry corresponds to Snowy. Recall that the sum of the states should be equal to 1. Define it as the variable x.\nIf we multiply our transition matrix by our state vector, we can find the probability of having each type of day tomorrow:", "x1 = M @ x\nx1", "This doesn't give us any new information, so let's see what happens when we multiply the state vector again:", "x2 = M @ x1\nx2", "Now, we have \"simulated\" the Markov chain twice, which tells us the weather probabilities two days from now. What would happen if we multiplied our new vector by the matrix a large number of times?\nTry this!\nWrite a loop to left-multiply (${\\bf Mx}$) the state vector $15$ times, printing out each intermediate value. Start your iterations using the state vector defined above as x.", "xc = x.copy()\n# Write loop here\n", "You can see that for enough iterations we will eventually converge to a steady state ${\\bf x}^* $, and multiplying this steady state by the Markov matrix will no longer modify the vector, i.e.\n$$ {\\bf M}{\\bf x}^* = {\\bf x}^* $$\nNote that this is an eigensystem problem, where $(1,{\\bf x}^*)$ is an eigenpair. 
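The eigenpair claim can be cross-checked directly with `numpy.linalg.eig` (a sketch; `M` is restated here so the cell runs on its own):

```python
import numpy as np
import numpy.linalg as la

M = np.array([[0.3, 0.1],
              [0.7, 0.9]])

eigvals, eigvecs = la.eig(M)
# pick the eigenvector whose eigenvalue is numerically 1
idx = np.argmin(np.abs(eigvals - 1.0))
xstar = np.real(eigvecs[:, idx])
xstar = xstar / xstar.sum()  # scale the entries so they sum to 1

print(xstar)  # the steady state: about 12.5% sunny, 87.5% snowy
```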
Indeed, we found the eigenvector of ${\\bf M}$ with corresponding eigenvalue $\\lambda = 1$!\nComputing the eigenvector like this is called the Power Iteration method, and can be used to find the eigenvector that corresponds to the dominant eigenvalue (largest eigenvalue in magnitude).\nCheck your answers!\nImplement the function power_iteration() that takes a matrix M and starting vector x, and computes the eigenvector corresponding to dominant eigenvalue (same as you have done above).\nFor simplicity, use $100$ iterations for your loop.", "#grade (enter your code in this cell - DO NOT DELETE THIS LINE) \ndef power_iteration(M, x):\n # Perform power iteration and return steady state vector xstar\n xc = x.copy()\n return xc", "Run your power_iteration() function on M and a new vector,\n$$ {\\bf x} = \\begin{bmatrix} 0.5 \\ 0.5\\end{bmatrix} $$\nDo you get the same result as before?", "power_iteration(M, np.array([0.5, 0.5]))", "As long as the starting state vector x is normalized (the entries add up to one), the steady state solution will be the same. There is one caveat to this statement, which we will discuss in the next section.\nTake a look at the code snippet below. Notice that the steady state solution does not change, regardless of the initial vector (here generated at random).", "# run this as many times as you want, the bottom vector should always stay the same!\nrandom_vector = np.random.rand(2)\nrandom_vector /= np.sum(random_vector) # normalize\n\nprint(random_vector)\nprint(power_iteration(M, random_vector))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SonneSun/self_driving_car_projects
2_Traffic_Sign_Classifier.ipynb
apache-2.0
[ "Deep Learning\nProject: Build a Traffic Sign Recognition Classifier\n\nStep 0: Load The Data", "# Load pickled data\nimport time\nimport pickle\nimport tensorflow as tf\nfrom tensorflow.contrib.layers import flatten\n\ntraining_file = 'train.p'\ntesting_file = 'test.p'\n\nwith open(training_file, mode='rb') as f:\n train = pickle.load(f)\nwith open(testing_file, mode='rb') as f:\n test = pickle.load(f)\n \nX_train, y_train = train['features'], train['labels']\nX_test, y_test = test['features'], test['labels']\n\n# #This is used for my GPU version.\n# with open('train27.p', 'wb') as handle:\n# pickle.dump(train, handle, protocol=2)\n \n# with open('test27.p', 'wb') as handle: \n# pickle.dump(test, handle, protocol=2)", "Step 1: Dataset Summary & Exploration\nThe pickled data is a dictionary with 4 key/value pairs:\n\n'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).\n'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.\n'sizes' is a list containing tuples, (width, height) representing the the original width and height the image.\n'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. 
THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES\n\nComplete the basic data summary below.", "### Replace each question mark with the appropriate value.\nimport numpy as np\n# Number of training examples\nn_train = X_train.shape[0]\n\n# Number of testing examples.\nn_test = X_test.shape[0]\n\n# What's the shape of a traffic sign image?\nimage_shape = X_train[0].shape\n\n# How many unique classes/labels there are in the dataset.\nn_classes = len(np.unique(y_train))\n\nprint(\"Number of training examples =\", n_train)\nprint(\"Number of testing examples =\", n_test)\nprint(\"Image data shape =\", image_shape)\nprint(\"Number of classes =\", n_classes)", "Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.\nThe Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.\nNOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.", "### Data exploration visualization goes here.\n### Feel free to use as many code cells as needed.\nimport random\nimport matplotlib.pyplot as plt\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\n# randint's upper bound is inclusive, so subtract 1 to avoid an index error\nindex = random.randint(0, len(X_train) - 1)\nimage = X_train[index].squeeze()\n\nplt.figure(figsize=(2,2))\nplt.imshow(image)\nprint(y_train[index])\n\n### Check the traffic signs distribution\nimport seaborn as sns\nplt.hist([y_train, y_test], color=['r','b'], alpha=0.5)\nplt.show()", "From the plot above, we can see that the traffic signs are not evenly distributed.\n\nStep 2: Design and Test a Model Architecture\nDesign and implement a deep learning model that learns to recognize traffic signs. 
Train and test your model on the German Traffic Sign Dataset.\nThere are various aspects to consider when thinking about this problem:\n\nNeural network architecture\nPlay around with preprocessing techniques (normalization, rgb to grayscale, etc)\nNumber of examples per label (some have more than others).\nGenerate fake data.\n\nHere is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.\nNOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!\nImplementation\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.\n1) Data preprocessing", "### Preprocess the data here.\n\ndef normalize_grayscale(image_data):\n a = -0.5\n b = 0.5\n grayscale_min = 0\n grayscale_max = 255\n return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )\n\nX_train = normalize_grayscale(X_train)\nX_test = normalize_grayscale(X_test)\n\nprint(X_train.shape)\n\nfrom sklearn.utils import shuffle\n\nX_train, y_train = shuffle(X_train, y_train)", "2) Train, Test split", "### Generate additional data (OPTIONAL!)\n### and split the data into training/validation/testing sets here.\n### Feel free to use as many code cells as needed.\n\nn_train_new = int(n_train * 0.8)\n\nX_validation, y_validation = X_train[n_train_new:,], y_train[n_train_new:,]\nX_train, y_train = X_train[:n_train_new,], y_train[:n_train_new,]\n\nprint(X_train.shape)\nprint(X_validation.shape)", "Question 1\nDescribe how you preprocessed the data. 
Why did you choose that technique?\nAnswer:\nIn this part, I first normalize the image data and then shuffle the training dataset.\n- Normalization: avoid the influence of different scales regarding the feature.\n- Shuffle: avoid the influence of the ordering of data.\nQuestion 2\nDescribe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?\nAnswer:\nTesting data are read directly from pickle file. For training and validation, I use 80% for training and 20% for validation.\n3) Architecture\nBasic - Logistic regression\nThe model below is a simple logistic regression, and the training accuracy is 86.1%. We can use it as a baseline.", "def basic(x):\n mu = 0\n sigma = 0.1\n n_input = image_shape[0] * image_shape[1] * image_shape[2]\n flat_x = tf.reshape(x, [-1, n_input])\n W = tf.Variable(tf.truncated_normal(shape=(n_input, n_classes), mean = mu, stddev = sigma))\n b = tf.Variable(tf.zeros(n_classes))\n logits = tf.matmul(flat_x, W) + b\n return logits\n\n######################## Training ##########################\nEPOCHS = 10\nBATCH_SIZE = 128\nrate = 0.001\n\nx = tf.placeholder(tf.float32, (None, 32, 32, 3))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 43)\n\nlogits = basic(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = one_hot_y)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)\n\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n 
for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples\n\nimport time\n\nstart = time.time()\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_validation, y_validation)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n \nprint(time.time() - start)", "Since convolutional layer is quite effective for image classification, we add one convolutional layer together with a pooling layer to the baseline model. \nAdd one convolutional layer", "def advan1(x):\n mu = 0\n sigma = 0.1\n \n # Convolutional. Input = 32x32x3. Output = 30x30x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 3, 6), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n # Activation.\n conv1 = tf.nn.relu(conv1)\n # Pooling Input = 30x30x6. 
Output = 15x15x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n # Flatten\n fc0 = flatten(conv1)\n \n fc1_W = tf.Variable(tf.truncated_normal(shape=(1350, 43), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(43))\n logits = tf.matmul(fc0, fc1_W) + fc1_b\n\n return logits\n \n######################## Training ##########################\nEPOCHS = 10\nBATCH_SIZE = 128\nrate = 0.001\n\nx = tf.placeholder(tf.float32, (None, 32, 32, 3))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 43)\n\nlogits = advan1(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = one_hot_y)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)\n\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples\n\nimport time\n\nstart = time.time()\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_validation, y_validation)\n print(\"EPOCH {} 
...\".format(i+1))\n        print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n        print()\n        \n    \nprint(time.time() - start)\n", "Comparing the advan1(x) architecture with the basic(x) architecture, we find that the validation accuracy improves from 0.865 to 0.929, which demonstrates the effectiveness of the convolutional layer. \nFinal architecture", "def advan2(x):    \n    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n    mu = 0\n    sigma = 0.1\n    \n    # Layer 1: Convolutional. Input = 32x32x3. Output = 30x30x6.\n    conv1_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 3, 6), mean = mu, stddev = sigma))\n    conv1_b = tf.Variable(tf.zeros(6))\n    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n    # Activation.\n    conv1 = tf.nn.relu(conv1)\n\n    # Pooling. Input = 30x30x6. Output = 15x15x6.\n    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n    \n    # Dropout\n    conv1_drop = tf.nn.dropout(conv1, keep_prob)\n\n    # Layer 2: Convolutional. Output = 12x12x16.\n    conv2_W = tf.Variable(tf.truncated_normal(shape=(4, 4, 6, 16), mean = mu, stddev = sigma))\n    conv2_b = tf.Variable(tf.zeros(16))\n    conv2   = tf.nn.conv2d(conv1_drop, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n    \n    # Activation.\n    conv2 = tf.nn.relu(conv2)\n\n    # Pooling. Input = 12x12x16. Output = 6x6x16.\n    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n    \n    # Dropout\n    conv2_drop = tf.nn.dropout(conv2, keep_prob)\n\n    # Flatten. Input = 6x6x16. Output = 576.\n    fc0   = flatten(conv2_drop)\n    \n    # Layer 3: Fully Connected. Input = 576. Output = 240.\n    fc1_W = tf.Variable(tf.truncated_normal(shape=(576, 240), mean = mu, stddev = sigma))\n    fc1_b = tf.Variable(tf.zeros(240))\n    fc1   = tf.matmul(fc0, fc1_W) + fc1_b\n    \n    # Activation.\n    fc1    = tf.nn.relu(fc1)\n\n    # Layer 4: Fully Connected. Input = 240. 
Output = 120.\n    fc2_W  = tf.Variable(tf.truncated_normal(shape=(240, 120), mean = mu, stddev = sigma))\n    fc2_b  = tf.Variable(tf.zeros(120))\n    fc2    = tf.matmul(fc1, fc2_W) + fc2_b\n    \n    # Activation.\n    fc2    = tf.nn.relu(fc2)\n\n    # Layer 5: Fully Connected. Input = 120. Output = 43.\n    fc3_W  = tf.Variable(tf.truncated_normal(shape=(120, 43), mean = mu, stddev = sigma))\n    fc3_b  = tf.Variable(tf.zeros(43))\n    logits = tf.matmul(fc2, fc3_W) + fc3_b    \n    \n    return logits\n\n\nEPOCHS = 15\nBATCH_SIZE = 256\nrate = 0.002\n\nx = tf.placeholder(tf.float32, (None, 32, 32, 3))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 43)\nkeep_prob = tf.placeholder(tf.float32)\n\nlogits = advan2(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = one_hot_y)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)\n\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n# Create the Saver after all variables have been defined so it captures them all\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n    num_examples = len(X_data)\n    total_accuracy = 0\n    sess = tf.get_default_session()\n    for offset in range(0, num_examples, BATCH_SIZE):\n        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})\n        total_accuracy += (accuracy * len(batch_x))\n    return total_accuracy / num_examples\n\nimport time\n\nstart = time.time()\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    num_examples = len(X_train)\n    \n    print(\"Training...\")\n    print()\n    for i in range(EPOCHS):\n        X_train, y_train = shuffle(X_train, y_train)\n        for offset in range(0, num_examples, BATCH_SIZE):\n            end = offset + BATCH_SIZE\n            batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n            
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.75})\n        \n        validation_accuracy = evaluate(X_validation, y_validation)\n        print(\"EPOCH {} ...\".format(i+1))\n        print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n        print()\n        \n    saver.save(sess, './lenet')\n    print(\"Model saved\")\n    \nprint(time.time() - start)\n", "Question 3\nWhat does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow\n from the classroom.\nAnswer:\nThe final fine-tuned architecture is shown above.\n\nI use a CNN with 2 convolutional layers, each followed by max pooling and dropout, and 3 fully connected layers.\nOf the two convolutional layers, one uses a 3X3 filter and one a 4X4 filter. Both use Relu as the activation.\nAfter the convolutional layers, the network is flattened.\n\nThe three fully connected layers all use Relu as the activation.\n\n\nThe reason for max pooling is: it helps reduce over-fitting by providing an abstracted form of the representation. It also reduces the computational cost by reducing the number of parameters to learn, and provides basic translation invariance to the internal representation.\n\nThe reason for the dropout layer is: to prevent overfitting.\nThe reason for using Relu is: besides adding non-linearity and sparsity, it also helps reduce the likelihood of vanishing gradients.\n\nTest", "with tf.Session() as sess:\n    saver.restore(sess, tf.train.latest_checkpoint('.'))\n    test_accuracy = evaluate(X_test, y_test)\n    print(\"Test Accuracy = {:.3f}\".format(test_accuracy))", "The test accuracy is 0.925; compared to the validation accuracy of 0.982, this indicates overfitting of the training data.\nQuestion 4\nHow did you train your model? 
(Type of optimizer, batch size, epochs, hyperparameters, etc.)\nAnswer:\n\nOptimizer: AdamOptimizer\nBatch size: 256\nEpochs: 15\nLearning rate: 0.002\n\nQuestion 5\nWhat approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.\nAnswer:\n\nThe first step is to build a simple one-layer neural network and run the model as a baseline.\nIn the second step I added a convolutional layer to the model.\nIn the third step I kept the general architecture and modified the filter size for each convolutional layer. Since the image size is 32X32, 5X5 filters might be too large, so I chose 3X3 and 4X4 instead. \nIn order to avoid overfitting I then added a max pooling layer and a dropout layer. I started with keep_probability = 0.5 and slowly increased it; when it is set to 0.75 the validation accuracy is best.\nThen I increased the batch size to 256 to lower the weight-update noise; this increases the computation time a bit, but the time is still within an acceptable range.\nNext I tuned the learning rate and chose 0.002 as the final value.\n\n\nStep 3: Test a Model on New Images\nTake several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.\nYou may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.\nImplementation\nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project. 
Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.", "import os\nos.listdir(\"trafficsign_test/\")\n\nfrom scipy.misc import imread, imsave, imresize\n\nimage_num = len(os.listdir(\"trafficsign_test/\"))\nX_test_real = np.zeros((image_num,32,32,3), dtype = np.uint8)\n\nfor i in range(image_num):\n im_name = \"trafficsign_test/\" + str(i+1) + '.jpg'\n img = imread(im_name)\n img_resize = imresize(img, (32, 32))\n X_test_real[i] = img_resize\n \nX_test_real_norm = normalize_grayscale(X_test_real)\n\n### Load the images and plot them here.\n### Feel free to use as many code cells as needed.\nfor i in range(image_num):\n image = X_test_real[i].squeeze()\n plt.figure(figsize=(2,2))\n plt.imshow(image)", "Question 6\nChoose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.\nAnswer:\n\nThe images are shown above.\nThe factors that could potentially influence the accuracy might be the background colors.", "### Run the predictions here.\n### Feel free to use as many code cells as needed.\n\npredict_label = tf.argmax(logits, 1)\npredict_top5 = tf.nn.top_k(logits, k=5)\npredict_prob = tf.nn.softmax(logits)\n\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n labels = sess.run(predict_label, feed_dict={x: X_test_real_norm, keep_prob: 1.0})\n test_predict = sess.run(predict_label, feed_dict={x: X_test, keep_prob: 1.0})\n top5 = sess.run(predict_top5, feed_dict={x: X_test_real_norm, keep_prob: 1.0})\n probs = sess.run(predict_prob, feed_dict={x: X_test_real_norm, keep_prob: 1.0})\n print(labels)", "Compute confusion matrix for test data", "from sklearn.metrics import confusion_matrix\n\nmm = confusion_matrix(y_test, test_predict)\n#test image\n#1\nprint(mm[23,:])\n#3\nprint(mm[8,:]) 
\n#4\nprint(mm[22,:])\n#5\nprint(mm[17,:])\n\n#Interpret predictions in terms of sign names\n\nimport csv\nimport pandas as pd\n\nsignnames = pd.read_csv('signnames.csv')\nsignnames.to_dict()\nsign_dict = dict(zip(signnames.ClassId, signnames.SignName))\n\nfor i in range(image_num):\n    print (sign_dict[labels[i]])", "Question 7\nIs your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this is to check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.\nNOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.\nAnswer:\n\nThe prediction accuracy is 1 out of 5 (20%) on the captured pictures.\nThe prediction accuracy is 92.5% on the test dataset. 20% is much lower than 92.5%, so the model might suffer from overfitting. To be more specific: the correct labels for the four images (except the 2nd) are slippery road (label 23), speed limit (label 8), bumpy road (label 22), and no entry (label 17). The accuracy on the test data is quite high (as shown in the confusion matrix) compared to the external images.\nThe second sign doesn't appear in the 43 categories, and the model regards it as No passing.", "### Visualize the softmax probabilities here.\n### Feel free to use as many code cells as needed.\n\nplt.figure(1)\nplt.subplot(231)\nplt.plot(probs[0])\nplt.subplot(232)\nplt.plot(probs[1])\nplt.subplot(233)\nplt.plot(probs[2])\nplt.subplot(234)\nplt.plot(probs[3])\nplt.subplot(235)\nplt.plot(probs[4])", "Question 8\nUse the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? 
If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)\ntf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.\nThe top 5 predictions are:", "for i in range(5):\n    t = [sign_dict[x] for x in top5[1][i]]\n    print(t)\n    print('\\n')", "Answer:\n\nIf we look at the top 5 predictions, the prediction for the 1st image, slippery road, lies within the top 5. \nThe correct sign for the 3rd image still doesn't appear in the top 5, because the background colors introduce too much noise.\nThe sign in the 2nd image is outside the 43 categories, so there is no correct prediction.\n\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
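As a side note to the Question 8 cells above, the top-k selection that tf.nn.top_k performs can be sketched in plain Python. The probability vector below is a hypothetical stand-in for a single softmax output row, not one of the model's actual predictions:

```python
# Hypothetical softmax output over 5 classes (stand-in values, not model output)
probs = [0.02, 0.50, 0.30, 0.08, 0.10]

k = 3
# Sort class indices by probability in descending order and keep the first k --
# this mirrors the (values, indices) pair returned by tf.nn.top_k(logits, k)
top_idx = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
top_val = [probs[i] for i in top_idx]

print(top_idx)  # → [1, 2, 4]
print(top_val)  # → [0.5, 0.3, 0.1]
```

Sorting the full list is O(n log n); tf.nn.top_k avoids the full sort internally, but for only 43 classes the difference is negligible.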
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/mixed_lm_example.ipynb
bsd-3-clause
[ "Linear Mixed Effects Models", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nfrom statsmodels.tools.sm_exceptions import ConvergenceWarning", "Note: The R code and the results in this notebook have been converted to markdown so that R is not required to build the documents. The R results in the notebook were computed using R 3.5.1 and lme4 1.1.\nipython\n%load_ext rpy2.ipython\nipython\n%R library(lme4)\narray(['lme4', 'Matrix', 'tools', 'stats', 'graphics', 'grDevices',\n       'utils', 'datasets', 'methods', 'base'], dtype='<U9')\nComparing R lmer to statsmodels MixedLM\nThe statsmodels implementation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA 1988). This is also the approach followed in the R package LME4. Other packages such as Stata, SAS, etc. should also be consistent with this approach, as the basic techniques in this area are mostly mature.\nHere we show how linear mixed models can be fit using the MixedLM procedure in statsmodels. Results from R (LME4) are included for comparison.\nHere are our import statements:\nGrowth curves of pigs\nThese are longitudinal data from a factorial experiment. The outcome variable is the weight of each pig, and the only predictor variable we will use here is \"time\". First we fit a model that expresses the mean weight as a linear function of time, with a random intercept for each pig. The model is specified using formulas. Since the random effects structure is not specified, 
Since the random effects structure is not specified, the default random effects structure (a random intercept for each group) is automatically used.", "data = sm.datasets.get_rdataset(\"dietox\", \"geepack\").data\nmd = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"])\nmdf = md.fit(method=[\"lbfgs\"])\nprint(mdf.summary())", "Here is the same model fit in R using LMER:\nipython\n%%R\ndata(dietox, package='geepack')\nipython\n%R print(summary(lmer('Weight ~ Time + (1|Pig)', data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 | Pig)\n Data: dietox\nREML criterion at convergence: 4809.6\nScaled residuals: \n Min 1Q Median 3Q Max \n-4.7118 -0.5696 -0.0943 0.4877 4.7732 \nRandom effects:\n Groups Name Variance Std.Dev.\n Pig (Intercept) 40.39 6.356 \n Residual 11.37 3.371 \nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.72352 0.78805 19.95\nTime 6.94251 0.03339 207.94\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.275\n```\nNote that in the statsmodels summary of results, the fixed effects and random effects parameter estimates are shown in a single table. The random effect for animal is labeled \"Intercept RE\" in the statsmodels output above. In the LME4 output, this effect is the pig intercept under the random effects section.\nThere has been a lot of debate about whether the standard errors for random effect variance and covariance parameters are useful. In LME4, these standard errors are not displayed, because the authors of the package believe they are not very informative. While there is good reason to question their utility, we elected to include the standard errors in the summary table, but do not show the corresponding Wald confidence intervals.\nNext we fit a model with two random effects for each animal: a random intercept, and a random slope (with respect to time). 
This means that each pig may have a different baseline weight, as well as growing at a different rate. The formula specifies that \"Time\" is a covariate with a random coefficient. By default, formulas always include an intercept (which could be suppressed here using \"0 + Time\" as the formula).", "md = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"], re_formula=\"~Time\")\nmdf = md.fit(method=[\"lbfgs\"])\nprint(mdf.summary())", "Here is the same model fit using LMER in R:\nipython\n%R print(summary(lmer(\"Weight ~ Time + (1 + Time | Pig)\", data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 + Time | Pig)\n Data: dietox\nREML criterion at convergence: 4434.1\nScaled residuals: \n Min 1Q Median 3Q Max \n-6.4286 -0.5529 -0.0416 0.4841 3.5624 \nRandom effects:\n Groups Name Variance Std.Dev. Corr\n Pig (Intercept) 19.493 4.415 \n Time 0.416 0.645 0.10\n Residual 6.038 2.457 \nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.73865 0.55012 28.61\nTime 6.93901 0.07982 86.93\nCorrelation of Fixed Effects:\n (Intr)\nTime 0.006 \n```\nThe random intercept and random slope are only weakly correlated $(0.294 / \\sqrt{19.493 * 0.416} \\approx 0.1)$. So next we fit a model in which the two random effects are constrained to be uncorrelated:", "0.294 / (19.493 * 0.416) ** 0.5\n\nmd = smf.mixedlm(\"Weight ~ Time\", data, groups=data[\"Pig\"], re_formula=\"~Time\")\nfree = sm.regression.mixed_linear_model.MixedLMParams.from_components(\n np.ones(2), np.eye(2)\n)\n\nmdf = md.fit(free=free, method=[\"lbfgs\"])\nprint(mdf.summary())", "The likelihood drops by 0.3 when we fix the correlation parameter to 0. 
Comparing 2 x 0.3 = 0.6 to the chi^2 1 df reference distribution suggests that the data are very consistent with a model in which this parameter is equal to 0.\nHere is the same model fit using LMER in R (note that here R is reporting the REML criterion instead of the likelihood, where the REML criterion is twice the log likelihood):\nipython\n%R print(summary(lmer(\"Weight ~ Time + (1 | Pig) + (0 + Time | Pig)\", data=dietox)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: Weight ~ Time + (1 | Pig) + (0 + Time | Pig)\n Data: dietox\nREML criterion at convergence: 4434.7\nScaled residuals: \n Min 1Q Median 3Q Max \n-6.4281 -0.5527 -0.0405 0.4840 3.5661 \nRandom effects:\n Groups Name Variance Std.Dev.\n Pig (Intercept) 19.8404 4.4543\n Pig.1 Time 0.4234 0.6507\n Residual 6.0282 2.4552\nNumber of obs: 861, groups: Pig, 72\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 15.73875 0.55444 28.39\nTime 6.93899 0.08045 86.25\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.086\n```\nSitka growth data\nThis is one of the example data sets provided in the LMER R library. The outcome variable is the size of the tree, and the covariate used here is a time value. The data are grouped by tree.", "data = sm.datasets.get_rdataset(\"Sitka\", \"MASS\").data\nendog = data[\"size\"]\ndata[\"Intercept\"] = 1\nexog = data[[\"Intercept\", \"Time\"]]", "Here is the statsmodels LME fit for a basic model with a random intercept. We are passing the endog and exog data directly to the LME init function as arrays. 
Also note that endog_re is specified explicitly in argument 4 as a random intercept (although this would also be the default if it were not specified).", "md = sm.MixedLM(endog, exog, groups=data[\"tree\"], exog_re=exog[\"Intercept\"])\nmdf = md.fit()\nprint(mdf.summary())", "Here is the same model fit in R using LMER:\nipython\n%R\ndata(Sitka, package=\"MASS\")\nprint(summary(lmer(\"size ~ Time + (1 | tree)\", data=Sitka)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: size ~ Time + (1 | tree)\n Data: Sitka\nREML criterion at convergence: 164.8\nScaled residuals: \n Min 1Q Median 3Q Max \n-2.9979 -0.5169 0.1576 0.5392 4.4012 \nRandom effects:\n Groups Name Variance Std.Dev.\n tree (Intercept) 0.37451 0.612 \n Residual 0.03921 0.198 \nNumber of obs: 395, groups: tree, 79\nFixed effects:\n Estimate Std. Error t value\n(Intercept) 2.2732443 0.0878955 25.86\nTime 0.0126855 0.0002654 47.80\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.611\n```\nWe can now try to add a random slope. We start with R this time. From the code and output below we see that the REML estimate of the variance of the random slope is nearly zero.\nipython\n%R print(summary(lmer(\"size ~ Time + (1 + Time | tree)\", data=Sitka)))\n```\nLinear mixed model fit by REML ['lmerMod']\nFormula: size ~ Time + (1 + Time | tree)\n Data: Sitka\nREML criterion at convergence: 153.4\nScaled residuals: \n Min 1Q Median 3Q Max \n-2.7609 -0.5173 0.1188 0.5270 3.5466 \nRandom effects:\n Groups Name Variance Std.Dev. Corr \n tree (Intercept) 2.217e-01 0.470842 \n Time 3.288e-06 0.001813 -0.17\n Residual 3.634e-02 0.190642 \nNumber of obs: 395, groups: tree, 79\nFixed effects:\n Estimate Std. 
Error t value\n(Intercept) 2.273244 0.074655 30.45\nTime 0.012686 0.000327 38.80\nCorrelation of Fixed Effects:\n (Intr)\nTime -0.615\nconvergence code: 0\nModel failed to converge with max|grad| = 0.793203 (tol = 0.002, component 1)\nModel is nearly unidentifiable: very large eigenvalue\n - Rescale variables?\n```\nIf we run this in statsmodels LME with defaults, we see that the variance estimate is indeed very small, which leads to a warning about the solution being on the boundary of the parameter space. The regression slopes agree very well with R, but the likelihood value is much higher than that returned by R.", "exog_re = exog.copy()\nmd = sm.MixedLM(endog, exog, data[\"tree\"], exog_re)\nmdf = md.fit()\nprint(mdf.summary())", "We can further explore the random effects structure by constructing plots of the profile likelihoods. We start with the random intercept, generating a plot of the profile likelihood from 0.1 units below to 0.1 units above the MLE. Since each optimization inside the profile likelihood generates a warning (due to the random slope variance being close to zero), we turn off the warnings here.", "import warnings\n\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\")\n likev = mdf.profile_re(0, \"re\", dist_low=0.1, dist_high=0.1)", "Here is a plot of the profile likelihood function. We multiply the log-likelihood difference by 2 to obtain the usual $\\chi^2$ reference distribution with 1 degree of freedom.", "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 8))\nplt.plot(likev[:, 0], 2 * likev[:, 1])\nplt.xlabel(\"Variance of random intercept\", size=17)\nplt.ylabel(\"-2 times profile log likelihood\", size=17)", "Here is a plot of the profile likelihood function. 
The profile likelihood plot shows that the MLE of the random slope variance parameter is a very small positive number, and that there is low uncertainty in this estimate.", "re = mdf.cov_re.iloc[1, 1]\nwith warnings.catch_warnings():\n # Parameter is often on the boundary\n warnings.simplefilter(\"ignore\", ConvergenceWarning)\n likev = mdf.profile_re(1, \"re\", dist_low=0.5 * re, dist_high=0.8 * re)\n\nplt.figure(figsize=(10, 8))\nplt.plot(likev[:, 0], 2 * likev[:, 1])\nplt.xlabel(\"Variance of random slope\", size=17)\nlbl = plt.ylabel(\"-2 times profile log likelihood\", size=17)" ]
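As a side note, the likelihood-ratio comparison used earlier in this notebook (twice the 0.3 log-likelihood drop compared against a chi-square reference distribution with 1 degree of freedom) can be sketched with only the standard library. The closed-form chi-square(1) survival function below stands in for scipy.stats.chi2.sf, and the 0.3 drop is the value reported in the text:

```python
import math

def chi2_sf_1df(x):
    # Survival function of the chi-square distribution with 1 df:
    # P(X > x) = 1 - erf(sqrt(x / 2))
    return 1.0 - math.erf(math.sqrt(x / 2.0))

# The log-likelihood drops by 0.3 when the correlation parameter is fixed at 0,
# so the likelihood-ratio statistic is 2 * 0.3 = 0.6.
lr_stat = 2 * 0.3
p_value = chi2_sf_1df(lr_stat)
print(round(p_value, 2))  # → 0.44
```

A p-value this large is what the text means by the data being "very consistent" with a model in which the correlation parameter equals 0.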
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ja/guide/keras/functional.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Functional API\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/functional\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org で表示</a> </td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/functional.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab で実行</a> </td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/functional.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/functional.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a> </td>\n</table>\n\nセットアップ", "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers", "前書き\nKeras Functional API は、tf.keras.Sequential API よりも柔軟なモデルの作成が可能で、非線形トポロジー、共有レイヤー、さらには複数の入力または出力を持つモデル処理することができます。\nこれは、ディープラーニングのモデルは通常、レイヤーの有向非巡回グラフ(DAG)であるという考えに基づいてます。要するに、Functional API はレイヤーのグラフを構築する方法です。\n次のモデルを考察してみましょう。\n(input: 784-dimensional vectors)\n ↧\n[Dense (64 units, relu activation)]\n ↧\n[Dense (64 units, relu 
activation)]\n ↧\n[Dense (10 units, softmax activation)]\n ↧\n(output: logits of a probability distribution over 10 classes)\nこれは 3つ のレイヤーを持つ単純なグラフです。Functional API を使用してモデルを構築するために、まずは入力ノードを作成することから始めます。", "inputs = keras.Input(shape=(784,))", "データの形状は、784次元のベクトルとして設定されます。各サンプルの形状のみを指定するため、バッチサイズは常に省略されます。\n例えば、(32, 32, 3)という形状の画像入力がある場合には、次を使用します。", "# Just for demonstration purposes.\nimg_inputs = keras.Input(shape=(32, 32, 3))", "返されるinputsには、モデルに供給する入力データの形状とdtypeについての情報を含みます。形状は次のとおりです。", "inputs.shape", "dtype は次のとおりです。", "inputs.dtype", "このinputsオブジェクトのレイヤーを呼び出して、レイヤーのグラフに新しいノードを作成します。", "dense = layers.Dense(64, activation=\"relu\")\nx = dense(inputs)", "「レイヤー呼び出し」アクションは、「入力」から作成したこのレイヤーまで矢印を描くようなものです。denseレイヤーに入力を「渡して」、xを取得します。\nレイヤーのグラフにあと少しレイヤーを追加してみましょう。", "x = layers.Dense(64, activation=\"relu\")(x)\noutputs = layers.Dense(10)(x)", "この時点で、レイヤーのグラフの入力と出力を指定することにより、Modelを作成できます。", "model = keras.Model(inputs=inputs, outputs=outputs, name=\"mnist_model\")", "モデルの概要がどのようなものか、確認しましょう。", "model.summary()", "また、モデルをグラフとしてプロットすることも可能です。", "keras.utils.plot_model(model, \"my_first_model.png\")", "そしてオプションで、プロットされたグラフに各レイヤーの入力形状と出力形状を表示します 。", "keras.utils.plot_model(model, \"my_first_model_with_shape_info.png\", show_shapes=True)", "この図とコードはほぼ同じです。コードバージョンでは、接続矢印は呼び出し演算に置き換えられています。\n「レイヤーのグラフ」はディープラーニングモデルの直感的なメンタルイメージであり、Functional API はこのメンタルイメージを忠実に映すモデルを作成する方法です。\nトレーニング、評価、推論\nFunctional API を使用して構築されたモデルの学習、評価、推論は、Sequentialモデルの場合とまったく同じように動作します。\nModel クラスにはトレーニングループ(fit() メソッド)と評価ループ(evaluate() メソッド)が組み込まれています。これらのループをカスタマイズすることで、教師あり学習を超えるトレーニングのルーチン(GAN など)を簡単に実装することができます。\nここでは、MNIST 画像データを読み込み、ベクトルに再形成し、(検証分割のパフォーマンスを監視しながら)データ上でモデルを当てはめ、その後、テストデータ上でモデルを評価します。", "(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\n\nx_train = x_train.reshape(60000, 784).astype(\"float32\") / 255\nx_test = x_test.reshape(10000, 784).astype(\"float32\") / 255\n\nmodel.compile(\n 
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer=keras.optimizers.RMSprop(),\n metrics=[\"accuracy\"],\n)\n\nhistory = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)\n\ntest_scores = model.evaluate(x_test, y_test, verbose=2)\nprint(\"Test loss:\", test_scores[0])\nprint(\"Test accuracy:\", test_scores[1])", "さらに詳しくはトレーニングと評価ガイドをご覧ください。\n保存とシリアライズ\nFunctional API を使用して構築されたモデルの保存とシリアル化は、Sequentialのモデルと同じように動作します。Functional モデルを保存する標準的な方法は、model.save()を呼び出して、モデル全体を単一のファイルとして保存します。モデルを作成したコードが使用できなくなったとしても、後でこのファイルから同じモデルを再作成することが可能です。\n保存されたファイルには次を含みます。\n\nモデルのアーキテクチャ\nモデルの重み値(トレーニングの間に学習された値)\nある場合は、モデルトレーニング構成(compile に渡される構成)\nあれば、オプティマイザとその状態(中断した所からトレーニングを再開するため)", "model.save(\"path_to_my_model\")\ndel model\n# Recreate the exact same model purely from the file:\nmodel = keras.models.load_model(\"path_to_my_model\")", "詳細は、モデルシリアライゼーションと保存ガイドをご覧ください。\nレイヤー群の同じグラフを使用してマルチモデルを定義する\nFunctional API では、モデルはレイヤー群のグラフでそれらの出入力を指定することにより生成されます。それはレイヤー群の単一のグラフ が複数のモデルを生成するために使用できることを意味しています。\n次の例では、レイヤー群の同じスタックを使用して 2 つのモデルのインスタンス化を行います。これらは画像入力を 16 次元ベクトルに変換するencoderモデルと、トレーニングのためのエンドツーエンドautoencoderモデルです。", "encoder_input = keras.Input(shape=(28, 28, 1), name=\"img\")\nx = layers.Conv2D(16, 3, activation=\"relu\")(encoder_input)\nx = layers.Conv2D(32, 3, activation=\"relu\")(x)\nx = layers.MaxPooling2D(3)(x)\nx = layers.Conv2D(32, 3, activation=\"relu\")(x)\nx = layers.Conv2D(16, 3, activation=\"relu\")(x)\nencoder_output = layers.GlobalMaxPooling2D()(x)\n\nencoder = keras.Model(encoder_input, encoder_output, name=\"encoder\")\nencoder.summary()\n\nx = layers.Reshape((4, 4, 1))(encoder_output)\nx = layers.Conv2DTranspose(16, 3, activation=\"relu\")(x)\nx = layers.Conv2DTranspose(32, 3, activation=\"relu\")(x)\nx = layers.UpSampling2D(3)(x)\nx = layers.Conv2DTranspose(16, 3, activation=\"relu\")(x)\ndecoder_output = layers.Conv2DTranspose(1, 3, activation=\"relu\")(x)\n\nautoencoder = keras.Model(encoder_input, 
decoder_output, name=\"autoencoder\")\nautoencoder.summary()", "ここでは、デコーディングアーキテクチャはエンコーディングアーキテクチャに対して厳密に対称的であるため、出力形状は入力形状(28, 28, 1)と同じです。\nConv2Dレイヤーの反対はConv2DTransposeレイヤーで、MaxPooling2Dレイヤーの反対はUpSampling2Dレイヤーです。\nレイヤー同様、全てのモデルは呼び出し可能\n任意のモデルをInput上、あるいはもう一つのレイヤーの出力上で呼び出すことによって、それがレイヤーであるかのように扱うことができます。モデルを呼び出すことにより、単にモデルのアーキテクチャを再利用しているのではなく、その重みも再利用していることになります。\nこれを実際に見るために、エンコーダモデルとデコーダモデルを作成し、それらを 2 回の呼び出しに連鎖してオートエンコーダ―モデルを取得する、オートエンコーダの異なる例を次に示します。", "encoder_input = keras.Input(shape=(28, 28, 1), name=\"original_img\")\nx = layers.Conv2D(16, 3, activation=\"relu\")(encoder_input)\nx = layers.Conv2D(32, 3, activation=\"relu\")(x)\nx = layers.MaxPooling2D(3)(x)\nx = layers.Conv2D(32, 3, activation=\"relu\")(x)\nx = layers.Conv2D(16, 3, activation=\"relu\")(x)\nencoder_output = layers.GlobalMaxPooling2D()(x)\n\nencoder = keras.Model(encoder_input, encoder_output, name=\"encoder\")\nencoder.summary()\n\ndecoder_input = keras.Input(shape=(16,), name=\"encoded_img\")\nx = layers.Reshape((4, 4, 1))(decoder_input)\nx = layers.Conv2DTranspose(16, 3, activation=\"relu\")(x)\nx = layers.Conv2DTranspose(32, 3, activation=\"relu\")(x)\nx = layers.UpSampling2D(3)(x)\nx = layers.Conv2DTranspose(16, 3, activation=\"relu\")(x)\ndecoder_output = layers.Conv2DTranspose(1, 3, activation=\"relu\")(x)\n\ndecoder = keras.Model(decoder_input, decoder_output, name=\"decoder\")\ndecoder.summary()\n\nautoencoder_input = keras.Input(shape=(28, 28, 1), name=\"img\")\nencoded_img = encoder(autoencoder_input)\ndecoded_img = decoder(encoded_img)\nautoencoder = keras.Model(autoencoder_input, decoded_img, name=\"autoencoder\")\nautoencoder.summary()", "ご覧のように、モデルはネストすることができ、(モデルはちょうどレイヤーのようなものであるため)サブモデルを含むことができます。モデル・ネスティングのための一般的なユースケースは アンサンブル です。モデルのセットを (それらの予測を平均する) 単一のモデルにアンサンブルする方法の例を次に示します。", "def get_model():\n inputs = keras.Input(shape=(128,))\n outputs = layers.Dense(1)(inputs)\n return keras.Model(inputs, outputs)\n\n\nmodel1 = get_model()\nmodel2 = get_model()\nmodel3 = 
get_model()\n\ninputs = keras.Input(shape=(128,))\ny1 = model1(inputs)\ny2 = model2(inputs)\ny3 = model3(inputs)\noutputs = layers.average([y1, y2, y3])\nensemble_model = keras.Model(inputs=inputs, outputs=outputs)", "複雑なグラフトポロジーを操作する\nマルチ入力と出力を持つモデル\nFunctional API はマルチ入力と出力の操作を容易にします。これは Sequential API では処理できません。\nたとえば、顧客が発行したチケットを優先度別にランク付けし、正しい部門にルーティングするシステムを構築する場合、モデルには次の 3 つの入力があります。\n\nチケットの件名 (テキスト入力)\nチケットの本文(テキスト入力)\nユーザーが追加した任意のタグ(カテゴリ入力)\n\nこのモデルには 2 つの出力があります。\n\n0 と 1 の間のプライオリティスコア(スカラーシグモイド出力)\nチケットを処理すべき部門(部門集合に渡るソフトマックス出力)\n\nこのモデルは Functional API を使用すると数行で構築が可能です。", "num_tags = 12 # Number of unique issue tags\nnum_words = 10000 # Size of vocabulary obtained when preprocessing text data\nnum_departments = 4 # Number of departments for predictions\n\ntitle_input = keras.Input(\n shape=(None,), name=\"title\"\n) # Variable-length sequence of ints\nbody_input = keras.Input(shape=(None,), name=\"body\") # Variable-length sequence of ints\ntags_input = keras.Input(\n shape=(num_tags,), name=\"tags\"\n) # Binary vectors of size `num_tags`\n\n# Embed each word in the title into a 64-dimensional vector\ntitle_features = layers.Embedding(num_words, 64)(title_input)\n# Embed each word in the text into a 64-dimensional vector\nbody_features = layers.Embedding(num_words, 64)(body_input)\n\n# Reduce sequence of embedded words in the title into a single 128-dimensional vector\ntitle_features = layers.LSTM(128)(title_features)\n# Reduce sequence of embedded words in the body into a single 32-dimensional vector\nbody_features = layers.LSTM(32)(body_features)\n\n# Merge all available features into a single large vector via concatenation\nx = layers.concatenate([title_features, body_features, tags_input])\n\n# Stick a logistic regression for priority prediction on top of the features\npriority_pred = layers.Dense(1, name=\"priority\")(x)\n# Stick a department classifier on top of the features\ndepartment_pred = layers.Dense(num_departments, 
name=\"department\")(x)\n\n# Instantiate an end-to-end model predicting both priority and department\nmodel = keras.Model(\n inputs=[title_input, body_input, tags_input],\n outputs=[priority_pred, department_pred],\n)", "では、モデルをプロットします。", "keras.utils.plot_model(model, \"multi_input_and_output_model.png\", show_shapes=True)", "このモデルをコンパイルする時に、各出力に異なる損失を割り当てることができます。また、各損失に異なる重みを割り当てて、トレーニング損失全体へのそれらの寄与をモジュール化することも可能です。", "model.compile(\n optimizer=keras.optimizers.RMSprop(1e-3),\n loss=[\n keras.losses.BinaryCrossentropy(from_logits=True),\n keras.losses.CategoricalCrossentropy(from_logits=True),\n ],\n loss_weights=[1.0, 0.2],\n)", "出力レイヤーの名前が異なるため、対応するレイヤー名を使用して、損失と損失の重みを指定することも可能です。", "model.compile(\n optimizer=keras.optimizers.RMSprop(1e-3),\n loss={\n \"priority\": keras.losses.BinaryCrossentropy(from_logits=True),\n \"department\": keras.losses.CategoricalCrossentropy(from_logits=True),\n },\n loss_weights={\"priority\": 1.0, \"department\": 0.2},\n)", "入力とターゲットの NumPy 配列のリストを渡し、モデルをトレーニングします。", "# Dummy input data\ntitle_data = np.random.randint(num_words, size=(1280, 10))\nbody_data = np.random.randint(num_words, size=(1280, 100))\ntags_data = np.random.randint(2, size=(1280, num_tags)).astype(\"float32\")\n\n# Dummy target data\npriority_targets = np.random.random(size=(1280, 1))\ndept_targets = np.random.randint(2, size=(1280, num_departments))\n\nmodel.fit(\n {\"title\": title_data, \"body\": body_data, \"tags\": tags_data},\n {\"priority\": priority_targets, \"department\": dept_targets},\n epochs=2,\n batch_size=32,\n)", "Datasetオブジェクトで fit を呼び出す時、それは([title_data, body_data, tags_data], [priority_targets, dept_targets])などのリストのタプル、または({'title': title_data, 'body': body_data, 'tags': tags_data}、{'priority': priority_targets, 'department': dept_targets})などのディクショナリのタプルを yield する必要があります。\nさらに詳しい説明については、トレーニングと評価ガイドをご覧ください。\nトイ ResNet モデル\n複数の入力と出力を持つモデルに加えて、Functional API では非線形接続トポロジー、つまりシーケンシャルに接続されていないレイヤーを持つモデルの操作を容易にします。これはSequential API 
では扱うことができません。\nこれの一般的なユースケースは、残差接続です。これを実証するために、CIFAR10 向けのトイ ResNet モデルを構築してみましょう。", "inputs = keras.Input(shape=(32, 32, 3), name=\"img\")\nx = layers.Conv2D(32, 3, activation=\"relu\")(inputs)\nx = layers.Conv2D(64, 3, activation=\"relu\")(x)\nblock_1_output = layers.MaxPooling2D(3)(x)\n\nx = layers.Conv2D(64, 3, activation=\"relu\", padding=\"same\")(block_1_output)\nx = layers.Conv2D(64, 3, activation=\"relu\", padding=\"same\")(x)\nblock_2_output = layers.add([x, block_1_output])\n\nx = layers.Conv2D(64, 3, activation=\"relu\", padding=\"same\")(block_2_output)\nx = layers.Conv2D(64, 3, activation=\"relu\", padding=\"same\")(x)\nblock_3_output = layers.add([x, block_2_output])\n\nx = layers.Conv2D(64, 3, activation=\"relu\")(block_3_output)\nx = layers.GlobalAveragePooling2D()(x)\nx = layers.Dense(256, activation=\"relu\")(x)\nx = layers.Dropout(0.5)(x)\noutputs = layers.Dense(10)(x)\n\nmodel = keras.Model(inputs, outputs, name=\"toy_resnet\")\nmodel.summary()", "モデルをプロットします。", "keras.utils.plot_model(model, \"mini_resnet.png\", show_shapes=True)", "モデルをトレーニングします。", "(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()\n\nx_train = x_train.astype(\"float32\") / 255.0\nx_test = x_test.astype(\"float32\") / 255.0\ny_train = keras.utils.to_categorical(y_train, 10)\ny_test = keras.utils.to_categorical(y_test, 10)\n\nmodel.compile(\n optimizer=keras.optimizers.RMSprop(1e-3),\n loss=keras.losses.CategoricalCrossentropy(from_logits=True),\n metrics=[\"acc\"],\n)\n# We restrict the data to the first 1000 samples so as to limit execution time\n# on Colab. 
Try to train on the entire dataset until convergence!\nmodel.fit(x_train[:1000], y_train[:1000], batch_size=64, epochs=1, validation_split=0.2)", "レイヤーを共有する\nFunctional API のもう 1 つの良い使い方は、共有レイヤーを使用するモデルです。共有レイヤーは、同じモデルで複数回再利用されるレイヤーインスタンスのことで、レイヤーグラフ内の複数のパスに対応するフィーチャを学習します。\n共有レイヤーは、似たような空間からの入力(例えば、似た語彙を特徴とする 2 つの異なるテキスト)をエンコードするためにしばしば使用されます。これにより、それら異なる入力間での情報の共有を可能になり、より少ないデータでそのようなモデルをトレーニングすることが可能になります。与えられた単語が入力のいずれかに見られる場合、それは共有レイヤーを通過する全ての入力の処理に有用です。\nFunctional API でレイヤーを共有するには、同じレイヤーインスタンスを複数回呼び出します。例えば、2 つの異なるテキスト入力間で共有される Embedding レイヤを以下に示します。", "# Embedding for 1000 unique words mapped to 128-dimensional vectors\nshared_embedding = layers.Embedding(1000, 128)\n\n# Variable-length sequence of integers\ntext_input_a = keras.Input(shape=(None,), dtype=\"int32\")\n\n# Variable-length sequence of integers\ntext_input_b = keras.Input(shape=(None,), dtype=\"int32\")\n\n# Reuse the same layer to encode both inputs\nencoded_input_a = shared_embedding(text_input_a)\nencoded_input_b = shared_embedding(text_input_b)", "レイヤーのグラフのノードを抽出して再利用する\n操作しているレイヤーのグラフは静的なデータ構造であるため、アクセスして検査をすることができます。そして、これが関数型モデルを画像としてプロットする方法でもあります。\nこれはまた、中間レイヤー(グラフ内の「ノード」)のアクティブ化にアクセスが可能で、他の場所で再利用できることを意味します。これは特徴抽出などに非常に便利です。\n例を見てみましょう。これは ImageNet 上で事前トレーニングされた、重みを持つ VGG19 モデルです。", "vgg19 = tf.keras.applications.VGG19()", "そしてこれらはグラフデータ構造をクエリして得られる、モデルの中間的なアクティブ化です。", "features_list = [layer.output for layer in vgg19.layers]", "これらの機能を使用して、中間レイヤーのアクティブ化の値を返す新しい特徴抽出モデルを作成します。", "feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)\n\nimg = np.random.random((1, 224, 224, 3)).astype(\"float32\")\nextracted_features = feat_extraction_model(img)", "これは特に、ニューラルスタイル転送などのタスクに有用です。\nカスタム層を使用して API を拡張する\ntf.kerasには、例えば、次のような幅広い組み込みレイヤーが含まれています。\n\n畳み込みレイヤー : Conv1D、Conv2D、Conv3D、Conv2DTranspose\nPooling レイヤー : MaxPooling1D、MaxPooling2D、MaxPooling3D、AveragePooling1D\nRNN レイヤー : 
GRU、LSTM、ConvLSTM2D\nBatchNormalization、Dropout、Embedding、など\n\n必要なものが見つからない場合は、独自のレイヤーを作成して容易に API を拡張することができます。すべてのレイヤーはLayerクラスをサブクラス化して実装します。\n\ncallメソッドは、レイヤーが行う計算を指定します。\nbuildメソッドは、レイヤーの重みを作成します(__init__でも重みを作成できるため、これは単なるスタイル慣習です)。\n\nレイヤーの新規作成に関する詳細については、カスタムレイヤーとモデルガイドをご覧ください。\ntf.keras.layers.Denseの基本的な実装を以下に示します。", "class CustomDense(layers.Layer):\n def __init__(self, units=32):\n super(CustomDense, self).__init__()\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=(input_shape[-1], self.units),\n initializer=\"random_normal\",\n trainable=True,\n )\n self.b = self.add_weight(\n shape=(self.units,), initializer=\"random_normal\", trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n\n\ninputs = keras.Input((4,))\noutputs = CustomDense(10)(inputs)\n\nmodel = keras.Model(inputs, outputs)", "カスタムレイヤーでシリアル化をサポートするには、レイヤーインスタンスのコンストラクタ引数を返す get_config メソッドを定義します。", "class CustomDense(layers.Layer):\n def __init__(self, units=32):\n super(CustomDense, self).__init__()\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=(input_shape[-1], self.units),\n initializer=\"random_normal\",\n trainable=True,\n )\n self.b = self.add_weight(\n shape=(self.units,), initializer=\"random_normal\", trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n\n def get_config(self):\n return {\"units\": self.units}\n\n\ninputs = keras.Input((4,))\noutputs = CustomDense(10)(inputs)\n\nmodel = keras.Model(inputs, outputs)\nconfig = model.get_config()\n\nnew_model = keras.Model.from_config(config, custom_objects={\"CustomDense\": CustomDense})", "オプションで、config ディクショナリが与えられたレイヤーインスタンスを再作成する際に使用されたクラスメソッド from_config(cls, config) を実装します。デフォルトの from_config の実装は以下の通りです。\npython\ndef from_config(cls, config):\n return cls(**config)\nいつ Functional API を使用するか\n新しいモデルを作成する場合または Modelクラスを直接サブクラス化する場合に、Keras Functional API 
を使用する必要があるのでしょうか? 一般的に Functional API はより高レベル、より容易かつ安全で、サブクラス化されたモデルがサポートしない多くの特徴を持っています。\nただし、レイヤーの有向非巡回グラフ(DAG)として容易に表現できないモデルを構築する場合には、モデルのサブクラス化がより大きな柔軟性を与えます。例えば、Functional API では Tree-RNN を実装できず、Modelを直接サブクラス化する必要があります。\nFunctional API とモデルのサブクラス化の違いに関する詳細については、TensorFlow 2.0 における Symbolic API と Imperative API とは?をご覧ください。\nFunctional API の長所 :\n以下のプロパティは、(データ構造体でもある)Sequential モデルには真であり、(Python のバイトコードであり、データ構造体ではない)サブクラス化されたモデルには真ではありません。\n低い冗長性\nsuper(MyClass, self).__init__(...)、def call(self, ...):などがありません。\n比較しよう :\npython\ninputs = keras.Input(shape=(32,))\nx = layers.Dense(64, activation='relu')(inputs)\noutputs = layers.Dense(10)(x)\nmlp = keras.Model(inputs, outputs)\nサブクラス化されたバージョンと比べます。\n```python\nclass MLP(keras.Model):\ndef init(self, kwargs):\n super(MLP, self).init(kwargs)\n self.dense_1 = layers.Dense(64, activation='relu')\n self.dense_2 = layers.Dense(10)\ndef call(self, inputs):\n x = self.dense_1(inputs)\n return self.dense_2(x)\nInstantiate the model.\nmlp = MLP()\nNecessary to create the model's state.\nThe model doesn't have a state until it's called at least once.\n_ = mlp(tf.zeros((1, 32)))\n```\n連結グラフを定義しながらモデルを検証する\nFunctional API では、入力仕様(形状とdtype)が(Inputを使用して)あらかじめ作成されています。レイヤーを呼び出すたびに、レイヤーは渡された仕様が想定と一致しているかどうかをチェックし、一致していない場合には有用なエラーメッセージを表示します。\nこれにより、Functional API を使用して構築できるモデルは全て確実に実行されます。収束関連のデバッグ以外の全てのデバッグは、実行時ではなく、モデル構築中に静的に行われます。これはコンパイラの型チェックに類似しています。\n関数型モデルはプロット可能かつ検査可能です\nモデルをグラフとしてプロットすることが可能で、このグラフの中間ノードに簡単にアクセスすることができます。例えば(前の例で示したように)中間レイヤーのアクティブ化を抽出して再利用します。\npython\nfeatures_list = [layer.output for layer in vgg19.layers]\nfeat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)\n関数型モデルは、シリアル化やクローン化が可能です\n関数型モデルはコードの一部ではなくデータ構造であるため、安全なシリアル化が可能です。また、単一のファイルとして保存できるため、元のコードにアクセスすることなく全く同じモデルを再作成することができます。詳細はシリアル化と保存に関するガイドをご覧ください。\nサブクラス化されたモデルをシリアライズするには、実装者がモデルレベルでget_config()およびfrom_config()メソッドを指定する必要があります。\nFunctional API の弱点 :\n動的アーキテクチャをサポートしません\nFunctional API は、モデルをレイヤーの DAG 
として扱います。これはほとんどのディープラーニングアーキテクチャでは真ですが、必ずしも全てのアーキテクチャに該当するわけではありません。例えば、再帰的ネットワークや Tree RNN はこの想定に従わないため、Functional API では実装できません。\n異なる API スタイルをうまく組み合わせる\nFunctional API とモデルのサブクラス化のいずれかを選択することは、モデルの 1 つのカテゴリに制限する二者択一ではありません。tf.keras API 内の全てのモデルは、それらがSequential モデルでも、関数型モデルでも、新規に書かれたサブクラス化されたモデルであっても、お互いに相互作用することができます。\n関数型モデルやSequentialモデルは、サブクラス化されたモデルやレイヤーの一部として常に使用することができます。", "units = 32\ntimesteps = 10\ninput_dim = 5\n\n# Define a Functional model\ninputs = keras.Input((None, units))\nx = layers.GlobalAveragePooling1D()(inputs)\noutputs = layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n\n\nclass CustomRNN(layers.Layer):\n def __init__(self):\n super(CustomRNN, self).__init__()\n self.units = units\n self.projection_1 = layers.Dense(units=units, activation=\"tanh\")\n self.projection_2 = layers.Dense(units=units, activation=\"tanh\")\n # Our previously-defined Functional model\n self.classifier = model\n\n def call(self, inputs):\n outputs = []\n state = tf.zeros(shape=(inputs.shape[0], self.units))\n for t in range(inputs.shape[1]):\n x = inputs[:, t, :]\n h = self.projection_1(x)\n y = h + self.projection_2(state)\n state = y\n outputs.append(y)\n features = tf.stack(outputs, axis=1)\n print(features.shape)\n return self.classifier(features)\n\n\nrnn_model = CustomRNN()\n_ = rnn_model(tf.zeros((1, timesteps, input_dim)))", "次のいずれかのパターンに従ったcallメソッドを実装していれば、Functional API で任意のサブクラス化されたレイヤーやモデルを使用することができます。\n\ncall(self, inputs, **kwargs) -- ここでいうinputsは、テンソルまたはテンソルのネストされた構造(テンソルのリストなど)であり、**kwargsは非テンソルの引数(非 inputs)です。\ncall(self, inputs, training=None, **kwargs) -- このtraining は、レイヤーがトレーニングモードと推論モードで振る舞うべきかどうかを示すブールです。\ncall(self, inputs, mask=None, **kwargs) -- このmaskは、ブールマスクテンソルです。(例えば RNNに便利です。)\ncall(self, inputs, training=None, mask=None, **kwargs) -- もちろん、マスキングとトレーニング固有の動作の両方を同時に持つことができます。\n\nさらに、カスタムレイヤーやモデルでget_configメソッドを実装する場合、作成した関数型モデルは依然としてシリアル化やクローン化が可能です。\n新規に書かれたカスタム RNN を関数型モデルで使用する簡単な例を以下に示します。", "units = 32\ntimesteps = 
10\ninput_dim = 5\nbatch_size = 16\n\n\nclass CustomRNN(layers.Layer):\n def __init__(self):\n super(CustomRNN, self).__init__()\n self.units = units\n self.projection_1 = layers.Dense(units=units, activation=\"tanh\")\n self.projection_2 = layers.Dense(units=units, activation=\"tanh\")\n self.classifier = layers.Dense(1)\n\n def call(self, inputs):\n outputs = []\n state = tf.zeros(shape=(inputs.shape[0], self.units))\n for t in range(inputs.shape[1]):\n x = inputs[:, t, :]\n h = self.projection_1(x)\n y = h + self.projection_2(state)\n state = y\n outputs.append(y)\n features = tf.stack(outputs, axis=1)\n return self.classifier(features)\n\n\n# Note that you specify a static batch size for the inputs with the `batch_shape`\n# arg, because the inner computation of `CustomRNN` requires a static batch size\n# (when you create the `state` zeros tensor).\ninputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))\nx = layers.Conv1D(32, 3)(inputs)\noutputs = CustomRNN()(x)\n\nmodel = keras.Model(inputs, outputs)\n\nrnn_model = CustomRNN()\n_ = rnn_model(tf.zeros((1, 10, 5)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
brettavedisian/phys202-2015-work
assignments/assignment10/ODEsEx02.ipynb
mit
[ "Ordinary Differential Equations Exercise 2\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed", "Lorenz system\nThe Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:\n$$ \\frac{dx}{dt} = \\sigma(y-x) $$\n$$ \\frac{dy}{dt} = x(\\rho-z) - y $$\n$$ \\frac{dz}{dt} = xy - \\beta z $$\nThe solution vector is $[x(t),y(t),z(t)]$ and $\\sigma$, $\\rho$, and $\\beta$ are parameters that govern the behavior of the solutions.\nWrite a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.", "def lorentz_derivs(yvec, t, sigma, rho, beta):\n \"\"\"Compute the the derivatives for the Lorentz system at yvec(t).\"\"\"\n x=yvec[0]\n y=yvec[1]\n z=yvec[2]\n dx=sigma*(y-x)\n dy=x*(rho-z)-y\n dz=x*y-beta*z\n return np.array([dx,dy,dz])\n\nassert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])", "Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.", "def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Solve the Lorenz system for a single initial condition.\n \n Parameters\n ----------\n ic : array, list, tuple\n Initial conditions [x,y,z].\n max_time: float\n The max time to use. Integrate with 250 points per time unit.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \n Returns\n -------\n soln : np.ndarray\n The array of the solution. 
Each row will be the solution vector at that time.\n    t : np.ndarray\n        The array of time points used.\n    \n    \"\"\"\n    t=np.linspace(0,max_time,int(250.0*max_time))\n    soln=odeint(lorentz_derivs,ic,t,args=(sigma,rho,beta))\n    return soln,t\n\nassert True # leave this to grade solve_lorenz", "Write a function plot_lorentz that:\n\nSolves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.\nPlot $[x(t),z(t)]$ using a line to show each trajectory.\nColor each line using the hot colormap from Matplotlib.\nLabel your plot and choose an appropriate x and y limit.\n\nThe following cell shows how to generate colors that can be used for the lines:", "N = 5\ncolors = plt.cm.hot(np.linspace(0,1,N))\nfor i in range(N):\n    # To use these colors with plt.plot, pass them as the color argument\n    print(colors[i])\n\ndef plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n    \"\"\"Plot [x(t),z(t)] for the Lorenz system.\n    \n    Parameters\n    ----------\n    N : int\n        Number of initial conditions and trajectories to plot.\n    max_time: float\n        Maximum time to use.\n    sigma, rho, beta: float\n        Parameters of the differential equation.\n    \"\"\"\n    np.random.seed(1)\n    \n    ic=np.random.rand(N,3)*30-15\n    \n    plt.figure(figsize=(9,6))\n    \n    # Solve each trajectory once, then plot [x(t), z(t)], coloring each line\n    # with the hot colormap.\n    colors = plt.cm.hot(np.linspace(0,1,N))\n    for i, x0 in enumerate(ic):\n        soln = solve_lorentz(x0,max_time,sigma,rho,beta)[0]\n        plt.plot(soln[:,0], soln[:,2], color=colors[i]);\n    \n    plt.xlabel('x(t)'),plt.ylabel('z(t)');\n    plt.title('Lorentz Parametric System')\n\nplot_lorentz();\n\nassert True # leave this to grade the plot_lorenz function", "Use interact to explore your plot_lorentz function 
with:\n\nmax_time an integer slider over the interval $[1,10]$.\nN an integer slider over the interval $[1,50]$.\nsigma a float slider over the interval $[0.0,50.0]$.\nrho a float slider over the interval $[0.0,50.0]$.\nbeta fixed at a value of $8/3$.", "interact(plot_lorentz, max_time=[1,10], N=[1,50], sigma=[0.0,50.0], rho=[0.0,50.0], beta=fixed(8/3));", "Describe the different behaviors you observe as you vary the parameters $\\sigma$, $\\rho$ and $\\beta$ of the system:\n$\\bullet$ As $\\sigma$ is varied, the shapes of the motion either become larger or smaller and move closer together as $\\sigma$ increases.\n$\\bullet$ As $\\rho$ is increased, the trajectory of each path circles more and more around a point on the graph.\n$\\bullet$ $\\beta$ is fixed." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td1a_algo/td1a_sobel_correction.ipynb
mit
[ "1A.algo - Sobel filter - correction\nCorrection.\nExercise 1: applying a filter", "from pyquickhelper.loghelper import noLOG\nfrom pyensae.datasource import download_data\nf = download_data(\"python.png\", url=\"http://imgs.xkcd.com/comics/\")\nfrom IPython.display import Image\nImage(\"python.png\")", "But before we can do any computation on it, we need to convert the image into a numpy array with the numpy.asarray function.", "import PIL\nimport PIL.Image\nim = PIL.Image.open(\"python.png\")\nfrom PIL.ImageDraw import Draw\nimport numpy\ntab = numpy.asarray(im).copy()\ntab.flags.writeable = True  # so that the image can be modified\n\"dimension\",tab.shape, \" type\", type(tab[0,0])", "First, we will use the function scipy.ndimage.filters.uniform_filter, which does this automatically.", "import scipy.ndimage.filters as filters\nfiltre = numpy.ones( (3,3) )\nimgf = filters.uniform_filter (tab, size=3)\nimf = PIL.Image.fromarray(numpy.uint8(imgf))\nimf.save(\"python_filtre.png\")\nImage(\"python_filtre.png\")", "Next, here is a version relying on matrix computation. When the filter is computed, each pixel receives 1/9 of the value of the pixel located just before it on the same line. 
It is as if we added to the current matrix eight copies of the same matrix, each shifted by one step in one of the directions, as illustrated in the following figure.", "from pyquickhelper.helpgen import NbImage\nNbImage(\"td11_cor_grid.png\")", "The pixels on the border, which have fewer neighbors (themselves included) than the others, will have to be handled differently, as reflected by the following matrix:", "nbv = numpy.ones ( tab.shape ) * 9\nlx,ly = tab.shape\nnbv [:,0] = 6\nnbv [0,:] = 6\nnbv [ :, ly-1] = 6\nnbv [ lx-1,:] = 6\nnbv[0,0] = nbv[0,ly-1] = nbv[lx-1,0] = nbv[lx-1,ly-1] = 4\nnbv [:4,:3]", "All that remains is to program the filter:", "def filtre_sobel(image, filtre):\n\n    nbv = numpy.ones ( image.shape ) * 9\n    lx,ly = image.shape\n    nbv [:,0] = 6\n    nbv [0,:] = 6\n    nbv [ :, ly-1] = 6\n    nbv [ lx-1,:] = 6\n    nbv[0,0] = nbv[0,ly-1] = nbv[lx-1,0] = nbv[lx-1,ly-1] = 4\n    \n    res = numpy.zeros ( image.shape )\n    for i in range(-1,2) :\n        for j in range(-1,2) :\n            coef = filtre [ i+1,j+1]\n            mat = image [ max(i,0): min(lx+i,lx), max(j,0): min(ly+j,ly) ]\n            mx,my = mat.shape\n            i0,j0 = max(-i,0), max(-j,0)\n            # weight each shifted copy by the corresponding filter coefficient\n            res [i0:i0+mx,j0:j0+my] += coef * mat\n    res /= nbv\n    return res\n\nres = filtre_sobel(tab, filtre)\nim2 = PIL.Image.fromarray(numpy.uint8(res))\nim2.save(\"python_filtre2.png\")\nImage(\"python_filtre2.png\")", "We now program the same function, but without using matrix computation:", "def filtre_sobel_python(image, filtre):\n    res = numpy.zeros ( image.shape )\n    for i in range(0, res.shape[0]):\n        for j in range(0, res.shape[1]):\n            nb = 0\n            for k in range(-1,2) :\n                for l in range(-1,2) :\n                    # >= 0 so that neighbors in row 0 and column 0 are included\n                    if k+i >= 0 and k+i < res.shape[0] and l+j >= 0 and l+j < res.shape[1] :\n                        res[i,j] += image[k+i, l+j]\n                        nb += 1\n            res[i,j] /= nb\n    return res\n\nres = filtre_sobel_python(tab, filtre)\nim3 = PIL.Image.fromarray(numpy.uint8(res))\nim3.save(\"python_filtre3.png\")\nImage(\"python_filtre3.png\")", "There is no real need to measure the time to see that this version is much slower." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dtamayo/reboundx
ipython_examples/Radiation_Forces_Circumplanetary_Dust.ipynb
gpl-3.0
[ "Radiation Forces on Circumplanetary Dust\nThis example shows how to integrate circumplanetary dust particles under the action of radiation forces. We use Saturn's Phoebe ring as an example, a distant ring of debris. \nWe have to make sure we add all quantities in the same units. Here we choose to use SI units. We begin by adding the Sun and Saturn, and use Saturn's orbital plane as the reference plane:", "import rebound\nimport reboundx\nimport numpy as np\nsim = rebound.Simulation()\nsim.G = 6.674e-11 # SI units\nsim.dt = 1.e4 # Initial timestep in sec.\nsim.N_active = 2 # Make it so dust particles don't interact with one another gravitationally\nsim.add(m=1.99e30, hash=\"Sun\") # add Sun with mass in kg\nsim.add(m=5.68e26, a=1.43e12, e=0.056, pomega = 0., f=0., hash=\"Saturn\") # Add Saturn at pericenter\nps = sim.particles", "Now let's set up REBOUNDx and add radiation_forces. We also have to set the speed of light in the units we want to use.", "rebx = reboundx.Extras(sim)\nrf = rebx.load_force(\"radiation_forces\")\nrebx.add_force(rf)\nrf.params[\"c\"] = 3.e8", "By default, the radiation_forces effect assumes the particle at index 0 is the source of the radiation. If you'd like to use a different one, or it's possible that the radiation source might move to a different index (e.g. with a custom merger routine), you can add a radiation_source flag to the appropriate particle like this:", "ps[\"Sun\"].params[\"radiation_source\"] = 1", "Here we show how to add two dust grains to the simulation in different ways. Let's first initialize their orbits. In both cases we use the orbital elements of Saturn's irregular satellite Phoebe, which the dust grains will inherit upon release (Tamayo et al. 2011). Since the dust grains don't interact with one another, putting them on top of each other is OK.", "a = 1.3e10 # in meters\ne = 0.16\ninc = 175*np.pi/180.\nOmega = 0. # longitude of node\nomega = 0. # argument of pericenter\nf = 0. 
# true anomaly\n\n# Add two dust grains with the same orbit\nsim.add(primary=ps[\"Saturn\"], a=a, e=e, inc=inc, Omega=Omega, omega=omega, f=f, hash=\"p1\")\nsim.add(primary=ps[\"Saturn\"], a=a, e=e, inc=inc, Omega=Omega, omega=omega, f=f, hash=\"p2\")", "Now we add the grains' physical properties. In order for particles to feel radiation forces, we have to set their beta parameter. $\\beta$ is the ratio of the radiation force to the gravitational force from the star (Burns et al. 1979). One can either set it directly:", "ps[\"p1\"].params[\"beta\"] = 0.01", "or we can calculate it from more fundamental parameters. REBOUNDx has a convenience function that takes the gravitational constant, speed of light, radiation source's mass and luminosity, and then the grain's physical radius, bulk density, and radiation pressure coefficient Q_pr (Burns et al. 1979, equals 1 in the limit that the grain size is >> the radiation's wavelength).", "grain_radius = 1.e-5 # grain radius in m\ndensity = 1000. 
# kg/m^3 = 1g/cc\nQ_pr = 1.\nluminosity = 3.85e26 # Watts\nps[\"p2\"].params[\"beta\"] = rebx.rad_calc_beta(sim.G, rf.params[\"c\"], ps[0].m, luminosity, grain_radius, density, Q_pr)\nprint(\"Particle 2's beta parameter = {0}\".format(ps[\"p2\"].params[\"beta\"]))", "Now let's run for 100 years (about 3 Saturn orbits), and look at how the eccentricity varies over a Saturn year:", "yr = 365*24*3600 # s\nNoutput = 1000\ntimes = np.linspace(0,100.*yr, Noutput)\ne1, e2 = np.zeros(Noutput), np.zeros(Noutput)\n\nsim.move_to_com() # move to center of mass frame first\n\nfor i, time in enumerate(times):\n sim.integrate(time)\n e1[i] = ps[\"p1\"].calculate_orbit(primary=ps[\"Saturn\"]).e\n e2[i] = ps[\"p2\"].calculate_orbit(primary=ps[\"Saturn\"]).e\n \n%matplotlib inline\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots(figsize=(15,5))\n\nax.plot(times/yr, e1, label=r\"$\\beta$={0:.1e}\".format(ps[\"p1\"].params[\"beta\"]))\nax.plot(times/yr, e2, label=r\"$\\beta$={0:.1e}\".format(ps[\"p2\"].params[\"beta\"]))\nax.set_xlabel('Time (yrs)', fontsize=24)\nax.set_ylabel('Eccentricity', fontsize=24)\nplt.legend(fontsize=24)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
XPRIZE/GLEXP-Team-SlideSpeech
FitNeuralNets/testFitNeuralNet.ipynb
apache-2.0
[ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport numpy.random as random\nimport os\nimport Image\nimport codecs, json", "<font color='green'>Load the data needed to fit the neural net</font>\nGet the names of the files and letters saved by the lettersketch app", "dirName = \"../lettersketch/assets/train_images/UpperCase/StraightLines/\"\nfileNames = []\nfileLetters = []\nfor fileName in os.listdir(dirName):\n if fileName.endswith(\".png\") and (not \"__\" in fileName):\n fileNames.append(dirName+fileName)\n letter = fileName.split(\"_\")[1]\n fileLetters.append(letter)\n \n#print fileNames", "Read in the letters saved by the lettersketch app after converting into grayscale and resizing", "#def rgb2gray(rgb):\n# return np.dot(rgb[...,:3], [0.299, 0.587, 0.144])\n#\n#data = np.array([rgb2gray(mpimg.imread(fileName)) for fileName in fileNames], dtype = np.float64)\n#data.shape\n\ndata = np.array([np.asarray(Image.open(fileName).convert('L').resize((28, 28), Image.NEAREST)) \n for fileName in fileNames], dtype = np.float64)\nprint data.shape", "Reshape the image arrays", "data = data.reshape(data.shape[0], data.shape[1]*data.shape[2])\nprint data.shape", "Convert white into black background", "for dt in np.nditer(data, op_flags=['readwrite']):\n dt[...] = dt/255.0\n if (dt == 1.0):\n dt[...] 
= 0.0", "Load the test data, Convert white into black background, Reshape the data, and Plot a sample", "testDirName = \"../lettersketch/assets/test_images/\"\ntestNames = []\nfor fileName in os.listdir(testDirName):\n if fileName.endswith(\".png\") and (\"__\" in fileName):\n testNames.append(testDirName+fileName)\n\ntestData = np.array([np.asarray(Image.open(fileName).convert('L').resize((28, 28), Image.NEAREST)) \n for fileName in testNames], dtype = np.float64)\nprint testData.shape\n\ntestData = testData.reshape(testData.shape[0], testData.shape[1]*testData.shape[2])\nprint testData.shape\n\nfor dt in np.nditer(testData, op_flags=['readwrite']):\n dt[...] = dt/255.0\n if (dt == 1.0):\n dt[...] = 0.0\n\nimgplot = plt.imshow(data[9].reshape(28,28), cmap=\"gray\")\n\ntestplot = plt.imshow(testData[1].reshape(28,28), cmap =\"gray\")", "Create a label dictionary for the letters in the training set", "labelDict = {'E':0, 'F':1, 'H':2, 'I':3, 'L':4, 'T':5}\nprint fileLetters", "Assign labels to the images in the training set", "fileLabels = [labelDict[letter] for letter in fileLetters]\nprint fileLabels", "Vectorize the labels", "def vectorizeLabels(label):\n vector = np.zeros((6))\n vector[label] = 1.0\n return vector\n\ndataLabels = np.array([vectorizeLabels(label) for label in fileLabels])\nprint dataLabels[0]", "Join data and data labels", "print data.shape\nprint dataLabels.shape\ntraining_data = zip(data, dataLabels)\n#print training_data[0]\n\n%load nnutils.py", "<font color='green'>Utilities for working with a standard neural net</font>", "#---------------------------------------------------\n# A neural net class\n#---------------------------------------------------\nclass NeuralNet(object):\n\n \"\"\" Constructor\n # layers: vector of length numLayers containing the\n # the number of neurons in each layer\n # e.g., layers = (4, 3, 2) -> 4 input values, 3 neurons in hidden layer, 2 output values\n # biases: initialized with random numbers except for first 
layer\n # e.g., biases = [ [b11, b21, b31]^T, \n # [b12, b22]^T ]\n # weights: initialized with random numbers \n # e.g., [ [[w111, w121, w131, w141], \n # [w211, w221, w231, w241],\n # [w311, w321, w331, w341]],\n # [[w112, w122, w132],\n # [w212, w222, w232]] ]\n \"\"\"\n def __init__(self, layers):\n\n self.numLayers = len(layers)\n self.numNeurons = layers\n self.biases = [random.randn(layer, 1) for layer in layers[1:]]\n self.weights = [random.randn(layer2, layer1) \n for layer1, layer2 in zip(layers[:-1], layers[1:])]\n \n\n \"\"\" Batch stochastic gradient descent to find minimum of objective function\n # training_data: [(x1,y1),(x2,y2),....]\n # where x1, x2, x3, ... are input data vectors\n # y1, y2, y3, ... are labels\n # max_iterations: number of iterations\n # batch_size: size of training batch\n # learning_rate: gradient descent parameter \n \"\"\"\n def batchStochasticGradientDescent(self, training_data, max_iterations, batch_size,\n learning_rate):\n\n # Get the number of training images\n nTrain = len(training_data)\n\n # Loop thru iterations\n for it in xrange(max_iterations):\n\n # Shuffle the training data\n random.shuffle(training_data)\n\n # Choose subsets of the training data\n batches = [ training_data[start:start+batch_size]\n for start in xrange(0, nTrain, batch_size) ]\n\n # Loop thru subsets\n for batch in batches:\n self.updateBatch(batch, learning_rate)\n \n #print \"Iteration {0} complete\".format(it)\n \n #print \"weights = \", self.weights\n #print \"biases = \", self.biases\n\n \"\"\" Partial update of weights and biases using gradient descent\n # with back propagation\n \"\"\"\n def updateBatch(self, batch, learning_rate): \n\n # Initialize gradC_w and gradC_b\n gradC_w = [np.zeros(w.shape) for w in self.weights]\n gradC_b = [np.zeros(b.shape) for b in self.biases]\n \n # Loop through samples in the batch\n for xx, yy in batch:\n \n # Compute correction to weights & biases using forward and backprop\n delta_gradC_w, delta_gradC_b = 
self.updateGradient(xx, yy)\n\n # Update the gradients\n gradC_w = [grad + delta_grad for grad, delta_grad in zip(gradC_w, delta_gradC_w)]\n gradC_b = [grad + delta_grad for grad, delta_grad in zip(gradC_b, delta_gradC_b)]\n\n # Update the weight and biases\n self.weights = [ weight - (learning_rate/len(batch))*grad\n for weight, grad in zip(self.weights, gradC_w) ]\n self.biases = [ bias - (learning_rate/len(batch))*grad\n for bias, grad in zip(self.biases, gradC_b) ]\n\n # Forward and then backpropagation to compute the gradient of the objective function\n def updateGradient(self, xx, yy):\n\n # Reshape into column vectors\n xx = np.reshape(xx, (len(xx), 1))\n yy = np.reshape(yy, (len(yy), 1))\n \n # Initialize gradC_w and gradC_b\n gradC_w = [np.zeros(w.shape) for w in self.weights]\n gradC_b = [np.zeros(b.shape) for b in self.biases]\n\n # Compute forward pass through net\n # Initial activation value = input value\n activationValue = xx\n activationValues = [xx]\n layerOutputValues = []\n \n # Loop through layers\n for weight, bias in zip(self.weights, self.biases):\n\n #print weight.shape\n #print activationValue.shape\n layerOutputValue = np.dot(weight, activationValue) + bias\n layerOutputValues.append(layerOutputValue)\n\n activationValue = self.activationFunction(layerOutputValue)\n activationValues.append(activationValue)\n \n # Compute backpropagation corrections\n # Initial deltas\n delta = self.derivOfCostFunction(activationValues[-1], yy) * self.derivActivationFunction(layerOutputValues[-1])\n\n gradC_b[-1] = delta\n gradC_w[-1] = np.dot(delta, activationValues[-2].transpose())\n\n # Loop backward thru layers\n for layer in xrange(2, self.numLayers):\n\n layerOutputValue = layerOutputValues[-layer]\n derivActivation = self.derivActivationFunction(layerOutputValue)\n \n delta = np.dot(self.weights[-layer + 1].transpose(), delta)*derivActivation\n\n gradC_b[-layer] = delta\n gradC_w[-layer] = np.dot(delta, activationValues[-layer-1].transpose())\n \n # 
Return updated gradients\n return (gradC_w, gradC_b)\n\n # The activation function\n def activationFunction(self, xx):\n\n return 1.0/(1.0 + np.exp(-xx))\n\n # Derivative of activation function\n def derivActivationFunction(self, xx):\n\n return self.activationFunction(xx)*(1.0 - self.activationFunction(xx))\n \n # Derivative of the cost function with respect to output values\n def derivOfCostFunction(self, xx, yy):\n return (xx - yy)\n\n # The feedforward output computation for the network\n # inputVector: (n, 1) array\n # n = number of inputs to network\n # outputVector: (m, 1) array\n # m = number of neurons in the output layer\n def forwardCompute(self, inputVector):\n\n for bias, weight in zip(self.biases, self.weights):\n xx = np.dot(weight, inputVector) + bias\n inputVector = self.activationFunction(xx)\n\n return inputVector\n \n\nnet = NeuralNet([4, 3, 2])\n\nnet.weights\n\nnet.biases", "<font color='green'>Fit the neural net to the input data</font>", "testNN = NeuralNet([28*28, 49, 16, 6])\nmax_iterations = 100\nbatch_size = 1\nlearning_rate = 0.01\ntestNN.batchStochasticGradientDescent(training_data, max_iterations, batch_size,\n learning_rate)", "Save the weights and biases in JSON", "weightsList = testNN.weights\nfor weights in weightsList:\n print weights.shape\n \nbiasesList = testNN.biases\nfor biases in biasesList:\n print biases.shape\n\n\nweightsFileName = "../lettersketch/assets/json/UpperCase_StraightLines_weights.json"\nbiasesFileName = "../lettersketch/assets/json/UpperCase_StraightLines_biases.json"\nfor weights in weightsList:\n json.dump(weights.tolist(), codecs.open(weightsFileName, 'a', encoding='utf-8'),\n separators=(',', ':'), sort_keys=True, indent=2)\n\nfor biases in biasesList:\n json.dump(biases.tolist(), codecs.open(biasesFileName, 'a', encoding='utf-8'),\n separators=(',', ':'), sort_keys=True, indent=2)", "<font color='green'>Test how well the model is performing</font>", "fig = plt.figure()\n\nnum_image_rows = np.ceil(np.sqrt(testData.shape[0]))\nfor i in range(0, 
testData.shape[0]):\n a = fig.add_subplot(num_image_rows/2, num_image_rows*2, i+1)\n a.set_title(i)\n testplot = plt.imshow(testData[i].reshape(28,28), cmap =\"gray\")\n plt.axis('off')\n\nresult = testNN.forwardCompute(np.reshape(testData[60], (28*28,1)))\nletterIndex = np.argmax(result)\nprint letterIndex\nprint labelDict.keys()[labelDict.values().index(letterIndex)]" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
theandygross/TCGA_differential_expression
Notebooks/OneOffs/Quyen_proteases.ipynb
mit
[ "cd ..\n\nimport NotebookImport\nfrom DX_screen import *\n\ncd ../DX/Notebooks/\n\nfrom Imports import *\nfrom Preprocessing.ClinicalDataFilters import *", "uPA protease", "paired_bp_tn_split??\n\ncc = codes.ix[matched_rna.columns.get_level_values(0)].dropna().unique()\nr = pd.DataFrame({c: ttest_rel(matched_rna.ix['PLAU'].ix[ti(codes==c)])\n for c in cc}).T\n\nfig, ax = subplots(figsize=(7,3))\ncc = ['HNSC','LUSC','LUAD','BLCA','THCA','BRCA','COAD','READ']\npaired_bp_tn_split(matched_rna.ix['PLAU'], codes[codes.isin(cc)], ax=ax)\nfig.savefig('/cellar/users/agross/figures/plau.pdf')\n\nr.sort('p')\n\nttest_rel(matched_rna.ix['PLAU'])", "TPA protease", "paired_bp_tn_split(matched_rna.ix['PLAT'], codes)", "Collagenase", "paired_bp_tn_split(matched_rna.ix['MMP1'], codes)", "elastases", "g = ['CELA1','CELA2A','CELA2B','CELA3A','CELA3B','CTRC','ELANE','MMP12']\n\npaired_bp_tn_split?\n\nfig, axs = subplots(8, 1, figsize=(15,20), sharex=True)\nfor i,gene in enumerate(g):\n paired_bp_tn_split(matched_rna.ix[gene], codes, ax=axs[i],\n data_type='')", "Cathepsin", "g = ['CTSA','CTSB','CTSC','CTSD','CTSE','CTSF','CTSG','CTSH',\n 'CTSK','CTSL1','CTSL2','CTSO','CTSS','CTSW','CTSZ']\nlen(g)\n\nfig, axs = subplots(15, 1, figsize=(15,40), sharex=True)\nfor i,gene in enumerate(g):\n paired_bp_tn_split(matched_rna.ix[gene], codes, ax=axs[i],\n data_type='')", "Is there a way for you to query TCGA about all extracellular proteases in an unbiased fashion? i.e. not by asking about specific proteases by name but asking about all extracellular proteases?\nIf yes, can you please help me do this?\nIf no, the data that you already have is really useful - can we put them in the same table, ranking the most highly expressed proteases for all cancers, with HNSCC being the first cancer on the x axis (similar to panel a in the figure inserted above)." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]